
“Personality vs. Personalization” in AI Systems: Specific Uses and Concrete Risks (Part 2)
[…] may lead to more intimate inferences. Systems with agentic capabilities that act on user preferences (e.g., shopping assistants) may have access to tools (e.g., querying databases, making API calls, interacting with web browsers, and accessing file systems) enabling them to obtain more real-time information about individuals. For example, some agents may take screenshots of […]
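As a rough illustration of why tool access widens what an agent can infer about a person, the minimal sketch below registers two hypothetical tools with a toy shopping agent and reports the categories of personal data each tool can reach. The `Tool` and `ShoppingAgent` classes and the `data_exposure` labels are illustrative assumptions, not any particular agent framework's API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Tool:
    """A capability the agent can invoke on the user's behalf."""
    name: str
    description: str
    handler: Callable[[str], str]
    # Kinds of personal data this tool can surface (illustrative labels only).
    data_exposure: tuple = ()

@dataclass
class ShoppingAgent:
    """Toy agent that routes requests to whichever registered tool it needs."""
    tools: Dict[str, Tool] = field(default_factory=dict)

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def exposure_report(self) -> Dict[str, tuple]:
        """What categories of user data the current tool set can reach."""
        return {name: t.data_exposure for name, t in self.tools.items()}

agent = ShoppingAgent()
agent.register(Tool(
    name="browse_web",
    description="Fetch a product page in the user's logged-in browser session.",
    handler=lambda url: f"<html for {url}>",
    data_exposure=("browsing history", "session cookies"),
))
agent.register(Tool(
    name="read_files",
    description="Search local files for past receipts and size preferences.",
    handler=lambda query: f"matches for {query}",
    data_exposure=("documents", "purchase history"),
))

print(agent.exposure_report())
```

Even in this toy setup, adding a tool silently expands the data categories the agent can observe, which is the dynamic the excerpt describes.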

AI Regulation in Latin America: Overview and Emerging Trends in Key Proposals
[…] addition, most bills include specific obligations for entities operating “high-risk” systems, such as performing comprehensive risk assessments and ethical evaluations; ensuring data quality and bias detection; maintaining extensive documentation and records; and guiding users on the intended use, accuracy, and robustness of these systems. Brazil’s bill indicates the competent authority will have discretion to […]

A Price to Pay: U.S. Lawmaker Efforts to Regulate Algorithmic and Data-Driven Pricing
“Algorithmic pricing,” “surveillance pricing,” “dynamic pricing”: in states across the U.S., lawmakers are introducing legislation to regulate a range of practices that use large amounts of data and algorithms to routinely inform decisions about the prices and products offered to consumers. These bills—targeting what this analysis collectively calls “data-driven pricing”—follow the Federal Trade Commission (FTC)’s […]

Balancing Innovation and Oversight: Regulatory Sandboxes as a Tool for AI Governance
Thanks to Marlene Smith for her research contributions. As policymakers worldwide seek to support beneficial uses of artificial intelligence (AI), many are exploring the concept of “regulatory sandboxes.” Broadly speaking, regulatory sandboxes are legal oversight frameworks that offer participating organizations the opportunity to experiment with emerging technologies within a controlled environment, usually combining regulatory oversight […]

Nature of Data in Pre-Trained Large Language Models
The following is a guest post to the FPF blog by Yeong Zee Kin, the Chief Executive of the Singapore Academy of Law and FPF Senior Fellow. The guest blog reflects the opinion of the author only. Guest blog posts do not necessarily reflect the views of FPF. The phenomenon of memorisation has fomented significant […]

Brazil’s ANPD Preliminary Study on Generative AI highlights the dual nature of data protection law: balancing rights with technological innovation
[…] in the model. According to the CGTP, “since pre-trained models can be considered a reflection of the database used for training, the popularization of the creation of APIs (Application Programming Interfaces) that adopt foundational models such as pre-trained LLMs, brings a new challenge. Sharing models tends to involve the data that is mathematically present […]

Amendments to the Montana Consumer Data Privacy Act Bring Big Changes to Big Sky Country
[…] online service’s purpose, the categories of personal data processed, and the processing purposes. Data protection assessments should be reviewed “as necessary” to account for material changes, and documentation should be retained until either 3 years after the processing operations cease or the date on which the controller ceases offering the online service, whichever is […]

The Curse of Dimensionality: De-identification Challenges in the Sharing of Highly Dimensional Datasets
[…] the design of APIs (Application Programming Interfaces), which can act as critical shields between raw data and external access. Re-identification attempts can be partially mitigated at the API level through strict query limits, access controls, auditing mechanisms, and purpose restrictions—complementing the privacy-enhancing technologies discussed throughout this paper. These architectural choices embed ethical values and […]
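As a loose sketch of the kinds of API-level controls the excerpt lists (query limits, access controls, auditing, purpose restrictions), the example below puts a gatekeeper class in front of raw records so external callers only ever receive aggregate counts. The `QueryGateway` class, its parameters, and the sample data are illustrative assumptions, not drawn from the paper.

```python
import time
from collections import defaultdict

class QueryGateway:
    """Toy API-layer gatekeeper: per-client query budgets, a purpose
    allow-list, and an append-only audit trail in front of raw data."""

    def __init__(self, dataset, allowed_purposes, max_queries_per_client=100):
        self._dataset = dataset                  # raw records stay behind the gateway
        self._allowed_purposes = set(allowed_purposes)
        self._max_queries = max_queries_per_client
        self._query_counts = defaultdict(int)
        self.audit_log = []                      # (timestamp, client, purpose, predicate name)

    def count(self, client_id, purpose, predicate):
        """Answer only aggregate counts; refuse out-of-purpose or over-budget queries."""
        if purpose not in self._allowed_purposes:
            raise PermissionError(f"purpose '{purpose}' is not permitted")
        if self._query_counts[client_id] >= self._max_queries:
            raise PermissionError(f"client '{client_id}' exceeded its query budget")
        self._query_counts[client_id] += 1
        self.audit_log.append(
            (time.time(), client_id, purpose, getattr(predicate, "__name__", repr(predicate)))
        )
        return sum(1 for record in self._dataset if predicate(record))

# Example: callers see only counts, never the underlying rows.
records = [{"age": 34, "zip": "59901"}, {"age": 41, "zip": "59718"}, {"age": 34, "zip": "59718"}]
gateway = QueryGateway(records, allowed_purposes={"public-health-research"}, max_queries_per_client=2)
print(gateway.count("analyst-1", "public-health-research", lambda r: r["age"] == 34))  # -> 2
```

Controls like these only reduce re-identification risk; combined with low query budgets and auditing they complement, rather than replace, the privacy-enhancing technologies discussed in the paper.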

FPF and OneTrust publish the Updated Guide on Conformity Assessments under the EU AI Act
[…] and after being placed on the market and throughout their use. The CA should be understood as a framework of assessments (both technical and non-technical), requirements, and documentation obligations. The provider should assess whether the AI system poses a high risk and identify both known and potential risks as part of their risk management […]

South Korea’s New AI Framework Act: A Balancing Act Between Innovation and Regulation
On 21 January 2025, South Korea became the first jurisdiction in the Asia-Pacific (APAC) region to adopt comprehensive artificial intelligence (AI) legislation. Taking effect on 22 January 2026, the Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness (AI Framework Act or, simply, the Act) introduces specific obligations for “high-impact” AI systems […]