FPF-2025-Sponsorship-Prospectus
[…] conference and to meet with start-ups, regulators, and academics. Sponsors have a unique opportunity to participate in the week-long festivities during various professional and social gatherings, ensuring ample time for engaging conversations. Delegates have included CPOs of Apple, Google, eBay, Microsoft, TransUnion, and more. $3,500–$7,500 Varying levels of sponsorship available AUDIENCE Invite-only FPF Member […]
Minding Mindful Machines: AI Agents and Data Protection Considerations
We are now in 2025, the year of AI agents. Leading large language model (LLM) developers, including OpenAI, Google, and Anthropic, have released early versions of technologies described as “AI agents.” Unlike earlier automated systems, and even LLMs themselves, these systems have autonomy over how to achieve complex, multi-step tasks, such as […]
Potential Harms of Automated Decision-making Charts
[…] developers to ensure proper management and oversight of AI tools
» Data methods to ensure proxies are not used for protected classes and that training data does not amplify historical bias
» Human rights impact assessments to assess the system's potential impact on fundamental rights
» Privacy impact assessments to ensure that […]
Potential Harms And Mitigation Practices for Automated Decision-making and Generative AI
FPF has updated our 2017 resource (“Distilling the Harms of Automated Decision-making”) with the goal of identifying and categorizing a broad range of potential harms that may result from automated decision-making, including heightened harms related to generative AI (GenAI), and potential mitigation practices. FPF reviewed leading books, articles, and other literature on the topic of […]
FPF AI Harms Charts Only R2
FPF AI Harms R5
FPF Releases Report on the Adoption of Privacy Enhancing Technologies by State Education Agencies
The Future of Privacy Forum (FPF) released a landscape analysis of the adoption of Privacy Enhancing Technologies (PETs) by State Education Agencies (SEAs). As agencies face increasing pressure to leverage sensitive student and institutional data for analysis and research, PETs offer a promising solution: advanced technologies designed to protect data privacy […]
FPF-PPPM-2025-Digest
[…] developing and using AI systems in both consumer and public sector settings to proactively identify and address bias or discrimination that those AI systems may reflect or amplify. Central to this effort is the complex and sensitive task of obtaining demographic data to measure fairness and bias within and surrounding these systems. This report […]