An AI-based computer system can gather data and use that data to make decisions or solve problems, applying algorithms to perform tasks that, if done by a human, would be said to require intelligence. AI and machine learning (ML) systems are already delivering benefits across the globe, from better health care to safer transportation and greater efficiency. But the growing volumes of data and computing power that enable sophisticated AI and ML models raise questions about privacy impacts, ethical consequences, fairness, and real-world harms if these systems are not designed and managed responsibly. FPF works with commercial, academic, and civil society supporters and partners to develop best practices for managing risk in AI and ML, and to assess whether established data protection principles such as fairness, accountability, and transparency are sufficient to answer the ethical questions these systems raise. FPF’s work on AI and ML is led by Brenda Leong.
Featured
The Spectrum of Artificial Intelligence – An Infographic Tool
FPF has just completed its newest educational infographic tool, The Spectrum of Artificial Intelligence. AI is the computerized ability to perform tasks commonly associated with human intelligence, including reasoning, discovering patterns […]
Understanding Blockchain: A Review of FPF’s Oct. 29th Digital Data Flows Masterclass
Authors: Hunter Dorwart, Stacey Gray, Brenda Leong, Jake van der Laan, Matthias Artzt, and Rob van Eijk On 29 October 2020, Vrije Universiteit Brussel (VUB) and Future of Privacy Forum […]
FPF Submits Feedback and Comments on UNICEF’s Draft Policy Guidance on AI for Children
Last week, FPF submitted feedback and comments to the United Nations Children’s Fund (UNICEF) on the Draft Policy Guidance on Artificial Intelligence (AI) for Children, which seeks “to promote children’s […]
Five Top of Mind Data Protection Recommendations for Brain-Computer Interfaces
By Jeremy Greenberg ([email protected]) and Katelyn Ringrose ([email protected]) Key FPF-curated background resources – policy and regulatory documents, academic papers, and technical analyses regarding brain-computer interfaces – are available here. Recently, […]
FPF Webinar Explores the Future of Privacy-Preserving Machine Learning
On June 8, FPF hosted a webinar, Privacy-Preserving Machine Learning: New Research on Data and Model Privacy. Co-hosted by the FPF Artificial Intelligence Working Group and the Applied Privacy […]
Ten Questions on AI Risk
Artificial intelligence and machine learning (AI/ML) generate significant value when used responsibly – and are the subject of growing investment for exactly these reasons. But AI/ML can also amplify an organization’s exposure to potential vulnerabilities, ranging from fairness and security issues to regulatory fines and reputational harm.
Artificial Intelligence and the COVID-19 Pandemic
By Brenda Leong and Dr. Sara Jordan Machine learning-based technologies are playing a substantial role in the response to the COVID-19 pandemic. Experts are using machine learning to study the […]
FPF Submits Written Statement to the U.S. House Committee on Financial Services Task Force on AI
This week, Future of Privacy Forum (FPF) Senior Counsel and Director of AI & Ethics Brenda Leong submitted a written statement on the use of artificial intelligence and machine learning-based […]
Takeaways from the Understanding Machine Learning Masterclass
Yesterday, the Future of Privacy Forum provided bespoke training on machine learning as a side event at the Computers, Privacy and Data Protection Conference (CPDP2020) in Brussels. The Understanding Machine Learning masterclass is a training aimed […]
FPF Director of AI & Ethics Testifies Before Congress on Facial Recognition
FPF’s Brenda Leong calls on policymakers to balance privacy and ethical risks, and to establish an “opt-in” consent standard to protect consumer privacy.