An AI-based computer system can gather data and use it to make decisions or solve problems, applying algorithms to perform tasks that, if done by a human, would be said to require intelligence. AI and machine learning (ML) systems are already delivering benefits such as better health care, safer transportation, and greater efficiencies across the globe. But the increased amounts of data and computing power that enable sophisticated AI and ML models raise questions about privacy impacts, ethical consequences, fairness, and real-world harms if the systems are not designed and managed responsibly. FPF works with commercial, academic, and civil society supporters and partners to develop best practices for managing risk in AI and ML, and to assess whether historical data protection practices such as fairness, accountability, and transparency are sufficient to answer the ethical questions these systems raise. FPF’s work on AI and ML is led by Brenda Leong.
FPF has just completed its newest infographic educational tool, The Spectrum of Artificial Intelligence. AI is the computerized ability to perform tasks commonly associated with human intelligence, including reasoning, discovering patterns […]
Authors: Hunter Dorwart, Stacey Gray, Brenda Leong, Jake van der Laan, Matthias Artzt, and Rob van Eijk. On 29 October 2020, Vrije Universiteit Brussel (VUB) and Future of Privacy Forum […]
Last week, FPF submitted feedback and comments to the United Nations Children’s Fund (UNICEF) on the Draft Policy Guidance on Artificial Intelligence (AI) for Children, which seeks “to promote children’s […]
On June 8, FPF hosted a webinar, Privacy Preserving Machine Learning: New Research on Data and Model Privacy. Co-hosted by the FPF Artificial Intelligence Working Group and the Applied Privacy […]
Artificial intelligence and machine learning (AI/ML) generate significant value when used responsibly, and are the subject of growing investment for exactly these reasons. But AI/ML can also amplify organizations’ exposure to potential vulnerabilities, ranging from fairness and security issues to regulatory fines and reputational harm.
This week, Future of Privacy Forum (FPF) Senior Counsel and Director of AI & Ethics Brenda Leong submitted a written statement on the use of artificial intelligence and machine learning-based […]
Yesterday, the Future of Privacy Forum provided bespoke training on machine learning as a side event during the Computers, Privacy and Data Protection Conference (CPDP2020) in Brussels. The Understanding Machine Learning masterclass is a training aimed […]
FPF’s Brenda Leong calls on policymakers to balance privacy and ethical risks, and to establish an “opt-in” consent standard to protect consumer privacy.