An AI-based computer system can gather data and use that data to make decisions or solve problems – using algorithms to perform tasks that, if done by a human, would be said to require intelligence. AI and machine learning (ML) systems are already delivering benefits in better health care, safer transportation, and greater efficiencies across the globe. But the increased amounts of data and computing power that enable sophisticated AI and ML models raise questions about privacy impacts, ethical consequences, fairness, and real-world harms if the systems are not designed and managed responsibly. FPF works with commercial, academic, and civil society supporters and partners to develop best practices for managing risk in AI and ML, and to assess whether established data protection principles such as fairness, accountability, and transparency are sufficient to answer the ethical questions these systems raise. FPF’s work on AI and ML is led by Brenda Leong.
In legislatures across the United States, state lawmakers are introducing proposals to govern the uses of automated decision-making systems (ADS) in record numbers. In contrast to comprehensive privacy bills that would regulate the collection and use of personal information, ADS bills in 2021 specifically seek to address increasing concerns about racial bias or […]
On Wed., April 14th, FPF hosted an expert panel discussion on “AI Out Loud: Representation in Data for Voice-Activated Devices, Assistants.” FPF’s Senior Counsel and Director of AI and Ethics, Brenda Leong, moderated the panel featuring Anne Toth, the Director of Alexa Trust, Amazon; Irina Raicu, Internet Ethics Program Director, Markkula Center for Applied Ethics, […]
Last week, on April 8, 2021, FPF’s Dr. Sara Jordan testified before the California Assembly Committee on Privacy and Consumer Protection on AB-13 (Public contracts: automated decision systems). The legislation passed out of committee (9 Ayes, 0 Noes) and was re-referred to the Committee on Appropriations. The bill would regulate state procurement, use, and development […]
FPF has just completed its newest educational infographic, The Spectrum of Artificial Intelligence. AI is the computerized ability to perform tasks commonly associated with human intelligence, including reasoning, discovering patterns and meaning, generalizing knowledge across spheres of application, and learning from experience. The growth of AI-based systems in recent years has garnered much attention, particularly […]
Authors: Hunter Dorwart, Stacey Gray, Brenda Leong, Jake van der Laan, Matthias Artzt, and Rob van Eijk On 29 October 2020, Vrije Universiteit Brussel (VUB) and Future of Privacy Forum (FPF) hosted the eighth Digital Data Flows Masterclass. The masterclass on blockchain technology completes the VUB-FPF Digital Data Flows Masterclass series. The most recent masterclass explored […]
Last week, FPF submitted feedback and comments to the United Nations Children’s Fund (UNICEF) on the Draft Policy Guidance on Artificial Intelligence (AI) for Children, which seeks “to promote children’s rights in government and private sector AI policies and practices, and to raise awareness of how AI systems can uphold or undermine children’s rights.” The […]
By Jeremy Greenberg, [email protected] and Katelyn Ringrose [email protected] Key FPF-curated background resources – policy & regulatory documents, academic papers, and technical analyses regarding brain-computer interfaces – are available here. Recently, Elon Musk livestreamed an update on Neuralink, his startup focused on creating brain-computer interfaces (BCIs). BCI is an umbrella term for devices that detect, amplify, and translate […]
On June 8, FPF hosted a webinar, Privacy Preserving Machine Learning: New Research on Data and Model Privacy. Co-hosted by the FPF Artificial Intelligence Working Group and the Applied Privacy Research Coordination Network, an NSF project run by FPF, the webinar explored how machine learning models as well as data fed into machine learning models […]
Artificial intelligence and machine learning (AI/ML) generate significant value when used responsibly – and are the subject of growing investment for exactly that reason. But AI/ML can also amplify organizations’ exposure to risk, ranging from fairness and security vulnerabilities to regulatory fines and reputational harm.
By Brenda Leong and Dr. Sara Jordan Machine learning-based technologies are playing a substantial role in the response to the COVID-19 pandemic. Experts are using machine learning to study the virus, test potential treatments, diagnose individuals, analyze the public health impacts, and more. Below, we describe some of the leading efforts and identify data protection […]