An AI-based computer system can gather data and use that data to make decisions or solve problems – using algorithms to perform tasks that, if done by a human, would be said to require intelligence. AI and machine learning (ML) systems are already delivering benefits: better health care, safer transportation, and greater efficiencies across the globe. But the increased amounts of data and computing power that enable sophisticated AI and ML models raise questions about privacy impacts, ethical consequences, fairness, and real-world harms if the systems are not designed and managed responsibly. FPF works with commercial, academic, and civil society supporters and partners to develop best practices for managing risk in AI and ML, and to assess whether established data protection principles such as fairness, accountability, and transparency are sufficient to answer the ethical questions these systems raise. FPF’s work on AI and ML is led by Brenda Leong.
Organizations must lead with privacy and ethics when researching and implementing neurotechnology: FPF and IBM Live event and report release
A New FPF and IBM Report and Live Event Explore Questions About Transparency, Consent, Security, and Accuracy of Data. The Future of Privacy Forum (FPF) and the IBM Policy Lab released recommendations for promoting privacy and mitigating risks associated with neurotechnology, specifically brain-computer interfaces (BCIs). The new report provides developers and policymakers with actionable […]
The term “data sharing” is used in many different ways to describe a relationship in which one organization shares data with another organization for a new purpose. Some uses of the term relate to academic and scientific research, while others relate to transfers of data for commercial or government purposes. It is therefore imperative that we be more precise about which forms of sharing we are referencing, so that the interests of the parties are adequately considered and the various risks and benefits are appropriately contextualized and managed.
Lawyers are trained to respond to risks that threaten the market position or operating capital of their clients. However, when it comes to AI, it can be difficult for lawyers to provide the best guidance without some basic technical knowledge. This article shares some key insights from our shared experiences to help lawyers feel more at ease responding to AI questions when they arise.
BCIs are computer-based systems that directly record, process, analyze, or modulate human brain activity in the form of neurodata that is then translated into an output command from human to machine. Neurodata is data generated by the nervous system, composed of the electrical activities between neurons or proxies of this activity. When neurodata is linked, or reasonably linkable, to an individual, it is personal neurodata.
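The translation step described above – from recorded neural activity to an output command – can be sketched in simplified form. This is a hypothetical illustration only: the amplitude thresholds, command names, and single-channel input are invented for clarity and do not reflect any real BCI device or standard.

```python
# Hypothetical sketch of the BCI pipeline described above: a window of raw
# neurodata (here, fake signal amplitude samples) is processed and translated
# into a machine command. All thresholds and command names are illustrative.

def classify_neurodata(samples: list[float]) -> str:
    """Translate a window of neural signal amplitudes into an output command."""
    mean_amplitude = sum(samples) / len(samples)
    if mean_amplitude > 0.7:       # sustained high activity
        return "MOVE_CURSOR_RIGHT"
    if mean_amplitude < -0.7:      # sustained inverse activity
        return "MOVE_CURSOR_LEFT"
    return "NO_OP"                 # ambiguous signal: do nothing

print(classify_neurodata([0.9, 0.8, 0.75]))  # high activity -> MOVE_CURSOR_RIGHT
```

Note that even this toy pipeline handles personal neurodata the moment its input is linked, or reasonably linkable, to an individual – which is why the report’s transparency and consent recommendations attach to collection, not only to downstream use.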
This paper outlines the spectrum of AI technology, from rules-based and symbolic AI to advanced, developing forms of neural networks, and seeks to put them in the context of other sciences and disciplines, as well as emphasize the importance of security, user interface, and other design factors.
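The two ends of that spectrum can be contrasted with a small example. This is an illustrative sketch, not drawn from the paper: the spam-detection task, keyword weighting scheme, and all thresholds are invented to show the difference between hand-authored rules and behavior derived from data.

```python
# Illustrative contrast between the ends of the AI spectrum: symbolic,
# rules-based AI (explicit human-authored logic) versus a simple learned
# model (behavior derived from example data). Task and weights are invented.

def rule_based_spam(text: str) -> bool:
    # Symbolic AI: the "knowledge" is an explicit, human-written rule.
    return "free money" in text.lower()

def train_keyword_weights(examples: list[tuple[str, bool]]) -> dict[str, float]:
    # Machine learning (crudely simplified): weights come from data, not code.
    weights: dict[str, float] = {}
    for text, is_spam in examples:
        for word in text.lower().split():
            weights[word] = weights.get(word, 0.0) + (1.0 if is_spam else -1.0)
    return weights

def learned_spam(text: str, weights: dict[str, float]) -> bool:
    score = sum(weights.get(w, 0.0) for w in text.lower().split())
    return score > 0

examples = [("free money now", True), ("meeting at noon", False)]
weights = train_keyword_weights(examples)
print(rule_based_spam("Free money!"))          # True: matches the rule
print(learned_spam("free money today", weights))  # True: positive learned score
```

The design difference matters for policy: rules-based systems can be audited by reading their logic, while learned systems must be evaluated through their training data and outputs – one reason the paper stresses security and design factors alongside the technology itself.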
Digital identity systems vary in complexity. At its most basic, a digital ID would simply recreate a physical ID in a digital format, whereas a fully integrated digital identity system would provide a platform for a complete wallet and verification process, usable both online and in the physical world.
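The verification step that separates an integrated system from a simple digital copy can be sketched as follows. This is a minimal illustration under stated assumptions: the HMAC-based signing scheme, the issuer key, and the credential field names are invented for the example and do not represent any real digital identity standard.

```python
# Minimal sketch of the distinction above: a basic digital ID is just the
# physical card's fields in digital form; an integrated system adds issuance
# and verification. The signing scheme and field names are illustrative only.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical issuer signing key

def issue_credential(fields: dict) -> dict:
    """Issuer signs the ID fields so verifiers can detect tampering."""
    payload = json.dumps(fields, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"fields": fields, "signature": signature}

def verify_credential(credential: dict) -> bool:
    """A verifier recomputes the signature over the presented fields."""
    payload = json.dumps(credential["fields"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_credential({"name": "Alice Example", "dob": "1990-01-01"})
print(verify_credential(cred))  # True for an untampered credential
```

A plain scan of a physical card offers none of this: anyone can alter the image. Verification against an issuer is what makes a digital identity usable both online and in the physical world.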
In legislatures across the United States, state lawmakers are introducing proposals to govern the uses of automated decision-making systems (ADS) in record numbers. In contrast to comprehensive privacy bills that would regulate collection and use of personal information, automated decision-making system (ADS) bills in 2021 specifically seek to address increasing concerns about racial bias or […]
On Wed., April 14th, FPF hosted an expert panel discussion on “AI Out Loud: Representation in Data for Voice-Activated Devices, Assistants.” FPF’s Senior Counsel and Director of AI and Ethics, Brenda Leong, moderated the panel featuring Anne Toth, the Director of Alexa Trust, Amazon; Irina Raicu, Internet Ethics Program Director, Markkula Center for Applied Ethics, […]
Last week, on April 8, 2021, FPF’s Dr. Sara Jordan testified before the California Assembly Committee on Privacy and Consumer Protection on AB-13 (Public contracts: automated decision systems). The legislation passed out of committee (9 Ayes, 0 Noes) and was re-referred to the Committee on Appropriations. The bill would regulate state procurement, use, and development […]
UPDATED August 3, 2021. FPF released the white paper, The Spectrum of AI: Companion to the FPF AI Infographic, to provide additional detail and analysis supporting the use of the Infographic as an educational resource for policymakers and regulators. FPF has just completed its newest infographic educational tool, The Spectrum of Artificial Intelligence. AI is the […]