FPF Launches Effort to Advance Privacy-Enhancing Technologies in Support of AI Executive Order, Convenes Experts, and Meets With White House
FPF’s Research Coordination Network will support developing and deploying Privacy-Enhancing Technologies (PETs) for socially beneficial data sharing and analytics. JULY 9, 2024 — Today, the Future of Privacy Forum (FPF) is launching the Privacy-Enhancing Technologies (PETs) Research Coordination Network (RCN) with a virtual convening of diverse experts alongside a high-level, in-person workshop with key stakeholders […]
AI Forward: FPF’s Annual DC Privacy Forum Explores Intersection of Privacy and AI
The Future of Privacy Forum (FPF) hosted its inaugural DC Privacy Forum: AI Forward on Wednesday, June 5th. Industry experts, policymakers, civil society, and academics explored the intersection of data, privacy, and AI. At the InterContinental in Washington, DC’s Southwest Waterfront, participants joined in person for a full-day program consisting of keynote panels, AI talks, […]
FPF Awarded NSF and DOE Grants to Advance White House Executive Order on Artificial Intelligence
The Future of Privacy Forum (FPF) has been awarded grants by the National Science Foundation (NSF) and the Department of Energy (DOE) to support FPF’s establishment of a Research Coordination Network (RCN) for Privacy-Preserving Data and Analytics. FPF’s work will support the development and deployment of Privacy-Enhancing Technologies (PETs) for socially beneficial data sharing […]
Five Things Lawyers Need to Know About AI
Lawyers are trained to respond to risks that threaten the market position or operating capital of their clients. When it comes to AI, however, it can be difficult for lawyers to provide the best guidance without some basic technical knowledge. This article draws key insights from our shared experiences to help lawyers feel more at ease responding to AI questions when they arise.
Brain-Computer Interfaces: Privacy and Ethical Considerations for the Connected Mind
BCIs are computer-based systems that directly record, process, analyze, or modulate human brain activity in the form of neurodata that is then translated into an output command from human to machine. Neurodata is data generated by the nervous system, composed of the electrical activities between neurons or proxies of this activity. When neurodata is linked, or reasonably linkable, to an individual, it is personal neurodata.
Automated Decision-Making Systems: Considerations for State Policymakers
In legislatures across the United States, state lawmakers are introducing proposals to govern the uses of automated decision-making systems (ADS) in record numbers. In contrast to comprehensive privacy bills that would regulate the collection and use of personal information, ADS bills in 2021 specifically seek to address increasing concerns about racial bias or […]
Digital Deep Fakes
The media has recently labeled manipulated videos of people “deepfakes,” a portmanteau of “deep learning” and “fake,” on the assumption that AI-based software is behind them all. But the technology behind video manipulation is not all based on deep learning (or any form of AI), and what are lumped together as deepfakes actually differ depending on the particular technology used. So while the example videos above were all doctored in some way, they were not all altered using the same technological tools, and the risks they pose – particularly as to being identifiable as fake – may vary.
FPF Letter to NY State Legislature
On Friday, June 14, FPF submitted a letter to the New York State Assembly and Senate supporting a well-crafted moratorium on facial recognition systems for security uses in public schools.
Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making
Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks, and the fallibility of human decision-making.