An AI-based computer system can gather data and use it to make decisions or solve problems, applying algorithms to perform tasks that, if done by a human, would be said to require intelligence. AI and machine learning (ML) systems are already delivering benefits such as better health care, safer transportation, and greater efficiencies across the globe. But the increased amounts of data and computing power that enable sophisticated AI and ML models raise questions about privacy impacts, ethical consequences, fairness, and real-world harms if these systems are not designed and managed responsibly. FPF works with commercial, academic, and civil society supporters and partners to develop best practices for managing risk in AI and ML and to assess whether established data protection principles such as fairness, accountability, and transparency are sufficient to address the ethical questions these systems raise.
Featured
A Blueprint for the Future: White House and States Issue Guidelines on AI and Generative AI
Since July 2023, eight U.S. states (California, Kansas, New Jersey, Oklahoma, Oregon, Pennsylvania, Virginia, and Wisconsin) and the White House have published executive orders (EOs) to support the responsible and ethical use of artificial intelligence (AI) systems, including generative AI. In response to the evolving AI landscape, these directives signal a growing recognition of the […]
FPF Statement on Biden-Harris AI Executive Order
The Biden-Harris AI plan is incredibly comprehensive, taking a whole-of-government approach with an impact that extends beyond government agencies. Although the executive order focuses on the government’s use of AI, its influence on the private sector will be profound due to the extensive requirements for government vendors, worker surveillance, education and housing priorities, the […]
India’s new Intermediary & Digital Media Rules: Expanding the Boundaries of Executive Power in Digital Regulation
The majority of these provisions were unanticipated, resulting in a raft of petitions filed in High Courts across the country challenging the validity of various aspects of the Rules, including on constitutional grounds.
Automated Decision-Making Systems: Considerations for State Policymakers
In legislatures across the United States, state lawmakers are introducing record numbers of proposals to govern the use of automated decision-making systems (ADS). In contrast to comprehensive privacy bills that would regulate the collection and use of personal information, ADS bills in 2021 specifically seek to address increasing concerns about racial bias or […]
FPF Testifies on Automated Decision System Legislation in California
Last week, on April 8, 2021, FPF’s Dr. Sara Jordan testified before the California Assembly Committee on Privacy and Consumer Protection on AB-13 (Public contracts: automated decision systems). The legislation passed out of committee (9 Ayes, 0 Noes) and was re-referred to the Committee on Appropriations. The bill would regulate state procurement, use, and development […]
Calls for Regulation on Facial Recognition Technology
We look forward to working with Microsoft, others in industry, and policymakers to “create policies, processes, and tools” to make responsible use of facial recognition technology a reality.