About AI and Machine Learning

The development and implementation of AI and machine learning-based systems are evolving and spreading into all areas of life at an increasing pace. This practical reality has generated worldwide discussions of the broad ethical, legal, and policy questions raised by both the design and the implementation of these systems.


Many academics, advocates, and industry officials have long considered the ethics, privacy, and governance issues surrounding increased data collection and use. But the growing volumes of data and computing power that enable the sophisticated AI and ML models now in place and under development raise even more questions about responsible design and management.

Future of Privacy Forum AI and ML Projects

At FPF, we facilitate awareness, collaboration, and the sharing of best practices and guidance among our members working in this area, from both technical and policy-focused perspectives. This work began with the 2017 Brussels Privacy Symposium on the theme of “AI and Privacy” and its associated Call for Papers, many of which have been, or will be, published over the coming months in IEEE’s Journal of Security and Privacy.

In addition, we were invited to join the newly established Partnership on AI, which counts among its goals to “develop and share best-practice methods and approaches in the research, development, testing, and fielding of AI technologies.”

To address the need for greater awareness of how algorithms work as the underlying building blocks of these technologies, we explored the nature of harms that can result from automated systems in our December 2017 publication, Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making.

In June 2018, FPF released a paper written jointly with Immuta, “Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models,” which provides a strategic guide for governing the legal, privacy, and ethical risks associated with this technology.

Are fairness, accountability, and transparency (“FAT”) principles and other established data protection practices sufficient to evolve a useful framework for AI and ML in the new contexts of our everyday lives?


What is AI? How Does ML Work?

AI and ML are broad categories with somewhat imprecise, unsettled, and evolving descriptions.

Achieving a basic technical understanding of how AI and ML work is critical for legal and policy officers seeking to incorporate the unique demands of these systems into their governance models. The resources collected here range from news articles to online technical training, offering the desired level of overview.

Ethics, Governance and Compliance

The legal and regulatory landscape for AI and ML systems is changing rapidly. The resources listed here reflect the leading thinking of academics, regulatory agencies, and ongoing projects and studies, providing guidance to commercial and public entities on implementing AI in their products and services.

Leading Academic Publications

Leading academics around the world are focused on the ethical, theoretical, and practical challenges that AI and ML pose – whether in commercial, social, or legal settings – and considering everything from biased algorithms to robot rights. Here is a collection of many of the leading papers with summaries of their themes.

What's Happening: AI and Machine Learning

Top Story

June 26, 2018 | Melanie E. Bates

Immuta and the Future of Privacy Forum Release First-Ever Risk Management Framework for AI and Machine Learning  

College Park, MD – June 26, 2018 – Immuta and the Future of Privacy Forum (FPF) today announced the first-ever framework for practitioners to manage risk in artificial intelligence (AI) and machine learning (ML) models. Their joint whitepaper, Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models, provides business executives, data scientists, and compliance professionals with a strategic guide for governing the legal, privacy, and ethical risks associated with this technology.

Top Story

September 20, 2017 | John Verdi

Artificial Intelligence, Machine Learning, and Ethical Applications

On September 25, 2017, the Future of Privacy Forum and the Information Accountability Foundation will co-host an official side event at the International Conference of Data Protection Commissioners. The event follows IAF’s publication of Artificial Intelligence, Ethics and Enhanced Data Stewardship, and FPF’s curation of leading research highlighting the privacy challenges posed by artificial intelligence.
