Academics, advocates, and industry officials have long considered the ethics, privacy, and governance issues surrounding increased data collection and use. However, the growing volumes of data and computing power that enable the sophisticated AI and ML models now deployed and under development raise even more questions about responsible design and management.
Future of Privacy Forum AI and ML Projects
In October 2018, we released the Privacy Expert’s Guide to AI and Machine Learning, which explains the technological basics of AI and ML systems at a level useful for non-programmers and addresses certain privacy challenges associated with the implementation of new and existing ML-based products and services. The Guide covers:
- AI: General and Narrow
- Machine Learning: Supervised, Unsupervised, Reinforcement
- Regression, Classification, Decision Trees, Clustering
- Neural Networks
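For readers who want a concrete anchor for some of these terms, the minimal sketch below (our illustration, not an excerpt from the Guide) shows supervised learning in practice: a decision-tree classifier trained on labeled examples. It assumes Python with scikit-learn installed; the dataset and parameters are chosen only for brevity.

```python
# Illustrative sketch of supervised classification with a decision tree,
# two of the concepts listed above. Not from the FPF Guide.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Supervised learning: the model is trained on features (X) paired
# with known labels (y).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A decision tree learns a sequence of feature-threshold splits;
# limiting depth keeps the learned rules simple and inspectable.
model = DecisionTreeClassifier(max_depth=3)
model.fit(X_train, y_train)

# Evaluate on data the model has never seen.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```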
In September 2018, FPF published the infographic Understanding Facial Detection, Characterization, and Recognition Technologies, along with Privacy Principles for Facial Recognition Technology in Consumer Applications.
These resources will help businesses and policymakers better understand and evaluate the growing use of face-based biometric systems in consumer applications. Facial recognition technology can help users organize and label photos, improve online services for visually impaired users, and help stores and stadiums better serve customers. At the same time, the technology often involves the collection and use of sensitive biometric data, requiring careful assessment of the data protection issues it raises. Understanding the technology and building trust are necessary to maximize the benefits and minimize the risks.
At FPF, we have been facilitating awareness, collaboration, and the sharing of best practices and guidance among our members in this area, from both technical and policy-focused perspectives. This work began with the 2017 Brussels Privacy Symposium on the theme of “AI and Privacy” and its associated Call for Papers, many of which have been or will be published over the coming months in IEEE Security & Privacy.
In addition, we were invited to join the newly established Partnership on AI, which has as one of its goals to “develop and share best-practice methods and approaches in the research, development, testing, and fielding of AI technologies.”
Recognizing the need for greater awareness of how algorithms, the underlying building blocks of these technologies, work, we explored the nature of harms that can result from automated systems in our December 2017 publication, Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making.
In June 2018, FPF released a paper written jointly with Immuta, “Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models,” which provides a strategic guide to governing the legal, privacy, and ethical risks associated with machine learning models.
We also provided an analysis of the European Commission’s AI Strategy. The European Commission published a Communication on “Artificial Intelligence for Europe” on April 24, 2018. It highlights the transformative nature of AI technology for the world and calls for the EU to lead the way in developing AI grounded in a fundamental rights framework. “AI for good and for all” is the motto the Commission proposes. The Communication can be summed up as announcing concrete funding for research projects, clear social goals, and more thinking about everything else.
Are fairness, accountability, and transparency (“FAT”) and other long-standing data protection practices sufficient to adapt existing frameworks to the new contexts AI and ML occupy in our everyday lives? The continuously updated resources, links, and articles on the AI and Machine Learning page highlight our ongoing efforts to address these questions.