Academics, advocates, and industry officials have long considered the ethics, privacy, and governance issues surrounding increased data collection and use. But the growing volumes of data and computing power that enable today's sophisticated AI and ML models, both those already deployed and those under development, raise even more questions about responsible design and management.
Future of Privacy Forum AI and ML Projects
At FPF, we are facilitating awareness, collaboration, and the sharing of best practices and guidance among our members working in this area, from both technical and policy-focused perspectives. This work began with the 2017 Brussels Privacy Symposium on the theme of “AI and Privacy” and its associated Call for Papers, many of which have been, and will continue to be, published over the coming months in IEEE’s Journal of Security and Privacy.
In addition, we were invited to join the newly established Partnership on AI, which counts among its goals to “develop and share best-practice methods and approaches in the research, development, testing, and fielding of AI technologies.”
Recognizing the need for greater awareness of how algorithms work as the underlying building blocks of these technologies, we explored the harms that can result from automated systems in our December 2017 publication, Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making.
In June 2018, FPF released “Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models,” a paper written jointly with Immuta that offers a strategic guide for governing the legal, privacy, and ethical risks associated with this technology.
Are fairness, accountability, and transparency (“FAT”) principles, along with other established data protection practices, sufficient to evolve the governance framework for AI and ML as these technologies take on new roles in our everyday lives?