Understanding AI and its underlying algorithmic processes presents new challenges for privacy officers and others responsible for data governance in companies ranging from retailers to cloud service providers. In the absence of targeted legal or regulatory obligations, AI poses new ethical and practical challenges for companies that strive to maximize consumer benefits while preventing potential harms.
Posts by Brenda Leong
algoaware has released the first public version of the State of the Art Report, open for peer review. The report includes a comprehensive explanation of the key concepts of algorithmic decision-making, a summary of the academic debate and its most pressing issues, and an overview of the most recent and relevant initiatives and policy actions by civil society and by national and international governing bodies.
We look forward to working with Microsoft, others in industry, and policymakers to “create policies, processes, and tools” to make responsible use of facial recognition technology a reality.
Today, FPF announces the release of The Privacy Expert’s Guide to AI and Machine Learning. This guide explains the technological basics of AI and ML systems at a level of understanding useful for non-programmers, and addresses certain privacy challenges associated with the implementation of new and existing ML-based products and services.
These resources will help businesses and policymakers better understand and evaluate the growing use of face-based biometric technology systems when used for consumer applications. Facial recognition technology can help users organize and label photos, improve online services for visually impaired users, and help stores and stadiums better serve customers. At the same time, the technology often involves the collection and use of sensitive biometric data, requiring careful assessment of the data protection issues raised. Understanding the technology and building trust are necessary to maximize the benefits and minimize the risks.
FPF has convened a leading group of its members to consider priority areas for technologies and companies to address ML privacy and ethics concerns. Our AI and Machine Learning Working Group, composed of FPF member companies with an interest in AI and machine learning privacy and data management challenges, meets monthly to discuss relevant developments, hear from experts on topics such as AI in the EU and under the GDPR, and examine the occurrence of and defenses against algorithmic bias, among other timely issues.
Beyond Explainability aims to provide a template for effectively managing this risk in practice, giving lawyers, compliance personnel, data scientists, and engineers a framework to safely create, deploy, and maintain ML systems, and to enable effective communication among these distinct organizational perspectives.
Today, the Partnership on AI announced a new group of key stakeholders who will work with the Partnership’s Board of Directors to define and advance a shared vision of artificial intelligence that benefits people and society. The Future of Privacy Forum is proud to join this organization and help drive this important work forward.
The Future of Privacy Forum and the Brussels Privacy Hub of the Vrije Universiteit Brussel (VUB) are partnering with IEEE Security & Privacy in a call for papers focused on AI Ethics: The Privacy Challenge. Selected papers will be featured at The Brussels Privacy Symposium, an academic program jointly presented by the Brussels Privacy Hub of the VUB and FPF’s National Science Foundation supported Research Coordination Network.
Yesterday, Congress introduced the Email Privacy Act (H.R. 387), which would update protections in the Electronic Communications Privacy Act (ECPA) to take account of citizens’ evolving use of technology and better align the law with consumers’ reasonable expectations of privacy in the contents of their email communications.