AI Ethics: The Privacy Challenge
BRUSSELS PRIVACY SYMPOSIUM
AND CALL FOR PAPERS
The Future of Privacy Forum and the Brussels Privacy Hub of the Vrije Universiteit Brussel
are partnering with IEEE Security & Privacy in a call for papers on
AI Ethics: The Privacy Challenge
6 NOVEMBER 2017
Brussels Privacy Symposium • Vrije Universiteit Brussel • Pleinlaan 5, 1050 Brussel
Researchers are encouraged to submit interdisciplinary work in law and policy, computer science and engineering, social studies, and economics for publication in a special issue of IEEE Security & Privacy. Authors of selected submissions will be invited to present their work at a workshop hosted by the VUB on 6 November 2017 in Brussels, Belgium.
This year’s event follows the 2016 Brussels Privacy Symposium on Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization. The 2017 event will focus on privacy issues surrounding artificial intelligence. Enhancing efficiency, increasing safety, improving accuracy, and reducing negative externalities are just some of AI’s key benefits. However, AI also presents risks of opaque decision making, biased algorithms, security and safety vulnerabilities, and disruption of labor markets. In particular, AI and machine learning challenge traditional notions of privacy and data protection, including individual control, transparency, access, and data minimization. On content and social platforms, AI can lead to narrowcasting, discrimination, and filter bubbles.
A group of industry leaders recently established a partnership to study and formulate best practices on AI technologies. Last year, the White House issued a report titled Preparing for the Future of Artificial Intelligence and announced a National Artificial Intelligence Research and Development Strategic Plan, laying out a strategic vision for federally funded AI research and development. These efforts seek to reconcile the tremendous opportunities that machine learning, human–machine teaming, automation, and algorithmic decision making promise in enhanced safety, efficiency gains, and improvements in quality of life, with the legal and ethical issues that these new capabilities present for democratic institutions, human autonomy, and the very fabric of our society.
Papers and symposium discussions will address the following topics:
• Privacy values in design
• Algorithmic due process and accountability
• Fairness and equity in automated decision making
• Accountable machines
• Formalizing definitions of privacy, fairness, and equity
• Societal implications of autonomous experimentation
• Deploying machine learning and AI to enhance privacy
• Cybersafety and privacy