Artificial Intelligence, Machine Learning, and Ethical Applications
FPF and IAF to Host Event at 39th International Conference of Data Protection and Privacy Commissioners Discussing Key Technologies and Impacts for Privacy, Data Protection, and Responsible Information Practices
On September 25, 2017, the Future of Privacy Forum and the Information Accountability Foundation will co-host an official side event at the 39th International Conference of Data Protection and Privacy Commissioners in Hong Kong. The event follows IAF’s publication of Artificial Intelligence, Ethics and Enhanced Data Stewardship and an associated blog post, and FPF’s curation of leading research highlighting the privacy challenges posed by artificial intelligence. The presentations and discussion are an excellent opportunity to learn how AI works and why it matters for data protection frameworks, and to discuss the implications of algorithmic systems that interact with people, learn with little or no human intervention, and make decisions that matter to individuals.
Technologists have long used algorithms to manipulate data. Programmers can create software that analyzes information based on rules and logic, performing tasks that range from ranking web sites for a search engine to identifying which photos include images of the same individual. Typically, software performs this analysis based on criteria selected and prioritized by human engineers and data scientists. Recent advances in machine learning and artificial intelligence support the creation of algorithmic software that can, with limited or no human intervention, internally modify its processing and criteria based on data.
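To make that distinction concrete, the purely illustrative Python sketch below (not drawn from the event materials; the feature names, weights, and data are hypothetical) contrasts a scoring rule whose criteria are fixed by an engineer with a simple learner that adjusts its own criteria from labeled examples.

```python
# Illustrative sketch only: hand-coded criteria vs. criteria learned from data.
# All names, weights, and data here are hypothetical.
import random

# 1) Traditional approach: a human engineer selects and prioritizes the criteria.
def hand_coded_score(doc):
    # The weights 3.0 and 1.0 are chosen and fixed by the designer.
    return 3.0 * doc["keyword_matches"] + 1.0 * doc["inbound_links"]

# 2) Machine-learning approach: the program adjusts its own criteria (the weights)
#    from labeled examples, with no human picking the numbers.
def train_linear_scorer(examples, labels, epochs=200, lr=0.01):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(examples, labels):
            pred = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = y - pred  # perceptron-style update rule
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

# Hypothetical training data: two features per item plus a human-provided label.
random.seed(0)
examples = [(random.random(), random.random()) for _ in range(100)]
labels = [1 if x1 + 2 * x2 > 1.2 else 0 for x1, x2 in examples]

learned_weights, learned_bias = train_linear_scorer(examples, labels)
print("learned criteria:", learned_weights, learned_bias)
```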
Machine learning techniques can help hone algorithmic analysis and improve results. However, reduced human direction also means that AI systems can behave in unexpected ways. It also means that data protection safeguards should ensure that algorithmic decisions are lawful and ethical – a challenge when the specific criteria behind a decision may be opaque or impractical to analyze. Increasingly, technologists and policymakers are grappling with hard questions about how machine learning works, how AI technologies can ethically interact with individuals, and how human biases might be reduced or amplified by algorithms that employ logic but lack human intuition.
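A second illustrative sketch (assuming scikit-learn is installed; the data are synthetic and the task is hypothetical) shows why learned criteria can be hard to audit: after training, the model's "rules" are just matrices of numeric weights.

```python
# Illustrative sketch only: the criteria a trained model uses are internal
# numeric parameters, not human-readable rules. Assumes scikit-learn.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Hypothetical decision task with five input features and a yes/no outcome.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# A small neural network learns its own decision criteria from the data.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
clf.fit(X, y)

# The learned "criteria" are matrices of floating-point weights; inspecting
# them reveals little about why any individual case was decided one way or
# the other, which is the transparency challenge described above.
for layer, weights in enumerate(clf.coefs_):
    print(f"layer {layer}: weight matrix of shape {weights.shape}")
```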
On September 25, 2017, FPF and IAF will bring together technologists, policymakers, and privacy experts to discuss:
- How machine learning and artificial intelligence work;
- How these emerging technologies can support better outcomes for users of online services and patients with mental health conditions, and strengthen systems designed to combat bias;
- The challenges and implications raised by machine learning and artificial intelligence in the context of efforts to support legal, fair, and just outcomes for individuals; and
- How these emerging technologies can be ethically employed, particularly in circumstances when artificial intelligence is used to interact with people or make decisions that impact individuals.
Presenters include:
- Rich Caruana, Senior Researcher, Microsoft
- Stan Crosley, IAF Senior Strategist
- Andy Chun, Associate Professor, Department of Computer Science, and Former Chief Information Officer, City University of Hong Kong
- Yeung Zee Kin, Deputy Data Protection Commissioner, Singapore
- Sheila Colclasure, Chief Privacy Officer and Global Executive for Privacy and Public Policy, Acxiom
- Peter Cullen, IAF Executive Strategist
- John Verdi, FPF Vice President of Policy
The event will be held from 3:30pm to 5:00pm (15:30–17:00) in Kowloon Room II (M/F) of the conference venue in Hong Kong. Registration is not required. For more information, please contact John Verdi at [email protected] or Peter Cullen at [email protected]. Please also look out for other side events from our colleagues at IAPP, Nymity, and OneTrust.