AI Out Loud: Representation in Data for Voice-Activated Devices, Assistants, and Systems

FREE April 14 @ 1:00pm - 2:00pm (EDT)

Overview

Artificial intelligence, especially that based on machine learning, is being used in more products and services around us than ever before. These systems can deliver increased accuracy and expand opportunities, particularly in voice-activated systems and the digital assistants available on mobile devices and in in-home products. But some social and demographic groups are better represented than others in the design process, datasets, and user contexts. How do we make the AI we use every day better reflect who we all are and serve the needs of a diverse range of communities equally well?

In this panel, we will look at voice-activated systems in the home, on mobile devices, and in cars and other commercial applications, and consider how design choices, data collection, and ethics evaluations affect bias, fairness, and accessibility. Join FPF as we seek ways to bridge the conversational and developmental gaps between policy professionals and technologists, and to educate consumers and communities about the advancing capabilities, and the potential risks, of voice-activated technologies spreading into more applications in our everyday lives.

You can view a recording of the webinar here.

Speakers

Brenda Leong

Senior Counsel and Director of Artificial Intelligence and Ethics, Future of Privacy Forum

Brenda Leong, CIPP/US, is Senior Counsel and Director of Artificial Intelligence and Ethics at the Future of Privacy Forum. She oversees FPF's privacy analysis of AI and machine learning technologies and manages the FPF portfolio on biometrics and digital identity, particularly facial recognition, along with the ethics challenges of these emerging systems. She works on industry standards and collaboration around privacy and responsible data management, partnering with stakeholders and advocates to reach practical solutions for consumer and commercial data uses. Prior to joining FPF, Brenda served in the U.S. Air Force, including policy and legislative affairs work at the Pentagon and the U.S. Department of State. She is a 2014 graduate of George Mason University School of Law.

Anne Toth

Director, Alexa Trust, Amazon

Anne Toth is a Director on the Alexa Trust team at Amazon, focusing on accessibility, privacy, and deepening customer trust in Alexa-enabled devices.

Prior to joining Amazon, Toth was the Head of Technology Policy & Partnerships at the World Economic Forum as well as a member of the founding leadership team at the Centre for the Fourth Industrial Revolution. Earlier in her career, Toth worked at Slack, Google and Yahoo! in various roles related to people, privacy and policy. While at Yahoo! for more than a decade, Toth served as Chief Trust Officer and championed the effort to prioritize work on accessibility for Yahoo!’s products and partnerships, leading to the staffing and creation of Yahoo!’s Accessibility Lab.

Toth is currently a Board Member for the Cloudera Foundation. She previously served as a Board Member for the Future of Privacy Forum, an Advisory Council Member for the Center for Democracy & Technology, and the Vice-Chair of the Board of Directors for Save the Bay, an organization dedicated to preserving and restoring San Francisco Bay.

Irina Raicu

Director, Internet Ethics Program, Markkula Center for Applied Ethics, Santa Clara University

Irina Raicu is the director of the Internet Ethics Program at the Center. She is a Certified Information Privacy Professional (U.S.) and was formerly an attorney in private practice. Her work addresses a wide variety of issues, ranging from online privacy to net neutrality, from data ethics to social media’s impact on friendship and family, from the digital divide to the ethics of encryption, and from the ethics of artificial intelligence to the right to be forgotten. She holds a J.D. degree from Santa Clara University’s School of Law, as well as a bachelor’s degree in English from U.C. Berkeley and a master’s degree in English and American Literature from San Jose State University.

Her writing has appeared in a variety of publications, including The Atlantic, USA Today, MarketWatch, Slate, the Huffington Post, the San Jose Mercury News, the San Francisco Chronicle, and Recode.

Raicu is a member of the Partnership on AI’s Working Group on Fair, Transparent, and Accountable AI. In collaboration with the staff of the High Tech Law Institute, Raicu manages the ongoing “IT, Ethics, and Law” lecture series, which has brought to campus speakers such as journalist Julia Angwin, ethicists Luciano Floridi and Patrick Lin, and then-FTC commissioner Julie Brill.

She tweets at @IEthics and is the primary contributor to the blog Internet Ethics: Views from Silicon Valley.

As a teenager, Raicu came to the U.S. with her family as a refugee; her background informs her interest in the Internet as a tool whose use has profound ethical implications worldwide.

Susan Gonzales

CEO, AIandYou

Susan Gonzales, CEO, brings over 20 years of experience in technology, community outreach, and policy to AIandYou. Susan spent most of her career in senior policy roles leading community outreach to diverse populations, creating partnerships, and educating influencers about technology. She is a former Director at Facebook, where she created and led Community Engagement for the company on the policy team. She also served in leadership roles at Comcast, tech start-ups, and consumer goods companies. Susan launched AIandYou when she realized a chasm existed between the AI ecosystem and communities of color, creating a global, bilingual platform of accessible, easy-to-understand information about AI. Susan lives in the San Francisco Bay Area. www.aiandyou.org

Additional Resources

Academic Papers or Reports:

“What We Can’t Measure, We Can’t Understand”: Challenges to Demographic Data Procurement in the Pursuit of Fairness, M. Andrus, E. Spitzer, J. Brown, A. Xiang

On the Legal Compatibility of Fairness Definitions, A. Xiang, I. D. Raji

Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI, Sandra Wachter, Brent Mittelstadt, Chris Russell

Bias in Word Embeddings, O. Papakyriakopoulos, J.C.M. Serrano, S. Hegelich, F. Marco

Considerations for a More Ethical Approach to Data in AI: On Data Representation and Infrastructure, A. Baird, B. Schuller

Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?, K. Holstein, et al.

A Benchmarking of IBM, Google, and Wit Automatic Speech Recognition Systems, I. Maglogiannis, et al.

Usability of Automatic Speech Recognition Systems for Individuals with Speech Disorders, M. Jefferson, 2019

Investigating the Accessibility of Voice Assistants with Impaired Users, F. Masina, et al.

Do You Understand the Words That Are Comin Outta My Mouth? Voice Assistant Comprehension of Medication Names, A. Palanica, et al.

BembaSpeech: A Speech Recognition Corpus for the Bemba Language, C. Sikasote and A. Anastasopoulos

Pre-training on High-Resource Speech Recognition Improves Low-Resource Speech-to-Text Translation, S. Bansal, et al.

News:

Researchers Find High Error Rates in Commercial Speech Recognition Systems, Kyle Wiggers, VentureBeat, October 2020

Does Word Error Rate Matter?, H. Chen, SmartAction, January 2021

Voice Recognition Still Has Significant Race and Gender Biases, J. Bajorek, May 2019

https://www.wired.com/story/india-smartphones-cheap-data-giving-women-voice/

Study Finds That Even the Best Speech Recognition Systems Exhibit Bias, K. Wiggers, April 2021

Speech Recognition Tech is Yet Another Example of Bias, C. Lopez, July 2020

If AI is going to be the world’s doctor, it needs better textbooks, D. Gershgorn, September 2018

Alexa, Do I Have COVID-19?, E. Anthes, September 2020

Third Party Test Shows Leap Forward in Voice ID Accuracy, Biometric Update, Mar 2021


Other:

Voice Technology – Statistics and Facts, H. Tankovska, August 2020

https://www.scu.edu/ethics/internet-ethics-blog/mirror-mirror-on-the-floor-whos-the-fairest-of-them-all-/

AI Can’t Detect Our Emotions, E. Selinger and L. Stark, 2021