5 Highlights from FPF’s “AI Out Loud” Expert Panel
On Wednesday, April 14th, FPF hosted an expert panel discussion on “AI Out Loud: Representation in Data for Voice-Activated Devices, Assistants.” FPF’s Senior Counsel and Director of AI and Ethics, Brenda Leong, moderated the panel, which featured Anne Toth, Director of Alexa Trust, Amazon; Irina Raicu, Internet Ethics Program Director, Markkula Center for Applied Ethics, Santa Clara University; and Susan Gonzales, CEO, AIandYou.
The panel discussed voice-activated systems used at home, on mobile devices, and in cars and other commercial settings, considering how design choices, data collection, and ethics evaluations affect bias, fairness, and accessibility. This technology offers many benefits and quality-of-life improvements: accessibility for young, aging, or disabled populations; convenience; and interactivity across devices and services. But it also carries specific risks, including privacy concerns, and raises challenges around responsible data management, legal compliance, and equity and fairness.
Here are 5 key highlights from “AI Out Loud”:
- Irina Raicu pointed out the need to improve design and development processes to ensure inclusiveness, equity, accessibility, and safety for users of these systems. She recommended including all stakeholders in those processes so they can share how these technologies directly affect them. She also urged caution on new applications of these systems, such as emotion detection or medical diagnosis, until the supporting research is strong enough to justify such uses.
- Susan Gonzales pointed out that the technology behind these systems still faces significant accuracy challenges. A Stanford study found error rates nearly twice as high for Black speakers as for white speakers. In general, word error rates, the most common metric for evaluating these systems (a brief sketch of how the metric is computed follows this list), show lower accuracy for speakers with strong accents or heavy dialects, for those speaking a second language, and, in many cases, across age and gender.
- The potential harms caused by inaccuracies vary with context and use case. A poor song recommendation or a misheard recipe ingredient is relatively low impact, but mistakes in answering questions about medication, or in voice-based access to personal accounts and services, can carry far greater repercussions. Those most dependent on these systems may also be those most at risk of poor results. Ethical standards demand that reliability be sufficiently high for all users.
- Anne Toth pointed to the significant advances in accuracy and representation in recent years, as more people engage with these devices in a broader variety of contexts. She confirmed Amazon’s commitment to continuous improvement based on the growing volume and diversity of voice data available, while also prioritizing personal privacy and users’ access to and control over their data.
- To ensure fairness, inclusiveness, and accessibility in designing these technologies, designers and developers must address diversity at all stages from inception to launch. Companies should collaborate with advocacy groups, civil society, and academia to seek outcomes that provide equitable services to all potential users.
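As a reference for the metric Susan Gonzales cited: word error rate is conventionally computed as the word-level edit distance (substitutions, deletions, and insertions) between a reference transcript and the system’s output, divided by the number of words in the reference. The Python sketch below is purely illustrative and is not drawn from any of the systems discussed on the panel:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length,
    computed with a word-level Levenshtein distance via dynamic programming."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = minimum edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j - 1],  # substitution
                                   dp[i - 1][j],      # deletion
                                   dp[i][j - 1])      # insertion
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substituted word in a four-word reference -> WER of 0.25
print(word_error_rate("what's the weather today", "what's the whether today"))
```

In practice, error rates are averaged over large evaluation sets, and the disparities the panel discussed emerge when those averages are broken down by demographic group.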
Watch the expert panel on FPF’s YouTube Channel and visit our events page for upcoming opportunities.