FPF Training Program:
Building a Responsible AI Program

March 5 | 11:30AM-1:30PM ET


FPF’s Training Program provides an in-depth understanding of today’s most pressing privacy and data protection topics. FPF staff experts design the sessions for professionals who develop policies for their organizations, work with clients on complex privacy issues, or are interested in emerging privacy topics.

In the Future of Privacy Forum’s training on Building a Responsible AI Program, participants will learn about the most widely used frameworks for AI governance and the appropriate governance intervention points in the AI development lifecycle. From the basics of adding AI governance to existing data governance structures to scaling AI transparency & accountability assessments, this course will provide you with the knowledge to build and scale a program that meets existing and upcoming regulatory requirements.

Participants will gain an understanding of:

  • The current state of Responsible & Trustworthy AI frameworks and how to choose one for your situation
  • How to approach the key Responsible AI (RAI) components: fairness, transparency, privacy, safety & security, and accountability
  • The best assessment and intervention points in the machine learning and AI lifecycle
  • How to assess the effectiveness of a Responsible AI Program, from piloting to scaled implementation
  • Successful approaches to company & employee RAI training

After completing each training course, you will receive a digital badge from Credly that can be shared on your professional network as a mark of the skills you’ve acquired.

Cancellation Policy

Cancellations will be honored, minus our vendor’s processing fee, up to 3 days prior to the session. For cancellations after that date, we will honor the registration for the next scheduled date of this session or an alternate FPF training class.

FPF Faculty

Emily McReynolds

AI & Data Policy Expert

Emily has worked in data protection, machine learning, and AI across academia, civil society, and the tech industry. At Meta, on the AI Policy team in the Privacy & Data Policy group, she led stakeholder engagement on responsible AI and co-authored documentation projects including System Cards, Method Cards, and data collection/labeling for AI with Casual Conversations v2. Before Meta, at Microsoft, she led end-to-end data strategy, from developing a dataset risk framework to implementing the Responsible AI Standard at Microsoft Research. During her years as program director of the University of Washington’s Tech Policy Lab, an interdisciplinary collaboration across the Computer Science, Information, and Law schools, she co-led projects on augmented reality, driverless cars, and Toys That Listen. Emily went to graduate school planning to work on tech policy and previously taught people to use computers back when there were still floppy disks.