Future of Privacy Forum and Leading Companies Release Best Practices for AI in Employment Relationships
Expert Working Group Focused on AI in Employment Launches Best Practices that Promote Non-Discrimination, Human Oversight, Transparency, and Additional Protections.
Today, the Future of Privacy Forum (FPF), together with ADP, Indeed, LinkedIn, and Workday — leading hiring and employment software developers — released Best Practices for AI and Workplace Assessment Technologies. The Best Practices guide makes key recommendations for organizations as they develop, deploy, and increasingly rely on artificial intelligence (AI) tools in their hiring and employment decisions.
Organizations are incorporating AI tools into their hiring and employment practices at an unprecedented rate. When guided by a framework centered on responsible and ethical use, AI hiring tools can help match candidates with relevant opportunities and inform organizations’ decisions about who to recruit, hire, and promote. However, AI tools present risks that, if not addressed, can impact job candidates and hiring organizations and pose challenges for regulators and other stakeholders.
FPF and the AI working group recommend:
- Developers and deployers should have clearly defined responsibilities regarding AI hiring tools’ operation and oversight;
- Organizations should not covertly use AI tools to hire, terminate, or take other actions with consequential impacts;
- AI hiring tools should be tested to ensure they are fit for their intended purposes and assessed for bias;
- AI tools should not be used in a manner that harmfully discriminates, and organizations should implement anti-discrimination protections that go beyond laws and regulations as needed;
- Organizations should not use facial characterization and emotion inference technologies in the hiring process absent public disclosures supporting the tools’ efficacy, fairness, and fitness for purpose;
- Organizations should implement AI governance frameworks informed by the NIST AI Risk Management Framework;
- Organizations should not claim that AI hiring tools are “bias-free”; and
- AI hiring tools should be designed and operated with informed human oversight and engagement.
“When properly designed and utilized, AI must process vast amounts of personal data fairly and ethically, keeping in mind the legal obligations organizations have to those with disabilities and people from underrepresented, marginalized and multi-marginalized communities. This is why developers and deployers of AI in the employment context should use these Best Practices to show their commitment to ethical, responsible, and human-centered AI tools in compliance with civil rights, employment and privacy laws.”
Amber Ezzell, FPF Policy Counsel
“The intersection between hiring, employment, and AI tools presents complex opportunities and challenges for organizations, particularly concerning issues of equity and fairness in the workplace. Our Best Practices will guide U.S. companies as they create and use AI technologies that impact workers, ensuring that they address key issues regarding non-discrimination, responsible AI governance, transparency, data security and privacy, human oversight, and alternative review procedures.”
John Verdi, Senior Vice President of Policy at FPF
Leading policy frameworks helped inform the Best Practices guide, including NIST’s AI Risk Management Framework (AI RMF), the Civil Rights Principles for Hiring Assessment Technologies, and the Data and Trust Alliance’s initiative Algorithmic Safety: Mitigating Bias in Workforce Decisions, among others.
“AI tools can help candidates discover and describe their skills and find new opportunities that match their experience. The Best Practices assist organizations in instituting guardrails around using AI systems responsibly and ethically.”
Jack Berkowitz, ADP’s Chief Data Officer
“The use of automated technology in the workplace can result in better matches for both job seekers and employers, increased access to diverse candidates and a broader pool of applicants, and greater access to hiring tools for small to mid-sized businesses. These Best Practices provide concrete guidance for using the tools responsibly.”
Trey Causey, Indeed’s Head of Responsible AI
“We know that a responsible and principled approach to AI can lead to more transparency and better matching of job seeker skills to employer needs. The Best Practices are a real step forward and reflect the accountability needed to ensure these technologies continue to power opportunity for all members of the global workforce.”
Sue Duke, LinkedIn’s VP of Global Public Policy
“Since 2019, Workday has partnered with government officials and thought leaders like the Future of Privacy Forum to advance smart safeguards that cultivate trust and drive responsible AI. We’re proud to have co-developed these Best Practices, which offer policymakers a roadmap to responsible AI in the workplace and call on other organizations to join us in endorsing them.”
Chandler Morse, Workday’s Vice President of Public Policy
While existing anti-discrimination laws can apply to the use of AI tools in hiring, the AI governance field is still maturing. FPF’s Best Practices guide engages the broader AI governance community in the ethical development and use of AI for employment. The guide may also be updated to reflect evolving AI regulatory requirements, frameworks, and technical standards.