FPF Releases Generative AI Internal Policy Checklist To Guide Development of Policies to Promote Responsible Employee Use of Generative AI Tools
Today, the Future of Privacy Forum (FPF) releases the Generative AI for Organizational Use: Internal Policy Checklist. As employee use of generative AI tools proliferates, this checklist helps organizations revise their internal policies and procedures to ensure that employees use generative AI in a way that mitigates data, security, and privacy risks, respects intellectual property rights, and preserves consumer trust.
The Checklist draws from a series of consultations with practitioners and experts from over 30 cross-sector companies and organizations to understand current and anticipated employee use of generative AI tools, benefits and harms, AI governance, and measures taken to protect company data and infrastructure. The conversations focused on any generative AI guidelines, policies, and procedures that had been implemented to govern employees’ use of generative AI tools.
From those discussions, we learned that organizations have broadly varied use cases for generative AI and, therefore, significant variation in generative AI policies. Some organizations have banned generative AI tools outright without prior approval, others have placed restrictions on their use, and still others have yet to develop express policies and procedures on employee use of generative AI. The Checklist is intended to serve as a guidance document no matter what stage of the process an organization is in. It may be used as a starting point to help kick off the development of internal generative AI policies or as a final check to ensure an organization has provided comprehensive and robust guidelines for its teams.
Click here to view the Checklist.
“It is imperative that both organizations and their employees understand the benefits and risks of generative AI tools, and that organizations have appropriate safeguards in place to support responsible and ethical use,” said Amber Ezzell, AI policy counsel at FPF and author of the checklist. “Employee use of generative AI tools is inevitable and may bring new and unexpected benefits to employers as employees find ways to be more productive and creative in even the most mundane tasks. Developing thoughtful generative AI policies is essential to ensure you’re well prepared for the changing way of work.”
The Checklist provides guidance in four areas:
- Use in Compliance with Existing Laws and Policies for Data Protection & Security. Designated teams or individuals should revisit internal policies and procedures to ensure that they account for planned or permitted uses of generative AI. Employees must understand that relevant current or pending legal obligations apply to the use of new tools.
- Employee Training and Education. Identified personnel should inform employees of the implications and consequences of using generative AI tools in the workplace, including providing training and resources on responsible use, risk, ethics, and bias. Designated leads should provide employees with regular reminders of legal, regulatory, and ethical obligations.
- Employee Use Disclosure. Organizations should provide employees with clear guidance on when and whether to use organizational accounts for generative AI tools, as well as policies regarding permitted and prohibited uses of those tools in the workplace. Designated leads should communicate norms around documenting use and disclosing when generative AI tools are used.
- Outputs of Generative AI. Systems should be implemented to remind employees to verify outputs of generative AI, including for issues regarding accuracy, timeliness, bias, or possible infringement of intellectual property rights. Organizations should determine whether and to what extent compensation should be provided to those whose intellectual property is implicated by generative AI outputs. When generative AI is used for coding, appropriate personnel should check and validate outputs for security vulnerabilities.
For more information, please contact FPF Policy Counsel Amber Ezzell at [email protected] or [email protected].