A Blueprint for the Future: White House and States Issue Guidelines on AI and Generative AI
Since July 2023, eight U.S. states (California, Kansas, New Jersey, Oklahoma, Oregon, Pennsylvania, Virginia, and Wisconsin) and the White House have published executive orders (EOs) to support the responsible and ethical use of artificial intelligence (AI) systems, including generative AI. These directives signal a growing recognition of the rapid pace of AI development and the need to manage potential risks to individuals’ data and mitigate algorithmic discrimination against marginalized communities.
FPF has released a new comparison chart that summarizes and compares U.S. state and federal EOs and discusses how they fit into the broader context of AI and privacy.
In addition to the state governments, several cities (e.g., Boston, San Jose, and Seattle) have issued guidelines on generative AI use that seek to recognize the opportunities of AI while mitigating bias, privacy, and cybersecurity risks. In contrast, other jurisdictions, such as Maine, have issued a moratorium on state generative AI use while performing a holistic risk assessment.
Although each of the state and federal EOs on AI and generative AI has a different scope, most, at a minimum, charge agencies with creating a task force to study AI and offer recommendations.
Here are some overarching takeaways from our analysis of all of the EOs:
1. The White House and California Issued the Most Prescriptive EOs
Of the U.S. state and federal EOs analyzed, the White House EO requires the heaviest lift, mandating dozens of reports and next steps for federal agencies, including the creation of guidance and standards for AI auditing, generative AI authentication, and privacy-enhancing technologies (PETs).
Similarly, of the state EOs, California’s is the most prescriptive, including a number of specific mandates and reports tailored to different agencies, such as the creation of procurement guidelines, assessments of generative AI’s effect on infrastructure, and research on its impact on marginalized communities.
2. Most State EOs Focus on “Generative AI”
Several state governments (California, Kansas, New Jersey, Pennsylvania, and Wisconsin) focus only on generative AI – how the technology should be used by state agencies, the risks it carries, and how it may affect their state industries and workforces. Oklahoma, Oregon, and Virginia take a broader stance, covering both generative AI and other types of AI systems in their EOs. Kansas and Pennsylvania are the only two states to explicitly define generative AI.
The White House EO represents an amalgam of the state EOs, as it defines generative AI (similar to Kansas and Pennsylvania) and also broadly covers different types of AI systems (similar to Oklahoma and Virginia).
3. Varying Approaches to Agencies’ Roles
The White House EO charges certain agencies with authority to create binding guidelines and standards for government actors. In contrast, Kansas and Virginia, rather than creating new task forces or boards, charge state agencies with studying AI technology and providing general recommendations. New Jersey and Wisconsin, two states with less rigorous EOs, emphasize that their task forces serve solely advisory roles. The Oklahoma and White House EOs are the only ones that require each agency to appoint an individual on its team to become an AI and generative AI expert.
4. Impact on Industry
While these EOs primarily focus on government use of emerging AI systems, many contain requirements that may have consequential effects on industry.
- Procurement Requirements: Companies selling certain AI products and services to government entities will need to satisfy new baseline procurement standards.
- Enforcement: Agency-created standards and policies may inform government regulators’ perspectives on AI compliance with data privacy, security, civil rights, and consumer protection laws, particularly given the forthcoming standard-setting activity directed by the White House EO.
- Influence on Legislation: As mentioned in California’s EO and the White House EO’s accompanying fact sheet, key actors in state and federal executive agencies will work with policymakers to pursue legislative approaches to support the development of responsible AI by the private sector.
These EOs represent a watershed moment for AI system users, developers, and regulators alike. Over the next few years, increased government action in this area will lead to new requirements and opportunities with lasting implications for both the public and private sectors.