FPF Joins the NIST Artificial Intelligence Safety Consortium

The Future of Privacy Forum (FPF) is collaborating with the National Institute of Standards and Technology (NIST) in the U.S. Artificial Intelligence Safety Institute Consortium to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world.

This initiative will help prepare the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies.

“As an organization that has been at the forefront of responsible data practices for more than a decade, FPF is honored to be included in the list of influential and diverse stakeholders involved in the U.S. AI Safety Institute Consortium assembled by the National Institute of Standards and Technology. We look forward to contributing to the development of safe and trustworthy AI that is a force for societal good.” 

Jules Polonetsky, CEO, FPF

The consortium includes more than 200 member companies and organizations that are on the frontlines of creating and using the most advanced AI systems and hardware, the nation’s largest companies and most innovative startups, civil society and academic teams that are building the foundational understanding of how AI can and will transform our society, and representatives of professions with deep engagement in AI’s use today.

The consortium will be housed under the U.S. AI Safety Institute (USAISI) and will contribute to priority actions outlined in President Biden’s landmark Executive Order, including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content. Additional information on this Consortium can be found here.

The Garden State Joins the Comprehensive Privacy Grove

On January 16, 2024, Governor Murphy signed S332 into law, making New Jersey the thirteenth U.S. State to adopt a comprehensive privacy law to govern the collection, use, and transfer of personal data. S332 endured a long and circuitous route to enactment, having been introduced in January 2022 and amended six times before being passed by both chambers during the waning hours of New Jersey’s legislative session. The law will take effect on January 15, 2025. S332 bears a strong resemblance to other laws following the Washington Privacy Act (WPA) framework, particularly those passed in Delaware, Oregon, and Colorado. Nevertheless, S332 diverges from existing privacy frameworks in several significant ways. In this blog we highlight eight unique, ambiguous, or otherwise notable provisions that set S332 apart in the U.S. privacy landscape.

1. Private Right of Action Confusion

One ongoing controversy regarding S332 is whether the law could provide the basis for a private right of action. S332 specifies that the New Jersey Attorney General has “sole and exclusive authority” to enforce a violation of S332 and that nothing in the law shall be construed as providing the basis for a private right of action for violations of S332. A late amendment removed language stating that S332 should not be construed as providing the basis for a private right of action “under any other law.” Industry members raised concerns that the removal of this language opens up the possibility of private lawsuits by tying alleged violations of the law to causes of action under other laws. In his signing statement, Governor Murphy attempted to assuage industry fears by noting that “nothing in this bill expressly establishes such a private right of action” and “this bill does not create a private right of action under this law or under any other law.” Some industry members remain unconvinced, however, and continue to advocate for clarifying amendments.

2. Data Protection Assessments Prior to Processing

New Jersey joins the majority of state privacy laws in requiring that controllers conduct a data protection assessment (DPA) for any data processing activity that “presents a heightened risk of harm to a consumer.” New Jersey is notable, however, for explicitly requiring that the DPA occur before initiating any such high-risk processing activities. Before New Jersey, only the Colorado Privacy Act’s implementing regulations required that DPAs occur prior to initiating processing. Following the NetChoice v. Bonta litigation, which saw California’s Age-Appropriate Design Code Act preliminarily enjoined, this requirement could raise First Amendment concerns if it is interpreted as a prior restraint on speech.

3. Thresholds for Applicability

S332 is notable for not including a revenue threshold in its applicability provisions. The law applies to controllers that control or process the personal data of either (a) at least 100,000 New Jersey residents annually, or (b) at least 25,000 New Jersey residents annually and the controller derives revenue from the sale of personal data. Prong (b) differs from the majority of existing privacy frameworks, which tend to require that the controller derive at least a certain percentage of revenue from personal data sales (e.g., 25%) to be covered. This is another similarity between S332 and the Colorado Privacy Act, which sets the same thresholds. 
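To make the applicability logic concrete, here is a minimal illustrative sketch in Python; the function name and parameters are our own shorthand, and the thresholds are taken from the statutory prongs described above.

```python
def s332_applies(nj_consumers_processed: int, derives_revenue_from_data_sales: bool) -> bool:
    """Hypothetical helper illustrating S332's applicability thresholds.

    Prong (a): personal data of at least 100,000 New Jersey residents annually.
    Prong (b): personal data of at least 25,000 New Jersey residents annually,
    where the controller also derives revenue from the sale of personal data
    (no minimum revenue percentage is required).
    """
    prong_a = nj_consumers_processed >= 100_000
    prong_b = nj_consumers_processed >= 25_000 and derives_revenue_from_data_sales
    return prong_a or prong_b


# A controller processing data of 30,000 New Jersey residents is covered only
# if it derives any revenue from selling personal data.
print(s332_applies(30_000, True))   # True
print(s332_applies(30_000, False))  # False
```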

The carve outs in S332 are similar to those in the Delaware Personal Data Privacy Act. S332 includes data-level exemptions for protected health information subject to the Health Insurance Portability and Accountability Act (HIPAA) and “personal data collected, processed, sold, or disclosed by a consumer reporting agency” insofar as those processing activities are compliant with the Fair Credit Reporting Act (FCRA). With respect to the financial industry, S332 joins the majority of states by providing entity-level and data-level exemptions for financial institutions and their affiliates subject to Title V of the Gramm-Leach-Bliley Act (GLBA). Notably, however, S332 does not contain exemptions for nonprofits, higher education institutions, or personal data regulated by the Family Educational Rights and Privacy Act (FERPA).

4. Rulemaking

New Jersey becomes just the third state, after California and Colorado, to provide for rulemaking in its comprehensive privacy law. The Act charges the Director of the Division of Consumer Affairs in the Department of Law and Public Safety with promulgating rules and regulations necessary to effectuate the purposes of S332. This provision includes no details on the timeframe or substance of rulemaking, other than that the New Jersey Administrative Procedure Act applies. As the rulemaking process unfolds, this could be a valuable opportunity for stakeholders to seek clarity on some of S332’s ambiguous provisions.

5. Ambiguity on Authorized Agents and UOOMs

New Jersey joins Colorado, Connecticut, Delaware, Montana, Oregon, and Texas in allowing an individual to designate an authorized agent to exercise the individual’s right to opt out of processing for certain purposes. S332’s authorized agent provision has two ambiguities. First, subsection 8(a) specifies that an individual can designate an authorized agent to “act on the consumer’s behalf to opt out of the processing and sale of the consumer’s personal data.” (Emphasis added.) As written, this provision would create a broad opt-out right with respect to all processing, distinct from the explicitly established opt-out rights in the bill. It is more likely that this provision is intended to be limited to opting-out of processing for the purposes of targeted advertising, the sale of personal data, or profiling in furtherance of decisions that produce legal or similarly significant effects. The second ambiguity is the qualifier that an individual can use an authorized agent designated using technology to opt-out of profiling only “when such technology exists.” It is not clear who or what determines the availability of such technology.

S332 also joins California, Colorado, Connecticut, Montana, Oregon, and Delaware in requiring that controllers allow individuals to opt out of the processing of personal data for targeted advertising or the sale of personal data on a default basis through a universal opt-out mechanism (UOOM). Designed to reduce the burden on individuals attempting to exercise opt-out rights, UOOMs encompass a range of tools providing individuals with the ability to configure their devices to automatically exercise opt-out rights through a preference signal when interacting with a controller through a desktop or mobile application. S332’s statutory requirements for a UOOM, however, are ambiguous and inconsistent with those in existing privacy frameworks. Specifically, one requirement is that a UOOM cannot “make use of a default setting that opts-in a consumer to the processing or sale of personal data.” (Emphasis added.) This is clearly inconsistent with the purpose of a universal opt-out mechanism, which is to opt individuals out of such processing.

6. Adolescent Privacy

S332 continues and builds upon a trend of increased privacy protections for adolescents (while legislating around the existing, largely preemptive COPPA regime for individuals 12 and under). When a controller actually knows that an individual is 13-16 years old, or willfully disregards their age, the controller must obtain the teen’s consent before processing their personal data for the purposes of targeted advertising, sale, or profiling in furtherance of decisions that produce legal or similarly significant effects. Several states have iterated on adolescent privacy protection in recent years by requiring consent for these processing purposes. Delaware raised the bar when it required such consent for individuals aged 13 through 17, but it did not extend the opt-in consent requirement to profiling. Oregon was the first state to include profiling in the opt-in consent requirement, but its age range was slightly narrower at 13 through 15. New Jersey is unique and arguably goes the furthest by extending the opt-in consent requirement to cover individuals aged 13 through 16 and extending this requirement to profiling in furtherance of decisions that produce legal or similarly significant effects.
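As a rough illustration of how the age and purpose conditions combine under S332, here is a short hypothetical sketch in Python; the purpose labels and function name are ours, not statutory terms.

```python
TEEN_CONSENT_PURPOSES = {
    "targeted_advertising",
    "sale_of_personal_data",
    "profiling_with_legal_or_similarly_significant_effects",
}

def requires_teen_opt_in(age_known_or_willfully_disregarded: int, purpose: str) -> bool:
    """Hypothetical check of S332's adolescent consent requirement: consent is
    needed when the controller knows (or willfully disregards) that the consumer
    is 13-16 and the processing is for one of the enumerated purposes."""
    return 13 <= age_known_or_willfully_disregarded <= 16 and purpose in TEEN_CONSENT_PURPOSES

print(requires_teen_opt_in(15, "targeted_advertising"))   # True
print(requires_teen_opt_in(17, "sale_of_personal_data"))  # False
```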

7. Expansive Definitions of Sensitive Data and Biometric Data

S332’s definitions of sensitive data and biometric data (which require opt-in consent to process) continue and build upon trends seen in stronger iterations of the WPA framework. S332’s definition of sensitive data includes additional categories seen in a minority of existing privacy frameworks, such as “status as transgender or non-binary” and “sex life.” 

S332’s definition of sensitive data also goes beyond the other WPA-style laws in two ways. First, the coverage of health data is slightly expanded to include mental or physical health treatment (in addition to condition or diagnosis). Second, sensitive data also includes “financial information,” which it specifies “shall include a consumer’s account number, account log-in, financial account, or credit or debit card number, in combination with any required security code, access code, or password that would permit access to a consumer’s financial account.” This category had not previously appeared in the non-California comprehensive privacy laws.

The definition of biometric data is also broader than in most of the WPA-style laws, which consistently define biometric data as “data generated by automatic measurements of an individual’s biological characteristics.” S332, in contrast, defines biometric data as “data generated by automatic or technological processing, measurements, or analysis of an individual’s biological, physical, or behavioral characteristics,” and it explicitly includes facial mapping, facial geometry, and facial templates in its list of examples. This language is similar to the definitions of biometric data and biometric identifiers in the Colorado Privacy Act Rules.

8. Expanded Right to Delete

Finally, S332 provides an expanded right to delete with respect to third party data, first observed in Delaware. When a controller has lawfully obtained an individual’s personal data from a third party and the individual submits a deletion request, the controller must either (a) retain a record of the deletion request and the “minimum data necessary” to ensure that the individual’s personal data remains deleted and not use that retained information for any other purpose, or (b) delete such data. This is different from the majority of states, which instead allow a controller that obtains personal data from third party sources to respond to a deletion request by retaining such data but opting the individual out of processing activities that are not subject to a statutory exemption (such as fraud prevention or cybersecurity monitoring).

FPF Announces International Technology Policy Expert as New Head of Artificial Intelligence

FPF has appointed international technology policy expert Anne J. Flanagan as Vice President for Artificial Intelligence (AI). In this new role, Anne will lead the privacy organization’s portfolio of projects exploring the data flows driving algorithmic and AI products and services, their opportunities and risks, and the ethical and responsible development of this technology.

Anne joins FPF with almost 20 years of experience in international strategic technology governance and development. She has a proven track record of bringing together stakeholders worldwide, including businesses, governments, academics, and civil society organizations, to co-design policy frameworks that address our time’s most intractable technology policy issues.

“Anne is a true leader of efforts to establish policies and standards for emerging technologies,” said Jules Polonetsky, CEO of FPF. “The vast amounts of data that enable AI and the myriad uses are creating some of the most exciting opportunities for progress, but also some of the gravest risks the world has faced. We’re eager for Anne to build on FPF’s extensive current portfolio of AI projects and open up new initiatives.”

As Deputy Head of Division for Telecommunications Policy & Regulation at the Department of Communications, Climate Action, and Environment in Ireland, Anne was responsible for developing Ireland’s technical policy positions and diplomatic strategy regarding EU legislation on telecommunications, digital infrastructure, and data. She represented Ireland in the EU Digital Single Market Strategic Group at the European Commission and the Working Party on Telecommunications and Information Society at the Council of the European Union. Anne also played a crucial role in the EU’s early approach to AI governance, contributing to the foundational work on the EU’s Digital Single Market. 

Since moving to the U.S. in 2019, Anne has held several senior positions in technology policy, including at the World Economic Forum’s Centre for the Fourth Industrial Revolution and, most recently, Reality Labs Policy at Meta Platforms Inc. In all of these senior roles, her research and expertise have helped technology business leaders shape responsible and sustainable technology development.

 “I have seen global leaders, from governments to CEOs, struggle with developing AI in an ethical and responsible manner,” said Flanagan. “This is complicated by the unprecedented speed in AI innovation and an intersection with other emerging technologies and policy issues. As we think about managing AI, human centricity needs to be at the forefront of any approach, and therefore, the importance of data stewardship becomes vital. I’m excited for this opportunity at such a distinguished organization as the Future of Privacy Forum, where these concerns are already front and center. I look forward to working towards building sustainable and trustworthy policy solutions with diverse stakeholders globally.” 

Since 2015, FPF has worked with corporate, civil society, and policy stakeholders to develop best practices for managing risks posed by AI and has worked to assess whether data protection practices such as fairness, accountability, and transparency are sufficient to answer the ethical questions they raise. More recently, FPF explored the challenges and responsible applications regarding AI in the workplace with its 2023 Best Practices for AI and Workplace Assessment Technologies and updated its 2020 report, The Spectrum of Artificial Intelligence and accompanying Spectrum of Artificial Intelligence Infographic. Additional FPF AI projects include Automated Decision-making Under the GDPR, Generative AI for Organizational Use: Internal Policy Checklist, Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making, and more.

Anne holds a Master’s in Economics and Political Science from Trinity College Dublin, a Master’s in International Relations from Dublin City University, and a Master of Business Administration from Trinity College Dublin. A former appointee to the UK Government’s International Data Transfers Expert Council, Anne is also a Member of the Board of Advisors of the Innovation Value Institute (IVI) at Maynooth University and a recognized Woman Leader in Data and AI at WLDA.tech.

7 Essential Tips to Protect Your Privacy in 2024

Today, almost everything we do online involves companies collecting personal information about us. Personal data is collected and used for various reasons – like when you use social media, shop online, redeem digital coupons at the store, or browse the internet. 

Sometimes, information is collected about you by one company and then shared or sold to another. While data collection can benefit both you and businesses – like connecting with friends, getting directions, or sales promotions – it can also be used in invasive ways unless you take control.

You can protect your personal data and information in many ways and control how it is shared and used. On Data Privacy Day (known in Europe as Data Protection Day), recognized annually on January 28 to mark the anniversary of Convention 108, the first binding international treaty to protect personal data, the Future of Privacy Forum (FPF) and other organizations are raising awareness and promoting best practices for data privacy.

FPF is partnering with Snap Inc. to provide a privacy-themed Snapchat filter to spread awareness of the importance of data privacy to your networks. Share the pictures you took using our interactive lens on social media using the hashtag #FPFDataPrivacyDay2024.

Here are 7 quick, easy steps you can take to better protect your privacy online and when using your mobile device.

1. Check Your Privacy Settings on Social Media

Many social media sites include options on how to tailor your privacy settings to limit how data is collected or used. Snap provides privacy options that control who can contact you and many other options. Start with the Snap Privacy Center to review your settings. You can find those choices here.

Snap also provides options for you to view any data they have collected about you, including account information and your search history. Downloading your data allows you to view what information has been collected and modify your settings accordingly. 

Instagram allows you to manage various privacy settings, including who has access to your posts, who can comment on or like your posts, and manage what happens to posts after you delete them. You can view and change your settings here.

TikTok allows you to decide between public and private accounts, allows you to change your personalized ad settings, and more. You can check your settings here.

Twitter/X allows you to manage what information you allow other people on the platform to see and lets you choose your ad preferences. Check your settings here.

Facebook provides a range of privacy settings that can be found here.

In addition, you can check the privacy and security settings for other popular applications such as BeReal and Pinterest here. Be sure to also check your privacy settings if you have a profile on a popular dating app such as Bumble, Hinge, or Tinder.

What other social media apps do you use often? Check to see which settings they provide!

2. Limit Sharing of Location Data

Most social media apps and websites will ask for access to your location data. Do they need it for some obvious reason, like helping you with directions, showing nearby friends, or finding a store location you’re looking for? If not, feel free to opt out of sharing location data. Be aware that location data is often used to personalize ads and recommendations based on locations you have recently visited. Allowing access to location services may also permit sharing of location information with third parties.

To check the location permissions allowed for apps on an iPhone or Android device, the steps are typically as follows (menu names vary slightly by device and software version).

iPhone: Open Settings, tap Privacy & Security, then Location Services, and review which apps can access your location and when.

Android: Open Settings, tap Location, then App location permissions (or Apps, then Permissions, on some devices), and review which apps can access your location.

3. Keep Your Devices & Apps Up to Date

Keeping software up to date is one of the most effective ways to protect your device against the latest software vulnerabilities. Installing the latest security software, web browsers, and operating system updates helps protect against a wide range of online threats. By enabling automatic updates on your devices, you can be sure that your apps and operating systems are always up to date.

Users can check the status of their operating systems in the settings app. 

For iPhone users, navigate to “Software Update,” and for Android devices, look for the “Security” page in settings.

4. Use a Password Manager

Utilizing a strong and secure password for each web-based account helps ensure your personal data and information are protected from unauthorized use. Remembering passwords for every account can be difficult, and using a password manager can help. Password managers save passwords as you create and log in to your accounts, often alerting you of duplicates and suggesting the creation of a stronger password. 

For example, if you use an Apple product when signing up for new accounts and services, you can allow your iPhone, Mac, or iPad to generate strong passwords and safely store them in iCloud Keychain for later access. Some of the best third-party password managers can be found here.
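To illustrate the kind of password these tools generate, here is a minimal sketch using Python’s standard secrets module; it is a simplified stand-in for what a real password manager does, not a recommendation to roll your own.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation using a
    cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 20-character password on every run
```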

5. Enable Two-Factor Authentication

Two-factor authentication adds an additional layer of protection to your accounts. The first factor is the standard username and password combination used for years. The second factor is a code sent to a personal device by text message or email. This added step makes it harder for malicious actors to access your accounts. Two-factor authentication only adds a few seconds to your day but can save you from the headache and harm that comes from compromised accounts. To be even safer, use an authenticator app as your second factor.
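Authenticator apps generally implement time-based one-time passwords (TOTP). The sketch below assumes the third-party pyotp package is installed and shows, in simplified form, how a shared secret lets your app and the service agree on a short-lived code.

```python
import pyotp  # third-party library: pip install pyotp

# When you enroll, the service shares a secret with your authenticator app,
# usually by displaying a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()         # the six-digit code your authenticator app displays
print(code)
print(totp.verify(code))  # the service runs the same check server-side -> True
```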

Remember to adjust your settings regularly, staying on top of any privacy changes and updates made on the web applications you use daily. Protect your data by being intentional about what you post online and encouraging others to look at the information they may share. By adjusting your settings and making changes to your web accounts and devices, you can better maintain the security and privacy of your personal data.

6. Use End-to-End Encryption for Secure Messaging

Using applications with secure end-to-end encryption, such as Signal and ProtonMail, ensures that only you and the intended recipient can read your messages. Other applications such as WhatsApp and Telegram also offer end-to-end encryption, though be sure to check your settings in Telegram, as its messages are not end-to-end encrypted by default.
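For readers curious about the mechanics, the sketch below uses the third-party PyNaCl library to show the basic public-key idea behind end-to-end encryption: only the intended recipient’s private key can decrypt. Real messaging apps such as Signal layer more sophisticated protocols on top of this primitive.

```python
from nacl.public import PrivateKey, Box  # third-party library: pip install pynacl

# Each party generates a key pair; only the public halves are ever shared.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"see you at 6pm")

# Any server relaying the message sees only ciphertext; Bob decrypts with
# his private key and Alice's public key.
plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
print(plaintext)  # b'see you at 6pm'
```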

As many of us share sensitive information with our families and friends, it’s critical to be mindful of how our personal information is shared and who has access to it. 

What better time to reassess our data practices and think about this important topic than during Data Privacy Day?

7. Turn Off Personalized Ads

Take control of how companies use your personal information to advertise to you by going into the settings of your applications. See below for how-to guides with quick, step-by-step instructions to turn off ad personalization for popular apps you may be using: 

If you’re interested in learning more about one of the topics discussed here or other issues driving the future of privacy, sign up for our monthly briefing, check out one of our upcoming events, or follow us on Twitter, LinkedIn, or Instagram.

FPF brings together some of the top minds in privacy to discuss how we can all benefit from the insights gained from data while respecting the individual right to privacy.

Identifying Privacy Risks and Implementing Best Practices for Body-Related Data in Immersive Technologies

As organizations develop more immersive technologies and rely on the collection, use, and transfer of body-related data, they need to ensure their data practices not only maintain legal compliance, but also more fully protect people’s privacy. To guide organizations as they develop their body-related data practices, the Future of Privacy Forum created the Risk Framework for Body-Related Data in Immersive Technologies. This framework serves as a straightforward, practical guide for organizations to analyze the unique risks associated with body-related data, particularly in immersive environments, and to institute data practices that earn the public’s trust. Developed in consultation with privacy experts and grounded in the experiences of organizations working in the immersive technology space, the framework is also useful for organizations that handle body-related data in other contexts. This post builds on our previous blog post, which discussed the importance of understanding an organization’s data practices and evaluating legal obligations. Here, we focus on identifying the risks data practices raise and implementing best practices to mitigate those risks.

I. Identifying and assessing risk to individuals, communities, and society

Beyond legal compliance, leading organizations should also seek to ensure their products, services, and other uses of body-related data are fair, ethical, and responsible. Body-related data, and particularly the aggregation of this data, can give those with access to it significant insight into an individual’s personal life and thoughts. These insights include not just an individual’s unique ID, but potentially their emotions, characteristics, behaviors, desires, and more. As such, it is important to put safeguards in place to prevent harmful uses of body-related data. Proactively identifying the risks their data handling raises will help organizations determine which best practices are most appropriate.

As demonstrated in the chart below, privacy harms may stem from particular types of data being used or handled in particular ways, or transferred to particular parties. Organizations should consider the factors related to data type and data handling that impact the risks associated with their data practices.

[Chart: Factors related to data type, data handling, and data transfers that affect the privacy risks of body-related data]

When assessing the risks their data practices raise, organizations should ask themselves questions including:

II. Implementing relevant best practices

There are a number of legal, technical, and policy safeguards that can help organizations maintain statutory and regulatory compliance, minimize privacy risks, and ensure that immersive technologies are used fairly, ethically, and responsibly. These best practices should be implemented in a way that is intentional—adopted as appropriate given an organization’s data practices and associated risks; comprehensive—touching all parts of the data lifecycle and addressing all relevant risks; and collaborative—developed in consultation with multidisciplinary teams within an organization including stakeholders from legal, product, engineering, privacy, and trust and safety.

The chart below summarizes some of the major best practices organizations can apply to body-related data, as well as specific recommendations for each.

[Chart: Major best practices for body-related data and specific recommendations for each]

It is critical to note that no single best practice stands alone; best practices should be considered comprehensively and implemented together as part of a coherent strategy. In addition, any strategy and practices must be evaluated on an ongoing basis as technology, data practices, and regulations change.

As organizations grapple with the privacy risks that body-related data raises, risk-based approaches to evaluating data practices can help organizations ensure they are not just compliant but also that they value privacy. FPF’s Risk Framework for Body-Related Data in Immersive Technologies serves as a starting point for organizations that collect, use, or transfer body-related data to develop best practices that prioritize user privacy. As technologies become more immersive, the unique considerations raised in this framework will be relevant for a growing number of organizations and the virtual experiences they create. Organizations can use this framework as a guide as they examine, develop, and refine their data practices.

This Year’s Must-Read Privacy Papers to be Honored at Washington, D.C. Event

The Future of Privacy Forum’s 14th Annual Privacy Papers for Policymakers Award Recognizes Influential Privacy Research

Today, the Future of Privacy Forum (FPF) — a global non-profit focused on data protection headquartered in Washington, D.C. — announced the winners of its 14th annual Privacy Papers for Policymakers (PPPM) Awards.

The PPPM Awards recognize leading U.S. and international privacy scholarship that is relevant to policymakers in the U.S. Congress, federal agencies, and international data protection authorities. Nine winning papers, two honorable mentions, two student submissions, and a student honorable mention were selected by a diverse group of leading academics, advocates, and industry privacy professionals from FPF’s Advisory Board.

Award winners will have the unique opportunity to showcase their papers. Authors of U.S. focused papers will present their work at the Privacy Papers for Policymakers ceremony on February 27, 2024, in Washington, D.C. Winning papers with an international focus will be presented at a virtual event on March 1, 2024.

“Academic scholarship is an essential resource for legislators and regulators around the world who are grappling with the increasingly complex uses of personal data. Thoughtful policymakers will benefit from the deep analysis and independent thinking provided by these essential publications.” – FPF CEO Jules Polonetsky

FPF’s 2023 Privacy Papers for Policymakers Award winners are:

In addition to the winning papers, FPF selected for Honorable Mentions: The After Party: Cynical Resignation In Adtech’s Pivot to Privacy by Lee McGuigan, University of North Carolina at Chapel Hill; Sarah Myers West, AI Now Institute; Ido Sivan-Sevilla, College of Information Studies, University of Maryland; and Patrick Parham, College of Information Studies, University of Maryland; and Epsilon-Differential Privacy, and a Two-step Test for Quantifying Reidentification Risk by Nathan Reitinger and Amol Deshpande of the University of Maryland.

FPF also selected two papers for the Student Paper Award: The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government by Arushi Gupta, Stanford University; Victor Y. Wu, Stanford University; Helen Webley-Brown, Massachusetts Institute of Technology; Jennifer King, Stanford University; and Daniel E. Ho, Stanford Law School; and Estimating Incidental Collection in Foreign Intelligence Surveillance: Large-Scale Multiparty Private Set Intersection with Union and Sum by Anunay Kulshrestha and Jonathan Mayer of Princeton University. A Student Paper Honorable Mention went to Ditching “DNA on Demand”: A Harms-Centered Approach to Safeguarding Privacy Interests Against DNA Collection and Use by Law Enforcement by Emma Kenny-Pessia, J.D. Candidate at Washington University in St. Louis School of Law.

The winning papers were selected based on the strength of their research and the relevance of their proposed policy solutions for policymakers and regulators in the U.S. and abroad.

The Privacy Papers for Policymakers event will be held on February 27, 2024, in Washington, D.C., exact location to be announced. The event is free and open to the public.

Explaining the Crosswalk Between Singapore’s AI Verify Testing Framework and The U.S. NIST AI Risk Management Framework

On October 13, 2023, Singapore’s Infocomm Media Development Authority (IMDA) and the U.S. National Institute of Standards and Technology (NIST) published a “Crosswalk” of IMDA’s AI Verify testing framework and NIST’s AI Risk Management Framework (AI RMF). Developed under the aegis of the Singapore–U.S. Partnership for Growth and Innovation, the Crosswalk is a mapping document that guides users on how adopting one framework can be used to meet the criteria of the other. Similar to other crosswalk initiatives that NIST has undertaken with other leading AI frameworks (such as ISO/IEC FDIS 23894, the proposed EU AI Act, the OECD Recommendation on AI, Executive Order 13960, and the Blueprint for an AI Bill of Rights), this Crosswalk aims to harmonize “international AI governance frameworks to reduce industry’s cost to meet multiple requirements.”

The aim of this blog post is to provide further clarity on the Crosswalk and what it means for organizations developing and deploying AI systems. The blog post is structured into four parts. 

AI Verify – Singapore’s AI governance testing framework and toolkit

AI Verify is an AI governance testing framework and toolkit launched by the IMDA and the Personal Data Protection Commission of Singapore (PDPC). First announced in May 2022, AI Verify enables organizations to conduct a voluntary self-assessment of their AI systems through a combination of technical tests and process-based checks. In turn, this allows companies who use AI Verify to objectively and verifiably demonstrate to stakeholders their responsible and trustworthy deployment of AI systems.

At the outset, there are several key characteristics of AI Verify that users should be mindful of. 

AI Verify comprises two parts: (1) a Testing Framework, which references 11 internationally-accepted AI ethics and governance principles grouped into 5 pillars; and (2) a Toolkit that organizations can use to execute technical tests and to record process checks from the Testing Framework. The 5 pillars and 11 principles under the Testing Framework are:

  1. Transparency on the use of AI and AI systems
    1. Principle 1 – Transparency: Providing appropriate information to individuals impacted by AI systems
  2. Understanding how an AI model reaches a decision
    1. Principle 2 – Explainability: Understanding and interpreting the decisions and output of an AI system
    2. Principle 3 – Repeatability/reproducibility: Ensuring consistency in AI output by being able to replicate an AI system, either internally or through a third party
  3. Ensuring safety and resilience of the AI system
    1. Principle 4 – Safety: Ensuring safety by conducting impact/risk assessments, and ensuring that known risks have been identified/mitigated
    2. Principle 5 – Security: Ensuring the cyber-security of AI systems
    3. Principle 6 – Robustness: Ensuring that the AI system can still function despite unexpected input
  4. Ensuring fairness
    1. Principle 7 – Fairness: Avoiding unintended bias, ensuring that the AI system makes the same decision even if a certain attribute is changed, and ensuring that the data used to train the model is representative
    2. Principle 8 – Data governance: Ensuring the source and quality of data by adopting good data governance practices when training AI models
  5. Ensuring proper (human) management and oversight of the AI system
    1. Principle 9 – Accountability: Ensuring proper management oversight during AI system development
    2. Principle 10 – Human agency and oversight: Ensuring that the AI system is designed in a way that will not diminish the ability of humans to make decisions
    3. Principle 11 – Inclusive growth, societal and environmental well-being: Ensuring beneficial outcomes for people and the planet.

As mentioned earlier, FPF’s previous blog post on AI Verify provides more detail on the objectives and mechanics of AI Verify’s Testing Framework and Toolkit. This summary merely sets the context for readers to better appreciate how the Crosswalk document should be understood.

AI Risk Management Framework – U.S. NIST’s industry-agnostic voluntary guidance on managing AI risks

The AI RMF was issued by NIST in January 2023. Currently in its first version, the goal of the AI RMF is “to offer a resource to organizations designing, developing, deploying or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.”

The AI RMF underscores the perspective that responsible AI risk management tools can assist organizations in cultivating public trust in AI technologies. Intended to be sector-agnostic, the AI RMF is voluntary, flexible, structured (in that it provides taxonomies of risks), measurable and “rights-focused”. The AI RMF outlines mechanisms and processes for measuring and managing AI systems and provides guidance on measuring accuracy.

The AI RMF itself is broken into two parts. The first part outlines various risks presented by AI. The second part provides a framework for considering and managing those risks, with a particular focus on stakeholders involved in the testing, evaluation, verification and validation processes throughout the lifecycle of an AI system.

The AI RMF outlines several AI-related risks

The AI RMF outlines the following risks presented by AI: (1) Harm to people – e.g. harm to an individual’s civil liberties, rights, physical or psychological safety or economic opportunity; (2) Harm to organizations – e.g. harm to an organization’s reputation and business operations; and (3) Harm to an ecosystem – e.g. harm to the global financial system or supply chain. It also notes that AI risk management presents unique challenges for organizations, including system transparency, lack of uniform methods or benchmarks, varying levels of risk tolerance and prioritization, and integration of risk management into organizational policies and procedures. 

The AI RMF also provides a framework for considering and managing AI-related risks

The “core” of the AI RMF contains a framework for considering and managing these risks. It comprises four functions: “Govern”, “Map”, “Measure”, and “Manage.” These provide organizations and individuals with specific recommended actions and outcomes to manage AI risks.

The AI RMF also comes with an accompanying “playbook” that provides additional recommendations and actionable steps for organizations. Notably, NIST has already produced “crosswalks” to ISO/IEC standards, the proposed EU AI Act, and the US Executive Order on Trustworthy AI.

The Crosswalk is a mapping document that guides users on how adopting one framework can be used to meet the criteria of the other

To observers familiar with AI governance documentation, it should be apparent that there is complementarity between both frameworks. For instance, the AI Verify framework contains processes that would overlap with the RMF framework for managing AI risks. Both frameworks also adopt risk-based approaches and aim to strike a pragmatic balance between promoting innovation and managing risks.

Similar to other crosswalk initiatives that NIST has already done with other frameworks, this Crosswalk is aimed at harmonizing international AI governance frameworks to reduce fragmentation, facilitate ease of adoption, and reduce industry costs in meeting multiple requirements. Insiders have noted that at the time when the AI Verify framework was released in 2022, NIST was in the midst of organizing public workgroups for the development of the RMF. From there, the IMDA and NIST began to work together, with a common goal of jointly developing the Crosswalk to meet different industry requirements.

Understanding the methodology of the Crosswalk

Under the Crosswalk, AI Verify’s testable criteria and processes are mapped to the AI RMF’s categories within the Govern, Map, Measure, and Manage functions. Specifically, the Crosswalk first lists the individual categories and subcategories under the aforementioned four functions. As these four core functions collectively address individual governance/trustworthiness characteristics (such as safety, accountability, transparency, explainability, and fairness), the second column of the Crosswalk – which denotes the AI Verify Testing Framework – sets out the individual principle, testable criteria, and process and/or technical test that correlate to the relevant core function under the AI RMF.

A point worth noting is that the mapping is not “one-to-one”; each NIST AI RMF category may have multiple equivalents. Thus, for instance, AI Verify’s Process 9.1.1 for Accountability (indicated in the Crosswalk as “Accountability 9.1.1”) appears under both “Govern 4” and “Govern 5” in the AI RMF. This reflects the different nature of the two documents: while the AI RMF is a risk management framework for the development and use of AI, AI Verify is a testing framework to assess the performance of an AI system and the practices associated with its development and use. To produce this mapping, the IMDA and NIST had to compare both frameworks at a granular level – down to individual elements within the AI Verify Testing Framework – to achieve alignment. This can be seen from the Annex below, which sets out the “crosswalked” elements for comparison and identifies the individual testable criteria and processes in the AI Verify Testing Framework.
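To visualize this structure, here is an illustrative (unofficial) sketch in Python of how the one-to-many mapping could be represented; only the Accountability 9.1.1 entries reflect the example discussed above, and the remaining values are placeholders.

```python
# Each NIST AI RMF category maps to one or more AI Verify testable criteria or
# process checks. Entries other than Accountability 9.1.1 are placeholders.
crosswalk = {
    "Govern 4": ["Accountability 9.1.1"],
    "Govern 5": ["Accountability 9.1.1"],
    "Map 1": ["<AI Verify testable criterion>", "<AI Verify process check>"],
}

def rmf_categories_for(element: str) -> list[str]:
    """Return every RMF category that a given AI Verify element is mapped to,
    reflecting that the mapping is not one-to-one."""
    return [category for category, elements in crosswalk.items() if element in elements]

print(rmf_categories_for("Accountability 9.1.1"))  # ['Govern 4', 'Govern 5']
```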

Other aspects of understanding the Crosswalk document are set out below (in a Q&A format):

The Crosswalk shows that practical international cooperation in AI governance and regulation is possible 

The global picture on AI regulation and governance is shifting rapidly. Since the burst of activity around the development of AI ethical principles and frameworks in the late 2010s, the landscape has become increasingly complex.

It is now defined within the broad strokes of the development of AI-specific regulation (in the form of legislation, such as the proposed EU AI Act, Canada’s AI and Data Act, or Brazil’s AI Bill), the negotiation of an international Treaty on AI under the aegis of the Council of Europe, executive action putting the onus on government bodies when contracting AI systems (with President Biden’s Executive Order as the chief example), the provision of AI-specific governance frameworks as self-regulation, and guidance by regulators (such as Data Protection Authorities issuing guidance on how providers and deployers of AI systems can use personal data while respecting data protection laws). This varied landscape leaves little room for a coherent global approach to govern a quintessentially borderless technology.

In this context, the Crosswalk as a government-to-government effort shows that it is possible to find a common language between prima facie different self-regulatory AI governance frameworks, paving the way to interoperability or a cross-border interchangeable use of frameworks. Its practical relevance for organizations active both in the US and Singapore cannot be overstated. 

The Crosswalk also provides a model for future crosswalks or similar mapping initiatives that will support a more coherent approach to AI governance across borders, potentially opening the path for more instances of meaningful and practical international cooperation in this space.   

Annex: Crosswalk Combined with Descriptions of Individual Elements of the AI Verify Process Checklist

Regu(AI)ting Health: Lessons for Navigating the Complex Code of AI and Healthcare Regulations

Authors: Stephanie Wong, Amber Ezzell, & Felicity Slater

As an increasing number of organizations utilize artificial intelligence (“AI”) in their patient-facing services, health organizations are seizing the opportunity to take advantage of the new wave of AI-powered tools. Policymakers, from United States (“U.S.”) government agencies to the White House, have taken heed of this trend, leading to a flurry of agency actions impacting the intersection of health and AI, from enforcement actions and binding rules to advisory opinions and other, less formal guidance. The result has been a rapidly changing regulatory environment for health organizations deploying artificial intelligence. Below are five key lessons from these actions for organizations, advocates, and other stakeholders seeking to ensure that AI-driven health services are developed and deployed in a lawful and trustworthy manner.

Lesson 1: AI potential in healthcare has evolved exponentially

While AI has been a part of healthcare conversations for decades, recent technological developments have driven exponential growth in potential applications across healthcare professions and specialties, prompting regulators to respond to the expanding use of AI in healthcare.

The Department of Health and Human Services (“HHS”) is the central authority for health sector regulations in the United States. HHS’ Office for Civil Rights (“OCR”) is responsible for enforcement of the preeminent federal health privacy regulatory framework, the Health Insurance Portability and Accountability Act (HIPAA) Privacy, Security, and Breach Notification Rules (“Privacy Rule”). A major goal of the Privacy Rule is to properly protect individuals’ personal health information while allowing for the flow of health data that is necessary to provide quality health care. 

In 2023, OCR stated that HIPAA-regulated entities should analyze AI tools as they do other novel technologies; organizations should “determine the potential risks and vulnerabilities to electronic protected health information before adding any new technology into their organization.” While not a broad endorsement of health AI, OCR’s statement suggests that AI has a place in the regulated healthcare sector.

The Food and Drug Administration (“FDA”) has taken an even more optimistic approach toward the use of AI. Also an agency within HHS, the FDA is responsible for ensuring the safety, efficacy, and quality of various pharmacological and medical products used in clinical health treatments and monitoring. In 2023, the FDA published a discussion paper intended to facilitate discussion with stakeholders on the use of AI in drug development. Drug discovery is the complex process of identifying and developing new medications or drugs to treat medical conditions and diseases. Before drugs can be marketed to the public for patient use, they must go through multiple stages of research, testing, and development. This entire process can take around 10 to 15 years, or sometimes longer. According to the discussion paper, the FDA strives to “facilitate innovation while safeguarding public health” and plans to develop a “flexible risk-based regulatory framework that promotes innovation and protects patient safety.”

Lesson 2: Different uses of data may implicate different regulatory structures

While there can be uncertainty regarding whether particular data, such as IP address data collected by a consumer-facing website, is covered by HIPAA, HHS and the Federal Trade Commission (“FTC”) have made clear that they are working together to ensure organizations protect sensitive health information. In particular, failure to establish proper agreements or safeguards between covered entities and AI vendors can constitute a violation of the HIPAA Privacy Rule when patient health information is shared without patient consent for purposes other than treatment, payment, and healthcare operations.

However, some data collected by HIPAA-covered entities may not be classified as protected health information (“PHI”) and could be permissibly shared outside HIPAA’s regulatory scope. Examples include data collected by healthcare scheduling apps, wearable devices, and health IoT devices. In these circumstances, the FTC could exercise oversight. The FTC is increasingly focused on enforcement actions involving health privacy and potential bias and has historically enforced laws prohibiting bias and discrimination, including the Fair Credit Reporting Act (“FCRA”) and the Equal Credit Opportunity Act (“ECOA”). In 2021, the FTC underscored the importance of ensuring that AI tools avoid discrimination and called for AI to be used “truthfully, fairly, and equitably,” recommending that AI should do “more good than harm” to avoid violating the FTC’s “unfairness” prong of Section 5 of the FTC Act.

Lesson 3: What’s (guidance in the) past is prologue (to enforcement)

While guidance may not always be a precursor to enforcement, it is a good indicator of an agency’s priorities. For instance, in late 2021, the FTC issued a statement on the Health Breach Notification Rule, followed by two posts in January 2022 (1, 2). The FTC then applied the Health Breach Notification Rule (HBNR) for the first and second time in 2023 enforcement actions. 

The FTC has recently homed in on both the health industry and AI. Agency officials published ten blog posts covering AI topics in 2023 alone, including an article instructing businesses to ensure the accuracy and verifiability of advertising around AI in products. In April 2023, the FTC issued a joint statement with the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the Equal Employment Opportunity Commission (EEOC) expressing its intent to prioritize enforcement against discrimination and bias in automated decision-making systems.

The agency has separately been working on enforcement in the health sector, applying the unfairness prong of its authority to cases where the Commission has found that a company’s privacy practices substantially injured consumers in ways not outweighed by countervailing benefits. This focus resulted in major settlements against health companies, including GoodRx and BetterHelp, where the combined total fines neared $10 million. In July, the FTC published a blog post summarizing lessons from its recent enforcement actions in the health sector, underscoring that “health privacy is a top priority” for the agency.

Lesson 4: Responsibility is the name of the game

Responsible use has been the key concept for policymakers looking to be proactive in establishing positive norms for the use of AI in the healthcare arena. In 2022, the White House Office of Science and Technology Policy (OSTP) published the Blueprint for an AI Bill of Rights (“Blueprint”) to support the development of policies and practices that protect and promote civil rights in the development, deployment, and governance of automated systems. In highlighting AI in the health sector, the Blueprint aims to position federal agencies and offices as responsible stewards of AI use for the nation. In 2023, the OSTP also updated the National AI Research and Development (R&D) Plan to advance the deployment of responsible AI, which is likely to influence health research. The Plan is intended to facilitate the study and development of AI while also maintaining privacy and security and preventing inequity.

Expanding on the Blueprint, on October 30, 2023, the Biden Administration released its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (“EO”). The EO aims to establish new standards for the responsible use, development, and procurement of AI systems across the federal government. Among other directives, the EO directs the Secretary of HHS to establish an “HHS AI Taskforce” in order to create a strategic plan for the responsible use and deployment of AI in the healthcare context. The EO specifies that this strategic plan must establish principles to guide the use of AI as part of the delivery of healthcare, assess the safety and performance of AI systems in the healthcare context, and integrate equity principles and privacy, security and safety standards into the development of healthcare AI systems.

The EO also directs the HHS Secretary to create an AI Safety program to centrally track, catalog, and analyze clinical errors produced by the use of AI in healthcare environments; create and circulate informal guidance on how to prevent these harms from recurring; and develop a strategy for regulating the use of AI and AI tools in drug development. The Fact Sheet circulated prior to the release of the EO emphasizes that “irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing” and discusses expanded grants for AI research in “vital areas,” including healthcare.

On November 1, 2023, the Office of Management and Budget (“OMB”) released for public comment a draft policy on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” intended to help implement the AI EO. The OMB guidance, which would govern federal agencies as well as their contractors, would create special requirements for what it deems “rights-impacting” AI, a designation that would encompass AI that “control[s] or meaningfully influence[s]” the outcomes of health and health insurance-related decision-making. These requirements include AI impact assessments, testing against real-world conditions, independent evaluation, ongoing monitoring, human training and “human in the loop” decision-making, and notice and documentation.

Finally, the National Institute of Standards and Technology (“NIST”) also focused on responsible AI in 2023 with the release of the Artificial Intelligence Risk Management Framework (“AI RMF”). The AI RMF is meant to serve as a “resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.” The AI RMF provides concrete examples on how to frame risks in various contexts, such as potential harm to people, organizations, or an ecosystem. In addition, prior NIST risk management frameworks have provided the basis for legislative and regulatory models, meaning it may have increased importance for regulated entities in the future.

Lesson 5: Focus and keep eyes on the road ahead

AI regulation is a moving target, with significant developments expected in the coming years. For instance, OSTP’s Blueprint for an AI Bill of Rights has already been used to inform state policymakers, with legislators both highlighting and incorporating its requirements into legislative proposals. The Blueprint’s five outlined principles aim to: (i) ensure safety and effectiveness; (ii) safeguard against discrimination; (iii) uphold data privacy; (iv) provide notice and explanation; and (v) enable human review or control. These principles are likely to continue to appear in and inform future health-related AI legislation.

In 2022, the FDA’s Center for Devices and Radiological Health (CDRH) released “Clinical Decision Support Software Guidance for Industry and Food and Drug Administration Staff,” which recommends that certain AI tools be regulated by the FDA under its authority to oversee clinical decision support software. Elsewhere, the FDA has noted that its traditional pathways for medical device regulations were not designed to be applied to AI and that the agency is looking to update its current processes. In 2021, CDRH issued a draft “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan”, which introduces a framework to manage risks to patients in a controlled manner. The Action Plan includes specific instruction on data management, including a commitment to transparency on how AI technologies interact with people, ongoing performance monitoring, and updates to the FDA on any changes made to the software as a medical device. Manufacturers of medical devices can expect the FDA to play a vital role in the regulation of AI in certain medical devices and drug discovery.

Conclusion

The legislative and regulatory environment governing AI in the U.S. is actively evolving, with the regulation of the healthcare industry emerging as a key priority for regulators across the federal government. Although the implementation of AI in healthcare activities may provide significant benefits, organizations must recognize and mitigate the privacy, discrimination, and other risks associated with its use. AI developers themselves are calling for the regulation of AI to reduce existential risks and prevent significant global harm, which may help create clearer standards and expectations for AI developers and deployers navigating the resources coming from federal agencies. By prioritizing the development and deployment of safe and trustworthy AI systems, as well as following federal guidance and standards for privacy and security, the healthcare industry can harness the power of AI to ethically and responsibly improve patient care, outcomes, and overall well-being.

Regu(AI)ting Health: Lessons for Navigating the Complex Code of AI and Healthcare Regulations

Authors: Stephanie Wong, Amber Ezzell, & Felicity Slater

As an increasing number of organizations utilize artificial intelligence (“AI”) in their patient-facing services, health organizations are seizing the opportunity to take advantage of the new wave of AI-powered tools. Policymakers, from United States (“U.S.”) government agencies to the White House, have taken heed of this trend, leading to a flurry of agency actions at the intersection of health and AI, from enforcement actions and binding rules to advisory opinions and other, less formal guidance. The result has been a rapidly changing regulatory environment for health organizations deploying artificial intelligence. Below are five key lessons from these actions for organizations, advocates, and other stakeholders seeking to ensure that AI-driven health services are developed and deployed in a lawful and trustworthy manner.

Lesson 1: AI potential in healthcare has evolved exponentially

While AI has been a part of healthcare conversations for decades, recent technological developments have driven exponential growth in potential applications across healthcare professions and specialties, prompting regulators to respond to the use of AI in healthcare.

The Department of Health and Human Services (“HHS”) is the central authority for health sector regulations in the United States. HHS’ Office for Civil Rights (“OCR”) is responsible for enforcement of the preeminent federal health privacy regulatory framework, the Health Insurance Portability and Accountability Act (HIPAA) Privacy, Security, and Breach Notification Rules (“Privacy Rule”). A major goal of the Privacy Rule is to properly protect individuals’ personal health information while allowing for the flow of health data that is necessary to provide quality health care. 

In 2023, OCR stated that HIPAA-regulated entities should analyze AI tools as they do other novel technologies; organizations should “determine the potential risks and vulnerabilities to electronic protected health information before adding any new technology into their organization.” While not a broad endorsement of health AI, OCR’s statement suggests that AI has a place in the regulated healthcare sector.

The Food and Drug Administration (“FDA”) has taken an even more optimistic approach toward the use of AI. Also an agency within HHS, the FDA is responsible for ensuring the safety, efficacy, and quality of various pharmacological and medical products used in clinical health treatments and monitoring. In 2023, the FDA published a discussion paper intended to facilitate discussion with stakeholders on the use of AI in drug development. Drug discovery is the complex process of identifying and developing new medications or drugs to treat medical conditions and diseases. Before drugs can be marketed to the public for patient use, they must go through multiple stages of research, testing, and development. This entire process can take around 10 to 15 years, or sometimes longer. According to the discussion paper, the FDA strives to “facilitate innovation while safeguarding public health” and plans to develop a “flexible risk-based regulatory framework that promotes innovation and protects patient safety.”

Lesson 2: Different uses of data may implicate different regulatory structures

While there can be uncertainty regarding whether particular data, such as IP address data collected by a consumer-facing website, is covered by HIPAA, HHS and the Federal Trade Commission (“FTC”) have made clear that they are working together to ensure organizations protect sensitive health information. In particular, failure to establish proper agreements or safeguards between covered entities and AI vendors can constitute a violation of the HIPAA Privacy Rule when patient health information is shared without patient consent for purposes other than treatment, payment, and healthcare operations.

However, some data collected by HIPAA-covered entities may not be classified as protected health information (“PHI”) and could be permissibly shared outside HIPAA’s regulatory scope. Examples include data collected by healthcare scheduling apps, wearable devices, and health IoT devices. In these circumstances, the FTC could exercise oversight. The FTC is increasingly focused on enforcement actions involving health privacy and potential bias, and has historically enforced laws prohibiting bias and discrimination, including the Fair Credit Reporting Act (“FCRA”) and the Equal Credit Opportunity Act (“ECOA”). In 2021, the FTC underscored the importance of ensuring that AI tools avoid discrimination and called for AI to be used “truthfully, fairly, and equitably,” recommending that AI do “more good than harm” to avoid violating the “unfairness” prong of Section 5 of the FTC Act.

Lesson 3: What’s (guidance in the) past is prologue (to enforcement)

While guidance may not always be a precursor to enforcement, it is a good indicator of an agency’s priorities. For instance, in late 2021, the FTC issued a statement on the Health Breach Notification Rule, followed by two blog posts in January 2022 (1, 2). The FTC then applied the Health Breach Notification Rule (HBNR) for the first and second times in enforcement actions brought in 2023.

The FTC has recently homed in on both the health industry and AI. Agency officials published ten blog posts covering AI topics in 2023 alone, including an article instructing businesses to ensure that advertising claims about AI in their products are accurate and verifiable. In April 2023, the FTC issued a joint statement with the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the Equal Employment Opportunity Commission (EEOC) expressing its intent to prioritize enforcement against discrimination and bias in automated decision-making systems.

The agency has separately been working on enforcement in the health sector, applying the unfairness prong of its authority in cases where the Commission found that a company’s privacy practices caused substantial injury to consumers that was not outweighed by countervailing benefits. This focus resulted in major settlements against health companies, including GoodRx and BetterHelp, where the combined total neared $10 million. In July, the FTC published a blog post summarizing lessons from its recent enforcement actions in the health sector, underscoring that “health privacy is a top priority” for the agency.

Lesson 4: Responsibility is the name of the game

Responsible use has been the key concept for policymakers looking to be proactive in establishing positive norms for the use of AI in the healthcare arena. In 2022, the White House Office of Science and Technology Policy (OSTP) published the Blueprint for an AI Bill of Rights (“Blueprint”) to support the development of policies and practices that protect and promote civil rights in the development, deployment, and governance of automated systems. In highlighting AI in the health sector, the Blueprint aims to position federal agencies and offices as responsible stewards of AI use for the nation. In 2023, OSTP also updated the National AI Research and Development (R&D) Strategic Plan to advance the deployment of responsible AI, which is likely to influence health research. The Plan is intended to facilitate the study and development of AI while also maintaining privacy and security and preventing inequity.

Expanding on the Blueprint, on October 30, 2023, the Biden Administration released its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (“EO”). The EO aims to establish new standards for the responsible use, development, and procurement of AI systems across the federal government. Among other directives, the EO directs the Secretary of HHS to establish an “HHS AI Taskforce” in order to create a strategic plan for the responsible use and deployment of AI in the healthcare context. The EO specifies that this strategic plan must establish principles to guide the use of AI as part of the delivery of healthcare, assess the safety and performance of AI systems in the healthcare context, and integrate equity principles and privacy, security and safety standards into the development of healthcare AI systems.

The EO also directs the HHS Secretary to create an AI safety program to centrally track, catalog, and analyze clinical errors produced by the use of AI in healthcare environments; create and circulate informal guidance on how to prevent these harms from recurring; and develop a strategy for regulating the use of AI and AI-enabled tools in drug development. The Fact Sheet circulated prior to the release of the EO emphasizes that “irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing” and discusses expanded grants for AI research in “vital areas,” including healthcare.

On November 1, 2023, the Office of Management and Budget (“OMB”) released for public comment a draft policy on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” intended to help implement the AI EO. The OMB guidance, which would govern federal agencies as well as their contractors, would create special requirements for what it deems “rights-impacting” AI, a designation that would encompass AI that “control[s] or meaningfully influence[s]” the outcomes of health and health insurance-related decision-making. These requirements include AI impact assessments, testing against real-world conditions, independent evaluation, ongoing monitoring, human training, “human in the loop” decision-making, and notice and documentation.
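
To illustrate how these draft requirements might be tracked internally, below is a minimal, hypothetical Python sketch of a compliance checklist for a system an agency or contractor deems “rights-impacting.” The class and field names are our own illustrative assumptions drawn from the practices listed above; they are not terminology or a schema from the OMB draft.

```python
# Hypothetical illustration only: the class and field names are assumptions
# based on the minimum practices listed in OMB's draft memorandum, not an
# official schema or requirement.
from dataclasses import dataclass, fields


@dataclass
class RightsImpactingAIChecklist:
    """Tracks whether a hypothetical 'rights-impacting' AI system has
    completed each practice described in the draft guidance."""
    impact_assessment_completed: bool = False
    tested_in_real_world_conditions: bool = False
    independently_evaluated: bool = False
    ongoing_monitoring_in_place: bool = False
    operators_trained: bool = False
    human_in_the_loop_for_decisions: bool = False
    notice_and_documentation_provided: bool = False

    def outstanding_items(self) -> list[str]:
        """Return the practices that have not yet been satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]


if __name__ == "__main__":
    # Example: a health-insurance eligibility model with two items outstanding.
    checklist = RightsImpactingAIChecklist(
        impact_assessment_completed=True,
        tested_in_real_world_conditions=True,
        independently_evaluated=True,
        ongoing_monitoring_in_place=True,
        operators_trained=True,
    )
    print("Outstanding practices:", checklist.outstanding_items())
```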

Finally, the National Institute of Standards and Technology (“NIST”) also focused on responsible AI in 2023 with the release of the Artificial Intelligence Risk Management Framework (“AI RMF”). The AI RMF is meant to serve as a “resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.” The AI RMF provides concrete examples of how to frame risks in various contexts, such as potential harm to people, organizations, or an ecosystem. In addition, prior NIST risk management frameworks have provided the basis for legislative and regulatory models, meaning the AI RMF may take on increased importance for regulated entities in the future.
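
As a rough sketch of how an organization might record risks along the harm categories the AI RMF discusses, consider the short, hypothetical Python example below; the “HarmCategory” and “RiskEntry” structures are illustrative assumptions of ours, not constructs defined by NIST.

```python
# Hypothetical sketch: the AI RMF frames potential harms to people,
# organizations, and ecosystems. These class and field names are our own
# illustrative assumptions, not terminology defined by NIST.
from dataclasses import dataclass
from enum import Enum


class HarmCategory(Enum):
    PEOPLE = "harm to people"
    ORGANIZATION = "harm to an organization"
    ECOSYSTEM = "harm to an ecosystem"


@dataclass
class RiskEntry:
    description: str
    category: HarmCategory
    likelihood: str   # e.g., "low", "medium", "high"
    mitigation: str


# Example entry for a hypothetical clinical AI deployment.
risk_register = [
    RiskEntry(
        description="Diagnostic model underperforms for an underrepresented patient group",
        category=HarmCategory.PEOPLE,
        likelihood="medium",
        mitigation="Evaluate performance across demographic subgroups before deployment",
    ),
]

for entry in risk_register:
    print(f"[{entry.category.value}] {entry.description} -> {entry.mitigation}")
```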

Lesson 5: Focus and keep eyes on the road ahead

AI regulation is a moving target, with significant developments expected in the coming years. For instance, OSTP’s Blueprint for an AI Bill of Rights has already been used to inform state policymakers, with legislators both highlighting and incorporating its requirements into legislative proposals. The Blueprint’s five outlined principles aim to: (i) ensure safety and effectiveness; (ii) safeguard against discrimination; (iii) uphold data privacy; (iv) provide notice and explanation; and (v) enable human review or control. These principles are likely to continue to appear in, and inform, future health-related AI legislation.

In 2022, the FDA’s Center for Devices and Radiological Health (CDRH) released “Clinical Decision Support Software Guidance for Industry and Food and Drug Administration Staff,” which recommends that certain AI tools be regulated by the FDA under its authority to oversee clinical decision support software. Elsewhere, the FDA has noted that its traditional pathways for medical device regulation were not designed to be applied to AI and that the agency is looking to update its current processes. In 2021, CDRH issued the “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan,” which introduces a framework for managing risks to patients in a controlled manner. The Action Plan includes specific instructions on data management, including a commitment to transparency about how AI technologies interact with people, ongoing performance monitoring, and updates to the FDA on any changes made to the software as a medical device. Manufacturers of medical devices can expect the FDA to play a vital role in the regulation of AI in certain medical devices and in drug discovery.

Conclusion

The legislative and regulatory environment governing AI in the U.S. is actively evolving, with the regulation of the healthcare industry emerging as a key priority for regulators across the federal government. Although the development and implementation of AI in healthcare activities may provide significant benefits, organizations must recognize and mitigate privacy, discrimination, and other risks associated with its use. AI developers are themselves calling for the regulation of AI to reduce existential risks and prevent significant global harm, which may help create clearer standards and expectations for developers and deployers navigating the resources coming from federal agencies. By prioritizing the development and deployment of safe and trustworthy AI systems, and by following federal guidance and standards for privacy and security, the healthcare industry can harness the power of AI to ethically and responsibly improve patient care, outcomes, and overall well-being.

FPF Files Comments with the Consumer Financial Protection Bureau Regarding Personal Financial Data Rights

On December 21st, 2023, the Future of Privacy Forum filed comments with the Consumer Financial Protection Bureau (CFPB) in response to the notice of proposed rulemaking (NPRM) regarding personal financial data rights. FPF’s comments focus on promoting privacy as a core tenet in the U.S. open banking ecosystem in order to protect individuals’ personal information while enhancing user trust.

Read our comments here.

This NPRM is the latest milestone in the Bureau’s multi-year effort to create a regulatory framework for open banking in the U.S. using its Section 1033 authority. Section 1033 was passed as part of the Consumer Financial Protection Act (CFPA) of 2010, and it governs access to a person’s data held by a consumer financial services provider. The CFPB’s proposed rule would require data providers, such as banks, card issuers, and digital wallets, to share certain kinds of consumer financial data (e.g., transaction information and account balances) with authorized third parties at the consumer’s request. As the CFPB sets out, “[t]his proposed rule aims to . . . push for greater efficiency and reliability of data access across the industry to reduce industry costs, facilitate greater competition, and support the development of beneficial products and services.”1
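
To make the mechanics of the proposed data-sharing obligation more concrete, the sketch below shows what a consumer-authorized request from a third party to a data provider could look like in Python; the endpoint URL, payload fields, and function name are purely hypothetical assumptions for illustration and do not reflect any interface specified in the NPRM or by industry standard-setters.

```python
# Purely hypothetical illustration of a consumer-authorized data request in an
# open banking model. The URL, payload fields, and token handling are
# assumptions, not part of the CFPB's proposed rule or any real provider's API.
import json
from urllib import request


def request_consumer_data(provider_api_url: str, consumer_auth_token: str) -> dict:
    """Request the categories of data the proposed rule contemplates sharing
    (e.g., transactions and account balance) from a hypothetical data provider."""
    payload = {
        "requested_data": ["transactions", "account_balance"],
        "consumer_authorization": consumer_auth_token,  # evidence of consumer consent
        "purpose": "budgeting_app",                     # third party's stated use
    }
    req = request.Request(
        provider_api_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example call (would only succeed against a real, consenting endpoint):
# data = request_consumer_data("https://example-bank.test/v1/data-access", "token-123")
```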

In our submission, FPF provides several recommendations to the CFPB, including:

  1. Encouraging the development of industry standards for third party privacy rules and data provider denials of access requests; 
  2. Supporting an opt-in standard and use of de-identified data, while providing an approach for high-risk uses; 
  3. Clarifying an approach to address ‘dark patterns’ to discourage consumer manipulation;
  4. Strengthening the phase-out of screen scraping and directly prohibiting third parties from scraping data from online consumer accounts; and
  5. Harmonizing various privacy rules that result in numerous and different notices and choices.

FPF’s comments are the culmination of over a year of meetings with key stakeholders in the open banking ecosystem. They build upon earlier recommendations that FPF made in response to the Bureau’s “Outline of Proposals and Alternatives Under Consideration for the Personal Financial Data Rights Rulemaking,” which was a prerequisite to the NPRM. Last year, FPF also released an infographic, “Open Banking And The Customer Experience,” visualizing the U.S. open banking ecosystem and the challenges affecting it, which are also addressed in FPF’s latest comments.

1Required Rulemaking on Personal Financial Data Rights, 88 Fed. Reg. 74796, 74843 (Oct. 31, 2023).