Regu(AI)ting Health: Lessons for Navigating the Complex Code of AI and Healthcare Regulations
Authors: Stephanie Wong, Amber Ezzell, & Felicity Slater
As an increasing number of organizations utilize artificial intelligence (“AI”) in their patient-facing services, health organizations are seizing the opportunity to take advantage of the new wave of AI-powered tools. Policymakers, from United States (“U.S.”) government agencies to the White House, have taken heed of this trend, leading to a flurry of agency actions impacting the intersection of health and AI, from enforcement actions and binding rules to advisory opinions and other, less formal guidance. The result has been a rapidly changing regulatory environment for health organizations deploying artificial intelligence. Below are five key lessons from these actions for organizations, advocates, and other stakeholders seeking to ensure that AI-driven health services are developed and deployed in a lawful and trustworthy manner.
Lesson 1: AI potential in healthcare has evolved exponentially
While AI has been a part of healthcare conversations for decades, recent technological developments have driven exponential growth in potential applications across healthcare professions and specialties, prompting regulators to respond to the expanding use of AI in healthcare.
The Department of Health and Human Services (“HHS”) is the central authority for health sector regulations in the United States. HHS’ Office for Civil Rights (“OCR”) is responsible for enforcement of the preeminent federal health privacy regulatory framework, the Health Insurance Portability and Accountability Act (HIPAA) Privacy, Security, and Breach Notification Rules (“Privacy Rule”). A major goal of the Privacy Rule is to properly protect individuals’ personal health information while allowing for the flow of health data that is necessary to provide quality health care.
In 2023, OCR stated that HIPAA-regulated entities should analyze AI tools as they do other novel technologies; organizations should “determine the potential risks and vulnerabilities to electronic protected health information before adding any new technology into their organization.” While not a broad endorsement of health AI, OCR’s statement suggests that AI has a place in the regulated healthcare sector.
The Food and Drug Administration (“FDA”) has taken an even more optimistic approach toward the use of AI. Also an agency within HHS, the FDA is responsible for ensuring the safety, efficacy, and quality of various pharmacological and medical products used in clinical health treatments and monitoring. In 2023, the FDA published a discussion paper intended to facilitate discussion with stakeholders on the use of AI in drug development. Drug discovery is the complex process of identifying and developing new medications or drugs to treat medical conditions and diseases. Before drugs can be marketed to the public for patient use, they must go through multiple stages of research, testing, and development. This entire process can take around 10 to 15 years, or sometimes longer. According to the discussion paper, the FDA strives to “facilitate innovation while safeguarding public health” and plans to develop a “flexible risk-based regulatory framework that promotes innovation and protects patient safety.”
Lesson 2: Different uses of data may implicate different regulatory structures
While there can be uncertainty regarding whether particular data, such as IP address data collected by a consumer-facing website, is covered by HIPAA, HHS and the Federal Trade Commission (“FTC”) have made clear that they are working together to ensure organizations protect sensitive health information. In particular, failure to establish proper agreements or safeguards between covered entities and AI vendors can constitute a violation of the HIPAA Privacy Rule when patient health information is shared without patient consent for purposes other than treatment, payment, and healthcare operations.
However, some data collected by HIPAA-covered entities may not be classified as protected health information (“PHI”) and could be permissibly shared outside HIPAA’s regulatory scope. Examples include data collected by healthcare scheduling apps, wearable devices, and health IoT devices. In these circumstances, the FTC could exercise oversight. The FTC is increasingly focused on enforcement actions involving health privacy and potential bias and has historically enforced laws prohibiting bias and discrimination, including the Fair Credit Reporting Act (“FCRA”) and the Equal Credit Opportunity Act (“ECOA”). In 2021, the FTC underscored the importance of ensuring that AI tools avoid discrimination and called for AI to be used “truthfully, fairly, and equitably,” recommending that AI should do “more good than harm” to avoid violating the FTC’s “unfairness” prong of Section 5 of the FTC Act.
Lesson 3: What’s (guidance in the) past is prologue (to enforcement)
While guidance may not always be a precursor to enforcement, it is a good indicator of an agency’s priorities. For instance, in late 2021, the FTC issued a statement on the Health Breach Notification Rule (“HBNR”), followed by two blog posts on the rule in January 2022. The FTC then applied the HBNR for the first and second time in a pair of 2023 enforcement actions.
The FTC has recently homed in on both the health industry and AI. Agency officials published ten blog posts covering AI topics in 2023 alone, including an article instructing businesses to ensure the accuracy and verifiability of advertising around AI in products. In April 2023, the FTC issued a joint statement with the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the Equal Employment Opportunity Commission (EEOC) expressing its intent to prioritize enforcement against discrimination and bias in automated decision-making systems.
The agency has separately been working on enforcement in the health sector, applying the unfairness prong of its authority to cases where the Commission has found that a company’s privacy practices caused substantial injury to consumers that was not outweighed by countervailing benefits. This focus resulted in major settlements against health companies, including GoodRx and BetterHelp, whose combined fines neared $10 million. In July, the FTC published a blog post summarizing lessons from its recent enforcement actions in the health sector, underscoring that “health privacy is a top priority” for the agency.
Lesson 4: Responsibility is the name of the game
Responsible use has been the key concept for policymakers looking to be proactive in establishing positive norms for the use of AI in the healthcare arena. In 2022, the White House Office of Science and Technology Policy (OSTP) published the Blueprint for an AI Bill of Rights (“Blueprint”) to support the development of policies and practices that protect and promote civil rights in the development, deployment, and governance of automated systems. In highlighting AI in the health sector, the Blueprint aims to position federal agencies and offices as responsible stewards of AI use for the nation. In 2023, the OSTP also updated the National AI Research and Development (R&D) Plan to advance the deployment of responsible AI, which is likely to influence health research. The Plan is intended to facilitate the study and development of AI while also maintaining privacy and security and preventing inequity.
Expanding on the Blueprint, on October 30, 2023, the Biden Administration released its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (“EO”). The EO aims to establish new standards for the responsible use, development, and procurement of AI systems across the federal government. Among other directives, the EO directs the Secretary of HHS to establish an “HHS AI Taskforce” in order to create a strategic plan for the responsible use and deployment of AI in the healthcare context. The EO specifies that this strategic plan must establish principles to guide the use of AI as part of the delivery of healthcare, assess the safety and performance of AI systems in the healthcare context, and integrate equity principles and privacy, security and safety standards into the development of healthcare AI systems.
The EO also directs the HHS Secretary to create an AI Safety program to centrally track, catalog, and analyze clinical errors produced by the use of AI in healthcare environments; create and circulate informal guidance to advise on how to prevent these harms from recurring; and develop a strategy for regulating the use of AI and AI-enabled tools in drug development. The Fact Sheet circulated prior to the release of the EO emphasizes that “irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing” and discusses expanded grants for AI research in “vital areas,” including healthcare.
On November 1, 2023, the Office of Management and Budget (“OMB”) released for public comment a draft policy on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” intended to help implement the AI EO. The OMB guidance, which would govern federal agencies as well as their contractors, would create special requirements for what it deems “rights-impacting” AI, a designation that would encompass AI that “control[s] or meaningfully influence[s]” the outcomes of health and health insurance-related decision-making. These requirements include AI impact assessments, testing against real-world conditions, independent evaluation, ongoing monitoring, human training, “human in the loop” decision-making, and notice and documentation.
Finally, the National Institute of Standards and Technology (“NIST”) also focused on responsible AI in 2023 with the release of the Artificial Intelligence Risk Management Framework (“AI RMF”). The AI RMF is meant to serve as a “resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.” The AI RMF provides concrete examples on how to frame risks in various contexts, such as potential harm to people, organizations, or an ecosystem. In addition, prior NIST risk management frameworks have provided the basis for legislative and regulatory models, meaning it may have increased importance for regulated entities in the future.
Lesson 5: Focus and keep eyes on the road ahead
AI regulation is a moving target with significant developments expected in the coming years. For instance, OSTP’s Blueprint for an AI Bill of Rights has already been used to inform state policymakers, with legislators both highlighting and incorporating its requirements into legislative proposals. The Blueprint’s five outlined principles aim to: (i) ensure safety and effectiveness; (ii) safeguard against discrimination; (iii) uphold data privacy; (iv) provide notice and explanation; and (v) enable human review or control. These principles are likely to continue to appear and to inform future health-related AI legislation.
In 2022, the FDA’s Center for Devices and Radiological Health (CDRH) released “Clinical Decision Support Software Guidance for Industry and Food and Drug Administration Staff,” which recommends that certain AI tools be regulated by the FDA under its authority to oversee clinical decision support software. Elsewhere, the FDA has noted that its traditional pathways for medical device regulations were not designed to be applied to AI and that the agency is looking to update its current processes. In 2021, CDRH issued a draft “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan”, which introduces a framework to manage risks to patients in a controlled manner. The Action Plan includes specific instruction on data management, including a commitment to transparency on how AI technologies interact with people, ongoing performance monitoring, and updates to the FDA on any changes made to the software as a medical device. Manufacturers of medical devices can expect the FDA to play a vital role in the regulation of AI in certain medical devices and drug discovery.
The legislative and regulatory environment governing AI in the U.S. is actively evolving, with the regulation of the healthcare industry emerging as a key priority for regulators across the federal government. Although the implementation and development of AI into healthcare activities may provide significant benefits, organizations must recognize and mitigate privacy, discrimination, and other risks associated with its use. AI developers are themselves calling for the regulation of AI to reduce existential risks and prevent significant global harm, which may help create clearer standards and expectations for developers and deployers navigating the resources coming from federal agencies. By prioritizing the development and deployment of safe and trustworthy AI systems, as well as following federal guidance and standards for privacy and security, the healthcare industry can harness the power of AI to ethically and responsibly improve patient care, outcomes, and overall well-being.