Artificial Intelligence (AI) in Health Care

December 30, 2024 (R48319)

Summary

The use of artificial intelligence (AI) in health care settings has grown over time in part due to recent increases in health care data availability and innovations in big data analytical methods. AI encompasses multiple technologies and is variably defined, with some disagreement at times among stakeholders about which technologies even qualify as AI. Machine learning (ML) techniques, including deep learning and neural networks, underpin many AI tools used in health care. Some of the more widely used AI techniques and applications in health care include natural language processing, which refers to the use of rule-based or ML approaches to understanding the structure and meaning of human language; rule-based expert systems; physical robots; and robotic process automation. These technologies may affect a wide range of health care stakeholders, including AI developers, researchers, health systems, payers, patients, and federal agencies.

AI technologies have generally become more complex over time, progressing from rule-based expert systems to generative AI models. Within health care, the use of AI broadly falls within three categories: diagnosis and treatment, patient engagement and adherence with treatment plans, and administrative applications. While AI technologies have the potential to improve health care, they may also introduce novel challenges and exacerbate existing ones if not properly overseen.

There have been a variety of federal efforts undertaken to address the use of AI in health care, including by executive orders and agency actions. Agencies within the U.S. Department of Health and Human Services (HHS) that have been active in developing relevant regulations include the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (ASTP/ONC), the U.S. Food and Drug Administration (FDA), the Office for Civil Rights (OCR), and the Centers for Medicare & Medicaid Services (CMS).

These regulatory efforts in part attempt to address challenges that may arise when AI is used in health care settings, including issues related to trust in AI technology, data access, bias, lack of transparency, privacy, scaling and integration, liability, regulatory harmonization, and environmental impact. AI is a rapidly developing and changing technology; if AI is to be used safely and effectively in health care settings, ongoing oversight and regulatory efforts may need to be addressed by Congress and relevant federal regulatory bodies. This report highlights the challenges and questions that Congress may wish to consider.


Introduction

The application of artificial intelligence (AI) in health care settings has become more pervasive due to recent increases in the availability of health care data and innovations in big data analytical methods. When properly trained on relevant clinical data, AI technologies may identify pertinent information from voluminous data, potentially allowing for the rapid improvement of clinical decisionmaking. Certain AI techniques and technologies may also have the ability to "learn" and self-correct using new data, ultimately improving algorithmic accuracy in response to feedback. The use of properly trained AI tools in health care could provide many benefits, including reducing medical errors, improving diagnostics, and streamlining administrative functions. For these benefits to be realized, AI technologies must be trained on data representative of the populations and tasks for which the AI tool is intended. Training data may derive from a variety of clinical practices, including screenings, diagnoses, and treatment plans. Types of training data in these settings include patient demographics, medical notes, electronic health records (EHRs), medical device readings, physical examinations, clinical labs, and images, among others.1 As these AI technologies and their applications in health care proliferate, stakeholders have begun to pursue regulatory efforts to ensure these technologies are appropriately integrated into the health care setting.

Though AI is often discussed as if it were a single technology, AI is actually an umbrella term encompassing a variety of technologies and techniques. AI has been defined in various ways, without consensus on a single definition, in part due to its rapidly changing nature. Multiple definitions for AI are provided in the U.S. Code. For example, in 15 U.S.C. Section 9401(3), AI is defined as:

a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—(A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.2

In a Senate Committee on Health, Education, Labor & Pensions (HELP Committee) white paper entitled Exploring Congress' Framework for the Future of AI: The Oversight and Legislative Role of Congress Over the Integration of Artificial Intelligence in Health, Education, and Labor, AI is defined as "computers, or computer-powered machines, exhibiting human-like intelligent capabilities."3 AI has also been broadly described as "computerized systems that work and react in ways commonly thought to require intelligence."4

Commonly referenced techniques to develop AI may include machine learning (ML), deep learning, supervised learning, and reinforcement learning, among others.5

The AI development life cycle in a health care setting may involve a variety of stakeholders, including AI developers, researchers, health systems, payers, patients, and federal agencies, among others.6 A number of federal agencies within the U.S. Department of Health and Human Services (HHS) have been involved in the regulation and use of AI, including the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (ASTP/ONC)7 and the Centers for Disease Control and Prevention (CDC).8 CDC, for example, is exploring how AI technologies may be used to better estimate public health sentinel events, including suicide deaths.9 In December 2023, the Biden Administration announced the voluntary commitment of 28 health care companies, including providers and payers, to the responsible purchasing and use of AI in the health care setting.10 Companies that committed included Boston Children's Hospital, CVS Health, and Mass General Brigham, among others.11 Additionally, several emerging private sector organizations are seeking to develop oversight measures and harmonized standards for AI use in health care, often in tandem with the public sector.12

This report provides a broad overview of the progress of AI technologies and selected landmarks in health care; discusses different AI techniques that are most often used in health care settings; explores primary health care tasks to which AI technologies are frequently applied; and provides an overview of selected federal efforts to oversee AI development, deployment, and monitoring in health care settings. Finally, this report provides Congress with legislative considerations regarding the use of AI technologies in health care. This report does not provide definitions for specific AI technologies and techniques.13

AI Evolution Broadly

While there has recently been an increased public focus on AI—particularly in response to the advances in, and widespread availability of, generative AI tools—AI has existed in some form since at least the 1950s.14 The progression of AI technologies may roughly be divided into multiple eras, though the exact chronology of AI development is the subject of much debate amongst experts and stakeholders.

One of the earliest recognized AI programs, written by Christopher Strachey in 1951, was designed to play the game of checkers.15 In 1956, John McCarthy and several others introduced the term "artificial intelligence" at a Dartmouth College conference, marking what many consider to be the beginning of modern AI research.16 From the outset, health care was considered a promising area for AI application, and multiple clinical decision support (CDS) tools have been explored and developed since.17 For selected examples of these tools and others in health care, see Figure 1.

During the 1960s and 1970s, AI research focused primarily on rule-based expert systems. These models were limited by the computing power and data available at the time.18 During the 1980s and 1990s, the focus shifted to the development of ML techniques, and neural network architectures in particular, producing machines able to "learn." An example of an AI technology from this period is IBM's Deep Blue, known in part for its defeat of world chess champion Garry Kasparov. In the 2000s, advances in AI techniques, such as natural language processing, enabled the creation of personal assistants like Amazon's Alexa.19 In the last decade, deep learning techniques have also greatly improved when applied to tasks like image classification.20

Early AI technologies like rule-based expert systems relied upon medical information supplied by, and rules curated by, medical experts. Advances in ML have since allowed for more sophisticated models capable of recognizing patterns in vast data sets, patterns that the sheer scale of the data may obscure from human analysts.21 The modern rapid advances in AI, and increased public scrutiny of such technologies, can be attributed to several intersecting factors, including increased availability of large-scale annotated clinical data sets, progress in ML techniques, the availability of open source ML packages, and increases in computing power and data storage, among others.22

Figure 1. Selected AI Developments Timeline

Source: Figure adapted by CRS from various sources. See Shuroug A. Alowais et al., "Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice," BMC Medical Education, vol. 23, no. 689 (2023), p. 2; Kun-Hsing Yu, Andrew L. Beam, and Isaac S. Kohane, "Artificial Intelligence in Healthcare," Nature Biomedical Engineering, vol. 2 (October 2018), pp. 719-720; Thomas Davenport and Ravi Kalakota, "The Potential for Artificial Intelligence in Healthcare," Future Healthcare Journal, vol. 6, no. 2 (June 2019), pp. 94-95, https://pmc.ncbi.nlm.nih.gov/articles/PMC6616181/pdf/futurehealth-6-2-94.pdf; Vivek Kaul, Sarah Enslin, and Seth A. Gross, "History of Artificial Intelligence in Medicine," Gastrointestinal Endoscopy, vol. 92, no. 4 (2020), pp. 807-809; Rahim Hirani et al., "Artificial Intelligence and Healthcare: A Journey Through History, Present Innovations, and Future Possibilities," Life, vol. 14, no. 5 (2024), pp. 2-3; OpenAI, "Introducing ChatGPT," November 30, 2022, https://openai.com/index/chatgpt/.

Overview of AI in Health Care

A variety of AI technologies may be used in health care settings. These AI technologies can be combined to perform more specialized tasks to support health care actors, often in the provision of care to patients.23 The tasks to which AI technologies are applied in health care fall broadly into three categories: diagnosis and treatment, patient engagement and adherence with treatment plans, and administrative functions.

Techniques and Applications of AI Relevant to Health Care

A variety of AI techniques and applications may be used in health care settings. As is true in other sectors, ML techniques, including deep learning and neural networks, underpin many AI tools used in health care. Some of the more widely used AI techniques and applications in health care include natural language processing, which refers to the use of rule-based or ML approaches to understanding the structure and meaning of human language; rule-based expert systems; physical robots; and robotic process automation.

ML refers to a subset of AI techniques that rely on algorithms and large datasets to train computing devices to sort, identify, and predict information. One of the goals of ML is to create a model capable of interpreting previously unencountered data based on the data it was trained on.24 In health care, ML has wide applicability, and may, for example, be used in precision medicine to predict which treatment regimens will be most successful for individual patients.25
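
To illustrate this basic ML workflow, the following minimal Python sketch (using the open source scikit-learn library) trains a simple classifier and then applies it to held-out data it has never encountered. The patient features and treatment-response labels are synthetic stand-ins invented for illustration; this is a toy sketch, not a clinical model.

```python
# Minimal sketch of the ML workflow: train on labeled data, then predict on
# previously unencountered data. All data here are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical patient features: e.g., age, a baseline lab value, comorbidity count.
X = rng.normal(size=(500, 3))
# Hypothetical label: whether a treatment regimen succeeded (1) or not (0).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Hold out data the model never sees during training to estimate generalization.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# The trained model interprets data it was not trained on.
print("held-out accuracy:", model.score(X_test, y_test))
```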

Neural networks are a complex form of ML meant to imitate the way layers of neurons in the human brain process information.26 In health care, neural networks are commonly used to interpret medical imaging, and may, for example, be used to detect the presence of diabetic retinopathy in images of the human eye.27

Deep learning involves neural networks with many layers of artificial neurons, or nodes, leveraged to better predict outcomes.28 In the health care context, deep learning may be used, for example, to diagnose cancerous lesions in radiological imaging.29
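
As a rough illustration of what "many layers" means in practice, the sketch below (again in Python with scikit-learn, on synthetic data standing in for image-derived features) stacks two hidden layers of artificial neurons. Production diagnostic models are far larger and, for imaging tasks, typically use convolutional architectures.

```python
# Minimal sketch of a multi-layer (deep) neural network classifier on
# synthetic data; the "lesion present" label is invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 64))              # e.g., 64 image-derived features
y = (X[:, :8].sum(axis=1) > 0).astype(int)   # synthetic binary label

# Two hidden layers of artificial neurons (32 nodes, then 16 nodes).
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1)
net.fit(X, y)
print("training accuracy:", net.score(X, y))
```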

Generative AI

In recent years, AI technologies have continued to rapidly develop and emerge, including advances in generative AI models.30 Generative AI, which can create novel content, has garnered particular attention, in part due to widespread adoption of tools like ChatGPT, a publicly available generative AI model trained on publicly available data.31 There have been news reports of generative AI tools like ChatGPT providing incorrect answers to medical questions. For example, one news article reported that ChatGPT had provided incorrect information regarding various pharmaceuticals.32 While generative AI may be a powerful tool, its accuracy and usefulness in health care settings are tied to the breadth and quality of the underlying training data. Thus, generative AI tools trained solely on public data scraped from the internet may not be able to answer certain complex questions, including medical queries that rely upon specialized knowledge and training. Furthermore, generative AI models may "hallucinate," or provide incorrect information.33 These challenges underscore the importance of ensuring that generative AI technologies are fit for the purpose to which they are ultimately applied, which includes users understanding the caveats and limitations of each tool.34

Natural language processing attempts to parse human language and may involve speech recognition, text analysis, and translation, among other applications.35 In health care, natural language processing may be used, for example, to organize and analyze documentation like clinical notes.36
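
As a small example of the rule-based end of natural language processing, the following Python sketch extracts medication names and doses from an invented clinical note using a hand-written pattern. The note text and drug list are hypothetical; real clinical NLP systems handle far more linguistic variation, often with ML models rather than fixed rules.

```python
# Minimal rule-based NLP sketch: pull medication mentions and doses out of
# free-text clinical documentation. All strings here are invented examples.
import re

note = "Pt started on metformin 500 mg BID; continue lisinopril 10 mg daily."

known_drugs = ["metformin", "lisinopril", "atorvastatin"]
pattern = re.compile(
    r"\b(" + "|".join(known_drugs) + r")\s+(\d+)\s*mg\b", re.IGNORECASE
)

for drug, dose in pattern.findall(note):
    print(f"medication: {drug.lower()}, dose: {dose} mg")
# -> medication: metformin, dose: 500 mg
#    medication: lisinopril, dose: 10 mg
```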

Rule-based expert systems essentially implement a series of "if-then" rules leading to a decision outcome. To create a rule-based expert system, a human expert is necessary to construct the rules and their relationships with one another.37 In health care, rule-based expert systems are commonly integrated into EHRs and used to support health care providers in making clinical decisions like a patient's diagnosis.38
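
The sketch below illustrates the "if-then" structure of such a system in Python. The rules and thresholds are invented for illustration and are not clinical guidance; in a real expert system, a human expert curates the rules and their priority ordering.

```python
# Minimal sketch of a rule-based expert system: expert-curated "if-then"
# rules evaluated against patient findings, in priority order.
def sepsis_rule(f):
    return "sepsis alert" if f["fever"] and f["heart_rate"] > 120 else None

def flu_rule(f):
    return "possible influenza" if f["fever"] and f["cough"] else None

RULES = [sepsis_rule, flu_rule]  # ordered by priority, as the expert specified

def evaluate(findings):
    # Fire the first rule whose conditions are met.
    for rule in RULES:
        conclusion = rule(findings)
        if conclusion:
            return conclusion
    return "no rule fired"

print(evaluate({"fever": True, "cough": True, "heart_rate": 95}))
# -> possible influenza
```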

AI can also be integrated into physical robots, which perform set tasks like lifting objects or delivering supplies. In health care, surgical robots are sometimes used to enhance physician capabilities. By contrast, robotic process automation refers to the performance of set digital tasks by computer programs, often for administrative purposes. In health care, robotic process automation may be used for tasks like billing.39

While these AI technologies may be discrete, they are also increasingly used in combination with one another. For example, robotic process automation may leverage image recognition technologies to extract data from faxes to enter into medical transaction systems.40

AI Applications in Health Care

As stated earlier, the use of AI in health care broadly falls into three categories: diagnosis and treatment, patient engagement and adherence with treatment plans, and administrative functions.

For patient diagnosis and treatment purposes, the earliest AI technologies used were primarily rule-based expert systems. Since the 1970s, there has been a shift from rule-based expert systems to more complex AI tools. AI technologies are often used to analyze medical imaging, like radiological or retinal images.41 Prediction models that detect and warn of high-risk conditions, like sepsis and heart failure, have also been developed.

AI technologies used for treatment and diagnosis purposes may also be used to support precision medicine.42 Precision medicine, also known as personalized treatment, is a developing medical approach tailored to specific patients using factors like genetics, environment, and lifestyle. Possible advantages of precision medicine include improved patient outcomes due to increased treatment effectiveness, efficiency, and safety. AI may potentially be useful in precision medicine because it can analyze large and complex datasets, predict outcomes, and optimize treatment plans. ML in particular may strengthen this process. For such treatments to be effective, these technologies need access to individuals' genotypic data, in part to ensure treatment decisions like medication selection and dosages are appropriate for each individual patient.43

AI technologies can also be used to monitor and encourage patient engagement and adherence with medical treatment plans. AI-enabled systems, including those capable of sending alerts and targeted content to patients, could promote improved patient adherence to treatment plans. Wearable devices or patient EHRs may be used to collect data for such applications. Information from these sources can be shared with a patient's physician, who may in turn provide tailored recommendations to the patient (e.g., precision medicine).44

AI technologies are also increasingly used in administration to increase efficiency in claims processing, clinical documentation, revenue cycle management, and medical records management. Additionally, AI-based chatbots can be used in telehealth services.45

Selected Federal Actions

Many federal agencies are developing internal protocols for AI use; some are also developing policies shaping public AI use within their areas of authority.

Executive Order (E.O.) 14110

On October 30, 2023, the Biden Administration released E.O. 14110, entitled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence."46 This E.O. includes direction for various federal agencies to develop, facilitate the development of, or promote standards for AI safety and security.47 Further, according to the White House, the E.O. "protects Americans' privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, [and] advances American leadership around the world."48 It specifically tasks the HHS Secretary (Secretary) with multiple initiatives, including the establishment of an HHS AI Task Force, the development of a quality strategy, the advancement of nondiscrimination protections, the creation of an AI safety program, and the development of a strategy for regulating the use of AI and AI-enabled tools in drug development processes.49

Agency Actions

Multiple HHS offices and operating divisions, including the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (ASTP/ONC), the U.S. Food and Drug Administration (FDA), the Office for Civil Rights (OCR), and the Centers for Medicare & Medicaid Services (CMS), have pursued regulatory actions regarding AI. These agency efforts are in early stages and are somewhat fragmented, though there is increasingly greater focus on unifying regulatory approaches across HHS.50

Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology

ASTP/ONC, a staff division within the Office of the Secretary, has pursued numerous AI initiatives. ASTP/ONC is generally responsible for advancing the development and use of health information technology (HIT) and establishing data sharing expectations.51 Additionally, in July 2024, HHS underwent a reorganization, during which ASTP/ONC was tasked with "oversight over technology, data, and AI policy and strategy."52 ASTP/ONC carries out its mission in part through its ONC Health IT Certification Program (Certification Program), under which HIT developers may voluntarily pursue ASTP/ONC certification of their HIT modules, including EHRs.53 Certification of an HIT module under the program indicates that the module meets baseline standardized criteria established by ASTP/ONC.54

One of the challenges ASTP/ONC seeks to address related to AI is how to effectively capitalize on the potential of AI, ML, and related algorithmic technologies, while avoiding risks related to the use of invalid, inappropriate, unfair, or unsafe predictions.55 ASTP/ONC notes that algorithms have been used in health care for decades in the form of rule-based decision support interventions (DSIs).56

In January 2024, ASTP/ONC published a final rule in the Federal Register entitled "Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing" (HTI-1).57 In part, HTI-1 focuses on increasing algorithmic transparency through a revised Certification Program criterion for HIT modules that contain DSIs.58 Under HTI-1, DSIs encompass both evidence-based and predictive algorithms.59 For the purposes of ASTP/ONC's final rule, evidence-based DSIs are "limited to only those DSIs that are actively presented to users in clinical workflow to enhance, inform, or influence decision-making related to the care a patient receives and that do not meet the definition for [p]redictive DSI."60 In turn, a predictive DSI is defined as a "technology that supports decision-making based on algorithms or models that derive relationships from training data and then produce an output that results in prediction, classification, recommendation, evaluation, or analysis."61

HTI-1 generally stipulates that certified HIT modules containing evidence-based or predictive DSIs must disclose to selected user parties particular source attributes, or categories of technical performance and quality information.62 The required source attributes for disclosure differ for evidence-based versus predictive DSIs.63 The required disclosure is broadly intended to furnish selected users with bibliographic data for specified DSIs and information relevant to health equity, among other things, so that users may make more informed decisions when choosing and deploying DSIs in clinical settings.64 HTI-1 also requires that developers keep source attributes up-to-date65 and that developers implement risk management practices for predictive DSIs in particular.66 Implementation of the portions of this rule described above is to take effect at the start of 2025.67

Part of the voluntary commitment to the responsible purchasing and use of AI in health care, discussed previously and agreed to by the 28 health care companies, draws upon HTI-1.68 Specifically, the commitment stipulates that participants should "work with … peers and partners to ensure outcomes are aligned with fair, appropriate, valid, effective, and safe (FAVES) AI principles."69 These FAVES AI principles are referenced in HTI-1 and are defined in the commitment statement as follows:

  • Fair: "Outcomes of model do not exhibit prejudice or favoritism toward an individual or group based on their inherent or acquired characteristics."
  • Appropriate: "Model and process outputs are well matched to produce results appropriate for specific contexts and populations to which they are applied."
  • Valid: "Model and process outputs have been shown to estimate targeted values accurately and as expected in both internal and external data."
  • Effective: "Outcomes of model have demonstrated benefit in real-world conditions."
  • Safe: "Outcomes of model are free from any known unacceptable risks and for which the probable benefits outweigh any probable risk."70

ASTP/ONC states that the finalized DSI certification criterion and accompanying transparency requirements in HTI-1 align with E.O. 14110 and will "improve transparency, promote trustworthiness, and incentivize the development and wider use of fair, appropriate, valid, effective, and safe [p]redictive DSIs to aid decision-making in healthcare."71 ASTP/ONC further asserts that the "resulting information transparency increases public trust and confidence in these technologies so that the benefits of [them] may expand in safer, more appropriate, and more equitable ways."72

U.S. Food and Drug Administration

FDA regulates the safety and effectiveness of medical devices, which encompass "device software functions," including those that are AI/ML-enabled. The agency only has authority over certain software functions; specifically, those that meet the definition of "device" in the Federal Food, Drug, and Cosmetic Act (FFDCA).73 The 21st Century Cures Act (Cures Act, P.L. 114-255), enacted in 2016, further clarified the scope of software functions that meet the definition of device. Specifically, the Cures Act excluded from the definition of device those software functions intended: (1) "for administrative support of a health care facility;" (2) "for maintaining or encouraging a healthy lifestyle … unrelated to the diagnosis, cure, mitigation, prevention, or treatment of a disease;" (3) "to serve as electronic patient records;" and (4) "for transferring, storing, converting formats, or displaying" certain data and results.74 The law also excluded CDS software that is for the purpose of displaying certain medical information about a patient (e.g., clinical practice guidelines); providing recommendations to a health care professional about prevention, diagnosis, or treatment of a disease; and enabling that professional to independently review the basis for the recommendations.75 Determining whether a software function meets the definition of a device, and is thus regulated by FDA, can be a challenge for the developer. FDA has published several guidance documents addressing enforcement discretion policies for various types of device software functions (e.g., general wellness applications), generally based on the perceived risk of the product.76

SaMD

SaMD is defined as "software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device."

FDA, "Software as a Medical Device (SaMD)," https://www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd.

Device software functions regulated by FDA have proliferated in recent years, and include both software as a medical device (SaMD) and software integral to a traditional hardware device (or software in a medical device, SiMD).77 AI/ML-enabled devices are more often SaMD, and FDA regulates SaMD as it does other devices. That is, these devices are subject to the same regulatory controls as other devices in the same risk class. This type of device is most often Class II—moderate risk—and therefore is subject to premarket review requirements, generally through a 510(k) notification, but also through a De Novo request if there is no predicate device available. In certain cases, AI/ML-enabled devices may be Class III, and would be subject to premarket approval (PMA) prior to marketing.78 All devices are subject to general controls, including adverse event reporting, establishment registration and device listing, and the Quality System regulation (current good manufacturing practices), among others.79

FDA maintains a list of AI/ML-enabled devices, based on publicly available information, that have received marketing authorization. As of August 2024, the agency reports that approximately 950 such devices have been authorized, cleared, or approved for marketing, with the majority in the field of radiology.80 Most of these devices were cleared for marketing through the 510(k) premarket notification pathway, which requires a determination that the device is "substantially equivalent" to a legally marketed predicate device.

FDA has increasingly engaged on the issue of regulation of SaMD in recent years, and specifically AI/ML-enabled SaMD, publishing a proposed regulatory framework for AI/ML-enabled SaMD in 2019 and an action plan for AI/ML-enabled SaMD in 2021.81 In addition, the agency undertook a pilot program, the Software Precertification Pilot Program, which ran from 2017 until 2022.82 A key regulatory challenge in this area is oversight of a product that is not static, but rather changing over time, and how to predict, control, and mitigate that change over time, while preserving the safety and effectiveness of the device. To help address this issue, FDA introduced the concept of a "predetermined change control plan" (PCCP) in its 2019 proposed regulatory framework.83 In late 2022, Congress passed the Food and Drug Omnibus Reform Act of 2022 (FDORA, P.L. 117-328, Division FF, Title III), which provided the agency with new statutory authority to approve a PCCP as part of a PMA application or a 510(k) notification.84 FDA published draft guidance with more information about this mechanism and its appropriate inclusion in premarket applications in April 2023.85

As issues with AI and ML span medical products and centers across FDA, in March 2024, the agency published "Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together."86 This publication "outlines the agency's commitment and cross-center collaboration to protect public health while fostering responsible and ethical medical product innovation through Artificial Intelligence."87

Office for Civil Rights

OCR administers various antidiscrimination statutes that apply to health care settings through actions including promulgating regulations and conducting administrative enforcement.88 One such statute is Section 1557 of the Patient Protection and Affordable Care Act (ACA; P.L. 111-148), which prohibits discrimination on the basis of race, color, national origin, sex, disability, and age by certain health-related entities.89 In May 2024, OCR issued new regulations implementing Section 1557.90 Those regulations expressly clarify that Section 1557 bars covered entities from discriminating through the use of AI in patient care decisions.91 The rule reflects E.O. 14110's objective that AI not "deepen[] discrimination and bias" and its concern that "Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities."92

The Section 1557 rule may not apply to every use of AI in the patient-care context or to every entity that develops a health care-related AI. Section 1557 and its implementing rules apply to federally funded "health program[s] or activit[ies]," as well as to programs or activities "administered by an Executive Agency or any entity established under this title [Title I of the ACA]."93 The statute does not define the phrase "health program or activity." HHS and courts generally agree that the statute covers many federal health programs HHS administers, as well as health plans sold through federal and state exchanges under the ACA's Title I.94 There is also general agreement that the phrase includes providers such as hospitals, nursing homes, and physician practices.95 The statute and its implementing rules may not apply to many software developers, and they do not apply to entities that do not receive federal funding.

The Section 1557 rule prohibits discrimination "through the use of patient care decision support tools,"96 which are defined as "any automated or non-automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities."97 OCR crafted the rule to account for providers' "widespread use of automated decision systems and AI, and the scale by which AI can influence covered entities' clinical decision-making."98 The rule does not apply to AI use outside of "clinical decision-making affecting patient care."99 For example, it does not apply to tools used for scheduling or billing.100 OCR requested additional comment on whether "decision support tools" used in nonclinical settings warrant further rulemaking.101 OCR's definition of "patient care decision support tools" overlaps somewhat with the definitions for predictive DSIs and evidence-based DSIs in ASTP/ONC's HTI-1 final rule, although the Section 1557 regulation applies to health care providers using AI tools (among other things), while ASTP/ONC's regulation focuses on developers.102

To combat discrimination through the use of patient care decision support tools, the Section 1557 rule specifies that "a covered entity has an ongoing duty to make reasonable efforts to identify" any such tools it uses that "measure race, color, national origin, sex, age, or disability."103 A covered entity must "make reasonable efforts to mitigate the risk of discrimination" from tools using such inputs.104 In its responses to public comments on the proposed rule, OCR "acknowledge[d] that it is not always possible to completely eliminate the risk of discriminatory bias in patient care decision support tools."105 One commenter observed in evaluating the proposed rule that "[b]ecause the inner workings of an algorithm can be difficult to fully understand," it "puts the burden on clinicians to be well-versed enough in data and computer science to evaluate, oversee, and correct for potential biases in algorithms."106

Centers for Medicare & Medicaid Services

CMS administers the Medicare program, including Medicare Advantage (MA, or Medicare Part C). MA is an alternative to original Medicare under which eligible beneficiaries may receive their covered services through health care companies under contract with CMS. As referenced in the HHS factsheet, "Biden-Harris Administration Announces Voluntary Commitments from Leading Healthcare Companies to Harness the Potential and Manage the Risks Posed by AI,"107 CMS recently issued a final rule regarding how MA plans may use prior authorizations, which also discusses the limits of using AI in prior authorizations and other coverage decisions.108 This subsection briefly describes Medicare, MA, coverage requirements, payments to MA plans, prior authorizations, and the recent rule.

Background on Medicare, MA, and Coverage Requirements

Medicare is a federal program that pays for covered health care services109 of qualified beneficiaries.110 It consists of four parts, A through D. Parts A and B cover a host of services including hospital care, skilled nursing care, and physician services. Part D covers outpatient prescription drugs.111 Part C112 is an alternative to Parts A and B that covers substantially all required Parts A and B services.113 MA plans may also include an integrated Part D benefit.

Medicare Parts A and B are referred to as "original Medicare" or "fee for service (FFS)" Medicare because they pay providers for each item, service, or spell of illness that is treated.

Medicare covers a broad range of medical items and services, but there are limits. In general, Medicare pays for items or services that (a) fit into a Medicare benefit category (such as inpatient hospital care, or physician services),114 (b) are not statutorily excluded,115 and (c) are reasonable and necessary.116 The HHS Secretary, and contractors referred to as Medicare Administrative Contractors (MACs), have discretion to determine what specific items and services are "reasonable and necessary" and under what clinical conditions they will be covered. These policies are referred to as National Coverage Determinations (NCDs) when created by the HHS Secretary, or Local Coverage Determinations (LCDs) when created by the MACs. NCDs and LCDs are developed through processes that include examination of peer-reviewed medical research, consensus statements from medical societies, and input from the public.117 In addition, MACs can make coverage determinations on a case-by-case basis considering the clinical circumstances of individual beneficiaries.118

In general, the coverage determinations that apply to FFS Medicare also apply to MA plans.119 MA plans have long been subject to sub-regulatory guidance that requires them to make coverage determinations that are no more restrictive than the NCDs and LCDs under original Medicare.120 However, recent HHS Office of Inspector General (OIG) research found that MA enrollees were being denied coverage of items and services that would have been covered had the beneficiary been enrolled in original Medicare.121 Based partly on these findings, CMS engaged in rulemaking to clarify the requirement that items and services covered under original Medicare must also be covered by MA plans.122 The next section describes MA payments and the incentives to deliver care efficiently.

MA Payments and Prior Authorizations for Care

Rather than being given the fee for service payments used in original Medicare, MA plans receive a capitated payment regardless of the number of covered items or services an enrollee uses.123 As such, MA plans are at risk if total expenditures exceed payments; within limits, they may also retain payments that exceed health care expenditures.124 Partly due to the capitated payments, MA plans have a financial incentive to provide covered services efficiently.

To achieve this, MA plans employ various utilization management (UM) techniques, such as prior authorization, to ensure that items and services an enrollee uses and the plan covers are reasonable and necessary. Prior authorizations125 can reduce unnecessary or potentially harmful care, improper payments, and overall expenditures; they may also be burdensome and costly to providers, and delay or discourage medically necessary care.126 It is unclear how many MA plans use AI as part of the prior authorization process (or as part of another UM technique) to, for example, expedite approval or disapproval of prior authorizations, but the use of AI in MA prior authorizations was discussed in recent rulemaking.127

MA Rulemaking Including Comments on AI

In a 2023 final rule, CMS established guardrails "to ensure that utilization management tools are used, and associated coverage decisions are made, in ways that ensure timely and appropriate access to medically necessary care for beneficiaries enrolled in MA plans."128

The rule codified the sub-regulatory requirement mentioned above that MA plans must base coverage determinations on the Medicare statute, regulations, NCDs, and LCDs.129 It also outlined circumstances and processes under which MA plans may establish additional coverage criteria when coverage criteria from the required sources are not fully established.130 Starting in contract year 2024, MA plans must establish Utilization Management Committees with specified composition.131 These committees are required to review all UM policies at least annually.132 Prior authorizations may only be used to confirm the presence of a diagnosis or other medical criteria that are the basis for the coverage determination.133 MA organizations must make medical necessity determinations based on, among other things, the enrollee's medical history, physician recommendations, and medical notes.134 Finally, if an MA plan intends to issue a partially or fully adverse determination, the coverage determination must be reviewed by a physician or other medical professional with expertise in the relevant field of medicine.135

The rule does not refer to "AI" or "Artificial Intelligence" directly, but in response to some comments during the rulemaking process, CMS mentioned two digital tools—InterQual and MCG systems—both of which use AI.136 Some commenters requested that CMS prohibit the use of certain tools, such as InterQual or MCG systems, while others advocated for continued use of such tools to guide medical necessity determinations.137 CMS clarified that using these tools in isolation is prohibited, as it would violate other regulatory requirements, such as the requirement that if an organization expects to issue a partially or fully adverse determination, the decision must be reviewed by a physician or other appropriate health care professional with expertise in the relevant field of medicine or health care.138 CMS acknowledged that these products, along with others, are designed to assist in the coverage determination process by consolidating clinical data, medical literature, and CMS policies. However, use of the tools does not absolve an MA plan from adhering to other regulatory requirements. For example, if the MA plan establishes additional internal coverage criteria, those criteria must be based on publicly available evidence in widely used clinical guidelines or medical literature, and the plan must disclose the factors considered in formulating the coverage policy.139 Furthermore, as noted above, if an MA plan intends to issue a partially or fully adverse determination of coverage, the determination must be reviewed by a physician or other medical professional with relevant expertise.140 In short, MA plans may use algorithms, software, or AI systems to assist in coverage determinations,141 "[h]owever, use of these tools, in isolation, without compliance with requirements in this final rule…is prohibited."142

As AI becomes more integrated into different aspects of health care, policies that do not mention AI directly may nonetheless affect its use.

Selected Issues for Consideration

As AI technology use expands in the health care setting, certain existing challenges may be exacerbated, while novel ones may emerge.143

As a foundational issue, trust is required for the effective application of AI technologies. In the clinical health care context, this may involve how patients perceive AI technologies. In a 2023 survey of the U.S. public conducted by the Pew Research Center, 60% of respondents said they would be uncomfortable if their care provider relied on AI when making decisions like diagnoses and treatment recommendations. People's perceptions of AI use in health care may be mediated by factors like gender, race/ethnicity, age, education level, and familiarity with AI.144

Additional challenges with AI use in health care may be associated with issues of (1) data access, (2) bias, (3) lack of transparency, (4) privacy, (5) scaling and integration, and (6) uncertainty over liability.145

Though health data have proliferated in recent years, it may be difficult for developers to access the large volumes of high-quality data needed to create effective health care AI tools.146 High-quality data may be difficult to consolidate, since medical data are often fragmented across sources and stored in differing formats. Additionally, there may be disagreement over who owns, or should own, the medical data used to create these AI tools.147

In turn, the data used to develop AI tools may be limited or biased; this can reduce the safety of such AI tools and may make them less effective for different patient populations, resulting in disparities in access to health care services, quality of care, and health outcomes.148 Bias may be introduced into data in a variety of ways, including poorly developed study designs to collect data (e.g., nonrepresentative data samples) and nonstandardized data across sources.149

AI tools may also lack transparency. This is in part because of the inherent difficulty in determining how some AI tools work (i.e., the black box problem). However, a lack of transparency may also stem from more controllable factors, like the lack of AI tool evaluations in clinical settings or developers not making their tools open-source. This lack of information can make it difficult for health care providers to evaluate whether an AI tool is appropriate for a specific health care application.150 It may also make it difficult for providers to follow rules regulating the use of AI, such as OCR's Section 1557 rule.

There may be challenges associated with privacy and AI use. As AI systems are developed, increasingly large quantities of data will likely be accessible to more people and organizations.151 This can add to privacy risks and concerns regarding health data. Privacy concerns often dovetail with security challenges as well. This is because, as digital health technologies like AI become more complex, greater cybersecurity risks are introduced.152

AI tools can also be challenging to scale up and integrate into new settings because of differences among institutions and patient populations. For example, what is effective in a small rural clinic may not be as effective in a larger metropolitan hospital. Additionally, costs associated with scaling and integrating AI technologies can be prohibitive for some health care institutions.153

The number of parties involved in developing, deploying, and using AI tools has made it difficult to determine legal liability associated with these technologies. Limited legal precedent further compounds these challenges. Such uncertainty may foster reluctance to adopt such technologies, and could slow overall innovation.154

AI use in health care may also prompt more general considerations regarding AI regulatory harmonization nationally and internationally, as well as issues related to the environmental impact of AI use.

A number of frameworks for responsible AI use and regulation have been proposed by both public and private entities.155 While these proposals may confront the key issues regarding the responsible use of AI, there appears to be an increasing need for a consensus framework that stakeholders may reference to facilitate both cohesive regulation and interoperability, especially in health care.156

Consensus Building

Various stakeholders in the health care sector have undertaken efforts to harmonize frameworks for the responsible use of AI. For example, the Coalition for Health AI (CHAI), an interdisciplinary group composed of more than 1,300 member organizations including technologists, academic researchers, health care organizations, governmental bodies, and patients, seeks to "develop guidelines and guardrails to drive high-quality healthcare by promoting the adoption of credible, fair and transparent health AI systems."157 CHAI was founded in December 2022, and in April 2023, CHAI released its first version of the Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare, a report that "summarizes collective recommendations … [to] enable health AI, harmonizing standards and reporting, and educate end users on how to evaluate AI technologies in ways that can drive their responsible adoption."158 In December 2023, CHAI representatives published a Special Communication in JAMA entitled A Nationwide Network of Health AI Assurance Laboratories, which outlines CHAI's proposal to create a public-private partnership to support a nationwide network of health AI assurance labs that could validate and monitor health AI model performance.159 This proposal was co-authored in part by leaders from ASTP/ONC and FDA. Since the release of this publication, CHAI has put forward several other documents and proposals, including a draft framework for how AI assurance labs would be certified and for the issuance of standardized "model cards" for tested technologies.160

Internationally, there has been a similar proliferation of AI guidance and regulatory approaches. AI technologies developed in the United States will likely be used both domestically and abroad, and AI technologies developed internationally will likely be used in the United States. Stakeholders may therefore increasingly need to consider how an AI technology developed in one country may be regulated in another.161

Finally, related to the use of AI generally is its potential impact on the environment. The high demands on certain resources (e.g., minerals, water, and electricity) to build and power AI technologies could present a risk to human health.162 AI computing and storage functions especially create large energy demands on data centers, which in turn can lead to increased global emissions.163 Thus, the efficiency and environmental sustainability of AI technologies and infrastructure may be of particular importance.164

Conclusion

AI technologies have existed for decades, and these technologies have recently become more accessible to the public due to technological advancements like generative AI. As their applications have expanded in recent years, the use of AI technologies in health care settings has been subject to greater scrutiny by stakeholders. Though AI technologies may prompt advances and improvements in health care services, many stakeholders maintain there is a need for regulatory guardrails to address potential challenges associated with trust, data access, bias, lack of transparency, and privacy. Government agencies and other stakeholders have begun the process of developing such guardrails. The potential need for harmonization among these efforts, and the question of who should lead such harmonization initiatives (e.g., public versus private groups, or perhaps public-private partnerships), are among the emerging considerations.


Footnotes

1.

Fei Jiang et al., "Artificial Intelligence in Healthcare: Past, Present and Future," Stroke and Vascular Neurology, vol. 2, no. 4 (2017), p. 230, https://svn.bmj.com/content/svnbmj/2/4/230.full.pdf.

2.

15 U.S.C. §9401(3). This is the definition for "artificial intelligence" cited to in Executive Order 14110, further discussed in section "Executive Order (E.O.) 14110." Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," 88 Federal Register 75191, 75193, November 1, 2023, https://www.govinfo.gov/content/pkg/FR-2023-11-01/pdf/2023-24283.pdf.

3.

U.S. Congress, HELP Committee, Exploring Congress' Framework for the Future of AI: The Oversight and Legislative Role of Congress Over the Integration of Artificial Intelligence in Health, Education, and Labor, white paper, prepared by Ranking Member Senator Bill Cassidy, September 6, 2023, p. 1, https://www.help.senate.gov/imo/media/doc/help_committee_gop_final_ai_white_paper1.pdf.

4.

See CRS Report R47644, Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress.

5.

For more information on each of these AI techniques, including definitions, see the "AI Terminology" section in CRS Report R46795, Artificial Intelligence: Background, Selected Issues, and Policy Considerations. Another publication of interest may include CRS Report R47849, Artificial Intelligence in the Biological Sciences: Uses, Safety, Security, and Oversight.

6.

Laura Adams et al., Artificial Intelligence in Health, Health Care, and Biomedical Science: An AI Code of Conduct Principles and Commitments Discussion Draft, National Academy of Medicine (NAM), commentary, April 8, 2024, p. 2, https://nam.edu/wp-content/uploads/2024/04/Artificial-Intelligence-in-Health-Health-Care-and-Biomedical-Science_final_4.8.24V2.pdf.

7.

See "Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology" section.

8.

HHS, Fact Sheet: Biden-Harris Administration Announces Voluntary Commitments from Leading Healthcare Companies to Harness the Potential and Manage the Risks Posed by AI, December 14, 2023, https://www.hhs.gov/about/news/2023/12/14/fact-sheet-biden-harris-administration-announces-voluntary-commitments-leading-healthcare-companies-harness-potential-manage-risks-posed-ai.html.

9.

Ibid.; CDC, Novel Approaches: "Nowcasting" Suicide Trends, April 11, 2023, https://www.cdc.gov/surveillance/data-modernization/snapshot/2022-snapshot/stories/novel-approaches-suicide-trends.html.

10.

These industry commitments dovetail with the Biden Administration's efforts pursuant to Executive Order (E.O.) 14110. White House, "Delivering on the Promise of AI to Improve Health Outcomes," December 14, 2023, https://www.whitehouse.gov/briefing-room/blog/2023/12/14/delivering-on-the-promise-of-ai-to-improve-health-outcomes/. For more information about E.O. 14110, see "Executive Order (E.O.) 14110" section.

11.

HHS, Fact Sheet: Biden-Harris Administration Announces Voluntary Commitments from Leading Healthcare Companies to Harness the Potential and Manage the Risks Posed by AI, December 14, 2023, https://www.hhs.gov/about/news/2023/12/14/fact-sheet-biden-harris-administration-announces-voluntary-commitments-leading-healthcare-companies-harness-potential-manage-risks-posed-ai.html; White House, "Delivering on the Promise of AI to Improve Health Outcomes," December 14, 2023, https://www.whitehouse.gov/briefing-room/blog/2023/12/14/delivering-on-the-promise-of-ai-to-improve-health-outcomes/.

12.

Erin Schumaker et al., "The AI Assurance Labs Are Coming," Politico, September 18, 2024, https://www.politico.com/newsletters/future-pulse/2024/09/18/the-ai-assurance-labs-are-coming-00179662.

13.

For more information about specific AI technologies and techniques, additional related Congressional Research Service (CRS) publications that may be of interest are collated in CRS Insight IN12458, Artificial Intelligence: CRS Products.

14.

The cyclical interest in AI technologies has been framed by some as a series of AI "summers" and "winters," wherein summers are periods of intensified interest and progress in AI technologies, and winters are periods of declined interest and stalled progress in AI technologies. There is not universal agreement amongst stakeholders about when these periods have historically begun and ended. For more information on the "seasons" of AI, see Ben Lutkevich, "AI Winter," August 2024, https://www.techtarget.com/searchenterpriseai/definition/AI-winter.

15.

B.J. Copeland, "History of Artificial Intelligence (AI)," Britannica, September 13, 2024, https://www.britannica.com/science/history-of-artificial-intelligence; Shuroug A. Alowais et al., "Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice," BMC Medical Education, vol. 23, no. 689 (2023), p. 2.

16.

Shuroug A. Alowais et al., "Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice," BMC Medical Education, vol. 23, no. 689 (2023), p. 2; Dartmouth, "Artificial Intelligence Coined at Dartmouth," https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth; John McCarthy et al., A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955, p. 2, http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf.

17.

Kun-Hsing Yu, Andrew L. Beam, and Isaac S. Kohane, "Artificial Intelligence in Healthcare," Nature Biomedical Engineering, vol. 2 (October 2018), p. 719. 'Clinical decision support' (CDS) "encompasses a variety of tools to enhance decision-making in the clinical workflow," and may include computerized alerts and reminders to health care providers and patients, clinical guidelines, and documentation templates, among other things. Assistant Secretary for Technology Policy/Office of the National Coordinator for Health IT (ASTP/ONC), "Clinical Decision Support," https://www.healthit.gov/topic/safety/clinical-decision-support.

18.

Shuroug A. Alowais et al., "Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice," BMC Medical Education, vol. 23, no. 689 (2023), p. 2; Kun-Hsing Yu, Andrew L. Beam, and Isaac S. Kohane, "Artificial Intelligence in Healthcare," Nature Biomedical Engineering, vol. 2 (October 2018), p. 719.

19.

Shuroug A. Alowais et al., "Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice," BMC Medical Education, vol. 23, no. 689 (2023), p. 2.

20.

Kun-Hsing Yu, Andrew L. Beam, and Isaac S. Kohane, "Artificial Intelligence in Healthcare," Nature Biomedical Engineering, vol. 2 (October 2018), p. 720.

21.

Kun-Hsing Yu, Andrew L. Beam, and Isaac S. Kohane, "Artificial Intelligence in Healthcare," Nature Biomedical Engineering, vol. 2 (October 2018), p. 719.

22.

Kun-Hsing Yu, Andrew L. Beam, and Isaac S. Kohane, "Artificial Intelligence in Healthcare," Nature Biomedical Engineering, vol. 2 (October 2018), p. 722; for more background information on AI's evolution, see CRS Report R47644, Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress.

23.

Some experts and stakeholders may sometimes use terms like "AI methods," "AI techniques," "AI approaches," and "AI types" interchangeably or define them differently.

24.

Network of the National Library of Medicine (NNLM), "Machine Learning," June 23, 2022, https://www.nnlm.gov/guides/data-glossary/machine-learning.

25.

Thomas Davenport and Ravi Kalakota, "The Potential for Artificial Intelligence in Healthcare," Future Healthcare Journal, vol. 6, no. 2 (June 2019), p. 94, https://pmc.ncbi.nlm.nih.gov/articles/PMC6616181/pdf/futurehealth-6-2-94.pdf.

26.

For more information on neural networks and how they function, see CRS Report R46795, Artificial Intelligence: Background, Selected Issues, and Policy Considerations and CRS Report R47644, Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress.

27.

NNLM, "Neural Networks," June 29, 2022, https://www.nnlm.gov/guides/data-glossary/neural-networks.

28.

Thomas Davenport and Ravi Kalakota, "The Potential for Artificial Intelligence in Healthcare," Future Healthcare Journal, vol. 6, no. 2 (June 2019), pp. 94-95, https://pmc.ncbi.nlm.nih.gov/articles/PMC6616181/pdf/futurehealth-6-2-94.pdf. For more information on deep learning and how it functions, see CRS Report R46795, Artificial Intelligence: Background, Selected Issues, and Policy Considerations.

29.

Thomas Davenport and Ravi Kalakota, "The Potential for Artificial Intelligence in Healthcare," Future Healthcare Journal, vol. 6, no. 2 (June 2019), p. 94, https://pmc.ncbi.nlm.nih.gov/articles/PMC6616181/pdf/futurehealth-6-2-94.pdf.

30.

Laura Adams et al., Artificial Intelligence in Health, Health Care, and Biomedical Science: An AI Code of Conduct Principles and Commitments Discussion Draft, National Academy of Medicine (NAM), commentary, April 8, 2024, p. 2, https://nam.edu/wp-content/uploads/2024/04/Artificial-Intelligence-in-Health-Health-Care-and-Biomedical-Science_final_4.8.24V2.pdf.

31.

Niam Yaraghi, "Generative AI in Health Care: Opportunities, Challenges, and Policy," Brookings Institution, January 8, 2024, https://www.brookings.edu/articles/generative-ai-in-health-care-opportunities-challenges-and-policy/.

32.

Sophie Putka, "ChatGPT Flubbed Drug Information Questions," MedPage Today, December 7, 2023, https://www.medpagetoday.com/meetingcoverage/ashp/107716.

33.

IBM, "What Are AI Hallucinations?," https://www.ibm.com/topics/ai-hallucinations.

34.

For more background information on generative AI, see CRS In Focus IF12426, Generative Artificial Intelligence: Overview, Issues, and Questions for Congress.

35.

NNLM, "Natural Language Processing," June 13, 2022, https://www.nnlm.gov/guides/data-glossary/natural-language-processing.

36.

Thomas Davenport and Ravi Kalakota, "The Potential for Artificial Intelligence in Healthcare," Future Healthcare Journal, vol. 6, no. 2 (June 2019), p. 95, https://pmc.ncbi.nlm.nih.gov/articles/PMC6616181/pdf/futurehealth-6-2-94.pdf.

37.

For more information on the classification of rule-based expert systems, see the "Algorithms and AI" section in CRS Report R46795, Artificial Intelligence: Background, Selected Issues, and Policy Considerations.

38.

Thomas Davenport and Ravi Kalakota, "The Potential for Artificial Intelligence in Healthcare," Future Healthcare Journal, vol. 6, no. 2 (June 2019), p. 95, https://pmc.ncbi.nlm.nih.gov/articles/PMC6616181/pdf/futurehealth-6-2-94.pdf.

39.

Ibid.

40.

Ibid.

41.

Luís Pinto-Coelho, "How Artificial Intelligence is Shaping Medical Imaging Technology: A Survey of Innovations and Applications," Bioengineering, vol. 10, no. 12 (2023), https://pmc.ncbi.nlm.nih.gov/articles/PMC10740686/pdf/bioengineering-10-01435.pdf.

42.

Thomas Davenport and Ravi Kalakota, "The Potential for Artificial Intelligence in Healthcare," Future Healthcare Journal, vol. 6, no. 2 (June 2019), pp. 95-96, https://pmc.ncbi.nlm.nih.gov/articles/PMC6616181/pdf/futurehealth-6-2-94.pdf.

43.

Shuroug A. Alowais et al., "Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice," BMC Medical Education, vol. 23, no. 689 (2023), pp. 5-6.

44.

Thomas Davenport and Ravi Kalakota, "The Potential for Artificial Intelligence in Healthcare," Future Healthcare Journal, vol. 6, no. 2 (June 2019), p. 96, https://pmc.ncbi.nlm.nih.gov/articles/PMC6616181/pdf/futurehealth-6-2-94.pdf.

45.

Ibid.

46.

E.O. 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," 88 Federal Register 75191, November 1, 2023, https://www.govinfo.gov/content/pkg/FR-2023-11-01/pdf/2023-24283.pdf.

47.

White House, "Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence," October 30, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

48.

Ibid.

49.

E.O. 14110 §8(b). Several other mentions of the HHS Secretary may be found in sections 4.4, 5.2, 7.2, and 12. For more information on E.O. 14110 and what it directs, see CRS Report R47843, Highlights of the 2023 Executive Order on Artificial Intelligence for Congress.

50.

Rebecca Pifer, "HHS Artificial Intelligence Task Force Takes Shape," Healthcare Dive, March 14, 2024, https://www.healthcaredive.com/news/hhs-artificial-intelligence-task-force-details/710250/; Billy Mitchell, "HHS Developing New AI Strategic Plan," FedScoop, October 9, 2024, https://fedscoop.com/hhs-developing-new-ai-strategic-plan/.

51.

ASTP/ONC, "About ASTP/ONC," https://www.healthit.gov/topic/about-astponc. Prior to this reorganization, ASTP/ONC was referred to as ONC.

52.

HHS, HHS Reorganizes Technology, Cybersecurity, Data, and Artificial Intelligence Strategy and Policy Functions, July 25, 2024, https://www.hhs.gov/about/news/2024/07/25/hhs-reorganizes-technology-cybersecurity-data-artificial-intelligence-strategy-policy-functions.html.

53.

ASTP/ONC, ONC Health IT Certification Program, February 8, 2024, https://www.healthit.gov/sites/default/files/PUBLICHealthITCertificationProgramOverview.pdf. An HIT module is defined as "any service, component, or combination thereof that can meet the requirements of at least one certification criterion adopted by the Secretary." 45 C.F.R. §170.102.

54.

For more information about the Certification Program, see ASTP/ONC, "About the ONC Health IT Certification Program," https://www.healthit.gov/topic/certification-ehrs/about-onc-health-it-certification-program; ASTP/ONC, "ONC Health IT Certification Program Test Method," https://www.healthit.gov/topic/certification-ehrs/onc-health-it-certification-program-test-method; and 45 C.F.R. Part 170.

55.

ASTP/ONC, "Minimizing Risks and Maximizing Rewards from Machine Learning," September 7, 2022, https://www.healthit.gov/buzz-blog/health-data/minimizing-risks-and-maximizing-rewards-from-machine-learning.

56.

According to ASTP/ONC, the agency uses the term DSI as a more modern, inclusive, and accurate label for CDS systems. Ibid.; ASTP/ONC, "Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing," 89 Federal Register 1192, 1196, January 9, 2024, https://www.govinfo.gov/content/pkg/FR-2024-01-09/pdf/2023-28857.pdf.

57.

ASTP/ONC, "Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing," 89 Federal Register 1192, January 9, 2024, https://www.govinfo.gov/content/pkg/FR-2024-01-09/pdf/2023-28857.pdf.

58.

Ibid. at 1192.

59.

Ibid. at 1196, 1234; 45 C.F.R. §170.315(b)(11)(iii).

60.

ASTP/ONC, "Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing," 89 Federal Register 1192, 1240, January 9, 2024, https://www.govinfo.gov/content/pkg/FR-2024-01-09/pdf/2023-28857.pdf.

61.

Ibid. at 1244; 45 C.F.R. §170.102. ASTP/ONC's final rule further clarifies that, for its purposes, predictive DSIs "perceive environments through the use of training data; abstract perceptions into models as they learn relationships in that data; and produce an output, often for an individual, through inference based on those learned relationships." ASTP/ONC, "Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing," 89 Federal Register 1192, 1244, January 9, 2024, https://www.govinfo.gov/content/pkg/FR-2024-01-09/pdf/2023-28857.pdf. Conversely, the final rule notes that "evidence-based DSI likely represents another form of Artificial Intelligence, though that form is fundamentally based on rules-based models." Ibid.

62.

Ibid. at 1196-1197.

63.

45 C.F.R. §170.315(b)(11)(iv).

64.

ASTP/ONC, "Decision Support Interventions (DSI) Fact Sheet: Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule," December 2023, p. 1, https://www.healthit.gov/sites/default/files/page/2023-12/HTI-1_DSI_fact%20sheet_508.pdf. In particular, ASTP/ONC anticipates that the provision of this information over time will help users evaluate whether predictive DSIs are fair, appropriate, valid, effective, and safe (FAVES). ASTP/ONC, "Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing," 89 Federal Register 1192, 1233, January 9, 2024, https://www.govinfo.gov/content/pkg/FR-2024-01-09/pdf/2023-28857.pdf.

65.

45 C.F.R. §170.402(b)(4).

66.

For more information on intervention risk management, see 45 C.F.R. §170.315(b)(11)(vi).

67.

ASTP/ONC, "Decision Support Intervention (DSI) Fact Sheet: Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule," December 2023, p. 1, https://www.healthit.gov/sites/default/files/page/2023-12/HTI-1_DSI_fact%20sheet_508.pdf.

68.

For more information on this commitment, see the "Introduction" section of this report. HHS, Fact Sheet: Biden-Harris Administration Announces Voluntary Commitments from Leading Healthcare Companies to Harness the Potential and Manage the Risks Posed by AI, December 14, 2023, https://www.hhs.gov/about/news/2023/12/14/fact-sheet-biden-harris-administration-announces-voluntary-commitments-leading-healthcare-companies-harness-potential-manage-risks-posed-ai.html; White House, "Delivering on the Promise of AI to Improve Health Outcomes," December 14, 2023, https://www.whitehouse.gov/briefing-room/blog/2023/12/14/delivering-on-the-promise-of-ai-to-improve-health-outcomes/.

69.

Health Sector AI Commitments, December 2023, p. 1, https://www.healthit.gov/sites/default/files/2023-12/Health_Sector_AI_Commitments_FINAL_120923.pdf.

70.

Health Sector AI Commitments, December 2023, p. 2, https://www.healthit.gov/sites/default/files/2023-12/Health_Sector_AI_Commitments_FINAL_120923.pdf.

71.

ASTP/ONC, "Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing," 89 Federal Register 1192, 1194, January 9, 2024, https://www.govinfo.gov/content/pkg/FR-2024-01-09/pdf/2023-28857.pdf.

72.

Ibid.

73.

21 U.S.C. §321(h)(1); FFDCA §201(h)(1).

74.

21 U.S.C. §360j(o)(1)(A)-(D); FFDCA §520(o)(1)(A)-(D).

75.

21 U.S.C. §360j(o)(1)(E); FFDCA §520(o)(1)(E).

76.

See, e.g., FDA, General Wellness: Policy for Low Risk Devices: Guidance for Industry and Food and Drug Administration Staff, September 27, 2019, https://www.fda.gov/media/90652/download; FDA, Policy for Device Software Functions and Mobile Medical Applications: Guidance for Industry and Food and Drug Administration Staff, September 28, 2022, https://www.fda.gov/media/80958/download.

77.

FDA, "Software as a Medical Device (SaMD)," https://www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd.

78.

For more information about the regulation of devices, including 510(k) notification, De Novo classification request, and PMA, see CRS Report R47374, FDA Regulation of Medical Devices.

79.

For more information about FDA regulation of devices generally, see CRS Report R47374, FDA Regulation of Medical Devices.

80.

FDA, "Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices," https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices.

81.

FDA, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD): Discussion Paper and Request for Feedback, April 2, 2019, https://www.fda.gov/media/122535/download?attachment; FDA, Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, January 2021, https://www.fda.gov/media/145022/download?attachment.

82.

FDA, "Digital Health Software Precertification (Pre-Cert) Pilot Program," https://www.fda.gov/medical-devices/digital-health-center-excellence/digital-health-software-precertification-pre-cert-pilot-program.

83.

FDA, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD): Discussion Paper and Request for Feedback, April 2, 2019, p. 10, https://www.fda.gov/media/122535/download?attachment.

84.

21 U.S.C. §360e-4; FFDCA §515C.

85.

FDA, Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions: Draft Guidance for Industry and Food and Drug Administration Staff, April 3, 2023, https://www.fda.gov/media/166704/download.

86.

FDA, Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together, March 15, 2024, https://www.fda.gov/media/177030/download?attachment. CBER: Center for Biologics Evaluation and Research; CDER: Center for Drug Evaluation and Research; CDRH: Center for Devices and Radiological Health; OCP: Office of Combination Products.

87.

FDA, "Artificial Intelligence and Machine Learning in Software as a Medical Device," https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device.

88.

See CRS Legal Sidebar LSB11169, HHS Finalizes Rule Addressing Section 1557 of the ACA's Incorporation of Title IX; HHS, "About Us," April 11, 2024, https://www.hhs.gov/ocr/about-us/index.html.

89.

42 U.S.C. §18116.

90.

OCR and CMS, "Nondiscrimination in Health Programs and Activities," 89 Federal Register 37522, May 6, 2024, https://www.govinfo.gov/content/pkg/FR-2024-05-06/pdf/2024-08711.pdf.

91.

Ibid. at 37524, 37544, and 37701. Portions of the 2024 Section 1557 rule related to sex discrimination have been enjoined. The rule's provisions governing discrimination through AI systems remain in effect, except insofar as they extend to gender identity discrimination. HHS, "Section 1557 of the Patient Protection and Affordable Care Act," December 6, 2024, https://www.hhs.gov/civil-rights/for-individuals/section-1557/index.html.

92.

Executive Office of the President, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," 88 Federal Register 75191, 75192, November 1, 2023, https://www.govinfo.gov/content/pkg/FR-2023-11-01/pdf/2023-24283.pdf.

93.

42 U.S.C. §18116(a).

94.

See CRS Legal Sidebar LSB11160, The Scope of ACA Section 1557: "Health Program or Activity".

95.

Ibid.

96.

OCR and CMS, "Nondiscrimination in Health Programs and Activities," 89 Federal Register 37522, 37701, May 6, 2024, https://www.govinfo.gov/content/pkg/FR-2024-05-06/pdf/2024-08711.pdf.

97.

Ibid. at 37695.

98.

Ibid. at 37643.

99.

Ibid. at 37644.

100.

Ibid. at 37644; see also Gregory E. Fosheim et al., "Ten Takeaways from Long-Awaited Section 1557 Nondiscrimination Protections," MWE.com, May 8, 2024, https://www.mwe.com/insights/ten-takeaways-from-long-awaited-section-1557-nondiscrimination-protections/.

101.

OCR and CMS, "Nondiscrimination in Health Programs and Activities," 89 Federal Register 37522, 37650, May 6, 2024, https://www.govinfo.gov/content/pkg/FR-2024-05-06/pdf/2024-08711.pdf.

102.

Ibid. at 37643; Gregory E. Fosheim et al., "Ten Takeaways from Long-Awaited Section 1557 Nondiscrimination Protections," MWE.com, May 8, 2024, https://www.mwe.com/insights/ten-takeaways-from-long-awaited-section-1557-nondiscrimination-protections/.

103.

OCR and CMS, "Nondiscrimination in Health Programs and Activities," 89 Federal Register 37522, 37651, 37701, May 6, 2024, https://www.govinfo.gov/content/pkg/FR-2024-05-06/pdf/2024-08711.pdf.

104.

Ibid. at 37701.

105.

Ibid. at 37648.

106.

Shania Kennedy, "More Guidance Needed to Curb Discrimination by Clinical Algorithm Use," TechTarget, January 11, 2023, https://healthitanalytics.com/news/more-guidance-needed-to-curb-discrimination-by-clinical-algorithm-use.

107.

CMS administers Medicare, Medicaid, the State Children's Health Insurance Program (CHIP), the federal health insurance exchanges, and certain functions for state health insurance exchanges. This section discusses only Medicare Advantage, as it was one example provided by HHS in the fact sheet. HHS, Fact Sheet: Biden-Harris Administration Announces Voluntary Commitments from Leading Healthcare Companies to Harness the Potential and Manage the Risks Posed by AI, December 14, 2023, https://www.hhs.gov/about/news/2023/12/14/fact-sheet-biden-harris-administration-announces-voluntary-commitments-leading-healthcare-companies-harness-potential-manage-risks-posed-ai.html.

108.

CMS, "Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program, Medicare Prescription Drug Benefit Program, Medicare Cost Plan Program, and Programs for All-Inclusive Care for the Elderly," 88 Federal Register 22120, April 12, 2023, https://www.govinfo.gov/content/pkg/FR-2023-04-12/pdf/2023-07115.pdf.

109.

See CMS, "Medicare Coverage of Items and Services," https://www.cms.gov/cms-guide-medical-technology-companies-and-other-interested-parties/coverage/medicare-coverage-items-and-services.

110.

CRS In Focus IF10885, Medicare Overview.

111.

CRS Report R40611, Medicare Part D Prescription Drug Benefit.

112.

The Balanced Budget Act of 1997 (BBA97, P.L. 105-33) amended the Social Security Act (SSA) to establish Medicare Part C, at the time referred to as Medicare+Choice. Prior to 1997, the SSA allowed Medicare beneficiaries to receive covered services through managed care organizations under authority at SSA Section 1876 (42 U.S.C. §1395mm). The Medicare+Choice program has since been amended multiple times, including being renamed Medicare Advantage.

113.

SSA §1852(a)(1)(B), 42 U.S.C. §1395w-22. MA plans are not required to cover hospice care or organ acquisition costs for a kidney transplant; if an MA enrollee chooses to receive hospice care or is eligible for a kidney transplant, those services are paid for through original Medicare. MA plans may offer additional benefits not covered under Parts A and B and must include an out-of-pocket maximum. MA enrollees are often restricted to receiving care from medical providers or suppliers in the MA plan's contracted network. While MA plans are administered by private companies, MA is still part of Medicare.

114.

AI and related technologies are not a Medicare benefit category and, in most cases, are not the basis for separate or modified payment. AI that is an integral part of a covered diagnosis, treatment, or piece of equipment (such as durable medical equipment used in a beneficiary's home) may be covered under the appropriate category, but the presence or absence of AI generally would not affect the payment amount. AI may be paid for within existing payment policies that increase payment to defray the added cost of expensive new technology, such as the New Technology Add-On Payment (NTAP) for hospital inpatient payments.

115.

Statutorily excluded items and services include, in most cases, eyeglasses, hearing aids, and dental care. Certain items and services may be covered by Medicare if they are explicitly authorized in the SSA, such as preventive benefits.

116.

SSA §1862(a)(1)(A), 42 U.S.C. §1395y(a)(1)(A).

117.

The HHS Secretary may be asked to develop an NCD when there is limited evidence of whether an item or service is reasonable and necessary. This may be the case for (a) new uses for existing items or (b) new or novel items for which there is not an extensive body of research. In such cases, the HHS Secretary may grant coverage in the context of a study (i.e., Coverage with Evidence Development) or offer innovators an opportunity to work with the agency earlier in the approval process, so that innovators know what evidence is necessary for Medicare coverage and can build that into their clinical trials (i.e., Parallel Review and Transitional Coverage for Emerging Technologies).

118.

See, e.g., Tamara Syrek Jensen, Joseph Chin, James Rollins, et al., National Coverage Analysis, Gender Dysphoria and Gender Reassignment Surgery, CMS, Decision Memo, August 16, 2016, https://www.cms.gov/medicare-coverage-database/view/ncacal-decision-memo.aspx?proposed=N&ncaid=282&keywordtype=starts&keyword=gender&bc=0.

119.

An MA plan is allowed to standardize benefits if its service area spans multiple Medicare Administrative Contractors (MACs) with differing coverage policies (SSA §1852(a)(2)(C)). Additionally, an MA plan is not expected to implement a coverage expansion that was not taken into account in its contract with CMS; in such cases, CMS pays for the expansion. An MA plan implements coverage of an expansion once it is taken into account in the contract with CMS (SSA §1852(a)(5)).

120.

CMS, Medicare Managed Care Manual, Chapter 4 - Benefits and Beneficiary Protections, Section 10.16, Medical Necessity, April 22, 2016, https://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/Downloads/mc86c04.pdf.

121.

Christi A. Grimm, Some Medicare Advantage Organization Denials of Prior Authorization Requests Raise Concern About Beneficiary Access to Medically Necessary Care, HHS, OIG, April 2022, OEI-09-18-00260, p. 9, https://oig.hhs.gov/oei/reports/OEI-09-18-00260.pdf.

122.

CMS, "Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program, Medicare Prescription Drug Benefit Program, Medicare Cost Plan Program, and Programs for All-Inclusive Care for the Elderly," 88 Federal Register 22120, 22186, April 12, 2023, https://www.govinfo.gov/content/pkg/FR-2023-04-12/pdf/2023-07115.pdf.

123.

For a more detailed discussion of MA payments, see CRS Report R40425, Medicare Primer, and Medicare Payment Advisory Commission, Medicare Advantage Program Payment System, October 2023, https://www.medpac.gov/wp-content/uploads/2022/10/MedPAC_Payment_Basics_23_MA_FINAL_SEC.pdf.

124.

MA plans are subject to a medical loss ratio requirement, under which at least 85% of plan payments must be used for patient care. CMS, "Medical Loss Ratio," https://www.cms.gov/medicare/health-drug-plans/medical-loss-ratio.

125.

Prior authorization is "a process through which the physician or other health care provider is required to obtain advance approval from the plan that payment will be made for a service or item furnished to the enrollee." CMS, Medicare Managed Care Manual, Chapter 4 - Benefits and Beneficiary Protections, Section 110.1.1, Provider Network Standards, April 22, 2016, https://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/Downloads/mc86c04.pdf.

126.

For example, a 2018 GAO report found that prior authorization (PA) used in original Medicare decreased program expenditures for items and services by decreasing unnecessary utilization; the report also found that some durable medical equipment suppliers experienced challenges in obtaining the necessary documentation from referring physicians to submit for PA requests. GAO, Medicare: CMS Should Take Action to Continue Prior Authorization Efforts to Reduce Spending, GAO-18-341, April 2018, p. 1, https://www.gao.gov/products/GAO-18-341.

127.

CMS, "Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program, Medicare Prescription Drug Benefit Program, Medicare Cost Plan Program, and Programs for All-Inclusive Care for the Elderly," 88 Federal Register 22120, 22186, April 12, 2023, https://www.govinfo.gov/content/pkg/FR-2023-04-12/pdf/2023-07115.pdf.

128.

Ibid.

129.

Ibid. at 22187.

130.

Ibid. at 22328-22329.

131.

Ibid. at 22331.

132.

Ibid.

133.

Ibid. at 22332.

134.

Ibid. at 22329.

135.

Ibid. at 22194-22195.

136.

Ibid.

137.

Ibid. at 22194.

138.

Ibid.

139.

42 C.F.R. §422.101(b)(6).

140.

CMS, "Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program, Medicare Prescription Drug Benefit Program, Medicare Cost Plan Program, and Programs for All-Inclusive Care for the Elderly," 88 Federal Register 22120, 22334, April 12, 2023, https://www.govinfo.gov/content/pkg/FR-2023-04-12/pdf/2023-07115.pdf.

141.

CMS, "Frequently Asked Questions related to Coverage Criteria and Utilization Management Requirements in CMS Final Rule (CMS-4201-F)," February 6, 2024, https://www.cms.gov/about-cms/information-systems/hpms/hpms-memos-archive-weekly/hpms-memos-wk-2-february-5-9. Question 2 specifically addresses the use of algorithms and artificial intelligence to make coverage determinations, clarifies that "it is the responsibility of the MA organization to ensure that the algorithm or artificial intelligence complies with all applicable rules for how coverage determinations by MA organizations are made," and provides multiple specific examples.

142.

CMS, "Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program, Medicare Prescription Drug Benefit Program, Medicare Cost Plan Program, and Programs for All-Inclusive Care for the Elderly," 88 Federal Register 22120, 22194, April 12, 2023, https://www.govinfo.gov/content/pkg/FR-2023-04-12/pdf/2023-07115.pdf.

143.

Laura Adams et al., Artificial Intelligence in Health, Health Care, and Biomedical Science: An AI Code of Conduct Principles and Commitments Discussion Draft, National Academy of Medicine (NAM), commentary, April 8, 2024, p. 2, https://nam.edu/wp-content/uploads/2024/04/Artificial-Intelligence-in-Health-Health-Care-and-Biomedical-Science_final_4.8.24V2.pdf.

144.

Alec Tyson et al., 60% of Americans Would Be Uncomfortable With Provider Relying on AI in Their Own Health Care, Pew Research Center, February 22, 2023, pp. 4, 7-8, https://www.pewresearch.org/wp-content/uploads/sites/20/2023/02/PS_2023.02.22_AI-health_REPORT.pdf.

145.

U.S. Government Accountability Office (GAO) and NAM, Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care, GAO-21-7SP, November 30, 2020, p. 21, https://www.gao.gov/assets/gao-21-7sp.pdf.

146.

Ibid. at 22.

147.

Ibid.

148.

Ibid. at 24.

149.

Ibid.

150.

Ibid. at 26-27.

151.

Ibid. at 27-29.

152.

For more information on cybersecurity, see CRS In Focus IF12591, Cybersecurity and Digital Health Information.

153.

GAO and NAM, Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care, GAO-21-7SP, November 30, 2020, pp. 25-26, https://www.gao.gov/assets/gao-21-7sp.pdf.

154.

Ibid. at 30.

155.

Laura Adams et al., Artificial Intelligence in Health, Health Care, and Biomedical Science: An AI Code of Conduct Principles and Commitments Discussion Draft, National Academy of Medicine (NAM), commentary, April 8, 2024, p. 2, https://nam.edu/wp-content/uploads/2024/04/Artificial-Intelligence-in-Health-Health-Care-and-Biomedical-Science_final_4.8.24V2.pdf.

156.

Ibid.

157.

CHAI, "Coalition for Health AI (CHAI) Names Board of Directors and CEO," March 4, 2024, https://chai.org/coalition-for-health-ai-chai-names-board-of-directors-and-ceo/.

158.

Nigam H. Shah et al., "A Nationwide Network of Health AI Assurance Laboratories," JAMA, vol. 331, no. 3 (2024), p. 245; CHAI, Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare, April 4, 2023, p. 3, https://www.coalitionforhealthai.org/papers/blueprint-for-trustworthy-ai_V1.0.pdf.

159.

Nigam H. Shah et al., "A Nationwide Network of Health AI Assurance Laboratories," JAMA, vol. 331, no. 3 (2024).

160.

CHAI, "Assurance Standards Guide and Reporting Checklist," https://chai.org/assurance-standards-guide/; CHAI, "CHAI Advances Assurance Lab Certification and 'Nutrition Label' for Health AI," October 18, 2024, https://chai.org/chai-advances-assurance-lab-certification-and-nutrition-label-for-health-ai/.

161.

Laura Adams et al., Artificial Intelligence in Health, Health Care, and Biomedical Science: An AI Code of Conduct Principles and Commitments Discussion Draft, National Academy of Medicine (NAM), commentary, April 8, 2024, p. 3, https://nam.edu/wp-content/uploads/2024/04/Artificial-Intelligence-in-Health-Health-Care-and-Biomedical-Science_final_4.8.24V2.pdf.

162.

Ibid. at 5.

163.

Ibid.

164.

Ibid.