Regulating Artificial Intelligence: U.S. and International Approaches and Considerations for Congress

June 4, 2025 (R48555)

Summary

Artificial intelligence (AI) presents many potential benefits and challenges in the private and public sectors. No federal legislation establishing broad regulatory authorities for, or prohibitions on, the development or use of AI has been enacted. Recent Congresses have primarily passed more targeted AI provisions. Successive Administrations have focused attention on federal engagement in AI, albeit with somewhat different emphases on specific topics. The focus on AI safety under the Biden Administration appears to be shifting toward security concerns during the second Trump Administration. Stakeholders in the United States have debated how to approach AI innovation and regulation in order to harness the opportunities of AI technologies, such as enhanced government operations and worker efficiency, while minimizing potential problems, such as bias and inaccuracies in AI-generated output.

U.S. AI Governance and Regulation

Outside of broad AI governance frameworks, most U.S. regulatory efforts regarding AI have centered on (1) federal agency assessments and enforcement of existing regulatory authorities, (2) exploration of whether individual agencies require additional authorities, and (3) securing voluntary commitments from industry. Much of the legislation proposed in the 118th and 119th Congresses has emphasized the development of voluntary guidelines and best practices and the reporting of industry-conducted evaluations of AI systems. The approach of the U.S. federal government as a whole appears to be cautious with regard to regulating AI in the private sector and more focused on oversight of federal government uses of AI. In the absence of federal AI regulations, states have been enacting their own laws. Critics assert that such a patchwork of AI laws creates challenges for companies and that a nationwide regulatory structure may incentivize product development.

Approaches to AI Governance and Regulation, Including International Approaches

Proponents of broad federal AI regulations assert that they would lead to less legal uncertainty for AI developers and improve the public's trust in AI systems, thus supporting AI innovation. Opponents of broad federal AI regulations assert that industry is taking steps to self-regulate and that additional regulation would stifle innovation at a time when international competition in AI is accelerating, which could lead to negative economic and national security outcomes for the United States. Other analysts have criticized such characterizations as presenting a false dichotomy between regulation and innovation and instead support a mixture of targeted, flexible approaches depending on the AI technology and its application.

Similar to the United States, some other countries, including the United Kingdom, have to date taken a measured approach to regulating AI. In contrast, the European Union (EU) has enacted a broad regulatory approach through the EU AI Act, which classifies AI systems into risk categories subject to different degrees of requirements and obligations. Some analysts have raised concerns that the EU AI Act creates or will create barriers for companies developing and deploying AI. China has enacted targeted AI laws and is working on broader AI regulation, though China's economic and science and technology policies feature a heavy government role in private sector development. China's approach has been characterized as a vertical, technology-specific framework influenced by national security concerns and economic development goals, with the EU's AI Act described as a horizontal, risk-based framework focused on ethical considerations and transparency.

Considerations and Options for Congress

Congress and the Administration might frame AI legislation and policies in various ways. One approach could include actions aimed at promoting AI safety and averting risks to people, another approach may focus more on security, and yet another approach may focus on accelerating innovation potentially accompanied by voluntary commitments from industry. These approaches, among others, may also be combined. Congressional actions might focus on leveraging federal agencies' existing authorities without enacting additional AI-specific laws or on creating new cross-sector authorities or broad regulations to address potential risks from AI, such as transparency and accountability requirements. Additionally, congressional actions might focus on providing federal agencies with authorities or direction to support domestic AI development—such as through public-private partnerships and providing resources for AI research and education—or engaging with international efforts to harmonize AI governance. In a time of rapid AI development, such efforts may need to frequently evolve or incorporate mechanisms for periodic review and flexibility at the state and federal levels.


Introduction

Artificial intelligence (AI) presents many potential benefits and challenges in the private and public sectors.1 In the United States, there has been broad debate about how to harness the opportunities of AI technologies, such as through enhanced operations and worker efficiency, while minimizing potential problems, such as bias and inaccuracies in AI-generated output.

Generally, proponents of comprehensive federal AI regulations assert that such regulations would lead to less legal uncertainty for AI developers and improve the public's trust in AI systems, thus supporting AI innovation. Opponents of broad federal AI regulation assert that the AI industry is already taking steps to self-regulate and that additional regulation would stifle innovation and competitiveness at a time when international competition in AI is accelerating, which could lead to negative economic and national security outcomes. Other analysts have criticized such characterizations as presenting a false dichotomy between regulation and innovation and instead support a mixture of targeted, flexible approaches depending on the AI technology and its application.

This report provides an overview of current federal laws pertaining to AI, approaches to AI regulation and governance efforts in the United States and other selected countries—as well as multi-country governance proposals—and selected policy considerations and legislative options for Congress. The scope of this report does not extend to AI infrastructure or equipment, such as AI computing chips and data centers, or related export controls. For information on AI infrastructure topics, see CRS In Focus IF12899, Data Centers and Cloud Computing: Information Technology Infrastructure for Artificial Intelligence, by Ling Zhu.

Defining AI

There is no single, widely agreed upon definition of AI. However, some common descriptions and features have emerged. Congress has previously enacted laws with definitions of AI, such as through the National AI Initiative Act of 2020 (Division E of the William M. [Mac] Thornberry National Defense Authorization Act [NDAA] of FY2021, P.L. 116-283),2 which defines AI as

a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—(A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.

Federal agencies have put forth definitions of AI systems as well, such as that in the National Institute of Standards and Technology's (NIST's) Artificial Intelligence Risk Management Framework (AI RMF).3 The NIST AI RMF incorporates a definition of AI first put forth by the Organization for Economic Cooperation and Development (OECD) in 2019. While the definition of AI system remains unchanged in the AI RMF, in March 2024, the OECD updated its broad definition of AI to more explicitly reference the generation of content, in part responding to the emergence of general-purpose and generative AI.4 The updated definition is contained in the OECD AI Principles, which have been adopted by 47 countries—including the United States and the EU—as of May 2025.5 The updated definition reads:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

In an explanatory memorandum about the 2024 updated definition for an AI system, the OECD states, "While the definition is necessarily short and concise, its application in practice depends on a range of complex and technical considerations."6

Along these lines, a variety of factors can complicate defining AI for legislative and regulatory efforts. First, AI can be considered an umbrella term for various technologies (e.g., facial recognition technology, or FRT) and applications (e.g., medical image classification in health care, large language models for text chatbots). Second, AI technologies and applications have been evolving rapidly. Whether and how to craft definitions for AI in legislation necessarily involves consideration of the current and future applicability of a definition. For example, policymakers might consider whether a definition is expansive enough not to hinder the future applicability of a law or regulation as AI develops and evolves while being narrow enough to provide clarity on the entities the law affects.

As state and local governments may use different descriptions of AI, Congress could provide a common definition of AI as part of any federal laws to establish governance and regulatory frameworks for AI technologies. The importance of agreed-upon AI terminologies has been described in U.S.-international efforts. For example, the United States–European Union Trade and Technology Council previously stated:

AI terminology is pivotal to cooperation on AI in part due to the present momentum in the field, and due to the broader role of language in constructing and explaining scientific paradigms. Terminology is a necessary basis for technical standards and creates shared frames of reference between like-minded partners and across disciplines. Ultimately, different terminologies express distinct "technological cultures," thus revealing, through both alignment and divergence, the existence of gaps, unnecessary divergences and inconsistencies, and other points of departure for cooperation and collaboration.7

Regulatory Considerations

As with many technologies, AI itself is neither inherently good nor inherently bad. AI technologies can provide many benefits, such as increasing worker efficiency and productivity, accelerating research discoveries, and improving responses to cybersecurity incidents. At the same time, AI can present challenges and risks, such as job loss from work task automation, harms to civil liberties from biases or out-of-scope use, and potential loss of privacy through surveillance with technologies such as FRT.

Proponents of additional AI regulations argue that they are necessary to mitigate potential risks from AI systems that might function dangerously or unpredictably, thereby improving safety. As defined by the International Organization for Standardization, safe AI systems should "not under defined conditions, lead to a state in which human life, health, property, or the environment is damaged."8 Proponents further assert that regulation is needed to, for example, protect against potential discriminatory outcomes that might disproportionately impact marginalized populations and determine liability in the event of AI errors or misuse. Some stakeholders have asserted that well-designed regulations could include "stability and clarity for innovation."9

Opponents of additional regulation assert that such risks can be mitigated by applying current federal laws and agency authorities to AI technologies rather than creating new regulations prematurely.10 Some stakeholders have noted that it is difficult to develop and optimize new regulations for advanced, broadly applicable, rapidly developing technologies such as AI. Opponents often claim that additional regulations might stifle AI innovation—particularly for smaller companies and startups with fewer resources for complying with new regulations—and disincentivize experimentation that could lead to AI systems with fewer risks and improved safety.11 More broadly, stakeholders have raised concerns that potential constraints on U.S. AI innovation from new regulations might lead to the United States losing its historically dominant position in a global "race" to develop advanced AI. For example, according to one academic analysis, "the U.S. still leads in producing top AI models—but China is closing the performance gap."12

Congress and the Administration might frame AI legislation and policies in various ways that reflect different priorities. One approach might include promoting AI safety and averting risks to people, while another approach might focus on security. Still other approaches might focus on accelerating innovation, potentially accompanied by voluntary commitments from industry. These approaches might be combined in various ways in tandem with other forms of AI regulation. Regardless, the technical challenges of ensuring that AI systems align with human goals and values—however those are defined for specific cultural, legal, and societal contexts—remain,13 and overcoming those challenges in reliable and scalable ways remains an ongoing area of research and testing.

A September 2024 report by the Stanford University Cyber Policy Center summarized the authors' views of the debates around regulation of AI as follows:

Regulation [of AI] is both urgently needed and unpredictable. It also may be counterproductive, if not done well. However, governments cannot wait until they have perfect and complete information before they act, because doing so may be too late to ensure that the trajectory of technological development does not lead to existential or unacceptable risk.14

Policymakers may shape the boundaries of AI development—technical, societal, legal, and ethical15—and deployment and may aim to do so in ways that support innovation; enhance benefits broadly; and protect Americans' privacy, civil rights, and civil liberties.16 In a time of rapid AI development, such efforts may need to frequently evolve or incorporate mechanisms for periodic review and flexibility.

AI Governance and Regulation in the United States

U.S. federal laws and policy documents have included language on supporting AI innovation while managing risks. For example, an April 2025 government-wide policy memorandum from the Office of Management and Budget (OMB) directed federal agencies to accelerate their use of AI by focusing on innovation, governance, and public trust; to implement risk management practices; and to "prioritize the use of AI that is safe, secure, and resilient."17 The OMB memorandum describes "risks from the use of AI" as including those "related to efficacy, safety, fairness, transparency, accountability, appropriateness, or lawfulness of a decision or action resulting from the use of AI."18

These overarching aims are reflected in the NIST AI RMF as well. The following subsections provide an overview of federal activities in the legislative and executive branches and U.S. approaches to regulating AI technologies.

Federal Laws Addressing AI

While Members of Congress have introduced hundreds of bills including the term artificial intelligence since the 115th Congress, fewer than 30 have been enacted as of May 2025. Of those, nearly half consisted of AI-focused provisions in either appropriations or national defense authorization legislation. Arguably the most expansive law was the National Artificial Intelligence Initiative Act of 2020 (Division E of the NDAA of FY2021, P.L. 116-283). The act:

  • codified the American AI Initiative;
  • established a National Artificial Intelligence Initiative Office to support federal AI activities, which was launched in January 2021 under the first Trump Administration19;
  • established an interagency committee at the Office of Science and Technology Policy to coordinate federal programs and activities in support of the law; and
  • established a National AI Advisory Committee, which produced over 30 reports between 2022 and 2024,20 including recommendations for the current Trump Administration.21

The act also directed AI activities at selected federal science agencies, including National Science Foundation (NSF) support for a network of National AI Research Institutes. Other laws have also focused on federal AI research and development (R&D). For example, the CHIPS and Science Act (P.L. 117-167) included numerous AI-related provisions directing certain federal science agencies to support AI R&D activities and the development of technical standards and guidelines related to safe and trustworthy AI systems.22 Since the law was enacted, various federal programs established by the act have provided grants in support of AI R&D, such as the NSF's Regional Innovation Engines program and the Economic Development Administration's Regional Technology and Innovation Hubs program.23

Additional laws have directed individual federal agencies—including the General Services Administration (GSA) and OMB—to support the use of AI across the federal government:

  • The AI in Government Act of 2020 (Division U, Title I, of the Consolidated Appropriations Act, 2021, P.L. 116-260) created within GSA an AI Center of Excellence to facilitate federal adoption of AI and collect and make public information regarding federal programs, pilots, and other initiatives.24 The act required OMB to issue a memorandum to federal agencies regarding the development of AI policies; approaches for removing barriers to using AI technologies; and best practices for identifying, assessing, and mitigating any discriminatory impact or bias and any unintended consequences of using AI.25
  • The Advancing American AI Act (Subtitle B of Title LXXII of Division G of the FY2023 NDAA, P.L. 117-263) required OMB to (1) incorporate additional considerations when developing guidance for using AI in the federal government26; (2) develop an initial means to ensure that federal contracts for acquiring AI address privacy, civil rights and liberties, and the protection of government data and information27; and (3) require the head of each federal agency (except the Department of Defense) to prepare and maintain an inventory of current and planned AI use cases.28

Other AI-related laws focused on certain aspects of AI training29 and sector-specific use.30 None established broad regulatory authorities for the development or use of AI or prohibitions on AI use.

State AI Laws

According to the National Conference of State Legislatures, as of late April 2025, at least 48 states and Puerto Rico had introduced more than 1,000 bills including the term AI in the 2025 legislative session.31 Those bills range widely in scope, including mentions of AI in broader legislation (such as commemoration resolutions and commendations), the establishment of task forces to study aspects of AI, and requirements for disclosures regarding AI-generated content (such as in political advertisements and mental health chatbots). In prior years, states have enacted a range of cross-sector AI legislation impacting private sector AI developers and deployers, including the following examples:

  • In September 2024, California enacted multiple pieces of AI legislation, including laws that require generative AI developers to digitally mark AI outputs (S.B. 942) and require AI developers to disclose information about the data they use to train their models (A.B. 2013).32 Governor Newsom vetoed an AI safety testing bill (S.B. 1047), which would have required developers of certain large-scale AI models (over certain computing power and cost thresholds) to test them before release and would have authorized the state attorney general to bring civil actions for specified critical harms (e.g., harms from the creation of chemical, biological, radiological, or nuclear weapons or from cyberattacks on critical infrastructure).33
  • In May 2024, Colorado enacted AI legislation (SB 24-205) focused on consumer protections, safety, and disclosure of use.34 Subsequently, Colorado's House Bill 24-1468 created an AI Impact Task Force to consider and propose recommendations regarding protections for consumers and workers from AI systems and automated decision systems. That task force released a report with recommendations to clarify SB 24-205.35
  • In March 2024, the State of Washington enacted legislation (ESSB 5838) to establish an AI task force to examine the development and use of AI by public and private sector entities and make recommendations to the Washington state legislature regarding guidelines and potential legislation for the use and regulation of AI systems to protect safety, privacy, and civil and intellectual property rights.36

Congressional AI Activities

Recent Congresses have established various AI-focused caucuses, task forces, and working groups and held numerous hearings on AI topics as part of their AI-focused policymaking activities. For example, as in prior sessions, the 119th Congress has established AI caucuses in both the Senate and the House.37 These caucuses enable Members of Congress to exchange information and ideas with colleagues.38

Leadership in both the Senate and the House previously established groups to inform the development of AI policies and legislation. In the 118th Congress, the Senate's Bipartisan AI Working Group sought to "complement the traditional congressional committee-driven policy process" given the cross-jurisdictional nature of AI.39 After hosting nine AI Insight Forums40 to bring AI expertise from across industry, academia, and civil society to the Senate, the working group released a roadmap for AI policy in the Senate in May 2024.41 The roadmap identified areas of consensus for AI policy development, including supporting U.S. innovation in AI; AI and the workforce; high-impact uses of AI; elections and democracy; privacy and liability; transparency, explainability,42 intellectual property, and copyright; safeguarding against AI risks; and national security.43

In February 2024, House leadership during the 118th Congress announced the establishment of a bipartisan, 24-Member Task Force on Artificial Intelligence "to explore how Congress can ensure America continues to lead the world in AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats."44 The task force reported holding multiple hearings and roundtables to engage with experts, after which it released a report in December 2024 with 66 key findings and 89 recommendations organized into 15 chapters.45 The Task Force report also adopted high-level policy considerations to guide future congressional efforts: identify AI issue novelty (i.e., whether a policy issue is "truly new for AI due to capabilities that did not previously exist" in order to avoid duplicative legislative mandates), promote AI innovation, protect against AI risks and harms, empower government with AI, affirm the use of a sectoral regulatory regime, take an incremental approach, and keep humans at the center of AI policy.

Executive Branch AI Actions

The U.S. executive branch has worked to establish federal government-wide AI initiatives and activities, including through executive orders, strategic plans, reports, policy memoranda, and advisory and coordination committees.46 Over multiple Administrations—including both Trump Administrations and the Biden Administration—executive branch policy has been set by a series of executive orders. Subsequent Administrations have rescinded or modified the executive orders or implementation approaches of previous Administrations. As a consequence, agencies in prior Administrations may have engaged in activities not continued in subsequent Administrations due to changing executive direction. Particularly relevant to this discussion is President Trump's rescinding in 2025 of President Biden's executive order of 2023 on AI, as described below.

Both the Trump and Biden Administrations took a variety of actions related to AI, with policy documents including language on supporting AI innovation while assessing for and managing potential risks. For example, the aforementioned government-wide policy memoranda from OMB regarding federal agency use of AI have directed agencies to advance AI governance and innovation while managing the risks of AI.47 In addition to the NIST AI RMF, these overarching aims have been reflected in other documents, such as President Trump's 2025 Executive Order (E.O.) 14179, Removing Barriers to American Leadership in Artificial Intelligence48; President Biden's 2023 E.O. 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,49 subsequently rescinded by President Trump50; President Trump's 2020 E.O. 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government51; and 2019 E.O. 13859, Maintaining American Leadership in Artificial Intelligence.52

During the Biden Administration, E.O. 14110 directed over 50 federal agencies to engage in more than 100 specific actions across eight overarching policy areas.53 The E.O. also used Defense Production Act authorities54 to require (1) companies developing, or intending to develop, certain dual-use AI models to report to the government on model training, testing, and data ownership and (2) entities that acquire, develop, or possess potentially large-scale computing infrastructure to report to the government on the location and amount of computing power. Those requirements are no longer in force after President Trump's E.O. 14148 revoked E.O. 14110 on January 20, 2025.55 The extent to which these activities may support current aims is to be determined. The Trump Administration has directed a review of any actions taken pursuant to E.O. 14110.56

The Trump Administration's E.O. 14179 set forth a policy "to sustain and enhance America's global AI dominance"57 and called for an "AI Action Plan" by July 30, 2025, inviting public comment on policy ideas for the plan.58 The comment period on the plan ended on March 15, 2025, with more than 8,700 comments reportedly received.59 In February 2025, Vice President Vance, in remarks at the AI Action Summit in Paris, France, stated that the U.S. plan under preparation "avoids an overly precautionary regulatory regime."60 Through such actions, the current Administration has signaled an intention toward what it characterizes as pro-economic-growth AI policies, potentially with a comparatively smaller federal role.

Federal agencies have been reporting their internal AI use cases in what is called the "AI Use Case Inventory" pursuant to E.O. 13960, the Advancing American AI Act, and OMB Memorandum M-25-21.61 As of January 23, 2025, agencies reported over 1,990 current and planned AI use cases (excluding retired AI use cases).62 The top three categories for AI uses in that inventory were mission-enabling (internal agency support), health and medical, and government services (which includes benefits and service delivery).63 This inventory of AI use cases is intended to increase transparency to assist with oversight of agency activities and investments. However, concerns have been raised about the comprehensiveness and accuracy of the inventories, and recommendations have been put forth to improve reporting. For example, in December 2023, the Government Accountability Office (GAO) made 35 recommendations to 19 federal agencies to fully implement federal AI requirements. As of May 2025, 31 of GAO's recommendations were listed as open.64 Additionally, in February 2024, the National AI Advisory Committee recommended that the federal AI use case inventory be expanded by limiting certain categories of exceptions, such as the reporting exceptions for sensitive law enforcement uses and for common commercial products.65

U.S. Approaches to Regulating AI

Beyond the broad AI governance frameworks set forth above, most U.S. regulatory efforts regarding AI have centered on federal agency assessments and enforcement of existing regulatory authorities, exploration of whether individual agencies require additional authorities, and securing voluntary commitments from industry. Further, many of the bills proposed in the 118th and 119th Congresses have emphasized the development of voluntary guidelines and best practices and the reporting of industry-conducted evaluations of AI systems rather than prohibitions or independent evaluation of AI uses and technologies. The approach of the U.S. federal government as a whole appears to be more focused on oversight of federal government uses of AI than on regulating AI in the private sector.

In terms of approaches to regulating AI, federal government activities may be broadly grouped into the following categories: regulating the AI technologies directly; regulating the use of AI across sectors, with a more agnostic approach to the technological cause of the outcome; and regulating the use of AI within particular sectors.

Regulating the AI Technologies

One approach, taken by some executive actions and introduced legislation, has been to regulate AI technologies themselves, such as by setting a technical threshold (e.g., computing power) or proposing transparency requirements.

For example, as discussed earlier in this report, E.O. 14110, when it was in place, required companies developing or intending to develop certain dual-use AI models to report to the federal government on model training, testing, and data ownership. The initial technical conditions that triggered the reporting requirements included "any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations" (FLOPs).66 When E.O. 14110 was issued, this minimum computation threshold reportedly exceeded the computing power that had been used to train any AI model then in use.67 According to one analysis updated on May 6, 2025, numerous models now surpass 10^25 FLOPs, and one—xAI's Grok-3—has passed 10^26 FLOPs.68 As an example of transparency requirements for particular AI technologies, the AI Disclosure Act of 2023 (H.R. 3831, 118th) would have required that any output of generative AI include a disclaimer that it was generated by AI, enforced by the Federal Trade Commission (FTC). In the 119th Congress, the Quashing Unwanted and Interruptive Electronic Telecommunications Act (H.R. 1027) would require disclosures with respect to robocalls using AI.
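
To illustrate how such a compute-based threshold operates in practice, the following sketch estimates a hypothetical model's total training compute using a rough rule of thumb of about 6 floating-point operations per model parameter per training token (a common research approximation, not language from E.O. 14110 or any statute) and compares the estimate against the 10^26 FLOPs reporting trigger discussed above. The model size, token count, and variable names are illustrative assumptions only.

    # Minimal sketch: comparing an estimated training-compute figure against a
    # compute-based regulatory threshold. The ~6 * parameters * tokens heuristic
    # is a common research approximation, not regulatory text.

    EO_14110_REPORTING_TRIGGER_FLOPS = 1e26  # former E.O. 14110 reporting trigger

    def estimated_training_flops(parameters: float, training_tokens: float) -> float:
        """Approximate total training compute via the ~6ND rule of thumb."""
        return 6.0 * parameters * training_tokens

    # Hypothetical model: 1 trillion parameters trained on 15 trillion tokens.
    flops = estimated_training_flops(1e12, 15e12)
    print(f"Estimated training compute: {flops:.1e} FLOPs")  # 9.0e+25
    print("Would have triggered reporting:",
          flops > EO_14110_REPORTING_TRIGGER_FLOPS)          # False

In this illustration, the hypothetical model falls just under the 10^26 FLOPs trigger, showing how a fixed numeric threshold draws a bright line that rapidly improving models can cross.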

On one hand, a technology-specific approach might result in more targeted accountability and oversight of AI applications. On the other hand, such an approach might be rigid and not fully account, even in the short term, for potential risks, as AI technologies have been evolving rapidly.

Regulating the Use of AI Technologies Across Sectors

Some Members of Congress have proposed to regulate the use and oversight of AI across sectors, sometimes taking a more technology-neutral approach. For example, the Algorithmic Accountability Act (S. 2892 and H.R. 5628, 118th) would have directed the FTC to require impact assessments of automated decision systems (including but not limited to AI) and augmented critical decision processes (ACDP)69 from certain covered commercial entities (e.g., large companies). The bill would have required covered entities to attempt to eliminate or mitigate any negative impacts on a consumer's life from an ACDP. The bill would also have required the FTC to develop a publicly accessible repository to publish a limited subset of information about each automated decision system or ACDP for which the agency received a report. Additionally, the bill would have required the FTC to provide guidance and technical assistance to covered entities and to establish a Bureau of Technology to aid and advise the FTC with respect to enforcement of the act.

While legislation such as the Algorithmic Accountability Act would have taken a broad approach to regulating AI and algorithmic systems, it stipulated evaluation and reporting requirements rather than specific prohibitions on use. Some analysts have asserted that focusing on automated systems, rather than AI explicitly, better captures the technical features of concern and avoids the questions of defining and delineating AI from non-AI systems.70 Critics of an approach that places requirements only on large companies assert that many smaller companies may provide products to a range of customers, including state and local governments, and thus that risks from those products would go unaddressed.71 As an alternative, policymakers might apply similar requirements to all companies but provide support (e.g., technical or financial) for small- and medium-size enterprises (SMEs).

Regulating the Use of AI Technologies Within Sectors

Some legislative approaches have focused on the use of AI technologies within particular sectors. For example, Members of Congress have introduced legislation focusing on governance and regulation of AI uses within the financial sector, elections and campaign finance, and health care. Examples of bills pertaining to AI uses in the financial sector include the Preventing Deep Fake Scams Act (H.R. 1734), which would "establish the Task Force on [AI] in the Financial Services Sector," and the AI PLAN Act (H.R. 2152), which would "require a strategy to defend against the economic and national security risks posed by the use of [AI] in the commission of financial crimes."72 Regarding AI uses in elections and campaign finance, example bills include the Fraudulent Artificial Intelligence Regulations Elections Act of 2024 (S. 4714 and H.R. 3875, 118th) and the AI Transparency in Elections Act of 2024 (H.R. 8668, 118th).73 Regarding health care, S. 4862 (118th) was introduced "to ensure that new advances in artificial intelligence are ethically adopted to improve the health of all individuals, and for other purposes."74

Selected International Approaches to AI Governance and Regulation

Similar to the U.S. federal government, some other countries, such as the United Kingdom (UK), have to date taken a sector-specific approach to regulating AI. In contrast, the European Union (EU) has enacted a broad regulatory approach through the EU AI Act. China has enacted targeted AI laws and is working on a broader AI regulation, though its economic and science and technology policies feature a heavy government role in private sector development. This section of the report provides high-level summary information on the approaches taken by the UK, EU, and China—as well as multi-country governance proposals and activities—as compared with U.S. approaches.

United Kingdom

The UK does not have a general statutory regulation for AI. Rather, AI is regulated through existing legal frameworks for the sectors in which AI is used.75 The UK government has worked to develop national AI strategy and policy documents, action plans, and research institutes. The approaches taken toward AI have varied depending on which party controls the UK government, particularly with respect to the perceived need for new, broad regulation.

For example, in September 2021, the UK National AI Strategy was published76 under the center-right Conservative Party, followed by a 2023 policy white paper, "AI Regulation: A Pro-Innovation Approach."77 The policy paper laid out the UK government's intention to "put in place a new framework to bring clarity and coherence to the AI regulatory landscape."78 The UK framework is meant to have an "agile and iterative approach, recognizing the speed at which these technologies are evolving," underpinned by five principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.79 According to the policy paper, these principles would initially be issued on a non-statutory basis and implemented by existing regulators (e.g., the Health and Safety Executive, Equality and Human Rights Commission, and Competition and Markets Authority) using their existing authorities rather than creating a new regulatory entity. However, the UK government would provide central support functions, including:

  • monitoring and evaluating the framework's effectiveness and implementation of the principles,
  • assessing and monitoring risks across the economy arising from AI,
  • conducting horizon scanning and gap analysis—including by convening industry—to inform a coherent response to emerging AI technology trends,
  • supporting testbeds and regulatory sandbox initiatives to help AI innovators get new technologies to market,80
  • providing education and awareness to give clarity to businesses and empower citizens, and
  • promoting interoperability with international regulatory frameworks.81

In response to comments on the 2023 policy white paper, the UK government clarified that, after reviewing an initial period without implementing regulations, it anticipated requiring regulators to "have due regard to the [cross-sectoral AI] principles."82

Since then, the UK government has undertaken efforts to support innovation and work toward broader AI regulations. For example, support for a pro-innovation approach was echoed in the January 2025 "AI Opportunities Action Plan,"83 released under the center-left Labour Party, which won the UK general election in July 2024.84 However, the plan also supports "well-designed and implemented regulation, alongside effective assurance tools," arguing that UK regulators "have an important role in supporting innovation as part of their Growth Duty."85 Further, a bill has been introduced in the House of Lords to make provision for the regulation of AI.86 The bill would direct the UK Secretary of State to create an AI Authority to, among other things, ensure alignment of approach across relevant regulators and accredit independent AI auditors.

One analysis described the UK's approach to AI regulation as a "principles-based, non-statutory, and cross-sector framework" that aims to "balance innovation and safety by applying the existing technology-neutral regulatory framework to AI."87 However, that analysis and others assert that AI regulatory activity is expected to increase following the election of a Labour government88 and cite the inclusion of AI legislation in the King's Speech in July 2024.89

European Union

On April 21, 2021, the European Commission released a proposed regulatory framework for AI—the Artificial Intelligence Act (AI Act).90 (For more information on the EU's institutions, see the "European Union (EU) Institutions" text box below.) After revisions and negotiations, the act was formally signed in June 2024 and broadly entered into force on August 1, 2024.91 However, none of the act's prohibitions or requirements began to apply until February 2, 2025, as described below, and final implementation actions stretch through 2030.92

European Union (EU) Institutions

The EU is a political and economic partnership representing a form of cooperation among 27 sovereign member states.93 The EU includes three main institutions involved in proposing, approving/rejecting, and implementing legislation:

The European Parliament is the only directly elected institution of the EU and includes 705 members.

The European Commission has 27 commissioners representing the interests of the EU as a whole and functions as the EU's primary executive body.

The Council of the European Union (or the Council of Ministers) has 27 national ministers (the president or prime minister of every member state) representing the interests of the EU's national governments.94

The AI Act adopts a risk-based approach, classifying AI systems into several risk categories, to which different degrees of requirements and obligations apply (see Figure 1)95:

  • Unacceptable risk. AI systems with a clear threat to safety, livelihoods, and rights of people are banned. This includes AI systems used in harmful AI-based manipulation and deception, harmful AI-based exploitation of vulnerabilities, social scoring, individual criminal offense risk assessment or prediction, untargeted collection of internet or closed-circuit television material to create or expand FRT databases, emotion recognition in workplaces and education institutions, biometric categorization to deduce certain protected characteristics, and real-time biometric identification for law enforcement purposes in publicly accessible spaces.96 The ban on AI systems posing unacceptable risks went into effect on February 2, 2025.
  • High risk. High-risk AI systems can pose serious risks to health, safety, or fundamental rights. These can include AI safety components in critical infrastructure; AI systems that may determine access to education, employment, public services, or financial services; AI-based safety components of products (e.g., in robot-assisted surgery); remote biometric identification; use by law enforcement that might interfere with fundamental rights; use in migration, asylum, and border control management; and AI systems used in courts or the administration of justice. High-risk AI systems are authorized but subject to assessments before they can be placed on the EU market and to post-market monitoring obligations. Pre- and post-market obligations include having risk assessment and mitigation systems; using high-quality datasets to minimize the risk of discriminatory outcomes; creating detailed documentation; and having a high level of robustness, cybersecurity, and accuracy.97 Obligations for all high-risk systems are to apply beginning in August 2026.
  • Transparency risk. AI systems that pose risks of impersonation or deception are subject to information and transparency requirements. For example, users must be made aware when they interact with chatbots, and deployers of AI systems that generate or manipulate image, audio, or video content must make sure the content is identifiable (which could include visual labeling or digital watermarking).98 Most of these requirements are to apply beginning in August 2025.
  • Minimal or no risk. For AI systems deemed minimal or no risk, such as AI-enabled video games or spam filters, there are no AI Act requirements.
  • General-purpose AI (GPAI).99 All GPAI models will have to maintain up-to-date technical documentation—including summary information on the content used in training the models—and comply with EU copyright law. GPAI models trained using a total computing power exceeding a certain threshold (10^25 FLOPS) are presumed to pose systemic risks. Those GPAI models exceeding that threshold must also constantly assess and mitigate risks, such as through documenting and reporting serious incidents and implementing corrective measures.100 Rules for GPAI systems that must comply with transparency requirements are to apply beginning in August 2025.101

Figure 1. European Union AI Act's Risk-Based Regulatory Approach

Source: CRS, adapted from Tambiama Madiega, "Artificial Intelligence Act: Briefing—EU Legislation in Progress," European Parliamentary Research Service, September 2024, https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf.

Notes: A general-purpose AI (GPAI) model means "an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications."
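
To consolidate the tiers described above, the following sketch represents the act's risk categories as a simple lookup from category to abbreviated treatment and application date. It paraphrases the descriptions above for illustration only; the category names and summaries are simplifications, not legal guidance.

    # Illustrative only: a simplified paraphrase of the EU AI Act's risk
    # tiers and application dates as described above; not legal guidance.

    EU_AI_ACT_TIERS = {
        "unacceptable": ("banned outright", "February 2, 2025"),
        "high": ("pre-market conformity assessment and post-market monitoring",
                 "August 2026"),
        "transparency": ("disclosure and labeling requirements", "August 2025"),
        "minimal or none": ("no AI Act requirements", None),
        "general-purpose AI": ("technical documentation and copyright compliance; "
                               "added duties above 10^25 FLOPS", "August 2025"),
    }

    def describe(tier: str) -> str:
        treatment, applies = EU_AI_ACT_TIERS[tier]
        return f"{tier}: {treatment}" + (f" (applies from {applies})" if applies else "")

    for tier in EU_AI_ACT_TIERS:
        print(describe(tier))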

The AI Act established an EU AI Office to provide advice on, and monitoring of, the implementation of the act.102 Each EU member state also must establish or designate at least one market surveillance authority and at least one notifying authority to ensure the application and implementation of the act. The act requires member states to provide for financial and non-monetary penalties and other enforcement measures for infringements (i.e., non-compliance by an operator with the prohibitions or obligations in the law, or the supplying of incorrect, incomplete, or misleading information to authorities).103 Financial penalties are expected to range from €7.5 million (or 1.5% of global annual turnover) to €35 million (or 7% of global annual turnover), depending on the type of infringement and the size of the company.104
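
As a simple arithmetic illustration of how the fixed and turnover-based caps described above interact, the sketch below assumes the "whichever is higher" convention commonly described for the act's maximum fines on larger companies (a hedged simplification; the act's text and member state implementation govern in practice, and caps vary by infringement type and company size).

    # Sketch: upper bound of a fine for the most serious infringements, assuming
    # the cap is the higher of a fixed amount and a share of global annual
    # turnover (simplified; actual caps vary by infringement type and firm size).

    def max_fine_eur(annual_turnover_eur: float,
                     fixed_cap_eur: float = 35e6,
                     turnover_share: float = 0.07) -> float:
        return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

    # A company with EUR 2 billion in global annual turnover:
    print(f"EUR {max_fine_eur(2e9):,.0f}")  # EUR 140,000,000 (7% of turnover)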

The AI Act also sets forth measures in support of AI innovation.105 For example, the act requires each member state to establish at least one regulatory sandbox to facilitate development and testing of AI systems in real-world settings while under regulatory oversight before entering the market. It also requires member states to undertake innovation measures to specifically help SMEs and startups.

Some analysts have been broadly supportive of the AI Act's risk-based approach and its recognition that risks posed by AI systems vary with the contexts in which they are used. Critics have asserted that the regulation does not go far enough to protect fundamental rights, recommending stronger risk mitigation measures.106 Other critiques have raised concerns that the act's rules create barriers for companies developing and deploying AI, including high initial compliance costs and prolonged time to market for new AI tools, and recommend fewer requirements and lower fines, regardless of company size, to spur innovation.107 According to recent reports and analyses, the EU may be increasingly emphasizing competitiveness and focusing on potential strategies and financial investments to spur EU AI development in an attempt to help overcome regulatory barriers, building on the aforementioned innovation measures in the AI Act.108 Other analysts have posited that the AI Act may provide benefits to European AI developers. For example, developers might focus more on innovation by navigating one standardized regulation rather than a patchwork from individual EU member countries, and they might have a competitive advantage in being able to assert that their AI products are trustworthy because they are in compliance with the AI Act.109

U.S. tech companies, including Meta and Apple, have reportedly declined to launch some AI products in the EU as a result of the AI Act and related regulations, including the Digital Markets Act and data protection rules, citing "the unpredictable nature of the European regulatory environment" and "regulatory uncertainties."110 Broadly, though, most analysts seem to agree that the ultimate impact of the regulation will depend on the details of its implementation, many of which are to be determined, as the majority of the act's provisions have yet to be implemented.

China

In July 2017, China's State Council issued A New Generation Artificial Intelligence Development Plan, setting broad goals and plans for developing AI technologies and applications and calling for the country to lead the world in AI by 2030.111 Since then, China's regulatory actions have been targeted to particular AI technologies and sector applications, including through the 2023 Provisions on Management of Deep Synthesis in Internet Information Service ("Deep Synthesis Rule") and the 2023 Interim Measures for the Management of Generative Artificial Intelligence Services ("Generative AI Measures").112 The Deep Synthesis Rule regulates the use of deep fake technologies in generating or changing digital content. The Generative AI Measures aim to mitigate risks associated with public-facing generative AI services—including content security, personal data protection, data security, and intellectual property violations—and set forth a multi-tiered system of obligations for providers of generative AI services (whether the provider is based within or outside of China) to mitigate such risks.113 One analysis characterized China's approach as a vertical, technology-specific framework influenced by national security concerns and economic development goals, contrasting it with the EU's AI Act, described as a horizontal, risk-based framework focused on ethical considerations and transparency.114 While some have described such regulations as restrictive for Chinese AI development, others have asserted that they "offer little protective value to the Chinese public" and instead send "a strong pro-growth signal" to industry.115 One expert described these regulatory actions as "underscoring the state's interest in controlling online news content" and in keeping with the government's "efforts to prevent what it considers political and social disruption and enforce censorship and content regulations more broadly."116

No draft of a broad AI law has been taken under consideration by China's National People's Congress. In March 2024, an expert group in China released a draft law, the Artificial Intelligence Law of the People's Republic of China, which reportedly aims to promote innovation in AI technology, develop a healthy AI industry, and regulate AI products and services in a more comprehensive way than previous governance proposals, with more details on the responsibilities of specific actors.117 In March 2025, four Chinese government agencies jointly released the Measures for the Labelling of Artificial Intelligence-Generated and Synthetic Content, which are to come into effect on September 1, 2025. These rules require AI-generated content to be labelled implicitly (i.e., embedded in a digital file's metadata) or explicitly (i.e., in a form easily perceived by users, added to text, audio, images, video, and virtual scenes).118 Some analyses have described the labeling measures as providing clarity on the implementation of the broader Deep Synthesis Rule and Generative AI Measures described above.119

As part of the aforementioned 2017 New Generation AI Development Plan, the Chinese government called for both state and non-state actors to support the central government in pursuing global leadership in AI. The Chinese government plays a large role in supporting and directing R&D in strategic technologies such as AI through a range of state-led industrial and science and technology policies.120 For example, the Chinese government can assert control and influence over private sector firms through acquiring controlling stakes, directly subsidizing companies, and providing government guidance funds (i.e., state-directed public-private investment funds) to seed early-stage AI firms.121 Broadly, the outsized role of the Chinese government in shaping private sector activities represents an overarching difference in AI development and commercialization as compared to the more independent private sector AI activities in the United States, UK, and EU. Some analysts have asserted that a heavy reliance on government funding can lead to inefficiencies, such as misallocated capital and market distortions.122 According to one analysis, though, while the United States has historically led China in private AI investment and translating AI research into real-world applications, foreign investment is slowly increasing for China's generative AI sector, and China is "rapidly closing the performance gap" with the United States.123 Additionally, the Chinese government is reportedly prioritizing a diversified approach to AI, focusing on basic research, core software and hardware technologies, and AI applications.124

Multi-Country and Bilateral AI Governance Activities

The United States has engaged in multilateral AI activities as well as bilateral activities with the EU, UK, and India. Among the major multi-country AI initiatives are those spearheaded by the OECD, the Group of Seven (G7), and the United Nations. For example, in 2019, the OECD member countries committed to a set of common AI principles, which were updated in 2024 to account for new technological developments, notably in general-purpose and generative AI.125 The principles aim to promote inclusive growth, human-centered values, transparency, safety and security, and accountability, and they encourage investments in R&D. In October 2023, the G7 leaders announced an agreement on guiding principles on AI and a voluntary code of conduct for AI developers under the Hiroshima AI Process.126 The 11 guiding principles and the code of conduct build on the 2019 OECD AI principles and are meant to inform national regulatory efforts implemented by G7 nations "in line with a risk-based approach."127 The UN General Assembly committed to establishing a multidisciplinary Independent International Scientific Panel on AI and initiating a Global Dialogue on AI Governance, calling for co-facilitators from both a developed and a developing country.128 The United Nations, along with some academic and nonprofit stakeholders, has called for more inclusive discussions of AI governance with countries in the Global South.129

In 2020, 15 countries—including the United States—launched the Global Partnership on AI to bring together expertise from a range of stakeholders focused on project-oriented collaboration.130 As announced in July 2024, the partnership joined with the OECD to bring together 44 countries across six continents "to advance an ambitious agenda for implementing human-centric, safe, secure and trustworthy AI," with nationally funded Expert Support Centres located in Canada, France, and Japan.131

Regarding bilateral activities, in June 2021, the U.S.-EU Trade and Technology Council was established,132 serving as a forum to coordinate U.S. and EU approaches to global trade, economic, and technology issues, with cooperation on "responsible AI innovation … in line with shared democratic values."133 As part of the Indo-U.S. Science and Technology Forum, the U.S.-India Artificial Intelligence Initiative was established in March 2021 to serve as a platform to discuss bilateral AI R&D collaboration between the United States and India.134

In November 2023, the United States participated in the UK's AI Safety Summit, which included participants from 28 countries across the EU, Africa, the Middle East, and Asia. In tandem with the summit, the United States established a U.S. AI Safety Institute (AISI) to operationalize the NIST AI RMF135 by creating guidelines, tools, benchmarks, and best practices for identifying, evaluating, and mitigating AI risks.136 At the same time, the UK announced its own AISI, and in April 2024, the U.S. and UK AISIs agreed to jointly develop tests for the most advanced AI models.137 In February 2025, the UK AISI was renamed the AI Security Institute to "reflect AISI's focus on serious AI risks with security implications," removing prior foci on bias and freedom of speech.138 Similarly, in the United States, during the second Trump Administration, NIST has reportedly removed certain terms—including AI safety, responsible AI, and AI fairness—from updated cooperative agreement language for scientific partners working with the AISI.139

Policy Considerations and Options for Congress

As Congress continues debating whether and, if so, how to regulate AI technologies or their uses, it might evaluate a range of policy considerations and potential legislative options in addition to maintaining the status quo. This section provides a high-level overview of such considerations from a few overarching topic areas: leveraging existing frameworks, creating new AI regulations or agency authorities, and engaging with international regulatory efforts. This section further provides selected potential policy considerations that are common across AI technologies and sectoral applications and are likely to be considered by policymakers in the near and long terms.

Leveraging Existing Frameworks

Certain federal agencies have stated that their existing legal authorities apply to automated systems and new technologies, including AI technologies.

Applying Existing Federal Agency Authorities to AI

In April 2023, officials at the FTC, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission, and the Department of Justice's Civil Rights Division released a "Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems."140 The joint statement highlighted the agencies' authorities and reiterated efforts to monitor the development and use of automated systems and to protect individuals' rights "regardless of whether legal violations occur through traditional means or advanced technologies."141 As an example of applying those authorities to AI systems, in December 2023, the FTC brought an action against the drugstore chain Rite Aid, alleging that it acted unfairly in its use of FRT to surveil customers. Rite Aid agreed to a court-ordered settlement that, among other things, prohibited it from using FRT for five years.142

There have been efforts to better understand whether agencies need additional authorities to effectively address concerns that are specific to advanced AI technologies and tools. For example, in November 2020, in response to E.O. 13859, OMB provided guidance to federal agencies for regulatory and nonregulatory oversight of AI applications developed and deployed outside of the federal government.143 That memorandum laid out 10 principles for the stewardship of AI applications and directed federal agencies to provide plans for conforming to the guidance, including identifying any statutory authorities governing agency regulation of AI applications, regulatory barriers to AI applications, and any planned or considered regulatory actions on AI. Few agencies appear to have provided comprehensive, publicly available responses. Congress might consider whether an assessment of agency authorities would be valuable for its oversight and deliberations on AI legislation. If so, options for Congress could include requesting information from federal agencies, directing an investigative entity such as GAO to conduct an assessment across agencies, drafting legislation directing the development and provision of agency assessments, and conducting oversight of agency responses.

Congress might consider maintaining the current regulatory and governance environment, leveraging the authorities of federal agencies without enacting additional laws specific to AI technologies. On one hand, this approach would refrain from placing additional regulatory constraints on AI development and deployment, a concern of those focused on international competition and U.S. leadership in AI innovation. On the other hand, in the absence of federal AI regulations, states have been enacting their own laws (see the "State AI Laws" text box). Critics assert that such a patchwork of AI laws creates challenges for companies, which must evaluate and ensure compliance across states, increasing regulatory burden and costs. Such costs may disproportionately affect SMEs and startup companies, which likely do not have the same legal and financial resources as larger companies.

To support AI development without additional restrictions, Congress might consider codifying existing agency activities, such as those at the NIST AISI. For example, the Future of Artificial Intelligence Innovation Act of 2024 (S. 4178, 118th) would have codified the NIST AISI. Additionally, Congress might consider modifying or updating agencies' existing authorities to support regulatory refinement or providing new authorities to support innovation. Options for Congress might include the following:

  • Requiring regulatory agencies to provide AI-specific guidance as it relates to their existing authorities. For example, in early January 2025, under the Biden Administration, the Food and Drug Administration released draft guidance for industry "on the use of AI to produce information or data intended to support regulatory decision-making regarding safety, effectiveness, or quality of drugs," with comments due by April 7, 2025.144
  • Requiring an interagency effort to create a comprehensive federal regulatory policy for ensuring AI safety that details each agency's responsibilities. One example might be the Coordinated Framework for Regulation of Biotechnology. The Federal Register notice for the Coordinated Framework stated that, while existing statutes provided a basic network of agency jurisdiction over both research and products, the Coordinated Framework would help assure reasonable safeguards for the public. The notice further stated that the framework was "expected to evolve in accord with the experiences of the industry and the agencies."145 The framework has been in place since 1986 and was most recently updated in 2017.146

Such options for a more robust "whole-of-government" approach to AI governance may help address what some analysts have identified as notable areas for improvement in implementing legal and policy requirements, such as greater detail and transparency in agencies' compliance plans147 for AI requirements and in the roles of chief AI officers across federal agencies.148

Creating New AI Regulations or Authorities

Various stakeholders have called for new cross-sector authorities or broad regulations to address potential risks from AI models and tools. Among federal proposals are transparency and accountability requirements, such as requirements for impact assessments of AI models, third-party audits of AI tools, and labels or other disclosures (e.g., digital watermarking) for AI-generated content. For example, the Preventing Algorithmic Collusion Act of 2025 (S. 232, 119th) would prohibit the use of pricing algorithms (including AI algorithms) that can facilitate collusion, and it would create an antitrust law enforcement audit tool to increase transparency. The Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025 (S. 1396, 119th) would require transparency with respect to content and content provenance information for AI-generated or algorithmically modified digital content. The Artificial Intelligence Research, Innovation, and Accountability Act of 2024 (S. 3312, 118th) would have required online platform operators to disclose the use of generative AI systems and would have implemented transparency reporting requirements on deployers and developers of high-impact AI systems, akin to the risk-based approach in the aforementioned Algorithmic Accountability Act from the 118th Congress.

Some analysts have asserted a need for government-mandated oversight of AI conducted by professional auditors, which could create an industry of AI auditors to "deliver accountability for AI without disincentivizing innovation."149 Some pro-business groups such as the U.S. Chamber of Commerce have called for "thoughtful laws and rules for the development of responsible AI and its ethical deployment," asserting that "failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies."150

Various industry stakeholders have echoed calls for regulating AI technologies, including by putting forth their own recommendations.151 However, some analysts have argued that calls for regulation from technology firms may be intended to protect companies' interests and may not align with the priorities of other stakeholders. For example, at a Senate Judiciary Committee hearing on May 16, 2023, in response to a request from Senator John Kennedy for recommendations on AI regulations, OpenAI CEO Sam Altman stated that he would "form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards," in addition to other recommendations, such as requiring independent audits for compliance with safety standards.152 Analysts have raised concerns that such a plan might make it more difficult for others, such as startups and open-source developers, to enter the field of developing advanced AI systems.153

Supporting U.S. AI Development and Deployment

In lieu of, or in addition to, new requirements on AI, Congress might consider providing federal agencies with additional authorities or direction to support domestic AI development. As one example, NIST has worked in partnership with private sector groups—including industry, technology trade associations, nonprofits, and civil society groups—to create resources such as the voluntary AI RMF, as directed by law.154 Such efforts might be expanded to adapt the AI RMF for certain sectors, uses, or groups, such as small businesses. Many comments submitted on the Trump Administration's AI Action Plan expressed support for NIST and the AI RMF.155 Along these lines, legislation introduced in the 118th Congress would have directed NIST to work with other public and private sector organizations to develop guidance and best practices for AI development—such as dataset and model training documentation; disclosures of security practices, such as third-party assessments; and public reporting on AI systems' capabilities, limitations, and appropriate uses.156 In the 119th Congress, the Testing and Evaluation Systems for Trusted AI Act of 2025 (S. 1633) would direct NIST and the Department of Energy to establish testbeds to develop a strategy to assess, and eventually demonstrate, measurement standards for evaluating AI systems used by federal agencies, in coordination with a newly established public-private working group.157

Congress could provide new authorities to create regulatory sandboxes. Such sandboxes are frameworks set up by regulators under which firms are exempted from the legal risk of certain regulations and allowed to test novel products in the marketplace under close regulatory supervision. For example, the Consumer Financial Protection Bureau has used regulatory sandboxes in financial services.158 In the 118th Congress, bills were introduced that would have created sandboxes for AI projects at financial regulatory agencies (H.R. 9309/S. 4951, 118th) and would have established a new office in OMB to administer a universal sandbox (S. 4919, 118th).

Legislation introduced in the 119th Congress (H.R. 2385) seeks to codify and shape the National Artificial Intelligence Research Resource (NAIRR). According to statements from the bill's co-authors,159 the NAIRR, a two-year pilot program launched by NSF in January 2024 as part of an effort to "democratize access to critical resources necessary to power responsible AI discovery and innovation,"160 aims to support researchers and students, as well as small businesses and startups, and might increase domestic innovation and competition. The NAIRR pilot builds on recommendations in a January 2023 report from a statutorily created NAIRR Task Force161 and previous direction from the revoked E.O. 14110.

Engaging with International Efforts to Regulate AI

Some policy experts and government officials have asserted that the ability of individual countries to support domestic growth of AI technologies may be affected by the extent to which their regulatory and governance mechanisms are consistent with those of other countries.162 Such international alignment at a broad level may facilitate trade, improve regulatory oversight across countries, and enable international cooperation.163 In May 2023, G7 countries—including the United States, the UK, and the EU (as a "non-enumerated member")—agreed to prioritize AI governance collaborations, emphasizing the importance of forward-looking, risk-based approaches to AI development and deployment.164

Congress might consider whether to pursue collaborative efforts to regulate AI with like-minded countries, such as through alignment on policy goals or direction to federal agencies, in line with previous executive orders from the Trump and Biden Administrations. Some experts have referred to the "imperative of global [AI] governance" as "irrefutable," pointing to global sourcing of AI's raw materials, such as critical minerals and training data, and the international deployment of GPAI—including beneficial uses and any negative downstream impacts—across borders.165

As calls for international collaboration have grown, so too have concerns about international competition in AI R&D and innovation. The laws and regulations that countries create for technologies such as AI can affect a country's competitiveness—a "multi-dimensional and somewhat nebulous concept" that "can include a wide array of context-specific factors"—not only through their presence or absence but also through their quality and the effectiveness of their implementation.166 Other considerations can also influence competitiveness, including domestic factors (e.g., R&D funding, education and workforce training, infrastructure availability, and AI adoption by businesses)167 and international factors (e.g., trade agreements, foreign investments). These factors may offset one another to varying extents. For example, while China has implemented numerous AI regulations and faces supply chain constraints on AI infrastructure, the synergy between the government and the industrial sector—including government guidance funds supporting AI startups—might be helping to offset potential innovation constraints from those regulations, as reflected in innovative startups such as DeepSeek.168

Congress could consider whether and how to balance such factors, for example, by supporting international collaboration and U.S. AI development while acknowledging concerns about a potential "race to the bottom," in which AI systems become less safe in the pursuit of rapid development, as some experts have warned.169 Some bills pertaining to international AI research and activities have been introduced previously, such as the International Artificial Intelligence Research Partnership Act of 2024 (H.R. 8700, 118th) and H.Res. 649 (118th).170 Additionally, the Future of Artificial Intelligence Innovation Act of 2024 (S. 4178, 118th) would have supported domestic AI safety research, as well as international coalitions on AI innovation, development, and alignment of standards development with "like-minded governments" of foreign countries.


Footnotes

1.

For more on AI technologies and policy issues, see CRS In Focus IF12426, Generative Artificial Intelligence: Overview, Issues, and Considerations for Congress, by Laurie Harris; and CRS Video WVB00756, Current Issues in Artificial Intelligence, by Laurie Harris et al.

2.

See 15 U.S.C. §9401(3). Additional statutory definitions can be found in the U.S. Code for artificial intelligence at Title 10, Section 4061 note prec., and for foundational artificial intelligence model at Title 10, Section 4001 note.

3.

National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (AI RMF 1.0), January 2023, https://doi.org/10.6028/NIST.AI.100-1.

4.

Organization for Economic Cooperation and Development (OECD), "OECD Updates AI Principles to Stay Abreast of Rapid Technological Developments," press release, May 3, 2024, https://www.oecd.org/en/about/news/press-releases/2024/05/oecd-updates-ai-principles-to-stay-abreast-of-rapid-technological-developments.html.

5.

OECD, "Recommendation of the Council on Artificial Intelligence," OECD/LEGAL/0449, adopted May 22, 2019, amended March 5, 2024, https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449.

6.

OECD, "Explanatory Memorandum on the Updated OECD Definition of an AI System," March 2024, https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/03/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_3c815e51/623da898-en.pdf.

7.

United States–European Union Trade and Technology Council, EU-U.S. Terminology and Taxonomy for Artificial Intelligence: First Edition, May 2023, p. 1, https://www.nist.gov/system/files/documents/noindex/2023/05/31/WG1%20AI%20Taxonomy%20and%20Terminology%20Subgroup%20List%20of%20Terms.pdf.

8.

International Organization for Standardization, "Trustworthiness—Vocabulary," https://www.iso.org/obp/ui/en/#iso:std:iso-iec:ts:5723:ed-1:v1:en:~:text=3.2.17-,safety,-property%20of%20a.

9.

Julie Heng, "Moving Beyond the Regulation/Deregulation Trap," Center for Strategic and International Studies, February 20, 2025, https://www.csis.org/blogs/perspectives-innovation/moving-beyond-regulationderegulation-trap.

10.

See, for example, Daniel Castro, "Ten Principles for Regulation That Does Not Harm AI Innovation," Center for Data Innovation, February 8, 2023, https://www2.datainnovation.org/2023-ten-principles-ai-regulation.pdf.

11.

Kevin A. Bryan and Florenta Teodoridis, "Balancing Market Innovation Incentives and Regulation in AI: Challenges and Opportunities," Brookings Institution, September 24, 2024, https://www.brookings.edu/articles/balancing-market-innovation-incentives-and-regulation-in-ai-challenges-and-opportunities/.

12.

Institute for Human-Centered AI, Stanford University, "AI Index 2025 Annual Report," April 2025, p. 3, https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf.

13.

World Economic Forum, "AI Value Alignment: How We Can Align Artificial Intelligence with Human Values," October 17, 2024, https://www.weforum.org/stories/2024/10/ai-value-alignment-how-we-can-align-artificial-intelligence-with-human-values/.

14.

Florence G'sell, "Regulating Under Uncertainty: Governance Options for Generative AI," 2nd ed., Stanford Cyber Policy Center, Freeman Spogli Institute, Stanford Law School, September 2024, https://cyber.fsi.stanford.edu/content/regulating-under-uncertainty-governance-options-generative-ai.

15.

See, for example, NIST, AI RMF, p. 10.

16.

As described in White House, "White House Releases New Policies on Federal Agency AI Use and Procurement," April 7, 2025, https://www.whitehouse.gov/articles/2025/04/white-house-releases-new-policies-on-federal-agency-ai-use-and-procurement/. See also the section on "Civil Rights and Civil Liberties" in Bipartisan House Task Force on AI, Bipartisan House Task Force Report on AI: Guiding Principles, Forward-Looking Recommendations, and Policy Proposals, 118th Cong., December 2024, https://www.speaker.gov/wp-content/uploads/2024/12/AI-Task-Force-Report-FINAL.pdf.

17.

Office of Management and Budget (OMB), Accelerating Federal Use of AI Through Innovation, Governance, and Public Trust, M-25-21, April 3, 2025, https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf, archived at https://perma.cc/5NF9-7DWF. This document rescinded and replaced M-24-10 (March 28, 2024, archived at https://perma.cc/5NF9-7DWF), in which OMB had directed federal agencies to "advance AI governance and innovation while managing the risks from the use of AI in the federal government, particularly those affecting the rights and safety of the public."

18.

OMB, Accelerating Federal Use of AI, p. 3.

19.

The White House, "The White House Launches the National Artificial Intelligence Initiative Office," January 12, 2021, https://trumpwhitehouse.archives.gov/briefings-statements/white-house-launches-national-artificial-intelligence-initiative-office/.

20.

"National AI Advisory Committee," archived at https://web.archive.org/web/20250117223307/https://ai.gov/naiac/.

21.

National AI Advisory Committee (NAIAC), "NAIAC Insights for the Administration of President Donald J. Trump," draft report, January 22, 2025, https://www.nist.gov/system/files/documents/noindex/2025/01/24/NAIAC_New_Administration_Report-Draft_2025.01.22.pdf.

22.

Additional examples include the Identifying Outputs of Generative Adversarial Networks Act (P.L. 116-258) and the Detection Equipment and Technology Evaluation to Counter the Threat of Fentanyl and Xylazine Act of 2024 (P.L. 118-186).

23.

National Science Foundation (NSF), "About NSF Engines," https://www.nsf.gov/funding/initiatives/regional-innovation-engines/about-nsf-engines; and Economic Development Administration (EDA), "Regional Technology and Innovation Hubs (Tech Hubs)," https://www.eda.gov/funding/programs/regional-technology-and-innovation-hubs. Future support for these programs is unclear given the President's proposed FY2026 budget, which proposes to eliminate the EDA, implement NSF reorganization plans, and reduce NSF's budget by $4.70 billion from an FY2025 enacted funding level of $9.06 billion.

24.

The act codified the GSA AI Center of Excellence that was launched in 2019.

25.

In response to this direction, as well as direction from the Advancing American AI Act and Executive Order 14110, OMB released the aforementioned M-24-10. On April 3, 2025, OMB released M-25-21, which rescinded and replaced M-24-10.

26.

Such additional considerations included those in the National Security Commission on AI's report entitled "Key Considerations for the Responsible Development and Fielding of AI" and the input of governmental and nongovernmental privacy, civil rights, and civil liberties experts.

27.

In response to this direction from the Advancing American AI Act, OMB released memorandum M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government, which was rescinded and replaced by OMB, Driving Efficient Acquisition of Artificial Intelligence in Government, M-25-22, April 3, 2025, https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-22-Driving-Efficient-Acquisition-of-Artificial-Intelligence-in-Government.pdf.

28.

AI use cases reported by federal agencies for 2025 are available at https://github.com/ombegov/2024-Federal-AI-Use-Case-Inventory. See the "Executive Branch Activities" section below for more information on AI use case inventories.

29.

For example, the Artificial Intelligence Training for the Acquisition Workforce Act (P.L. 117-207).

30.

For example, regarding the use of AI and machine learning for improving airport safety, see the FAA Reauthorization Act of 2024 (P.L. 118-63).

31.

National Conference of State Legislatures, "Artificial Intelligence 2025 Legislation," updated April 24, 2025, https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation.

32.

S.B. 942, the California AI Transparency Act, https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB942; and A.B. 2013, Generative artificial intelligence: training data transparency, https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB2013.

33.

Legislative text for S.B. 1047, Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is available at https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047; Governor Newsom's veto message is available at https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf.

34.

Legislative text and summary information available at https://leg.colorado.gov/bills/sb24-205.

35.

See Kathryn M. Rattigan, "Colorado's AI Task Force Proposes Updates to State's AI Law," National Law Review, February 6, 2025, https://natlawreview.com/article/colorados-ai-task-force-proposes-updates-states-ai-law; and Legislative Council Staff, Report and Recommendations: Artificial Intelligence Impact Task Force, February 2025, https://leg.colorado.gov/sites/default/files/images/report_and_recommendations_0.pdf.

36.

Legislative text and the task force's report are available at https://www.atg.wa.gov/aitaskforce.

37.

Information and a current membership list for the Senate AI Caucus are available at https://www.heinrich.senate.gov/artificial-intelligence-caucus. Information about the House's bipartisan Congressional AI Caucus comes from an e-Dear Colleague letter dated February 27, 2025. (Access to e-Dear Colleague letters is restricted to congressional intranet systems.)

38.

CRS Report R40683, Congressional Member Organizations (CMOs) and Informal Member Groups: Their Purpose and Activities, History, and Formation, by Sarah J. Eckman.

39.

Bipartisan Senate AI Working Group, "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate," May 2024, p. 2, https://www.schumer.senate.gov/imo/media/doc/Roadmap_Electronic1.32pm.pdf.

40.

The AI Insight Forums were closed-door meetings. The nonprofit group Tech Policy Press has compiled reported information about the various AI Insight Forums, available at https://www.techpolicy.press/us-senate-ai-insight-forum-tracker/.

41.

Bipartisan Senate AI Working Group, "Driving U.S. Innovation in Artificial Intelligence," pp. 2-3.

42.

As described in NIST, AI RMF, pp. 16-17, "Explainability refers to a representation of the mechanisms underlying AI systems' operation" that "can answer the question of 'how' a decision was made in the system."

43.

For a list of CRS's AI products, including sector-specific reports on AI and elections, intellectual property, and copyright, see CRS Insight IN12458, Artificial Intelligence: CRS Products, by Laurie Harris and Rachael D. Roan.

44.

See Office of House Speaker Mike Johnson, "House Launches Bipartisan Task Force on Artificial Intelligence," press release, February 20, 2024, https://www.speaker.gov/2024/02/20/house-launches-bipartisan-task-force-on-artificial-intelligence/.

45.

Bipartisan House Task Force on AI, Bipartisan House Task Force Report on AI: Guiding Principles, Forward-Looking Recommendations, and Policy Proposals, 118th Cong., December 2024, https://www.speaker.gov/wp-content/uploads/2024/12/AI-Task-Force-Report-FINAL.pdf.

46.

As one example, in October 2022, the Office of Science and Technology Policy (OSTP)—which leads interagency science and technology policy coordination efforts and advises the Executive Office of the President on science and technology policy—released a white paper intended to "support the development of federal policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems." OSTP, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, October 2022, https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/.

47.

OMB, M-24-10; and OMB, M-25-21.

48.

E.O. 14179 of January 23, 2025, "Removing Barriers to American Leadership in Artificial Intelligence," 90 Federal Register 8741, January 31, 2025, https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence.

49.

E.O. 14110 of October 30, 2023, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," 88 Federal Register 75191, November 1, 2023, https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.

50.

E.O. 14148 of January 20, 2025, "Initial Rescissions of Harmful Executive Orders and Actions," 90 Federal Register 8237, January 28, 2025, https://www.federalregister.gov/documents/2025/01/28/2025-01901/initial-rescissions-of-harmful-executive-orders-and-actions.

51.

E.O. 13960 of December 3, 2020, "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government," 85 Federal Register 78939, December 8, 2020, https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government.

52.

E.O. 13859 of February 11, 2019, "Maintaining American Leadership in Artificial Intelligence," 84 Federal Register 3967, February 14, 2019, https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence.

53.

For more information summarizing the requirements and timelines for deliverables from E.O. 14110, see CRS Report R47843, Highlights of the 2023 Executive Order on Artificial Intelligence for Congress, by Laurie Harris and Chris Jaikaran.

54.

50 U.S.C. §§4501-4568.

55.

E.O. 14148.

56.

E.O. 14179, §5.

57.

E.O. 14179, §2.

58.

NSF, "Request for Information [RFI] on the Development of an Artificial Intelligence (AI) Action Plan," 90 Federal Register 9088, February 6, 2025, https://www.federalregister.gov/documents/2025/02/06/2025-02305/request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan.

59.

Per the RFI notice and comment information at https://www.regulations.gov/document/NSF_FRDOC_0001-3479.

60.

Vice President JD Vance, "Remarks by the Vice President at the Artificial Intelligence Action Summit in Paris, France," February 11, 2025, American Presidency Project, UC Santa Barbara, https://www.presidency.ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.

61.

AI use cases reported by federal agencies for 2025 are available at https://github.com/ombegov/2024-Federal-AI-Use-Case-Inventory. Instructions for agency reporting are available at https://www.cio.gov/assets/resources/2024-Guidance-for-AI-Use-Case-Inventories.pdf.

62.

U.S. Chief Information Officers Council, "Policies and Priorities: Executive Order (EO) 13960," https://www.cio.gov/policies-and-priorities/Executive-Order-13960-AI-Use-Case-Inventories-Reference/. Of these use cases, 337 were identified as rights-impacting and/or safety-impacting per the prior definitions of the now-rescinded OMB M-24-10.

63.

Chief Information Officers Council, "Policies and Priorities: Executive Order (EO) 13960."

64.

Government Accountability Office (GAO), Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements, GAO-24-105980, December 12, 2023, https://www.gao.gov/products/gao-24-105980.

65.

NAIAC, Recommendation: Expand the AI Use Case Inventory by Limiting the 'Sensitive Law Enforcement' Exception, February 2024, https://perma.cc/H889-X4NM; and NAIAC, Recommendation: Expand the AI Use Case Inventory by Limiting the 'Common Commercial Products' Exception, February 2024, https://perma.cc/Q2RK-PAPJ.

66.

E.O. 14110, §4.2(b). FLOP was defined in E.O. 14110 as "any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base." FLOPs are a measure of the computational capacity and efficiency of AI systems.
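To illustrate this definition (an illustration added for clarity, not language from the E.O.): a floating-point number takes the form x = m × b^e, where m is a fixed-precision integer (the significand), b is a fixed base (typically 2), and e is an integer exponent; for example, 1.5 = 3 × 2^(-1). A single FLOP is then one arithmetic operation, such as an addition or multiplication, performed on numbers in this format.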

67.

For example, researchers estimated that the computational power minimum threshold in the E.O. (10^26 FLOPs) was "more than any model trained to date" and that OpenAI's GPT-4 model was just under that threshold. See Markus Anderljung et al., "Frontier AI Regulation: Managing Emerging Risks to Public Safety," arXiv (non-peer reviewed), July 6, 2023, https://arxiv.org/abs/2307.03718; and Rishi Bommasani et al., "Decoding the White House AI Executive Order's Achievements," Stanford University, Institute for Human-Centered AI, November 2, 2023, https://hai.stanford.edu/news/decoding-white-house-ai-executive-orders-achievements.
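For a rough sense of how such thresholds relate to model scale, analysts sometimes use the heuristic that total training compute is approximately 6 × (number of model parameters) × (number of training tokens); this approximation is not drawn from the cited sources, and the figures in the short Python sketch below are hypothetical rather than measurements of any actual system.

# Back-of-the-envelope comparison of estimated training compute
# against the E.O.'s 10^26 FLOP reporting threshold, using the
# common heuristic: training FLOPs ~= 6 * parameters * tokens.
EO_THRESHOLD_FLOPS = 1e26

def estimated_training_flops(parameters: float, tokens: float) -> float:
    # Rough estimate of total training compute (6*N*D heuristic).
    return 6 * parameters * tokens

# Hypothetical model: 1 trillion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(1e12, 1e13)
print(f"Estimated training compute: {flops:.1e} FLOPs")       # 6.0e+25
print("Exceeds E.O. threshold:", flops > EO_THRESHOLD_FLOPS)  # False

Under these illustrative assumptions, the hypothetical model would fall just below the 10^26 FLOP threshold, consistent with the researchers' observation that the threshold exceeded any model trained at the time.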

68.

Epoch AI, "AI Benchmarking Hub," updated May 28, 2025, https://epoch.ai/data/ai-benchmarking-dashboard.

69.

As defined in the bill, the term augmented critical decision processes referred to systems, processes, or activities that use automated decision systems to make decisions that have legal, material, or a similarly significant effect on consumers' lives relating to access to or the cost, terms, or availability of such things as education, employment, essential utilities, financial services, health care, housing, and legal services.

70.

Jakob Mokander et al., "The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What Can They Learn from Each Other?," Minds and Machines, vol. 32 (August 2022), pp. 751-758, https://link.springer.com/article/10.1007/s11023-022-09612-y.

71.

Mokander et al., "The US Algorithmic Accountability Act."

72.

For more information on AI in financial services, see CRS Report R47997, Artificial Intelligence and Machine Learning in Financial Services, by Paul Tierno.

73.

For more information on AI in elections and campaign finance, see CRS Insight IN12222, Artificial Intelligence (AI) and Campaign Finance Policy: Recent Developments, by R. Sam Garrett.

74.

For more information on AI in health care, see CRS Report R48319, Artificial Intelligence (AI) in Health Care, coordinated by Nora Wells.

75.

Elizabeth Rough and Nikki Sutherland, "UK Government Policy and Regulation," in "Artificial Intelligence: A Reading List," UK House of Commons Library, August 20, 2024, https://commonslibrary.parliament.uk/research-briefings/cbp-10003/.

76.

Government of the United Kingdom (UK), National AI Strategy, updated December 18, 2022, https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version.

77.

Government of the UK, Department for Science, Innovation, and Technology (DSIT), "A Pro-Innovation Approach to AI Regulation," updated August 3, 2023, https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.

78.

DSIT, "A Pro-Innovation Approach to AI Regulation," Executive Summary, point 8.

79.

DSIT, "A Pro-Innovation Approach to AI Regulation," Executive Summary, points 9-10.

80.

Regulatory sandboxes and testbeds allow innovators to test products and services in controlled environments, potentially have a reduced time to market at a lower cost, and receive expert support from regulatory entities. UK Financial Conduct Authority, "Regulatory Sandbox," updated September 5, 2024, https://www.fca.org.uk/firms/innovation/regulatory-sandbox.

81.

DSIT, "A Pro-Innovation Approach to AI Regulation," Executive Summary, point 14.

82.

DSIT, "A Pro-Innovation Approach to AI Regulation: Government Response," paragraph 109, updated February 6, 2024, https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response.

83.

Matt Clifford, AI Opportunities Action Plan, DSIT, January 2025, https://www.gov.uk/government/publications/ai-opportunities-action-plan.

84.

For more on that election, see CRS Insight IN12386, The United Kingdom's 2024 Election, by Derek E. Mix.

85.

Clifford, AI Opportunities Action Plan. The "Growth Duty" requires regulators to consider the desirability of promoting economic growth alongside the delivery of protections set out in relevant legislation. For more, see Government of the UK, "Growth Duty."

86.

"Artificial Intelligence (Regulation) Bill," HL Bill 76, introduced March 4, 2024, Parliament: House of Commons, United Kingdom, https://bills.parliament.uk/bills/3942.

87.

Valeria Gallo and Suchitra Nair, "The UK's Framework for AI Regulation: Agility Is Prioritized, but Future Legislation Is Likely to Be Needed," Deloitte UK, February 21, 2024, https://www.deloitte.com/uk/en/Industries/financial-services/blogs/the-uks-framework-for-ai-regulation.html.

88.

Charlotte Halford, "The Approach to the Regulation of AI in the UK," DAC Beachcroft, March 6, 2025, https://www.dacbeachcroft.com/The-approach-to-the-regulation-of-AI-in-the-UK.

89.

The King's Speech is read by the king on the occasion of the State Opening of Parliament, and it sets forth the scope of legislation that the UK government intends to pursue in the forthcoming parliamentary session. See UK Parliament, "King's Speech," https://www.parliament.uk/site-information/glossary/kings-speech/.

90.

European Commission, "Proposal for a Regulation of the European Parliament and of the Council: Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts," April 21, 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.

91.

Text of the EU AI Act is available at https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689.

92.

Future of Life Institute, "Implementation Timeline," EU Artificial Intelligence Act, https://artificialintelligenceact.eu/implementation-timeline/.

93.

For more information on the EU, including its structure, governance, and policies, see CRS Report RS21372, The European Union: Questions and Answers, by Kristin Archick.

94.

See CRS In Focus IF11211, The European Parliament and U.S. Interests, by Kristin Archick; and James McBride, "How Does the European Union Work?," Council on Foreign Relations, updated March 11, 2022, https://www.cfr.org/backgrounder/how-does-european-union-work.

95.

Certain AI systems are outside of the scope of the AI Act, including those used for military, defense, or national security purposes or for the sole purpose of scientific R&D.

96.

European Commission, "AI Act," https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

97.

European Commission, "AI Act."

98.

With certain exceptions—for example, when used to prevent criminal offenses.

99.

A general-purpose AI (GPAI) model means "an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications."

100.

Tambiama Madiega, "Artificial Intelligence Act: Briefing—EU Legislation in Progress," European Parliamentary Research Service, September 2024, https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf.

101.

According to one analysis updated on May 28, 2025, numerous models now surpass 10^25 FLOPs and thus would fall within the definition of GPAI models with systemic risk, triggering the associated compliance requirements when they go into effect. Epoch AI, "AI Benchmarking Hub."

102.

European AI Office, https://digital-strategy.ec.europa.eu/en/policies/ai-office.

103.

See "Article 99: Penalties," at https://artificialintelligenceact.eu/article/99/.

104.

Minesh Tanna and William Dunning, "The EU AI Act: A Quick Guide," Simmons and Simmons, July 12, 2024, https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide.

105.

See Chapter VI: Measures in Support of Innovation, at https://artificialintelligenceact.eu/chapter/6/.

106.

Laura Caroli, "Talks on the EU AI Act Code of Practice at a Crucial Phase," Center for Strategic and International Studies, January 24, 2025, https://www.csis.org/analysis/talks-eu-ai-act-code-practice-crucial-phase.

107.

Justyna Lisinska, "Draghi's Competitiveness Report Shows Why the EU Needs a Pro-Innovation Approach Towards AI," Center for Data Innovation, September 25, 2024, https://datainnovation.org/2024/09/draghis-competitiveness-report-shows-why-the-eu-needs-a-pro-innovation-approach-towards-ai/; and Jochen Ditsche and Maria Mikhaylenko, "The Effectiveness of the EU's New AI Act Will Depend on How Businesses, Policymakers, and Society Adapt," Roland Berger, May 8, 2024, https://www.rolandberger.com/en/Insights/Publications/European-AI-Act-Opportunities-and-challenges.html.

108.

See, for example, Mario Draghi, The Future of European Competitiveness, September 2024, https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en; and Carlo Altomonte and Valbona Zeneli, "The Draghi Report Grabbed Europe's Attention. Now It's Time for the EU to Put It Into Action," Atlantic Council, January 6, 2025, https://www.atlanticcouncil.org/blogs/new-atlanticist/the-draghi-report-grabbed-europes-attention-now-its-time-for-the-eu-to-put-it-into-action/.

109.

Ditsche and Mikhaylenko, "The Effectiveness of the EU's New AI Act."

110.

Stephen Morris, "SAP Chief Warns EU Against Over-Regulating Artificial Intelligence," Financial Times, October 1, 2024, https://www.ft.com/content/9db8fe6d-3f8a-4886-a439-c23faf459c23; Ina Fried, "Meta Won't Offer Future Multimodal AI Models in EU," Axios, July 17, 2024, https://www.axios.com/2024/07/17/meta-future-multimodal-ai-models-eu; and Ivana Saric, "Apple Says It Won't Roll Out AI Features in Europe Due to Regulatory Concerns," Axios, June 21, 2024, https://www.axios.com/2024/06/21/apple-ai-features-europe.

111.

Graham Webster et al., "Full Translation: China's 'New Generation Artificial Intelligence Plan' (2017)," DigiChina, Stanford University, Cyber Policy Center, August 1, 2017, https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/.

112.

Barbara Li and Amaya Zhou, "Navigating the Complexities of AI Regulation in China," Reed Smith, August 7, 2024, https://www.reedsmith.com/en/perspectives/2024/08/navigating-the-complexities-of-ai-regulation-in-china.

113.

Mimi Zou and Lu Zhang, "Navigating China's Regulatory Approach to Generative Artificial Intelligence and Large Language Models," Cambridge Forum on AI: Law and Governance, vol. 1 (January 6, 2025), https://www.cambridge.org/core/journals/cambridge-forum-on-ai-law-and-governance/article/navigating-chinas-regulatory-approach-to-generative-artificial-intelligence-and-large-language-models/969B2055997BF42DE693B7A1A1B4E8BA.

114.

Błażej Sajduk and Dominika Dziwisz, "Comparative Analysis of AI Development Strategies: A Study of China's Ambitions and the EU's Regulatory Framework," European Hub for Contemporary China, September 20, 2024, https://doi.org/10.31175/eh4s.2014.12.

115.

Angela Huyue Zhang, "The Promise and Perils of China's Regulation of Artificial Intelligence," Columbia Journal of Transnational Law, vol. 63, no. 1 (January 21, 2025), https://www.jtl.columbia.edu/volume-63/the-promise-and-perils-of-chinas-regulation-of-artificial-intelligence.

116.

Written testimony of Ngor Luong in U.S.-China Economic and Security Review Commission, Current and Emerging Technologies in U.S.-China Economic and National Security Competition, Panel III: China's Progress in Commercial Applications of Selected Emerging Technologies, hearings, 118th Cong., 2nd sess., February 1, 2024, https://www.uscc.gov/sites/default/files/2024-02/Ngor_Luong_Testimony.pdf.

117.

Hipolito Calero, "An Analysis of China's AI Governance Proposals," Center for Security and Emerging Technology, Georgetown University, September 12, 2024, https://cset.georgetown.edu/article/an-analysis-of-chinas-ai-governance-proposals/.

118.

Jeffrey Shin, "AI Watch: Global Regulatory Tracker—China," White and Case, March 31, 2025, https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-china.

119.

Lauren Hurcombe et al., "China Released New Measures for Labelling AI-Generated and Synthetic Content," DLA Piper, March 24, 2025, https://www.technologyslegaledge.com/2025/03/china-released-new-measures-for-labelling-ai-generated-and-synthetic-content.

120.

For more on China's science and industrial policies broadly, see CRS In Focus IF10964, Made in China 2025 and Industrial Policies: Issues for Congress, by Karen M. Sutter.

121.

See Luong, written testimony; Hodan Omaar, "How Innovative Is China in AI?," Information Technology and Innovation Foundation, August 26, 2024, https://itif.org/publications/2024/08/26/how-innovative-is-china-in-ai/; and Ruby Scanlon, "Beyond DeepSeek: How China's AI Ecosystem Fuels Breakthroughs," Lawfare, February 14, 2025, https://www.lawfaremedia.org/article/beyond-deepseek—how-china-s-ai-ecosystem-fuels-breakthroughs.

122.

Scanlon, "Beyond DeepSeek."

123.

Omaar, "How Innovative Is China in AI?"

124.

James Pomfret and Summer Zhen, "China's Xi Calls for Self Sufficiency in AI Development amid U.S. Rivalry," Reuters, April 30, 2025, https://www.reuters.com/world/china/chinas-xi-calls-self-sufficiency-ai-development-amid-us-rivalry-2025-04-26/.

125.

OECD, "OECD Updates AI Principles to Stay Abreast of Rapid Technological Developments," press release, March 5, 2024, https://www.oecd.org/newsroom/oecd-updates-ai-principles-to-stay-abreast-of-rapid-technological-developments.htm.

126.

The Hiroshima AI Process is a G7 effort to develop a comprehensive policy framework, consisting of an analysis of priority risks, challenges, and opportunities of generative AI; guiding principles on AI for all AI actors; a voluntary international code of conduct for developers of advanced AI systems; and development of responsible AI tools and best practices. See European Commission, "Commission Welcomes G7 Leaders' Agreement on Guiding Principles and a Code of Conduct on Artificial Intelligence," press release, October 23, 2023, https://ec.europa.eu/commission/presscorner/detail/en/ip_23_5379; and European Commission, "G7 Leaders' Statement on the Hiroshima AI Process," October 30, 2023, https://digital-strategy.ec.europa.eu/en/library/g7-leaders-statement-hiroshima-ai-process.

127.

See European Commission, "Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems," October 2023, https://ec.europa.eu/newsroom/dae/redirection/document/99641.

128.

UN General Assembly, "The Pact for the Future," September 22, 2024, pp. 49-50, https://docs.un.org/en/A/RES/79/1.

129.

See, for example, UN AI Advisory Body, Governing AI for Humanity, September 2024, p. 8, https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf; Chinasa T. Okolo, "AI in the Global South: Opportunities and Challenges Towards More Inclusive Governance," Brookings Institution, November 1, 2023, https://www.brookings.edu/articles/ai-in-the-global-south-opportunities-and-challenges-towards-more-inclusive-governance/; and Adebola Folorunso et al., "A Policy Framework on AI Usage in Developing Countries and Its Impact," Global Journal of Engineering and Technology Advances, vol. 21, no. 1 (October 2024), pp. 154-166, https://gjeta.com/sites/default/files/GJETA-2024-0192.pdf.

130.

Global Partnership on Artificial Intelligence, "GPAI: Our Work," https://gpai.ai/projects/.

131.

OECD, "GPAI and OECD Unite to Advance Coordinated International Efforts for Trustworthy AI," July 3, 2024, https://www.oecd.org/en/about/news/speech-statements/2024/07/GPAI-and-OECD-unite-to-advance-coordinated-international-efforts-for-trustworthy-AI.html.

132.

For more on the U.S.-EU Trade and Technology Council, see CRS In Focus IF12575, U.S.-EU Trade and Technology Council: Background and Issues, by Shayerah I. Akhtar and Danielle M. Trachtenberg.

133.

For more information, see Office of the United States Trade Representative, "U.S.-EU Joint Statement of the Trade and Technology Council," May 31, 2023, https://ustr.gov/about-us/policy-offices/press-office/press-releases/2023/may/us-eu-joint-statement-trade-and-technology-council; and European Commission, "EU-US Trade and Technology Council (2021-2024)," https://digital-strategy.ec.europa.eu/en/factpages/eu-us-trade-and-technology-council-2021-2024.

134.

For more information on this initiative, see U.S.-India Artificial Intelligence Initiative, "U.S.-India Artificial Intelligence (USIAI) Initiative," https://usiai.iusstf.org/introduction1.

135.

NIST, AI RMF.

136.

For more information, see NIST, "U.S. Artificial Intelligence Safety Institute: Strategic Vision," https://www.nist.gov/aisi/strategic-vision.

137.

U.S. Department of Commerce, "U.S. and UK Announce Partnership on Science of AI Safety," press release, April 1, 2024, https://www.commerce.gov/news/press-releases/2024/04/us-and-uk-announce-partnership-science-ai-safety.

138.

UK Government, "Tackling AI Security Risks to Unleash Growth and Deliver Plan for Change: UK's AI Safety Institute Becomes 'UK AI Security Institute,'" press release, February 14, 2025, https://www.gov.uk/government/news/tackling-ai-security-risks-to-unleash-growth-and-deliver-plan-for-change.

139.

Will Knight, "Under Trump, AI Scientists Are Told to Remove 'Ideological Bias' from Powerful Models," Wired, March 14, 2025, https://www.wired.com/story/ai-safety-institute-new-directive-america-first/.

140.

Rohit Chopra et al., "Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems," April 25, 2023, https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf.

141.

Chopra et al., "Joint Statement on Enforcement Efforts."

142.

FTC, "Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology Without Reasonable Safeguards," press release, December 19, 2023, https://perma.cc/2E6M-GANC.

143.

OMB, Guidance for Regulation of Artificial Intelligence Applications, M-21-06, November 17, 2020, https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf.

144.

Food and Drug Administration, "Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products; Draft Guidance for Industry; Availability; Comment Request," 90 Federal Register 1157, January 7, 2025, https://www.federalregister.gov/documents/2025/01/07/2024-31542/considerations-for-the-use-of-artificial-intelligence-to-support-regulatory-decision-making-for-drug.

145.

OSTP, "Coordinated Framework for Regulation of Biotechnology," 51 Federal Register 123, June 26, 1986, https://archives.federalregister.gov/issue_slice/1986/6/26/23299-23366.pdf.

146.

Department of Agriculture, Environmental Protection Agency, and Food and Drug Administration, The Coordinated Framework for the Regulation of Biotechnology: Plain Language Information on the Biotechnology Regulatory System, November 2023, https://usbiotechnologyregulation.mrp.usda.gov/sites/default/files/coordinated-framework-plain-language.pdf.

147.

Agencies were required to submit compliance plans regarding AI actions in response to the February 2019 E.O. 13859 and the April 2025 OMB memorandum M-25-21.

148.

Jennifer Wang et al., "Assessing the Implementation of Federal AI Leadership and Compliance Mandates," Institute for Human-Centered AI and the Regulation, Evaluation, and Governance Lab, Stanford University, January 2025, https://hai.stanford.edu/policy/assessing-the-implementation-of-federal-ai-leadership-and-compliance-mandates.

149.

Edwin Farley and Christian Lansang, "AI Auditing: First Steps Towards the Effective Regulation of Artificial Intelligence Systems," Harvard Journal of Law and Technology, vol. 38 (February 19, 2025), https://jolt.law.harvard.edu/digest/ai-auditing-first-steps-towards-the-effective-regulation-of-artificial-intelligence-systems.

150.

U.S. Chamber of Commerce, "Artificial Intelligence Commission Report," March 9, 2023, https://www.uschamber.com/technology/artificial-intelligence-commission-report.

151.

See, for example, Google, "Recommendations for Regulating AI," https://ai.google/static/documents/recommendations-for-regulating-ai.pdf.

152.

Testimony of Sam Altman, CEO, OpenAI, in U.S. Congress, Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law, Oversight of A.I.: Rules for Artificial Intelligence, hearings, 118th Cong., 1st sess., May 16, 2023, S. Hrg. 118-37, https://www.govinfo.gov/app/details/CHRG-118shrg52706/CHRG-118shrg52706.

153.

Jillian Deutsch, "Would Government Licensing Chill Competition Around AI?," Government Technology, July 21, 2023, https://www.govtech.com/artificial-intelligence/would-government-licensing-chill-competition-around-ai.

154.

See 15 U.S.C. §278h–1(c).

155.

Mohar Chatterjee and Gabby Miller, "Hot Buttons for the AI Action Plan," Politico Pro Newsletter, March 20, 2025, https://subscriber.politicopro.com/newsletter/2025/03/hot-buttons-for-the-ai-action-plan-00239753.

156.

The AI Development Practices Act of 2024 (H.R. 9466, 118th).

157.

Bill text is available at https://fedscoop.com/wp-content/uploads/sites/5/2025/05/HLA25385.pdf.

158.

For more on regulatory sandboxes and how they have been used at the Consumer Financial Protection Bureau, see CRS In Focus IF12875, Regulatory Sandboxes at the Consumer Financial Protection Bureau, by Karl E. Schneider.

159.

Office of Rep. Jay Obernolte, "Reps. Obernolte, Beyer Introduce Bipartisan CREATE AI Act to Expand Access to Artificial Intelligence Research Tools," press release, March 31, 2025, https://obernolte.house.gov/media/press-releases/reps-obernolte-beyer-introduce-bipartisan-create-ai-act-expand-access.

160.

NSF, "Democratizing the Future of AI R&D: NSF to Launch National AI Research Resource Pilot," January 24, 2024, https://www.nsf.gov/news/democratizing-future-ai-rd-nsf-launch-national-ai.

161.

NAIRR Task Force, Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource, January 2023, https://perma.cc/9VVM-Q5C9. Statutory establishment of, and direction for, the NAIRR Task Force is at Title 42, Section 9415, of the U.S. Code.

162.

See, for example, Department of Industry, Science and Resources (Australia), Safe and Responsible AI in Australia: Discussion Paper, June 2023, p. 3, https://apo.org.au/sites/default/files/resource-files/apo-nid322938.pdf.

163.

Alex Engler, "The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment," Brookings Institution, April 25, 2023, https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/.

164.

The White House, "G7 Hiroshima Leaders' Communique," May 20, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/.

165.

United Nations, AI Advisory Body, Governing AI for Humanity, September 2024, https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf.

166.

Paul Davidson et al., How Do Laws and Regulations Affect Competitiveness: The Role for Regulatory Impact Assessment, OECD, 2021, https://www.oecd.org/content/dam/oecd/en/publications/reports/2021/02/how-do-laws-and-regulations-affect-competitiveness_bffcf65f/7c11f5d5-en.pdf.

167.

For more on U.S. economic considerations regarding AI, see CRS In Focus IF12762, The Macroeconomic Effects of Artificial Intelligence, by Lida R. Weinstock and Paul Tierno.

168.

Scanlon, "Beyond DeepSeek."

169.

Ina Fried, "Google's Hassabis Warns Against AI 'Race,'" Axios AI+ Newsletter, February 14, 2025, https://www.axios.com/newsletters/axios-ai-plus-1fbeee90-ea4e-11ef-9140-6dbc403a18ee.html.

170.

H.Res. 649 (118th), "Calling on the United States to champion a regional artificial intelligence strategy in the Americas to foster inclusive artificial intelligence systems that combat biases within marginalized groups and promote social justice, economic well-being, and democratic values."