Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress

August 4, 2023

Laurie A. Harris
Analyst in Science and Technology Policy
Artificial intelligence (AI)—generally thought of as computerized systems that work and react in ways commonly thought to require intelligence—can encompass a range of technologies, methodologies, and application areas, such as natural language processing, facial recognition, and robotics. The concept of AI has existed for decades, with the term first coined in the 1950s, followed by alternating periods of significant development and lulls in activity and progress.
A notable area of recent advancement has been in generative AI (GenAI), which refers to machine learning (ML) models 
developed through training on large volumes of data in order to generate content. Technological advancements in the 
underlying models since 2017, combined with the open availability of these tools to the public in late 2022, have led to 
widespread use. The underlying models for GenAI tools have been described as “general-purpose AI,” meaning they can be 
adapted to a wide range of downstream tasks. Such advancements, and the wide variety of applications for AI technologies, 
have renewed debates over appropriate uses and guardrails, including in the areas of health care, education, and national 
security. 
AI technologies, including GenAI tools, have many potential benefits, such as accelerating and providing insights into data 
processing, augmenting human decisionmaking, and optimizing performance for complex systems and tasks. GenAI tools, 
for example, are increasingly capable of performing a broad range of tasks, such as text analysis, image generation, and 
speech recognition. However, AI systems may perpetuate or amplify biases in the datasets on which they are trained; may not 
yet be able to fully explain their decisionmaking; and often depend on such vast amounts of data and other resources that they 
are not widely accessible for research, development, and commercialization beyond a handful of technology companies. 
Numerous federal laws on AI have been enacted over the past few Congresses, either as standalone legislation or as AI-
focused provisions in broader acts. These include the expansive National Artificial Intelligence Initiative Act of 2020 
(Division E of P.L. 116-283), which included the establishment of an American AI Initiative and direction for AI research, 
development, and evaluation activities at federal science agencies. Additional acts have directed certain agencies to undertake 
activities to guide AI programs and policies across the federal government (e.g., the AI in Government Act of 2020, P.L. 116-
260; and the Advancing American AI Act, Subtitle B of P.L. 117-263). In the 117th Congress, at least 75 bills were 
introduced that either focused on AI and ML or had AI/ML-focused provisions. Six of those were enacted.  
In the 118th Congress, as of June 2023, at least 40 bills had been introduced that either focused on AI/ML or contained AI/ML-focused provisions; none had been enacted. Collectively, bills in the 118th Congress address a range of topics, 
including federal government oversight of AI; training for federal employees; disclosure of AI use; export controls; use-
specific prohibitions; and support for the use of AI in particular sectors, such as cybersecurity, weather modeling, wildfire 
detection, precision agriculture, and airport safety. 
A primary consideration under debate in the United States and internationally is whether and how to regulate AI 
technologies. The European Union’s draft Artificial Intelligence Act would broadly take a risk-based approach to regulatory 
requirements and prohibitions for certain uses. In the United States, previously introduced legislation has sought to require 
impact assessments and reporting for automated decision systems—including but not limited to AI systems—in critical areas 
(e.g., health care, employment, and criminal justice). Other perspectives on AI regulation have suggested a sector-specific 
approach with interagency coordination. A general concern for Congress might be how to approach AI regulatory efforts in a 
way that balances support for innovation and beneficial uses while minimizing current and future harms. 
Additional considerations for the 118th Congress might include whether the current federal government mechanisms are 
sufficient for AI oversight and policymaking, the role of the federal government in supporting AI research and development, 
the potential impact of AI technologies on the workforce, disclosure of AI use, testing and validation of AI systems, and 
potential ways to support the development of trustworthy and responsible AI. 
 
Contents

Introduction
Artificial Intelligence Background and History
Recent Artificial Intelligence Advances
Benefits and Potential Risks of Artificial Intelligence Technologies
Artificial Intelligence Technologies in Selected Sectors
    Health Care
    Education
    National Security
Artificial Intelligence Laws and Legislation
    Current Federal Laws Addressing Artificial Intelligence
    Federal AI Legislation Introduced in the 117th and 118th Congresses
Perspectives on Regulating Artificial Intelligence
Other Considerations for the 118th Congress

Contacts

Author Information
 
 
Introduction 
As the use of artificial intelligence (AI) has grown across a wide range of sectors, so too have 
strategies to influence the growth of AI and mitigate potential risks. This report provides a brief 
background on AI technologies and recent advances, including in generative AI (GenAI); benefits 
and risks of AI tools; current federal laws addressing AI; perspectives on regulating AI; and 
selected other considerations for the 118th Congress. This report also provides an overview of AI 
in selected sectors; however, it does not attempt to address all applications of AI. Information on 
the application of AI technologies in additional sectors can be found in separate CRS products.1 
Artificial Intelligence Background and History 
AI can generally be thought of as computerized systems that work and react in ways commonly 
thought to require intelligence, such as the ability to learn, solve problems, and achieve goals 
under uncertain and varying conditions, with varying levels of autonomy. AI is not one thing; 
rather, AI systems can encompass a range of methodologies and application areas, such as natural 
language processing, robotics, and facial recognition.  
Common terms used in the field of AI include machine learning (ML), deep learning (DL), and 
neural networks. ML, often referred to as a subfield of AI, examines how to build computer 
programs that automatically improve their performance at some task, through experience, without 
relying on explicit rules-based programming to do so.2 One of the goals of ML is to teach 
algorithms to successfully interpret data that have not previously been encountered. DL systems 
learn from large amounts of data to subsequently recognize and classify related, but previously 
unobserved, data. Neural networks, a type of DL often described as being loosely modeled after 
the human brain, consist of thousands or millions of processing nodes (i.e., computational units). 
DL approaches have been used in systems across many areas of AI research, from autonomous 
vehicles to voice recognition technologies.3  
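To make the idea of learning from data (rather than from explicit rules) concrete, the short Python sketch below is an illustrative example only—it is not drawn from this report or its sources—and assumes the open-source scikit-learn library. It trains a small neural network on labeled images of handwritten digits and then checks how well it classifies digits it has not previously encountered.

# Illustrative sketch only: a small neural-network classifier that "learns" to
# recognize handwritten digits from labeled examples instead of following
# programmer-written rules, mirroring the ML/DL description above.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)            # 8x8 pixel images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A feed-forward neural network with one hidden layer of 64 nodes (computational units).
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                    # "experience": the labeled training data

# Evaluate on data the model has not previously encountered.
print("Accuracy on unseen digits:", accuracy_score(y_test, model.predict(X_test)))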
Historically, there has been debate over which technologies should be classified as AI. For 
example, robotic process automation (RPA) has been described as the use of rules-based software 
to automate highly repetitive, routine tasks normally performed by knowledge workers.4 Because 
it automates activities performed by humans, it is often described as an AI technology. However, 
some argue that RPA is not AI because it does not include a learning component. Others discuss 
RPA as a basic tool that can be combined with AI to create complex process automation, or 
intelligent process automation, along an “intelligent automation continuum.”5 
 
1 For example, additional products cover AI in consumer lending, defense, and copyright law. See CRS In Focus 
IF12399, Automation, Artificial Intelligence, and Machine Learning in Consumer Lending, by Cheryl R. Cooper; CRS 
Report R46458, Emerging Military Technologies: Background and Issues for Congress, by Kelley M. Sayler; and CRS 
Legal Sidebar LSB10922, Generative Artificial Intelligence and Copyright Law, by Christopher T. Zirpoli.  
2 Adapted from Erik Brynjolfsson, Tom Mitchell, and Daniel Rock, “What Can Machines Learn, and What Does It 
Mean for Occupations and the Economy?,” AEA Papers and Proceedings, vol. 108 (May 1, 2018), pp. 43-47, 
https://dspace.mit.edu/bitstream/handle/1721.1/120302/pandp.20181019.pdf.  
3 Larry Hardesty, “Explained: Neural Networks,” Massachusetts Institute of Technology (MIT) News, April 14, 2017, 
http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414.  
4 For more information, see IBM, “What Is Robotic Process Automation (RPA)?,” https://www.ibm.com/topics/rpa.  
5 IBM Global Business Services, “Using Artificial Intelligence to Optimize the Value of Robotic Process Automation,” 
September 2017, at https://www.ibm.com/downloads/cas/KDKAAK29.  
 
The term artificial intelligence was coined at the Dartmouth Summer Research Project on 
Artificial Intelligence, a conference proposed in 1955 and held the following year.6 Since that 
time, the field of AI has gone through what some have termed summers and winters—periods of 
much research and advancement followed by lulls in activity and progress. The reasons for the AI 
winters have included a focus on theory over practical applications, research problems being 
more difficult than anticipated, and limitations of the technologies of the time. Much of the 
current progress and research in AI, which began around 2010, has been attributed to the 
availability of large datasets (i.e., big data), improved ML approaches and algorithms, and more 
powerful computers.7 
Recent Artificial Intelligence Advances 
One of the most notable areas of advancement in AI has been in GenAI models and applications, 
such as ChatGPT.8 GenAI refers to ML models developed through training on large volumes of 
data in order to generate content. ChatGPT is an AI chatbot9 from a company called OpenAI, 
underpinned by a type of AI called a large language model (LLM).10 LLMs are trained on 
massive amounts of data, largely collected from public internet sites. When a user provides a 
prompt, usually a text prompt, the model can generate words or paragraphs with human-like 
quality. Other models can create different types of outputs from text prompts, such as images, 
music, videos, and computer code. GenAI models work to match the style and appearance of the 
underlying data, and they have shown what has been called “capability overhang,” meaning 
hidden capabilities of AI systems that researchers have not uncovered or thought to test for yet. 
While GenAI tools are not new, recent advances—particularly since the introduction of the transformer architecture11 in 2017 and improvements in “generative pre-trained transformer” (GPT) models since 2019—combined with the open availability of these tools to the public in late 2022 have led to widespread use. 
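As a rough, illustrative sketch of the self-attention operation behind the transformer architecture (see footnote 11)—not the implementation of any particular GenAI product—the Python code below uses the NumPy library to compute one scaled dot-product attention step, in which every element of a sequence is re-represented as a weighted mix of all the other elements based on how strongly they relate to one another.

# Illustrative sketch only: one scaled dot-product self-attention step, the core
# operation of transformer models. Each token is compared with every other token,
# and the resulting weights determine how much each token's value contributes to
# every other token's new representation.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (num_tokens, d_model); Wq/Wk/Wv: (d_model, d_k) learned projections.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V                                # weighted mix of token values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))                      # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(tokens, Wq, Wk, Wv).shape)       # -> (4, 8): one updated vector per token

In a real transformer the projection matrices are learned during training; here they are random, so the output illustrates only the flow of data, not meaningful model behavior.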
Various measures may be used to assess the state of development and impacts of AI technologies, 
such as research and development (R&D) activities, technical performance of AI systems against 
established benchmarks, education and employment of AI experts, adoption of AI tools by public 
and private sector entities, and the development of AI policies and governance models in the 
United States and other countries. The annual AI Index report, led by a steering committee at 
Stanford University, is a prominent example of an attempt to quantify and collate such measures. 
The 2023 AI Index report, released in April 2023, noted that “AI continued to post state-of-the-art 
 
6 See J. McCarthy et al., “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” August 
31, 1955, http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html.  
7 Executive Office of the President, National Science and Technology Council, Committee on Technology, Preparing 
for the Future of Artificial Intelligence, October 2016, pp. 5-6. For additional information on these factors and a short 
history of AI, see also the appendix of Peter Stone et al., “Artificial Intelligence and Life in 2030,” One Hundred Year 
Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, September 2016, 
http://ai100.stanford.edu/2016-report. 
8 For more information on GenAI, see CRS In Focus IF12426, Generative Artificial Intelligence: Overview, Issues, and 
Questions for Congress, by Laurie A. Harris. 
9 An AI chatbot is a computer program that uses AI to interpret user questions and generate automated responses.  
10 AI chatbots from other companies include Bard from Google and Claude from Anthropic. 
11 Transformer models process a sequence of whole sentences (rather than analyzing word by word), which helps them 
to “remember” past information. They use mathematical techniques called attention or self-attention to detect how data 
elements, even when far away sequentially, influence and depend on each other. These methods make GPT models 
faster to train, more efficient in understanding context, and highly scalable. See Ashish Vaswani et al., “Attention Is All 
You Need,” 31st Conference on Neural Information Processing Systems, Long Beach, CA, 2017, https://papers.nips.cc/
paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf. 
 
results, but year-over-year improvement on many benchmarks continues to be marginal” while 
also highlighting that “AI models are starting to rapidly accelerate scientific progress,” being used 
in 2022, for example, to aid hydrogen fusion work and generate new antibodies for use in medical 
therapies.12 
Benefits and Potential Risks of Artificial 
Intelligence Technologies 
AI technologies and services hold the potential both to provide benefits in a number of sectors and to pose societal risks. Broadly, AI technologies can accelerate and provide insights into 
data processing, augment human decisionmaking, optimize performance for complex tasks and 
systems, and improve safety for people in dangerous occupations. In medicine, for example, AI 
technologies that can rapidly and accurately predict protein structures, such as AlphaFold from 
DeepMind and ESMFold from Meta, can aid researchers in understanding how diseases work and 
creating new drugs to treat them.13 Large-scale GenAI models are  
capable of an increasingly broad range of tasks, from text manipulation and analysis, to 
image generation, to unprecedentedly good speech recognition. These systems demonstrate 
capabilities in question answering and the generation of text, image, and code unimagined 
a decade ago, and they outperform the state of the art on many benchmarks, old and new.14 
However, AI systems may perpetuate or amplify biases contained in the datasets that they are 
trained on, may not yet be fully able to explain their decisionmaking, and often depend on vast 
datasets that are not widely accessible to facilitate R&D. Together, such challenges can lead to an 
inability to fully assess and understand the operations and outputs of AI systems—sometimes 
referred to as the “black box” problem. Large-scale models are prone to generating false 
content—sometimes called “hallucinating”—and are routinely biased, as they are largely trained 
on data scraped from the internet and therefore reflect the human biases expressed there. 
Stakeholders have questioned the adequacy of current laws and regulations for dealing with 
societal and ethical issues that may arise with the continued development and broad application of 
AI.15 Further, stakeholders have raised concerns about potential job losses from AI automation 
and questioned the adequacy of human capital in both the public and private sectors to develop 
and work with AI.16 
 
12 Nestor Maslej et al., The AI Index 2023 Annual Report, AI Index Steering Committee, Institute for Human-Centered 
AI, Stanford University, April 2023, p. 73, https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-
Report_2023.pdf.  
13 Melissa Heikkila and Will Douglas Heaven, “What’s Next for AI,” MIT Technology Review, December 23, 2022, 
https://www.technologyreview.com/2022/12/23/1065852/whats-next-for-ai/.  
14 Maslej et al., The AI Index 2023 Annual Report, “Introduction to the AI Index Report 2023,” p. 2.  
15 See, for example, Anton Korinek, “Why We Need a New Agency to Regulate Advanced Artificial Intelligence: 
Lessons on AI Control from the Facebook Files,” Brookings Institution, December 8, 2021, 
https://www.brookings.edu/research/why-we-need-a-new-agency-to-regulate-advanced-artificial-intelligence-lessons-
on-ai-control-from-the-facebook-files/.  
16 See, for example, European Commission and the U.S. Council of Economic Advisors, The Impact of Artificial 
Intelligence on the Future of Workforces in the European Union and the United States of America, December 5, 2022, 
https://www.whitehouse.gov/wp-content/uploads/2022/12/TTC-EC-CEA-AI-Report-12052022-1.pdf.  
 
Artificial Intelligence Technologies in Selected 
Sectors 
AI technologies have potential applications across a wide range of sectors. A selection of broad, crosscutting issues with application-specific examples of ongoing congressional interest is discussed in the CRS report Artificial Intelligence: Background, Selected Issues, and Policy Considerations.17 Those issues and examples include implications for the U.S. workforce, 
international competition and federal investment in AI R&D, standards development, and ethical 
AI—including questions about bias, fairness, and algorithm transparency (for example, in 
criminal justice applications). 
In addition to those issues and applications, three areas of potential use that may be of growing 
interest to Congress—particularly in light of the advances in, and widespread availability of, 
GenAI tools—are health care, education, and national security. In other parts of the federal 
government, experts have asserted a need to understand the impacts and future directions of AI 
applications in these areas. For example, at a June 2023 Health Innovation Summit, Greg Singleton, the chief AI officer at the Department of Health and Human Services, discussed “the role that AI will play in healthcare, as well as the importance of regulations.”18 A May 2023 
Department of Education report, Artificial Intelligence and the Future of Teaching and Learning, 
describes the rising interest in AI in education and highlights reasons to address AI in education 
now.19 And the 2023 Annual Threat Assessment of the U.S. Intelligence Community states, “New 
technologies—particularly in the fields of AI and biotechnology—are being developed and are 
proliferating faster than companies and governments can shape norms, protect privacy, and 
prevent dangerous outcomes.”20 This section will discuss some of the potential benefits and 
concerns with the use of AI technologies in these sectors. 
Health Care 
Numerous companies and researchers have been developing and testing AI technologies for use 
in health care—for example, to improve the drug development process by increasing efficiency 
and decreasing time and cost,21 to detect diseases earlier, and to more consistently analyze 
medical data.22 A 2022 report by the Government Accountability Office identified a variety of 
ML-based technologies to assist with the diagnostic processes for five selected diseases—certain 
cancers, diabetic retinopathy (an eye condition that can cause blindness in diabetic patients), 
 
17 CRS Report R46795, Artificial Intelligence: Background, Selected Issues, and Policy Considerations, by Laurie A. 
Harris. 
18 Jose Rascon, “HHS CAIO: AI’s Impact on Healthcare Sector Still to Be Determined,” Meritalk, June 8, 2023, 
https://meritalk.com/articles/hhs-caio-ais-impact-on-healthcare-sector-still-to-be-determined/.  
19 Department of Education, Office of Educational Technology, Artificial Intelligence and the Future of Teaching and 
Learning: Insights and Recommendations, May 2023, pp. 1-3, https://tech.ed.gov/files/2023/05/ai-future-of-teaching-
and-learning-report.pdf.  
20 Office of the Director of National Intelligence (ODNI), Annual Threat Assessment of the U.S. Intelligence 
Community, February 6, 2023, p. 26, https://www.odni.gov/files/ODNI/documents/assessments/ATA-2023-
Unclassified-Report.pdf.  
21 U.S. Government Accountability Office (GAO), Artificial Intelligence in Health Care: Benefits and Challenges of 
Machine Learning in Drug Development, GAO-20-215, January 31, 2020, https://www.gao.gov/products/gao-20-
215sp.  
22 GAO, Artificial Intelligence in Healthcare: Benefits and Challenges of Machine Learning Technologies for Medical 
Diagnostics, GAO-22-104629, September 29, 2022, https://www.gao.gov/products/gao-22-104629.  
 
Alzheimer’s disease, heart disease, and COVID-19—though these ML technologies have 
generally not been widely adopted.23 Some hospitals have also experimented with using voice 
recognition, and associated ML and natural language processing technology, to assist doctors and 
patients.24  
While there are many encouraging developments for using AI technologies in health care, 
stakeholders have remarked on the slow progress in using AI broadly within health care settings, 
and various challenges remain. Researchers and clinicians have raised questions about the 
accuracy, security, and privacy of these technologies; the availability of sufficient health data on 
which to train systems; medical liability in the event of adverse outcomes; the adequacy of 
current user consent processes; and patient access and receptivity.25 These questions reflect the 
potential risks from using AI systems. For example, a poorly designed system might lead to 
misdiagnosis; systems trained on biased data can reflect or amplify those biases in their outputs; 
and if a flawed AI system is adopted widely, it might result in widespread injury to patients.26 
Education 
According to the U.S. Department of Education, AI, ML, and related technologies “will have 
powerful impacts on learning, not only through direct supports for students, but also by 
empowering educators to be more adaptive to learner needs and less consumed by routine, 
repetitive tasks.”27 The report also notes that AI in education presents risks, including 
apprehension from parents and educators and the potential for AI algorithms to be biased, 
possibly leading to unfair decisions about what or how a student should learn.28 
The rapid development of AI chatbots and the public release of ChatGPT in November 2022 have 
spurred debate among teachers and education administrators. Some teachers have begun using 
these AI tools in their classrooms, highlighting benefits such as making lessons more interactive, 
aiding the development of critical thinking skills, teaching students media literacy, generating 
personalized lesson plans, saving teachers time on administration, and aiding students whose first 
language is not English.29 Others have raised concerns about students using the systems to cheat 
on assignments by writing essays and taking tests for them, with some school systems banning 
the use of chatbots on their networks.30 Numerous researchers and companies have been 
developing and deploying detection tools to identify text generated by AI, though there remain 
 
23 Ibid. 
24 Ruth Hailu, “5 Burning Questions About Deploying Voice Recognition Technology in Health Care,” STAT News, 
July 10, 2019, https://www.statnews.com/2019/07/10/5-questions-voice-recognition-technology/.  
25 Hailu, “5 Burning Questions;” and Lauren Joseph, “5 Burning Questions About Using Artificial Intelligence to 
Prevent Blindness,” STAT News, July 17, 2019, https://www.statnews.com/2019/07/17/artificial-intelligence-to-
prevent-blindness/. 
26 Alvin Powell, “AI Revolution in Medicine,” Harvard Gazette, November 11, 2020, https://news.harvard.edu/gazette/
story/2020/11/risks-and-benefits-of-an-ai-revolution-in-medicine/; and W. Nicholson Price, “Risks and Remedies for 
Artificial Intelligence in Health Care,” Brookings Institution, November 14, 2019, https://www.brookings.edu/articles/
risks-and-remedies-for-artificial-intelligence-in-health-care/.  
27 Department of Education, Office of Educational Technology, “Artificial Intelligence,” https://tech.ed.gov/ai/.  
28 Ibid. 
29 Will Douglas Heaven, “ChatGPT Is Going to Change Education, Not Destroy It,” MIT Technology Review, April 6, 
2023, https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/.  
30 Dan Rosenzweig-Ziff, “New York City Blocks Use of the ChatGPT Bot in Its Schools,” Washington Post, January 5, 
2023, https://www.washingtonpost.com/education/2023/01/05/nyc-schools-ban-chatgpt/. 
 
issues with the accuracy of these tools.31 Stakeholders have also raised concerns about data 
privacy risks, including, for example, whether information shared or stored in AI-enabled systems 
is used for further product training without gaining explicit user consent and, more broadly, 
whether such systems are subject to federal or state privacy laws, such as the Family Educational 
Rights and Privacy Act.32 
National Security 
AI technologies have a wide range of national security applications, including intelligence, 
surveillance, and reconnaissance; logistics; cyber operations; command and control; semi-
autonomous and autonomous vehicles; and weapons systems.33 Since at least 2017, the U.S. military has been integrating AI into combat systems and discussing AI as a key technology for ensuring future warfighting capabilities.34 At the same time, other countries, 
including China and Russia, have released national plans and statements of intent to lead in the 
development of AI technologies.35  
The Department of Defense’s (DOD’s) unclassified investments in AI have grown from just over 
$600 million in FY2016 to approximately $1.1 billion in FY2023, with DOD maintaining over 
685 active AI projects.36 DOD has an AI strategy, which outlines the following aims: delivering 
AI-enabled capabilities for key missions; partnering with leading private sector technology 
companies, academia, and global allies; cultivating a leading AI workforce; and leading in 
military ethics and AI safety.37 The intelligence community (IC) has also released a strategy for 
using AI—the AIM Initiative—as well as AI ethics principles and an AI ethics framework for the 
IC.38 While AI holds the potential to assist the IC in its work, AI systems also “pose grave security 
 
31 Geoffrey A. Fowler, “We Tested a New ChatGPT-Detector for Teachers. It Flagged an Innocent Student,” 
Washington Post, April 3, 2023, https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-
turnitin/.  
32 See, for example, Department of Education, Office of Educational Technology, Artificial Intelligence and the Future 
of Teaching and Learning, pp. 32-33. 
33 For more information on AI and national security considerations for Congress, see CRS Report R45178, Artificial 
Intelligence and National Security, by Kelley M. Sayler; and CRS Report R46458, Emerging Military Technologies: 
Background and Issues for Congress, by Kelley M. Sayler.  
34 Department of Defense, Summary of the 2018 National Defense Strategy, p. 3, https://dod.defense.gov/Portals/1/
Documents/pubs/2018-National-Defense-Strategy-Summary.pdf; and Marcus Weisgerber, “The Pentagon’s New 
Algorithmic Warfare Cell Gets Its First Mission: Hunt ISIS,” Defense One, May 14, 2017, http://www.defenseone.com/
technology/2017/05/pentagons-new-algorithmic-warfare-cell-gets-its-first-mission-hunt-isis/137833/.  
35 China State Council, “A Next Generation Artificial Intelligence Development Plan,” July 20, 2017, translated by 
New America, https://www.newamerica.org/documents/1959/translation-fulltext-8.1.17.pdf; and Tom Simonite, “For 
Superpowers, Artificial Intelligence Fuels New Global Arms Race,” Wired, August 8, 2017, https://www.wired.com/
story/for-superpowers-artificial-intelligence-fuels-new-global-arms-race.  
36 The amount listed as the FY2023 investment reflects DOD’s FY2023 unclassified budget request for AI. DOD’s 
actual investments in AI in FY2023 may be higher. Office of the Under Secretary of Defense (Comptroller)/Chief 
Financial Officer, Defense Budget Overview: United States Department of Defense Fiscal Year 2023 Budget Request, 
April 2022, p. 4-7, https://comptroller.defense.gov/Portals/45/Documents/defbudget/FY2023/
FY2023_Budget_Request_Overview_Book.pdf; and GAO, Artificial Intelligence: Status of Developing and Acquiring 
Capabilities for Weapon Systems, GAO-22-104965, February 2022, https://www.gao.gov/assets/gao-22-104765.pdf.  
37 DOD, Summary of the 2018 Department of Defense Artificial Intelligence Strategy, 2018, https://media.defense.gov/
2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.  
38 ODNI, The AIM Initiative: A Strategy for Augmenting Intelligence Using Machines, January 16, 2019, 
https://www.dni.gov/files/ODNI/documents/AIM-Strategy.pdf; ODNI, Principles of Artificial Intelligence Ethics for 
the Intelligence Community, https://www.intelligence.gov/principles-of-artificial-intelligence-ethics-for-the-
intelligence-community; and ODNI, Artificial Intelligence Ethics Framework for the Intelligence Community, June 
2020, https://www.intelligence.gov/artificial-intelligence-ethics-framework-for-the-intelligence-community.  
 
challenges for which [the United States is] currently unprepared, including the development of 
novel cyber weapons, large-scale disinformation attacks, and the design of advanced biological 
weapons.”39 
Artificial Intelligence Laws and Legislation 
Current Federal Laws Addressing Artificial Intelligence 
Federal laws addressing AI or including AI-focused provisions have been enacted over the past 
few Congresses. Arguably the most expansive law was the National Artificial Intelligence 
Initiative (NAII) Act of 2020 (Division E of the William M. (Mac) Thornberry National Defense 
Authorization Act [NDAA] of FY2021, P.L. 116-283). The NAII Act included sections  
•  codifying the establishment of an American AI Initiative,  
•  establishing a National Artificial Intelligence Initiative Office to support federal 
AI activities,  
•  establishing an interagency committee at the Office of Science and Technology 
Policy to coordinate federal programs and activities in support of the NAII, and  
•  establishing a National AI Advisory Committee.  
The NAII Act further directed AI activities at the National Science Foundation (NSF), National 
Institute of Standards and Technology (NIST),40 National Oceanic and Atmospheric 
Administration, and Department of Energy. Specific provisions include mandating (1) NSF 
support for a network of National AI Research Institutes; (2) a National Academies of Sciences, 
Engineering, and Medicine study on the current and future impact of AI on the U.S. workforce 
across sectors;41 and (3) a task force to investigate the feasibility of, and plan for, a National AI 
Research Resource.42 
Individual agencies—including the General Services Administration (GSA), the Office of 
Management and Budget (OMB), and the Office of Personnel Management (OPM)—have also 
been statutorily directed to undertake activities to support the use of AI across the federal 
government: 
•  GSA. The AI in Government Act of 2020 (AGA, Division U, Title I, of the 
Consolidated Appropriations Act, 2021, P.L. 116-260) created within GSA an AI 
Center of Excellence to facilitate the adoption of AI technologies in the federal 
 
39 Written testimony of Jason Matheny, President and CEO, RAND Corporation, in Senate Committee on Homeland 
Security and Governmental Affairs, Artificial Intelligence: Risks and Opportunities, hearing, March 8, 2023, 
https://www.hsgac.senate.gov/hearings/artificial-intelligence-risks-and-opportunities/testimony-matheny-2023-03-08/.  
40 Included in the direction to NIST was development of a risk management framework for trustworthy AI systems, 
which was released in January 2023. See NIST, AI Risk Management Framework (AI RMF 1.0), January 26, 2023, 
https://doi.org/10.6028/NIST.AI.100-1.  
41 National Academies of Sciences, Engineering, and Medicine, “Automation and the U.S. Workforce: An Update,” 
study overview, https://www.nationalacademies.org/our-work/automation-and-the-us-workforce-an-update. According 
to the project website, the study is currently ongoing, and the final report has not yet been released.  
42 The National AI Research Resource Task Force released its final report, Strengthening and Democratizing the U.S. 
Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research 
Resource, in January 2023 at https://www.ai.gov/wp-content/uploads/2023/01/NAIRR-TF-Final-Report-2023.pdf.  
 
government and collect and publicly publish information regarding federal 
programs, pilots, and other initiatives.43  
•  OMB. The AGA required OMB to issue a memorandum to federal agencies 
regarding the development of AI policies; approaches for removing barriers to 
using AI technologies; and best practices for identifying, assessing, and 
mitigating any discriminatory impact or bias and any unintended consequences of 
using AI. The Advancing American AI Act (Subtitle B of the James M. Inhofe 
National Defense Authorization Act for Fiscal Year 2023, P.L. 117-263) required 
OMB to (1) incorporate additional considerations when developing guidance for 
the use of AI in the federal government; (2) develop an initial means to ensure 
that contracts for acquiring AI address privacy, civil rights and liberties, and the 
protection of government data and information; (3) require the head of each 
federal agency (except DOD) to prepare and maintain an inventory of current and 
planned AI use cases; and (4) lead a pilot program to initiate four new AI use 
case applications to support interagency or intra-agency modernization 
initiatives. Additionally, the AI Training Act (P.L. 117-207) required OMB to 
establish an AI training program for the acquisition workforce of executive 
agencies. 
•  OPM. The AGA required OPM to establish or update an occupational job series 
to include positions with primary duties in AI and to estimate current and future 
numbers of federal employment positions related to AI at each agency.  
NDAAs have also included provisions focused on AI in the defense, national security, and 
intelligence communities each year beginning with the FY2019 John S. McCain NDAA, which 
included the first definition of AI in federal statute.44 These provisions have included a focus on 
AI development, acquisition, and policies; AI data repositories; recruiting and retaining personnel 
in AI; and implementation of recommendations from the 2021 final report of the National 
Security Commission on AI.45 
Additionally, some enacted legislation has focused on AI R&D or the use of AI in particular 
federal programs. For example: 
•  The CHIPS and Science Act (P.L. 117-167) included numerous AI-related 
provisions directing the Department of Energy, NIST, and NSF to support AI and 
ML R&D activities and the development of technical standards and guidelines 
related to safe and trustworthy AI systems. NSF was further directed to (1) 
evaluate the establishment of an AI scholarship-for-service program to recruit 
and train AI professionals to support AI work in federal, state, local, and tribal 
governments; and (2) study AI research capacity at U.S. institutions of higher 
education. 
•  The Countering Human Trafficking Act of 2021 (P.L. 117-322) provided 
statutory authority for the Department of Homeland Security’s (DHS’s) Center 
for Countering Human Trafficking (CCHT) and required the CCHT director to 
develop a strategy and proposal to modify systems and processes throughout 
DHS that are related to CCHT’s mission in order to, among other things, provide 
 
43 The act codified the GSA AI Center of Excellence that was launched in 2019. See https://www.ai.gov/legislation-
and-executive-orders/.  
44 P.L. 115-232, §238; 10 U.S.C. §2358 note. 
45 National Security Commission on AI, Final Report, 2021, https://www.nscai.gov/2021-final-report/.  
 
AI and ML to increase system capabilities and enhance data availability, 
reliability, comparability, and verifiability. 
•  The Identifying Outputs of Generative Adversarial Networks Act (P.L. 116-258) 
directed NSF and NIST to support research on generative adversarial networks 
(GANs), including research on manipulated or synthesized content and 
information authenticity and the development of measurements and standards 
necessary to accelerate the development of technical tools to examine the 
function and outputs of GANs. 
Federal AI Legislation Introduced in the 117th and 118th Congresses 
In addition to the enacted legislation, numerous bills for which AI or ML technologies were the 
main focus, or that had at least one provision focused on AI/ML, were introduced in the 117th 
Congress and have been introduced in the 118th Congress in both the Senate and the House of 
Representatives.46  
In the 117th Congress, a search of Congress.gov for all the subject legislation resulted in 235 bills, 
six of which were enacted into law.47 Of those, CRS identified 34 bills that had a main focus on AI/ML (six of which were identical bills). Additionally, CRS identified 41 pieces of legislation that included provisions related to AI/ML (19 of which were identical bills or had nearly identical 
AI/ML-related provisions). Collectively, the introduced and enacted legislation addressed many 
AI/ML-related topics, including federal planning and coordination; research and education 
investment; job displacement and workforce retraining; commercial technology use; ethics, bias, 
and algorithm accountability; government efficiency and cost savings; and health care, defense, 
national security, and intelligence applications and activities. 
In the 118th Congress, a search of Congress.gov as of June 2023 resulted in 94 bills, none of 
which has been enacted.48 Of those bills, CRS identified 22 pieces of legislation that have a main 
focus on AI/ML (of which six are identical bills) and 18 with provisions related to AI/ML (of which 12 are identical or have nearly identical relevant provisions). Collectively, the legislation 
addresses a range of topics, including:  
•  oversight of the federal government’s approach to AI governance and regulation;  
•  AI training for federal employees; 
•  requirements for federal agencies and the private sector to disclose the use of 
GenAI broadly and the use of AI in certain contexts, such as political 
advertisements;  
•  assessing export controls for national interest technologies, including AI, to the 
People’s Republic of China; 
•  prohibiting the use of AI in certain contexts (e.g., decisionmaking for the use of 
nuclear weapons, biometric surveillance, and workplace surveillance); and  
 
46 Legislation that only mentions the terms artificial intelligence or machine learning was also introduced, but those 
bills are not discussed in this report (e.g., if AI/ML is included in a broader list of technologies or research topics, or 
AI/ML is only mentioned in the bill’s findings or sense of Congress). 
47 Based on search results of Congress.gov as of January 11, 2023. 
48 Based on search results of Congress.gov as of June 23, 2023. CRS searched Congress.gov for legislation in the 118th 
Congress that included the term artificial intelligence, machine learning, or facial recognition in the text or title. 
Legislation that only briefly mentioned the terms was excluded (e.g., if one of the terms is included in a broader list of 
technologies or research topics or only mentioned in the bill’s findings or sense of Congress).  
 
•  support for the use of AI in various areas, including cybersecurity, classification 
and declassification systems, advanced weather modeling, wildfire detection, 
airport efficiency and safety, precision agriculture, and prescribing certain 
pharmaceuticals. 
Perspectives on Regulating Artificial Intelligence 
As the use of AI technologies has grown, so too have discussions of whether and how to regulate 
them. Internationally, the European Union has proposed the Artificial Intelligence Act (AIA),49 
which would create regulatory oversight for the development and use of a wide range of AI 
applications, with requirements varying by risk category, from banning systems with 
“unacceptable risk” to allowing free use of those with minimal or no risk.50 In one recently passed 
version of the AIA, the European Parliament agreed to changes that included a ban on the use of 
AI in biometric surveillance and a requirement for GenAI systems to disclose that the outputs are 
AI-generated.51 In the United States, legislation such as the Algorithmic Accountability Act of 
2022 (H.R. 6850 and S. 3572, 117th Congress) would have directed the Federal Trade 
Commission to require certain organizations to perform impact assessments of automated 
decision systems—including those using ML and AI—and augmented critical decision processes. 
Some researchers have noted that these proposals “spring from widely different political contexts 
and legislative traditions.”52 The United States has yet to enact legislation to broadly regulate AI, 
and the AIA now enters a trilogue negotiation among representatives of the three main institutions 
of the EU (Parliament, Commission, and Council).53 Depending on the final negotiated text of the 
AIA, the EU and United States might begin to align or diverge on AI regulation, as discussed by 
researchers.54  
Rather than broad regulation of AI technologies that could be used across sectors, some 
stakeholders have suggested a more targeted approach, regulating the use of AI technologies in 
particular sectors. The federal government has taken steps over the past few years to evaluate AI 
regulation in this way. In November 2020, in response to Executive Order 13859, “Maintaining 
American Leadership in Artificial Intelligence,” OMB released a memorandum to the heads of 
federal agencies providing guidance for regulatory and nonregulatory oversight of AI applications 
 
49 European Commission, Proposal for a Regulation of the European Parliament and the Council Laying Down 
Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, 
April 21, 2021, https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/
DOC_1&format=PDF.  
50 See European Commission, “Regulatory Framework Proposal on Artificial Intelligence,” https://digital-
strategy.ec.europa.eu/en/policies/regulatory-framework-ai.  
51 Spencer Feingold, “The European Union’s Artificial Intelligence Act—Explained,” World Economic Forum, June 
30, 2023, https://www.weforum.org/agenda/2023/06/european-union-ai-act-explained/. 
52 Jakob Mokander et al., “The US Algorithmic Accountability Act of 2022 vs. the EU Artificial Intelligence Act: What 
Can They Learn from Each Other?,” Minds and Machines, vol. 32 (August 2022), pp. 751-758, 
https://link.springer.com/article/10.1007/s11023-022-09612-y.  
53 For more information on the EU structure, see CRS In Focus IF11211, The European Parliament and U.S. Interests, 
by Kristin Archick; and James McBride, “How Does the European Union Work?,” Council on Foreign Relations, 
updated March 11, 2022, https://www.cfr.org/backgrounder/how-does-european-union-work. 
54 See, for example, Alex Engler, “The EU and U.S. Are Starting to Align on AI Regulation,” Brookings Institution, 
February 1, 2022, https://www.brookings.edu/blog/techtank/2022/02/01/the-eu-and-u-s-are-starting-to-align-on-ai-
regulation/; Alex Engler, “The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to 
Alignment,” Brookings Institution, April 25, 2023, https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-
regulation-a-transatlantic-comparison-and-steps-to-alignment/. 
 
developed and deployed outside of the federal government.55 The memorandum laid out 10 
principles for the stewardship of AI applications, including risk assessment, fairness and 
nondiscrimination, disclosure and transparency, and interagency coordination. It further touched 
on reducing barriers to the deployment and use of AI, including increasing access to government 
data, communicating benefits and risks to the public, engaging in the development and use of 
voluntary consensus standards, and engaging in international regulatory cooperation efforts. 
Additionally, the memorandum directed federal agencies to provide plans to conform to the 
guidance, including any statutory authorities governing agency regulation of AI applications, 
regulatory barriers to AI applications, and any planned or considered regulatory actions on AI. So 
far, few agencies appear to have provided comprehensive, publicly available responses.56 
Industry has echoed calls for regulation of AI technologies, with some companies putting forth 
recommendations.57 However, some analysts have argued that the push for regulations from 
technology firms is intended to protect companies’ interests and might not align with the priorities 
of other stakeholders.58 Further, some argue that large technology companies continue to 
prioritize the speed of AI technology deployment ahead of concerns about safety and accuracy.59 
In March 2023, the U.S. Chamber of Commerce released a report calling for AI regulation, 
stating, 
Policy leaders must undertake initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment. A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.60 
In July 2023, the Biden Administration convened representatives from seven AI companies—
Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—and announced voluntary 
commitments from each of them “to help move toward safe, secure, and transparent development 
of AI technology.”61 According to the Administration, the voluntary commitments “are consistent 
with existing laws and regulations,” and “companies intend these voluntary commitments to 
remain in effect until regulations covering substantially the same issues come into force.”62 
Additionally, the Administration reportedly consulted on the voluntary commitments with 20 
other countries and has stated that it “will work with allies and partners to establish a strong 
international framework to govern the development and use of AI.”63 
 
55 Russell Vought, Director, OMB, “Guidance for Regulation of Artificial Intelligence Applications,” November 17, 
2020, https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf. 
56 For information on the memorandum, see https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf.   
57 See, for example, Google, Recommendations for Regulating AI, https://ai.google/static/documents/recommendations-
for-regulating-ai.pdf.  
58 Justin Sherman, “Oh Sure, Big Tech Wants Regulation—On Its Own Terms,” Wired, January 28, 2020, 
https://www.wired.com/story/opinion-oh-sure-big-tech-wants-regulationon-its-own-terms/.  
59 Nico Grant and Karen Weise, “In A.I. Race, Microsoft and Google Choose Speed over Caution,” New York Times, 
April 7, 2023, https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html.  
60 U.S. Chamber of Commerce, “Artificial Intelligence Commission Report,” March 9, 2023, 
https://www.uschamber.com/technology/artificial-intelligence-commission-report.  
61 White House Briefing Room, “Fact Sheet: Biden-Harris Administration Secures Voluntary Commitments from 
Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” July 21, 2023, 
https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-
secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.  
62 White House, “Ensuring Safe, Secure, and Trustworthy AI,” July 21, 2023, https://www.whitehouse.gov/wp-content/
uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf.  
63 White House Briefing Room, “Fact Sheet: Biden-Harris Administration Secures Voluntary Commitments.” 
 
Other Considerations for the 118th Congress 
Interest in AI—including from the public, industry, and lawmakers—shows no signs of slowing 
in the near future, particularly in light of recent advances and widespread use of GenAI models 
since fall 2022 across a number of sectors. Just as the potential of these and other AI technologies 
is expanding, so too is recognition of potential harms, along with calls for congressional action. As debates among stakeholders continue, crosscutting AI issues for consideration during the 118th Congress might include the following: 
•  How might Congress approach AI regulation in a way that supports innovation 
and beneficial uses of AI technologies while minimizing current harms and 
potential future negative outcomes? How might such approaches align with 
international efforts to regulate AI technologies or their uses? 
•  Are current mechanisms within the federal government sufficient for AI 
oversight and policymaking? How might broader efforts—such as calls for the 
creation of a new agency focused on AI—fit with the current federal government 
approaches to AI? 
•  What is the role of the federal government in supporting AI R&D and addressing 
calls for the “democratization” of AI—for example, by making R&D resources 
more available to academic researchers and startup businesses? 
•  Additional areas of consideration include the potential impact of AI technologies 
on the workforce; disclosure of AI use; testing and validation of AI systems; and 
potential ways to support the development of trustworthy and responsible AI. 
 
Author Information 
 
Laurie A. Harris
Analyst in Science and Technology Policy

Disclaimer 
This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan 
shared staff to congressional committees and Members of Congress. It operates solely at the behest of and 
under the direction of Congress. Information in a CRS Report should not be relied upon for purposes other 
than public understanding of information that has been provided by CRS to Members of Congress in 
connection with CRS’s institutional role. CRS Reports, as a work of the United States Government, are not 
subject to copyright protection in the United States. Any CRS Report may be reproduced and distributed in 
its entirety without permission from CRS. However, as a CRS Report may include copyrighted images or 
material from a third party, you may need to obtain the permission of the copyright holder if you wish to 
copy or otherwise use copyrighted material. 
 
Congressional Research Service  
R47644 · VERSION 1 · NEW 