INSIGHT
Law Enforcement Use of Artificial Intelligence
and Directives in the 2023 Executive Order
December 15, 2023
The use of artificial intelligence (AI) has expanded in a variety of arenas, including by law enforcement. AI has been broadly conceptualized as computerized systems operating in ways often thought to require human intelligence. It is defined in the U.S. Code (15 U.S.C. §9401(3)) as:
a machine-based system that can, for a given set of human-defined objectives, make predictions,
recommendations or decisions influencing real or virtual environments. Artificial intelligence
systems use machine and human-based inputs to-
(A) perceive real and virtual environments;
(B) abstract such perceptions into models through analysis in an automated manner; and
(C) use model inference to formulate options for information or action.
AI involves a host of technologies and applications. In the law enforcement realm, researchers note that while the use of AI is not yet widespread, existing tools may be enhanced with AI to expand law enforcement capabilities and increase their efficiency. Examples include the following:
• Automated license plate readers can be leveraged to employ machine, or computer, vision for capabilities such as automating the issuance of red-light violation tickets.
• Security cameras outfitted with certain AI-embedded hardware can be used for real-time facial recognition of potential suspects.
• Facial recognition technology and text analysis tools can be enhanced with AI to scan online advertisements to help identify potential crimes such as human trafficking.
• In addition to gunshot detection technology that can detect shots fired, security cameras can be outfitted with AI-enhanced software to detect weapons and alert police before shots are fired.
• AI redaction capabilities can be used to reduce possible bias in officers’ narratives by removing certain identifying characteristics of suspects and victims—such as race—that could influence charges brought by prosecutors.
• Body-worn cameras can use AI software to redact or blur faces or sensitive footage before it is released to the public.
Congressional Research Service
https://crsreports.congress.gov
IN12289
CRS INSIGHT
Prepared for Members and Committees of Congress
• Automated speech recognition software can use AI to help properly identify speakers’ voices in audiovisual materials such as witness testimonies or interrogations.
• First responders use computer-aided dispatch (CAD) systems to capture data that inform decisions on resource deployment; AI-enhanced CAD systems can improve resource allocation by using historical data, making predictions, and automating workflows.
• AI can be used along with predictive policing models to help identify individuals or places most at risk of being involved in crime.
• Law enforcement agencies can employ AI to enhance their communications with the public; for example, they can use chatbots to respond to questions and push out emergency information.
A number of concerns have been raised about law enforcement use of AI, including whether its use perpetuates biases; one criticism is that the data on which the software is trained contain bias, thus training bias into the AI systems. Another concern is whether reliance on AI technology may lead police to ignore contradictory evidence. Policymakers may consider increased oversight of police use of AI systems to help evaluate and alleviate some of these shortcomings.
On October 30, 2023, President Biden issued Executive Order (E.O.) 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This E.O. advances a government-wide approach to “governing the development and use of AI safely and responsibly” and directs efforts in AI policy areas involving safety and security, innovation and competition, worker support, equity and civil rights, individual protections, privacy protections, federal AI use, and international leadership.
E.O. 14110 acknowledges the risk of AI exacerbating discrimination and directs federal law enforcement in various ways. (In doing so, it references accountability-focused directives for federal law enforcement previously outlined in the May 25, 2022, E.O. 14074 on Advancing Effective, Accountable Policing and Criminal Justice Practices to Enhance Public Trust and Public Safety.) Directives in E.O. 14110 include the following:
• The Attorney General (AG) shall coordinate and support enforcement of federal laws
addressing discrimination and violations of civil rights and civil liberties related to AI.
The Department of Justice’s Civil Rights Division shall also coordinate with other federal
civil rights offices to assess how their offices can prevent and address discrimination in
automated systems—including algorithmic discrimination.
• The AG, with the Homeland Security Secretary and Office of Science and Technology
Policy Director, shall submit a report to the President on the use of AI in the criminal
justice system, including how AI can enhance law enforcement efficiency and accuracy,
consistent with privacy, civil rights, and civil liberties protections. The report should also
recommend best practices for law enforcement, including guidance on AI use, to address
concerns outlined in E.O. 14074 with respect to law enforcement use of “facial
recognition technology, other technologies using biometric information, and predictive
algorithms, as well as data storage and access regarding such technologies.”
• The interagency working group established by E.O. 14074 shall share best practices for
recruiting law enforcement professionals with AI expertise and training them on
responsible AI use. The AG, along with the Homeland Security Secretary, may review
these and recommend best practices for state, local, tribal, and territorial law
enforcement.
• The AG shall review the Justice Department’s capacity to “investigate law enforcement
deprivation of rights under color of law resulting from the use of AI,” including through
increasing or improving training for federal law enforcement officers and prosecutors.
Policymakers conducting oversight of executive branch activities to ensure that AI is used in a fair and equitable manner may examine not only these elements of E.O. 14110 that specifically relate to federal law enforcement but also other elements—such as the development of industry standards on AI—that may in turn affect law enforcement use of AI. They may also explore whether there should be specific standards for AI use in the criminal justice sector or AI-specific requirements for criminal justice entities receiving federal grants. Additionally, policymakers may continue to debate law enforcement use of specific AI technologies in its toolbox, such as facial recognition technology.
Author Information
Kristin Finklea
Specialist in Domestic Security
Disclaimer
This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan shared staff
to congressional committees and Members of Congress. It operates solely at the behest of and under the direction of
Congress. Information in a CRS Report should not be relied upon for purposes other than public understanding of
information that has been provided by CRS to Members of Congress in connection with CRS’s institutional role.
CRS Reports, as a work of the United States Government, are not subject to copyright protection in the United
States. Any CRS Report may be reproduced and distributed in its entirety without permission from CRS. However,
as a CRS Report may include copyrighted images or material from a third party, you may need to obtain the
permission of the copyright holder if you wish to copy or otherwise use copyrighted material.
IN12289 · VERSION 1 · NEW