Deep Fakes and National Security




Updated April 17, 2023
“Deep fakes”—a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI) technologies—could present a variety of national security challenges in the years to come. As these technologies continue to mature, they could hold significant implications for congressional oversight, U.S. defense authorizations and appropriations, and the regulation of social media platforms.

How Are Deep Fakes Created?
Though definitions vary, deep fakes are most commonly described as forgeries created using techniques in machine learning (ML)—a subfield of AI—especially generative adversarial networks (GANs). In the GAN process, two ML systems called neural networks are trained in competition with each other. The first network, or the generator, is tasked with creating counterfeit data—such as photos, audio recordings, or video footage—that replicate the properties of the original data set. The second network, or the discriminator, is tasked with identifying the counterfeit data. Based on the results of each iteration, the generator network adjusts to create increasingly realistic data. The networks continue to compete—often for thousands or millions of iterations—until the generator improves its performance such that the discriminator can no longer distinguish between real and counterfeit data.
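To make the generator-versus-discriminator competition concrete, the following sketch shows one GAN training iteration in PyTorch. It is a minimal illustration of the process described above, not the code behind any particular deep fake tool; the network sizes, data, and hyperparameters are arbitrary assumptions chosen for brevity.

# Minimal GAN training sketch (illustrative only; toy dimensions, random data).
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # assumed toy sizes, not taken from the report

# Generator: turns random noise into counterfeit samples.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores samples as real (1) or counterfeit (0).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch: torch.Tensor) -> None:
    """One iteration of the adversarial game described in the text."""
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1. Train the discriminator to separate real data from counterfeits.
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(noise).detach()  # freeze the generator for this step
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator into scoring fakes as real.
    noise = torch.randn(batch_size, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Placeholder loop; a real system would iterate over genuine photos, audio, or
# video for thousands or millions of steps.
for _ in range(1000):
    training_step(torch.randn(32, DATA_DIM))

In practice, deep fake software wraps this same adversarial loop around large image, audio, or video datasets and much larger networks, but the generator-versus-discriminator structure is unchanged.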
Though media manipulation is not a new phenomenon, the use of AI to generate deep fakes is causing concern because the results are increasingly realistic, rapidly created, and cheaply made with freely available software and the ability to rent processing power through cloud computing. Thus, even unskilled operators could download the requisite software tools and, using publicly available data, create increasingly convincing counterfeit content.

How Could Deep Fakes Be Used?
Deep fake technology has been popularized for entertainment purposes—for example, social media users inserting the actor Nicolas Cage into movies in which he did not originally appear and a museum generating an interactive exhibit with artist Salvador Dalí. Deep fake technologies have also been used for beneficial purposes. For example, medical researchers have reported using GANs to synthesize fake medical images to train disease detection algorithms for rare diseases and to minimize patient privacy concerns.

Deep fakes could, however, be used for nefarious purposes. State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately. Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election. Indeed, the U.S. intelligence community concluded that Russia engaged in extensive influence operations during the 2016 presidential election to “undermine public faith in the U.S. democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency.” Likewise, in March 2022, Ukrainian President Volodymyr Zelensky announced that a video posted to social media—in which he appeared to direct Ukrainian soldiers to surrender to Russian forces—was a deep fake. While experts noted that this deep fake was not particularly sophisticated, in the future, convincing audio or video forgeries could potentially strengthen malicious influence operations.

Deep fakes could also be used to embarrass or blackmail elected officials or individuals with access to classified information. Already there is evidence that foreign intelligence operatives have used deep fake photos to create fake social media accounts from which they have attempted to recruit sources. Some analysts have suggested that deep fakes could similarly be used to generate inflammatory content—such as convincing video of U.S. military personnel engaged in war crimes—intended to radicalize populations, recruit terrorists, or incite violence. Section 589F of the FY2021 National Defense Authorization Act (P.L. 116-283) directs the Secretary of Defense to conduct an intelligence assessment of the threat posed by deep fakes to servicemembers and their families, including an assessment of the maturity of the technology and how it might be used to conduct information operations.

In addition, deep fakes could produce an effect that professors Danielle Keats Citron and Robert Chesney have termed the “Liar’s Dividend”; it involves the notion that individuals could successfully deny the authenticity of genuine content—particularly if it depicts inappropriate or criminal behavior—by claiming that the content is a deep fake. Citron and Chesney suggest that the Liar’s Dividend could become more powerful as deep fake technology proliferates and public knowledge of the technology grows.

Some reports indicate that such tactics have already been used for political purposes. For example, political opponents of Gabon President Ali Bongo asserted that a video intended to demonstrate his good health and mental competency was a deep fake, later citing it as part of the justification for an attempted coup. Outside experts were unable to determine the video’s authenticity, but one expert noted, “in some ways it doesn’t matter if [the video is] a fake… It can be used to just undermine credibility and cast doubt.”
How Can Deep Fakes Be Detected?
Today, deep fakes can often be detected without specialized detection tools. However, the sophistication of the technology is rapidly progressing to a point at which unaided human detection will be very difficult or impossible. While commercial industry has been investing in automated deep fake detection tools, this section describes U.S. government investments and activities.

The Identifying Outputs of Generative Adversarial Networks Act (P.L. 116-258) directed NSF and NIST to support research on GANs. Specifically, NSF is directed to support research on manipulated or synthesized content and information authenticity, and NIST is directed to support research for the development of measurements and standards necessary to develop tools to examine the function and outputs of GANs or other technologies that synthesize or manipulate content.

In addition, DARPA has had two programs devoted to the detection of deep fakes: Media Forensics (MediFor) and Semantic Forensics (SemaFor). MediFor, which concluded in FY2021, was to develop algorithms to automatically assess the integrity of photos and videos and to provide analysts with information about how counterfeit content was generated. The program reportedly explored techniques for identifying the audio-visual inconsistencies present in deep fakes, including inconsistencies in pixels (digital integrity), inconsistencies with the laws of physics (physical integrity), and inconsistencies with other information sources (semantic integrity). MediFor technologies are expected to transition to operational commands and the intelligence community.

SemaFor seeks to build upon MediFor technologies and to develop algorithms that will automatically detect, attribute, and characterize (i.e., identify as either benign or malicious) various types of deep fakes. This program is to catalog semantic inconsistencies—such as the mismatched earrings seen in the GAN-generated image in Figure 1, or unusual facial features or backgrounds—and prioritize suspected deep fakes for human review. DARPA requested $18 million for SemaFor in FY2024, $4 million under the FY2023 appropriation. Technologies developed by both SemaFor and MediFor are intended to improve defenses against adversary information operations.

Figure 1. Example of Semantic Inconsistency in a GAN-Generated Image
Source: https://www.darpa.mil/news-events/2019-09-03a.
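As a rough illustration of how automated detection tools can be paired with human review, the sketch below scores a batch of images with a placeholder classifier and ranks the most suspicious for analyst attention. The model, threshold, and scoring logic are hypothetical assumptions made for illustration; they do not represent DARPA's MediFor or SemaFor algorithms.

# Illustrative triage sketch: score images with a deep fake classifier and
# queue the most suspicious ones for human review. The classifier here is a
# placeholder; operational systems use far more capable, purpose-built detectors.
import torch
import torch.nn as nn

# Hypothetical detector: outputs a probability that a 64x64 RGB image is fake.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1), nn.Sigmoid(),
)

REVIEW_THRESHOLD = 0.8  # assumed cutoff for flagging content

def triage(images: torch.Tensor) -> list[tuple[int, float]]:
    """Return (index, fake_probability) pairs, most suspicious first."""
    with torch.no_grad():
        scores = detector(images).squeeze(1)  # one score per image
    flagged = [(i, float(s)) for i, s in enumerate(scores) if s >= REVIEW_THRESHOLD]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

# Example: triage a batch of 8 random tensors standing in for real content.
for index, probability in triage(torch.rand(8, 3, 64, 64)):
    print(f"image {index}: estimated fake probability {probability:.2f}")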
Policy Considerations
Some analysts have noted that algorithm-based detection tools could lead to a cat-and-mouse game, in which the deep fake generators are rapidly updated to address flaws identified by detection tools. For this reason, they argue that social media platforms—in addition to deploying deep fake detection tools—may need to expand the means of labeling and/or authenticating content. This could include a requirement that users identify the time and location at which the content originated or that they label edited content as such.
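To illustrate what a labeling or provenance requirement of this kind might involve, the sketch below defines a hypothetical content record carrying an origin time, origin location, uploader, edit history, and a hash of the underlying file. The field names and structure are assumptions made for illustration; they do not reflect any platform's actual scheme or a statutory requirement.

# Hypothetical content provenance record, illustrating the kind of origin and
# edit-history labeling discussed above. Field names are illustrative only.
import hashlib
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    content_sha256: str          # fingerprint of the uploaded file
    captured_at: str             # time the content originated (ISO 8601)
    captured_location: str       # location the content originated
    uploader_id: str             # account that posted the content
    edits: list[str] = field(default_factory=list)  # description of each edit

    def label_edit(self, description: str) -> None:
        """Record that the content was edited, so the platform can label it."""
        self.edits.append(description)

    @property
    def is_edited(self) -> bool:
        return bool(self.edits)

# Example: a user uploads a video and later applies an edit.
video_bytes = b"...raw video bytes..."  # placeholder for the actual file
record = ProvenanceRecord(
    content_sha256=hashlib.sha256(video_bytes).hexdigest(),
    captured_at="2023-04-17T14:30:00Z",
    captured_location="Washington, DC",
    uploader_id="user-12345",
)
record.label_edit("trimmed to 30 seconds")
print("label as edited content:", record.is_edited)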
Other analysts have expressed concern that regulation of deep fake technology could impose undue burden on social media platforms or lead to unconstitutional restrictions on free speech and artistic expression. These analysts have suggested that existing law is sufficient for managing the malicious use of deep fakes. Some experts have asserted that responding with technical tools alone will be insufficient and that instead the focus should be on the need to educate the public about deep fakes and minimize incentives for creators of malicious deep fakes.

Potential Questions for Congress
• Do the Department of Defense, the Department of State, and the intelligence community have adequate information about the state of foreign deep fake technology and the ways in which this technology may be used to harm U.S. national security?
• How mature are DARPA’s efforts to develop automated deep fake detection tools? What are the limitations of DARPA’s approach, and are any additional efforts required to ensure that malicious deep fakes do not harm U.S. national security?
• Are federal investments and coordination efforts, across defense and nondefense agencies and with the private sector, adequate to address research and development needs and national security concerns regarding deep fake technologies?
• How should national security considerations with regard to deep fakes be balanced with free speech protections, artistic expression, and beneficial uses of the underlying technologies?
• Should social media platforms be required to authenticate or label content? Should users be required to submit information about the provenance of content? What secondary effects could this have for social media platforms and the safety, security, and privacy of users?
• To what extent and in what manner, if at all, should social media platforms and users be held accountable for the dissemination and impacts of malicious deep fake content?
• What efforts, if any, should the U.S. government undertake to ensure that the public is educated about deep fakes?

CRS Products
CRS Report R46795, Artificial Intelligence: Background, Selected Issues, and Policy Considerations, by Laurie A. Harris
CRS Report R45178, Artificial Intelligence and National Security, by Kelley M. Sayler
CRS Report R45142, Information Warfare: Issues for Congress, by Catherine A. Theohary

Laurie A. Harris, Analyst in Science and Technology Policy
Kelley M. Sayler, Analyst in Advanced Technology and Global Security

IF11333


Disclaimer
This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan shared staff to
congressional committees and Members of Congress. It operates solely at the behest of and under the direction of Congress.
Information in a CRS Report should not be relied upon for purposes other than public understanding of information that has
been provided by CRS to Members of Congress in connection with CRS’s institutional role. CRS Reports, as a work of the
United States Government, are not subject to copyright protection in the United States. Any CRS Report may be
reproduced and distributed in its entirety without permission from CRS. However, as a CRS Report may include
copyrighted images or material from a third party, you may need to obtain the permission of the copyright holder if you
wish to copy or otherwise use copyrighted material.

https://crsreports.congress.gov | IF11333 · VERSION 7 · UPDATED