Updated August 26, 2020
Deep Fakes and National Security
“Deep fakes”—a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI) technologies—could present a variety of national security challenges in the years to come. As these technologies continue to mature, they could hold significant implications for congressional oversight, U.S. defense authorizations and appropriations, and the regulation of social media platforms.

How Are Deep Fakes Created?
Though definitions vary, deep fakes are most commonly described as forgeries created using techniques in machine learning (ML)—a subfield of AI—especially generative adversarial networks (GANs). In the GAN process, two ML systems called neural networks are trained in competition with each other. The first network, or the generator, is tasked with creating counterfeit data—such as photos, audio recordings, or video footage—that replicate the properties of the original data set. The second network, or the discriminator, is tasked with identifying the counterfeit data. Based on the results of each iteration, the generator network adjusts to create increasingly realistic data. The networks continue to compete—often for thousands or millions of iterations—until the generator improves its performance such that the discriminator can no longer distinguish between real and counterfeit data.
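
To make the generator-versus-discriminator loop concrete, the sketch below trains a toy GAN on one-dimensional data. It is a minimal illustration under stated assumptions, not an actual deep fake tool: the use of the PyTorch library and all network sizes, thresholds, and hyperparameters are choices made for this example only.

# Illustrative toy GAN on 1-D data; not an actual deep fake system.
# Assumes PyTorch is installed (pip install torch).
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Original data set": samples from a Gaussian with mean 4.0, std 1.25.
def real_batch(n=128):
    return torch.randn(n, 1) * 1.25 + 4.0

# Generator: turns random noise into counterfeit samples.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores samples as real (~1) or counterfeit (~0).
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(5000):  # the competition runs for many iterations
    # Train the discriminator to tell real from counterfeit data.
    real = real_batch()
    fake = generator(torch.randn(128, 8)).detach()  # freeze the generator
    d_loss = loss_fn(discriminator(real), torch.ones(128, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(128, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator ("pretend real").
    fake = generator(torch.randn(128, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    if step % 1000 == 0:
        print(f"step {step}: fake mean = {fake.mean().item():.2f} (target 4.00)")

With each iteration the generator's samples drift toward the real distribution, until the discriminator can no longer reliably separate the two.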
Though media manipulation is not a new phenomenon, the use of AI to generate deep fakes is causing concern because the results are increasingly realistic, rapidly created, and cheaply made with freely available software and the ability to rent processing power through cloud computing. Thus, even unskilled operators could download the requisite software tools and, using publicly available data, create increasingly convincing counterfeit content.

How Could Deep Fakes Be Used?
Deep fake technology has been popularized for entertainment purposes—for example, social media users inserting the actor Nicolas Cage into movies in which he did not originally appear and a museum generating an interactive exhibit with artist Salvador Dalí. Deep fake technologies have also been used for beneficial purposes. For example, medical researchers have reported using GANs to synthesize fake medical images to train disease detection algorithms for rare diseases and to minimize patient privacy concerns.

Deep fakes could also be used for nefarious purposes. State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately. Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election. Indeed, the U.S. intelligence community concluded that Russia engaged in extensive influence operations during the 2016 presidential election to “undermine public faith in the U.S. democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency.” In the future, convincing audio or video forgeries could potentially strengthen similar efforts.

Deep fakes could also be used to embarrass or blackmail elected officials or individuals with access to classified information. Already there is evidence that foreign intelligence operatives have used deep fake photos to create fake social media accounts from which they have attempted to recruit Western sources. Some analysts have suggested that deep fakes could similarly be used to generate inflammatory content—such as convincing video of U.S. military personnel engaged in war crimes—intended to radicalize populations, recruit terrorists, or incite violence.

In addition, deep fakes could produce an effect that professors Danielle Keats Citron and Robert Chesney have termed the “Liar’s Dividend”: individuals could successfully deny the authenticity of genuine content—particularly if it depicts inappropriate or criminal behavior—by claiming that the content is a deep fake. Citron and Chesney suggest that the Liar’s Dividend could become more powerful as deep fake technology proliferates and public knowledge of the technology grows.

Some reports indicate that such tactics have already been used for political purposes. For example, political opponents of Gabon President Ali Bongo asserted that a video intended to demonstrate his good health and mental competency was a deep fake, later citing it as part of the justification for an attempted coup. Outside experts were unable to determine the video’s authenticity, but one expert noted, “in some ways it doesn’t matter if [the video is] a fake… It can be used to just undermine credibility and cast doubt.”

How Can Deep Fakes Be Detected?
Today, deep fakes can often be detected without specialized detection tools. However, the sophistication of the technology is rapidly progressing to a point at which unaided human detection will be very difficult or impossible. While commercial industry has been investing in automated deep fake detection tools, this section describes the U.S. government investments at the Defense Advanced Research Projects Agency (DARPA).
DARPA currently has two programs devoted to the detection of deep fakes: Media Forensics (MediFor) and Semantic Forensics (SemaFor). MediFor is developing algorithms to automatically assess the integrity of photos and videos and to provide analysts with information about how counterfeit content was generated. The program has reportedly explored techniques for identifying the audiovisual inconsistencies present in deep fakes, including inconsistencies in pixels (digital integrity), inconsistencies with the laws of physics (physical integrity), and inconsistencies with other information sources (semantic integrity). MediFor received $17.5 million in FY2019 and $5.3 million in FY2020. After the program is completed in FY2021, it is expected to transition to operational commands and the intelligence community.
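
DARPA has not released MediFor's algorithms, so the sketch below illustrates the "digital integrity" category with error level analysis (ELA), a long-standing and far simpler image-forensics heuristic: recompressing a JPEG and differencing the result against the original can expose regions whose compression history differs from the rest of the image. The file path and threshold here are hypothetical.

# Illustrative error level analysis (ELA), a simple digital-integrity
# heuristic; this is NOT MediFor's method, which is not publicly available.
# Assumes Pillow is installed (pip install Pillow).
import io
from PIL import Image, ImageChops

IMAGE_PATH = "suspect.jpg"  # hypothetical input file

original = Image.open(IMAGE_PATH).convert("RGB")

# Recompress at a known quality level and reload the result.
buffer = io.BytesIO()
original.save(buffer, format="JPEG", quality=90)
buffer.seek(0)
recompressed = Image.open(buffer).convert("RGB")

# Pixelwise difference: regions edited after the last save tend to show
# higher error levels than untouched regions.
ela = ImageChops.difference(original, recompressed)

# Crude summary signal: the maximum per-channel error.
max_error = max(high for _, high in ela.getextrema())
print(f"max ELA error: {max_error}")
if max_error > 40:  # arbitrary threshold for this illustration
    print("Uneven compression history; flag for human review.")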
Similarly, SemaFor seeks to develop algorithms that will automatically detect, attribute, and characterize (i.e., identify as either benign or malicious) various types of deep fakes. This program will catalog semantic inconsistencies—such as the mismatched earrings seen in the GAN-generated image in Figure 1, or unusual facial features or backgrounds—and prioritize suspected deep fakes for human review. SemaFor received $9.7 million in FY2020 and is slated to receive $17.6 million in FY2021. Both SemaFor and MediFor are intended to improve defenses against adversary information operations.

Figure 1. Example of Semantic Inconsistency in a GAN-Generated Image
Source: https://www.darpa.mil/news-events/2019-09-03a.
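
SemaFor's components are likewise not yet public. Purely to illustrate the detect, attribute, characterize, and prioritize flow described above, the skeleton below uses invented placeholder functions, fields, and thresholds; none of them reflect SemaFor's actual design.

# Hypothetical triage skeleton; every function, field, and threshold is
# invented for illustration and is not SemaFor's actual design.
from dataclasses import dataclass

@dataclass
class TriageResult:
    item_id: str
    fake_score: float  # detect: estimated probability the item is synthetic
    origin: str        # attribute: suspected generation toolchain
    intent: str        # characterize: "benign" (e.g., parody) or "malicious"

def detect(item: dict) -> float:
    # Placeholder for models scanning for semantic inconsistencies
    # (mismatched earrings, unusual facial features or backgrounds, etc.).
    return item.get("model_score", 0.0)

def attribute(item: dict) -> str:
    # Placeholder for matching generation artifacts to known tools.
    return item.get("suspected_tool", "unknown")

def characterize(item: dict) -> str:
    # Placeholder for benign-vs-malicious characterization.
    return "malicious" if item.get("targets_public_figure") else "benign"

def triage(items: list) -> list:
    results = [
        TriageResult(i["id"], detect(i), attribute(i), characterize(i))
        for i in items
    ]
    # Prioritize suspected malicious deep fakes for human review.
    return sorted(
        results,
        key=lambda r: (r.intent == "malicious", r.fake_score),
        reverse=True,
    )

queue = triage([
    {"id": "vid-001", "model_score": 0.93, "suspected_tool": "gan-x",
     "targets_public_figure": True},
    {"id": "img-002", "model_score": 0.41},
])
for r in queue:
    print(r.item_id, r.intent, f"{r.fake_score:.2f}")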
Policy Considerations
Some analysts have noted that algorithm-based detection tools could lead to a cat-and-mouse game, in which the deep fake generators are rapidly updated to address flaws identified by detection tools. For this reason, they argue that social media platforms—in addition to deploying deep fake detection tools—may need to expand the means of labeling and/or authenticating content. This could include a requirement that users identify the time and location at which the content originated or that they label edited content as such.
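
As a concrete example of the origin metadata such a requirement might draw on, the sketch below reads self-reported EXIF capture time and GPS fields from an uploaded image. The file name is hypothetical, and EXIF data is trivially editable; a real authentication scheme would need signed, tamper-evident provenance records.

# Illustrative check of self-reported EXIF provenance fields; EXIF can be
# edited at will, so this is a sketch of the concept, not an authentication
# mechanism. Assumes Pillow is installed (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS

IMAGE_PATH = "upload.jpg"  # hypothetical user-submitted file

exif = Image.open(IMAGE_PATH).getexif()
fields = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

capture_time = fields.get("DateTime")  # when the content originated
gps_info = exif.get_ifd(0x8825)        # GPSInfo block, if present

print("capture time:", capture_time or "missing")
print("GPS data present:", bool(gps_info))
if not capture_time or not gps_info:
    print("Provenance incomplete; platform might require a label.")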
Other analysts have expressed concern that regulation of deep fake technology could impose undue burden on social media platforms or lead to unconstitutional restrictions on free speech and artistic expression. These analysts have suggested that existing law is sufficient for managing the malicious use of deep fakes. Some experts have asserted that responding with technical tools alone will be insufficient and that the focus should instead be on the need to educate the public about deep fakes and minimize incentives for creators of malicious deep fakes.

Potential Questions for Congress
• Do the Department of Defense, the Department of State, and the intelligence community have adequate information about the state of foreign deep fake technology and the ways in which this technology may be used to harm U.S. national security?
• How mature are DARPA’s efforts to develop automated deep fake detection tools? What are the limitations of DARPA’s approach, and are any additional efforts required to ensure that malicious deep fakes do not harm U.S. national security?
• Are federal investments and coordination efforts, across defense and nondefense agencies and with the private sector, adequate to address research and development needs and national security concerns regarding deep fake technologies?
• How should national security considerations with regard to deep fakes be balanced with free speech protections, artistic expression, and beneficial uses of the underlying technologies?
• Should social media platforms be required to authenticate or label content? Should users be required to submit information about the provenance of content? What secondary effects could this have for social media platforms and the safety, security, and privacy of users?
• To what extent and in what manner, if at all, should social media platforms and users be held accountable for the dissemination and impacts of malicious deep fake content?
• What efforts, if any, should the U.S. government undertake to ensure that the public is educated about deep fakes?

CRS Products
CRS Report R45178, Artificial Intelligence and National Security, by Kelley M. Sayler
CRS In Focus IF10608, Overview of Artificial Intelligence, by Laurie A. Harris
CRS Report R45142, Information Warfare: Issues for Congress, by Catherine A. Theohary

Other Resources
Office of the Director of National Intelligence, Background to “Assessing Russian Activities and Intentions in Recent US Elections,” January 6, 2017, https://www.dni.gov/files/documents/ICA_2017_01.pdf

Kelley M. Sayler, Analyst in Advanced Technology and Global Security
Laurie A. Harris, Analyst in Science and Technology
Policy


Disclaimer
This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan shared staff to
congressional committees and Members of Congress. It operates solely at the behest of and under the direction of Congress.
Information in a CRS Report should not be relied upon for purposes other than public understanding of information that has
been provided by CRS to Members of Congress in connection with CRS’s institutional role. CRS Reports, as a work of the
United States Government, are not subject to copyright protection in the United States. Any CRS Report may be
reproduced and distributed in its entirety without permission from CRS. However, as a CRS Report may include
copyrighted images or material from a third party, you may need to obtain the permission of the copyright holder if you
wish to copy or otherwise use copyrighted material.

https://crsreports.congress.gov | IF11333 · VERSION 2 · UPDATED