October 14, 2019
Deep Fakes and National Security
“Deep fakes”—a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI) technologies—could present a variety of national security challenges in the years to come. As these technologies continue to mature, they could hold significant implications for congressional oversight, U.S. defense authorizations and appropriations, and the regulation of social media platforms.

How Are Deep Fakes Created?
Though definitions vary, deep fakes are most commonly described as forgeries created using techniques in machine learning (ML)—a subfield of AI—especially generative adversarial networks (GANs). In the GAN process, two ML systems called neural networks are trained in competition with each other. The first network, or the generator, is tasked with creating counterfeit data—such as photos, audio recordings, or video footage—that replicate the properties of the original data set. The second network, or the discriminator, is tasked with identifying the counterfeit data. Based on the results of each iteration, the generator network adjusts to create increasingly realistic data. The networks continue to compete—often for thousands or millions of iterations—until the generator improves its performance such that the discriminator can no longer distinguish between real and counterfeit data.
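
The competition described above can be illustrated with a short code sketch. The following is a minimal, hypothetical training loop written in Python with the PyTorch library (an assumed choice of framework); the network sizes, data, and iteration count are placeholders rather than settings from any actual deep fake tool.

# Minimal sketch of the generator-versus-discriminator loop described above.
# Assumes PyTorch; dimensions, data, and iteration counts are placeholders.
import torch
import torch.nn as nn

DATA_DIM, NOISE_DIM, BATCH = 64, 16, 32

generator = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.randn(1000, DATA_DIM)  # stand-in for the original data set

for step in range(1000):  # real systems often run for far more iterations
    real = real_data[torch.randint(0, len(real_data), (BATCH,))]
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator step: learn to label real samples 1 and counterfeits 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: adjust so the discriminator scores the counterfeits as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

Actual deep fake systems replace these toy vectors and small networks with large image, audio, or video models trained on real media.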
Though media manipulation is not a new phenomenon, the use of AI to generate deep fakes is causing concern because the results are increasingly realistic, rapidly created, and cheaply made with freely available software and the ability to rent processing power through cloud computing. Thus, even unskilled operators could download the requisite software tools and, using publicly available data, create increasingly convincing counterfeit content.

How Could Deep Fakes Be Used?
Deep fake technology has been popularized for entertainment purposes—for example, social media users inserting the actor Nicolas Cage into movies in which he did not originally appear and a museum generating an interactive exhibit with artist Salvador Dalí. Deep fake technologies have also been used for beneficial purposes. For example, medical researchers have reported using GANs to synthesize fake medical images to train disease detection algorithms for rare diseases and to minimize patient privacy concerns.

Deep fakes could also be used for nefarious purposes. State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately. Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election. Indeed, the U.S. intelligence community concluded that Russia engaged in extensive influence operations during the 2016 presidential election to “undermine public faith in the U.S. democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency.” In the future, convincing audio or video forgeries could potentially strengthen similar efforts.

Deep fakes could also be used to embarrass or blackmail elected officials or individuals with access to classified information. Already there is evidence that foreign intelligence operatives have used deep fake photos to create fake social media accounts from which they have attempted to recruit Western sources. Some analysts have suggested that deep fakes could similarly be used to generate inflammatory content—such as convincing video of U.S. military personnel engaged in war crimes—intended to radicalize populations, recruit terrorists, or incite violence.

In addition, deep fakes could produce an effect that professors Danielle Keats Citron and Robert Chesney have termed the “Liar’s Dividend”: individuals could successfully deny the authenticity of genuine content—particularly if it depicts inappropriate or criminal behavior—by claiming that the content is a deep fake. Citron and Chesney suggest that the Liar’s Dividend could become more powerful as deep fake technology proliferates and public knowledge of the technology grows.

Some reports indicate that such tactics have already been used for political purposes. For example, political opponents of Gabon President Ali Bongo asserted that a video intended to demonstrate his good health and mental competency was a deep fake, later citing it as part of the justification for an attempted coup. Outside experts were unable to determine the video’s authenticity, but one expert noted, “in some ways it doesn’t matter if [the video is] a fake… It can be used to just undermine credibility and cast doubt.”

How Can Deep Fakes Be Detected?
Today, deep fakes can often be detected without specialized detection tools. However, the sophistication of the technology is rapidly progressing to a point at which unaided human detection will be very difficult or impossible. While commercial industry has been investing in automated deep fake detection tools, this section describes U.S. government investments at the Defense Advanced Research Projects Agency (DARPA).
DARPA currently has two programs devoted to the detection of deep fakes: Media Forensics (MediFor) and Semantic Forensics (SemaFor). MediFor is developing algorithms to automatically assess the integrity of photos and videos and to provide analysts with information about how counterfeit content was generated. The program has reportedly explored techniques for identifying the audio-visual inconsistencies present in deep fakes, including inconsistencies in pixels (digital integrity), inconsistencies with the laws of physics (physical integrity), and inconsistencies with other information sources (semantic integrity). MediFor received $17.5 million in FY2019 and is slated to receive $5.3 million in FY2020 as the program begins to transition to operational commands and the intelligence community.

Similarly, SemaFor seeks to develop algorithms that will automatically detect, attribute, and characterize (i.e., identify as either benign or malicious) various types of deep fakes. This program will catalog semantic inconsistencies—such as the mismatched earrings seen in the GAN-generated image in Figure 1, or unusual facial features or backgrounds—and prioritize suspected deep fakes for human review. Both SemaFor and MediFor are intended to improve defenses against adversary information operations.
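
As an illustration of the triage step described above, the hypothetical Python sketch below combines the three integrity categories into a single suspicion score and orders items for human review. The check values, weights, and file names are invented placeholders; they are not MediFor or SemaFor algorithms, which would compute such evidence from the media itself.

# Hypothetical triage sketch only; scores and weights are placeholders, and a real
# system would derive this evidence from the media rather than take it as input.
from dataclasses import dataclass

@dataclass
class IntegrityScores:
    digital: float   # pixel-level inconsistencies (digital integrity)
    physical: float  # violations of the laws of physics (physical integrity)
    semantic: float  # conflicts with other information sources (semantic integrity)

def suspicion(scores: IntegrityScores) -> float:
    # Higher values suggest a greater likelihood of manipulation; weights are arbitrary.
    return 0.4 * scores.digital + 0.3 * scores.physical + 0.3 * scores.semantic

def prioritize_for_review(items: dict[str, IntegrityScores]) -> list[str]:
    # Order items from most to least suspicious so analysts see likely deep fakes first.
    return sorted(items, key=lambda name: suspicion(items[name]), reverse=True)

queue = prioritize_for_review({
    "clip_a.mp4": IntegrityScores(digital=0.9, physical=0.7, semantic=0.8),
    "clip_b.mp4": IntegrityScores(digital=0.1, physical=0.2, semantic=0.1),
})
print(queue)  # ['clip_a.mp4', 'clip_b.mp4']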
Figure 1. Example of Semantic Inconsistency in a GAN-Generated Image
Source: https://www.darpa.mil/news-events/2019-09-03a.

Policy Considerations
Some analysts have noted that algorithm-based detection tools could lead to a cat-and-mouse game, in which the deep fake generators are rapidly updated to address flaws identified by detection tools. For this reason, they argue that social media platforms—in addition to deploying deep fake detection tools—may need to provide a means of labeling and/or authenticating content. This could include a requirement that users identify the time and location at which the content originated or that they label edited content as such.
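
One possible form such labeling could take is sketched below. This is a simplified, hypothetical Python example that ties a label recording time, location, and edit status to a specific file and signs it with a platform-held key; it is not drawn from any existing platform or provenance standard.

# Hypothetical content-labeling sketch; the key, field names, and format are invented.
import hashlib
import hmac
import json

PLATFORM_KEY = b"example-secret-key"  # placeholder; a real key would be managed securely

def make_label(content: bytes, time_utc: str, location: str, edited: bool) -> dict:
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),  # ties the label to this exact file
        "time_utc": time_utc,
        "location": location,
        "edited": edited,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: bytes, record: dict) -> bool:
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and record["sha256"] == hashlib.sha256(content).hexdigest())

video = b"...raw video bytes..."
label = make_label(video, "2019-10-14T12:00:00Z", "Washington, DC", edited=False)
assert verify_label(video, label)

Under this kind of scheme, a platform could flag content whose label is missing, altered, or attached to different bytes than the ones being displayed.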
Other analysts have expressed concern that regulation of deep fake technology could impose undue burden on social media platforms or lead to unconstitutional restrictions on free speech and artistic expression. These analysts have suggested that existing law is sufficient for managing the malicious use of deep fakes. Some experts have asserted that responding with technical tools alone will be insufficient and that the focus should instead be on educating the public about deep fakes and minimizing incentives for creators of malicious deep fakes.

Potential Questions for Congress

• Do the Department of Defense, the Department of State, and the intelligence community have adequate information about the state of foreign deep fake technology and the ways in which this technology may be used to harm U.S. national security?

• How mature are DARPA’s efforts to develop automated deep fake detection tools? What are the limitations of DARPA’s approach, and are any additional efforts required to ensure that malicious deep fakes do not harm U.S. national security?

• Are federal investments and coordination efforts, across defense and nondefense agencies and with the private sector, adequate to address research and development needs and national security concerns regarding deep fake technologies?

• How should national security considerations with regard to deep fakes be balanced with free speech protections, artistic expression, and beneficial uses of the underlying technologies?

• Should social media platforms be required to authenticate or label content? Should users be required to submit information about the provenance of content? What secondary effects could this have for social media platforms and the safety, security, and privacy of users?

• To what extent and in what manner, if at all, should social media platforms and users be held accountable for the dissemination and impacts of malicious deep fake content?

• What efforts, if any, should the U.S. government undertake to ensure that the public is educated about deep fakes?

CRS Products
CRS Report R45178, Artificial Intelligence and National Security, by Kelley M. Sayler
CRS In Focus IF10608, Overview of Artificial Intelligence, by Laurie A. Harris
CRS Report R45142, Information Warfare: Issues for Congress, by Catherine A. Theohary

Other Resources
Office of the Director of National Intelligence, Background to “Assessing Russian Activities and Intentions in Recent US Elections,” January 6, 2017, https://www.dni.gov/files/documents/ICA_2017_01.pdf

Kelley M. Sayler, Analyst in Advanced Technology and Global Security
Laurie A. Harris, Analyst in Science and Technology Policy
IF11333


Disclaimer
This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan shared staff to
congressional committees and Members of Congress. It operates solely at the behest of and under the direction of Congress.
Information in a CRS Report should not be relied upon for purposes other than public understanding of information that has
been provided by CRS to Members of Congress in connection with CRS’s institutional role. CRS Reports, as a work of the
United States Government, are not subject to copyright protection in the United States. Any CRS Report may be
reproduced and distributed in its entirety without permission from CRS. However, as a CRS Report may include
copyrighted images or material from a third party, you may need to obtain the permission of the copyright holder if you
wish to copy or otherwise use copyrighted material.

https://crsreports.congress.gov | IF11333 · VERSION 1 · NEW