July 27, 2023

Social Media Algorithms: Content Recommendation, Moderation, and Congressional Considerations

Social media plays an integral role in modern life for many. It facilitates the spread of information and serves as a key source of news, entertainment, and financial opportunity. In 2022, over 70% of Americans received some of their news from social media, according to the Pew Research Center. Recently, social media companies have faced criticism for potentially enabling the spread of harmful content, suppressing certain viewpoints, contributing to social polarization and radicalization, collecting and monetizing personal data, and adversely affecting children. As part of broader discussions around social media, some stakeholders and policymakers have taken interest in legislative proposals to regulate or address “social media algorithms.”

This In Focus provides a high-level overview of content recommendation and moderation algorithms employed by social media platforms. It examines issues that arise from the use of social media algorithms and discusses considerations for Congress.

Definitions

Algorithm: A specific process or sequence of computational steps followed by a computer in performing a task or problem-solving operation. Algorithms vary in complexity depending on the task and context.

Recommendation Systems: Systems that use algorithms to personalize the sorting, ranking, and displaying of content for a user based on their previous engagements and other collected data. Also called recommendation engines or recommendation algorithms.

Moderation Systems: Systems that use algorithms to identify, filter, and flag undesirable or illegal content for removal, demonetization, downranking, or other forms of moderation.
Overview of Social Media Algorithms

Social media companies use a number of algorithms and artificial intelligence (AI) systems to recommend or moderate content on their platforms and perform a variety of other functions. Algorithmic recommendation systems sort, curate, and disseminate content deemed relevant to specific users. Algorithmic content moderation systems are often used, along with human moderators, to identify and restrict illegal material and content that violates a company’s policies and terms of use and service. Social media companies may also use algorithms for other purposes, such as targeting and delivering digital advertising or providing in-app search functions.
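To make the mechanics concrete, the following is a minimal Python sketch of an engagement-based recommendation ranking. All names, signals, and weights here are illustrative assumptions for exposition; they do not describe any actual platform’s system, which would typically rely on machine-learned models and far richer per-user data.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int          # cumulative "like" count
    reposts: int        # cumulative repost/share count
    watch_seconds: int  # total time users have spent viewing the post

def engagement_score(post: Post) -> float:
    # Hypothetical weights: real platforms tune such signals (often with
    # machine learning) and add per-user engagement history.
    return 1.0 * post.likes + 2.0 * post.reposts + 0.01 * post.watch_seconds

def recommend(posts: list[Post], n: int = 10) -> list[Post]:
    # Rank all candidate posts by score and surface the top n in a feed.
    return sorted(posts, key=engagement_score, reverse=True)[:n]

Even in this toy form, the central design choice is visible: whatever signals the score rewards are the signals the feed promotes, which is why the selection and weighting of engagement metrics recurs throughout the policy debates below.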
Due to definitional ambiguity, algorithms are often conflated with a variety of different technologies and applications. For example, the term “algorithms” is often used colloquially to refer to “artificial intelligence,” but the two terms are not synonymous. Certain algorithms may fall under the broad category of AI (such as machine learning algorithms), while others do not use the predictive or data-mining techniques that are characteristic of AI. Because of this definitional ambiguity, some scholars and policymakers have opted to instead use the language of “automated decision-making systems” or “automated systems” in broader policy discussions—such as in the White House’s Blueprint for an AI Bill of Rights. Additionally, some scholars and policymakers instead focus on specific outcomes and impacts regardless of what technology or technologies are implicated—such as discriminatory or disparate impacts. Congress may consider what language and terms are best suited for legislation targeting social media or other technologies.

Section 230 and Algorithms

Section 230 of the Communications Act of 1934 (47 U.S.C. §230), enacted as part of the Communications Decency Act of 1996, broadly protects providers of interactive computer services from liability for information provided by a third party and for content moderation decisions. There has been debate over whether Section 230’s liability protections should extend to the use of recommendation algorithms. So far, courts have held that recommendation algorithms are protected under Section 230.

In 2023, the Supreme Court declined to weigh in on the Section 230 issue in two cases: Twitter v. Taamneh, No. 21-1496, and Gonzalez v. Google LLC, No. 21-1333. Both cases considered whether social media companies could be liable for recommending terrorist content. The Court did not rule on whether Section 230 granted immunity to the companies’ recommendation algorithms. Instead, it concluded that the companies—regardless of Section 230—were not liable under the relevant federal antiterrorism statute because their conduct did not amount to aiding and abetting an act of international terrorism. For more information on Section 230, see CRS Report R46751, Section 230: An Overview, by Valerie C. Brannon and Eric N. Holmes.

Issues and Concerns

Algorithms are a key component of social media platforms. They help sort, moderate, and disseminate massive volumes of user-generated content to individuals. This in turn facilitates targeted digital advertising, a major source of revenue for social media platforms. However, policymakers, stakeholders, and researchers have raised concerns about their use.
Algorithmic Amplification of Harmful Content

Many social media platforms use algorithms to recommend content in ways that maximize user engagement—measured through “likes,” time spent on the platform, reposts, and other metrics. There is debate, however, over whether these systems thereby increase the spread of, or amplify, harmful content, a phenomenon often called algorithmic amplification. There is concern that social media algorithms may amplify harmful content, create echo chambers (or filter bubbles) that may contribute to user radicalization and polarization, or drive social media addiction in children.

Algorithmic amplification on multiple social media platforms has been a recent topic of inquiry and research. While amplification effects are difficult for outside researchers to measure without access to social media platforms’ proprietary data, some recently released company research supports some of these concerns. In 2021, a former Facebook employee leaked company documentation revealing that the company weighted interactive elements known as “reactions” more heavily than “likes” in content ranking. Critics believe that by prioritizing content that received emotional reactions (such as an angry reaction) over content that received likes, the company amplified divisive or sensational content. Other Facebook documents reviewed by the Wall Street Journal in 2020 found that 64% of users who joined extremist groups on Facebook’s platform did so “due to [Facebook’s] recommendation tools.” According to the Mozilla Foundation’s “YouTube Regrets” report, 12% of content recommended by YouTube’s algorithms violates the company’s community standards.
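To illustrate how such a weighting choice can shift what users see, the sketch below (in the same hypothetical style as the earlier example) scores two posts with identical total engagement but different mixes of likes and reactions. The reaction multiplier of 5.0 is an invented value for illustration, not a documented platform parameter.

def score_with_reactions(likes: int, reactions: int,
                         reaction_weight: float = 5.0) -> float:
    # reaction_weight > 1 means one emotional reaction moves a post up
    # the feed more than one like; the value 5.0 is an assumption.
    return likes + reaction_weight * reactions

calm_post = score_with_reactions(likes=90, reactions=10)      # 140.0
divisive_post = score_with_reactions(likes=10, reactions=90)  # 460.0

With 100 engagement events each, the post that provokes reactions outscores the calmly liked one by more than three to one, which is the dynamic critics argue favored divisive or sensational content.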
Some experts also allege that hostile foreign actors can manipulate social media recommendation algorithms or skirt automated moderation systems to conduct influence operations and spread propaganda. For example, in 2021, the New York Times found that Chinese information campaigns utilized bot-like accounts to manufacture virality (the rapid spread of online content between users) by liking and reposting government and state media posts. The Times found that “The contrived flurry of traffic can make the posts more likely to be shown by recommendation algorithms on many social media sites and search engines.” Recently, the popular video-sharing app TikTok has faced criticism for possibly amplifying propaganda and censoring content critical of the government of the People’s Republic of China.
Removal of Content

Some critics are concerned that social media platforms remove lawful speech and suppress certain viewpoints through moderation policies and automated moderation systems. Some groups and online communities contend that their posts have been disproportionately flagged, removed, or downranked, meaning the platform adjusted its algorithms to make the content less visible or prominent. Companies use various automated demotion or reduction practices, which are difficult to measure through external research without access to social media companies’ proprietary systems, documentation, research, or internal policies that guide moderation decisions.

Other critics have expressed the concern that social media companies do not remove enough harmful content and that automated moderation systems fail to adequately filter certain types of harmful content. For example, research has found that content moderation systems may be less effective at addressing non-English content. This has led to claims that non-English misinformation and hate speech may be under-moderated and therefore more prevalent in certain online language communities. This may be due to a confluence of factors: automated moderation systems may lack the necessary training data for a particular language; companies may not employ enough people fluent in a particular language to address language nuances or evolution; or companies may not provide sufficient resources for moderation in specific countries and languages. Failures in automated moderation may coincide with failures in company practices and policies—leading to inaccurate or undesirable moderation outcomes.
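As a simplified illustration of the kind of pipeline at issue, the hypothetical Python sketch below maps a classifier score to a moderation action. The labels, thresholds, and the per-language reliability discount are assumptions meant only to show how sparse training data in a language can surface as under-moderation; they do not describe any company’s actual system.

# Illustrative stand-in for classifier reliability by language: models
# trained on little data in a language are simply less accurate there.
TRAINING_COVERAGE = {"en": 1.0, "es": 0.8, "am": 0.3}

def violation_score(text: str, language: str) -> float:
    # A real system would run a trained model over the text; this stub
    # returns a fixed value so the example is self-contained.
    model_output = 0.9
    return model_output * TRAINING_COVERAGE.get(language, 0.2)

def moderate(text: str, language: str) -> str:
    score = violation_score(text, language)
    if score > 0.85:
        return "remove"        # likely violating: take down
    if score > 0.60:
        return "downrank"      # borderline: reduce visibility
    if score > 0.40:
        return "human_review"  # uncertain: queue for a moderator
    return "allow"

print(moderate("identical violating text", "en"))  # remove
print(moderate("identical violating text", "am"))  # allow (missed)

The same nominally violating text is removed in the well-resourced language but allowed in the low-resource one, mirroring the under-moderation pattern researchers have described.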
Congressional Considerations

Some Members of Congress have introduced legislation in the 117th and 118th Congresses to ban or significantly limit the use of recommendation algorithms. Some recently proposed bills would ban their use for children, restrict the use of personal data in recommendation algorithms, or require companies to provide disclosures or to offer alternative versions of their platforms without algorithmic recommendations. If the 118th Congress considers restricting recommendation algorithms in certain contexts or for certain users, it may consider how to target interventions given the ubiquity of recommendation algorithms on other online platforms, such as search engines, marketplaces, and video and music streaming services.

The 118th Congress may also consider legislative approaches to increase the transparency of social media algorithms, modeled on provisions in previously introduced bills. S. 5339 in the 117th Congress, for example, would have created disclosure requirements for social media platforms. Other recent bills would require third-party risk and impact assessments and audits of social media algorithms that could be submitted to agencies, such as the Federal Trade Commission, for review and investigation.

Congress could also consider amending Section 230 to address perceived risks associated with algorithms. For example, bills such as H.R. 2154 in the 117th Congress would have removed Section 230’s protections for recommendation algorithms in certain lawsuits involving terrorism or civil rights. Amending Section 230 could have unintended consequences for existing online ecosystems, potentially by incentivizing platforms to over- or under-moderate content in an attempt to avoid legal jeopardy, or by giving incumbent platforms that have the resources to fight legal challenges an advantage over new market entrants that do not.

Kristen E. Busch, Analyst in Science and Technology Policy

IF12462




Disclaimer
This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan shared staff to
congressional committees and Members of Congress. It operates solely at the behest of and under the direction of Congress.
Information in a CRS Report should not be relied upon for purposes other than public understanding of information that has
been provided by CRS to Members of Congress in connection with CRS’s institutional role. CRS Reports, as a work of the
United States Government, are not subject to copyright protection in the United States. Any CRS Report may be
reproduced and distributed in its entirety without permission from CRS. However, as a CRS Report may include
copyrighted images or material from a third party, you may need to obtain the permission of the copyright holder if you
wish to copy or otherwise use copyrighted material.

https://crsreports.congress.gov | IF12462 · VERSION 1 · NEW