Liability for Algorithmic Recommendations
October 12, 2023
Eric N. Holmes
Attorney-Advisor

A key feature of many websites and online services is the use of algorithm-based systems to select content that may be of interest to the website’s users (recommendation systems or recommender systems). These systems rely on a variety of inputs, including those that are explicitly directed by a user (such as a search term or a user’s decision to “follow” another user account) as well as inputs derived from user behavior or sitewide trends. As these systems become more ubiquitous, a frequent question before courts throughout the United States is whether a website or online service may be held legally liable for using recommendation systems to recommend content. This issue has arisen in cases brought against social media providers that have allegedly recommended terrorist content and consequently advanced terrorist causes.
Section 230 of the Communications Act provides legal immunity to providers and users of “interactive computer services”
for claims based on third-party content. A robust and largely consistent body of caselaw interprets the scope of Section 230’s
protections. The Supreme Court agreed to hear a case interpreting Section 230 in 2022, but its 2023 decision did not address
the statute’s scope. In the absence of Supreme Court precedent interpreting Section 230, federal and state courts
largely rely on analyses undertaken by federal appellate courts. Section 230 caselaw supports a broad reading of the statute,
conferring legal immunity upon interactive computer service providers for most claims involving third-party content.
Social media providers facing claims for recommending content have thus far successfully relied on Section 230 to avoid
liability for their recommendations. The few federal appellate decisions addressing Section 230’s applicability to
recommendation systems suggest that courts may be reluctant to limit Section 230’s broad reach. Federal judges have
nonetheless shown concern about applying Section 230 to technology that, while now commonplace, was less widespread
when the law was adopted.
Section 230 caselaw has developed based on statutory text that is mostly unchanged from the law’s enactment in 1996. Some
Members of Congress have introduced bills that would amend Section 230 to alter or limit its applicability to algorithmic
recommendations. Any amendment to Section 230 would raise several questions. These questions include, but are not limited
to, (1) how amendments to Section 230 would alter judicial applications of existing caselaw, (2) the scope of any
amendments, and (3) whether altering Section 230’s protections may unconstitutionally abridge speech in violation of the
Free Speech Clause of the First Amendment.
Contents

Background
Statutory Background: Section 230
  Interactive Computer Service
  Role as Publisher or Speaker
    Treatment As “Publisher or Speaker”
    Non-Publisher Activity
  Information Provided by Another Information Content Provider
    “Material Contribution” and “Neutral Tools”
Section 230 Immunity and Algorithmic Recommendations
  Algorithmic Recommendations as Non-Publisher Activity
  Algorithms as Content Development
  Judicial Challenges to Section 230’s Scope
Considerations for Congress
  Supreme Court Decisions and Section 230
  Legislation
    Relevance of Existing Legal Doctrine
    Defining the Covered Activity
  Free Speech Considerations
    Overview of Free Speech Principles
    Whether Hosting or Promoting Third-Party Speech is Protected Speech
    Whether Withholding Section 230 Protection Restricts Speech
    Content-Based vs. Content-Neutral Speech Regulations
  Key Takeaways

Tables

Table 1. Proposed Legislation Addressing Section 230 and Algorithmic Recommendations

Contacts

Author Information
When an individual first visits video sharing platform TikTok, TikTok starts playing a stream of videos that may seem random, without any input from the individual. Over time, however, TikTok will “learn” a viewer’s preferences and start showing more specific videos tailored to those preferences.
Journalists have written extensively about TikTok’s “For You” landing page and the process by
which the platform hones its video recommendations. Frequently, this process is described in
lofty language, as in an article from the
New York Times titled “How TikTok Reads Your Mind.”1
An investigation from the
Wall Street Journal, which involved creating 100 automated accounts
programmed with certain interests, reported that some of its accounts “ended up lost in rabbit
holes of similar content, including one that just watched videos about depression.”2 For all the
mystery and bravado used to characterize TikTok’s For You page, computer scientists have
described TikTok’s systems as “pretty normal.”3 The use of algorithms to recommend content is
widespread on the internet, including outside social media.
In recent years, Congress has paid special attention to the interaction between algorithms and
content hosted and displayed by online services. One topic of recurring interest is the role online
services play in “promoting” or “amplifying” content, particularly content that may be unlawful
or harmful. Congress has focused on TikTok’s For You page specifically,4 but has also expressed
concern with algorithm-based recommendation systems more generally.5
A question relating to the amplification of harmful material is whether an online service may face
legal liability for using algorithms to recommend content to the service’s users. The victims and
families of victims of terrorist attacks have filed several lawsuits against social media providers
alleging that the providers contributed to terrorism by recommending terrorist content. None of
these lawsuits has been successful. One obstacle is Section 230 of the Communications Act,6 a
federal law that provides legal immunity for providers of “interactive computer services” (ICS) in
certain circumstances. Social media providers have successfully argued that algorithm-based
recommendations are protected from liability under Section 230.
This report provides an overview of the current landscape of legal liability for algorithm-based
recommendations. It begins with a discussion of algorithms and recommendation systems,
followed by a discussion of Section 230 and relevant caselaw involving recommendations by
algorithm. The report concludes with a discussion of considerations for Congress, including
implications of recent Supreme Court decisions involving Section 230, possible questions related
1 Ben Smith,
How TikTok Reads Your Mind, N.Y. TIMES (Dec. 5, 2021),
https://www.nytimes.com/2021/12/05/business/media/tiktok-algorithm.html.
2 WSJ Staff,
Inside TikTok’s Algorithm: A WSJ Video Investigation, WALL STREET JOURNAL (July 21, 2021),
https://www.wsj.com/articles/tiktok-algorithm-video-investigation-11626877477.
3 See Smith, supra note 1 (comments from Julian McAuley, computer science professor at University of California San Diego).
4
TikTok: How Congress Can Safeguard American Data Privacy and Protect Children from Online Harms: Hearing
Before the H. Energy & Com. Comm., 118th Cong. (2023) (statement of Rep. Cathy McMorris Rodgers, Chair, H.
Energy & Com. Comm.), https://plus.cq.com/doc/congressionaltranscripts-7698802?0&searchId=z0GqdZLl (“Within
minutes of creating an account, [TikTok’s] algorithm can promote suicide, self-harm and eating disorders to
children.”).
5
Platform Accountability: Gonzalez and Reform, Hearing Before the Subcomm. on Privacy, Tech., & the L. of the S.
Comm. on the Judiciary, 118th Cong. (2023) (statement of Sen. Richard Blumenthal, Chair, Subcomm. on Privacy,
Tech, & the L.), https://plus.cq.com/doc/congressionaltranscripts-7688525?4&searchId=xMxvTdMX (“we need to look
at . . . the personalization of algorithms, recommendations that drive content.”).
6 47 U.S.C. § 230.
to legislative proposals, and potential issues raised by the Free Speech Clause of the First
Amendment.
Background
As the term is commonly used in relation to online services, an algorithm is a problem-solving
process undertaken by a computer.7 A variety of automated tasks are accomplished via
algorithms. These tasks can include producing expressive outputs (such as algorithms that
determine a response provided by a virtual assistant or chatbot)8 as well as performing
mechanical tasks (such as an algorithm that determines when a car’s antilock braking system will
trigger). As discussed below, how an algorithm operates, and the outputs it generates, may impact
whether certain legal protections are available for the algorithm’s operator.9
In 2023, the internet features a wide variety of online platforms10 that offer content to the
platforms’ users. The content available on these platforms may be either
first-party content created by the platform or
third-party content created by someone other than the platform itself,
including
user-generated content created by users of the platform. Some platforms may host both
first-party and user-generated content, such as an online publication that allows users to leave
comments on articles. Other platforms, such as online marketplaces and video streaming services,
may host both first-party content and third-party content not provided by users (such as product
listings by third-party sellers or video programming provided by third-party licensees).11
Many platforms use algorithms to organize and recommend content. One popular example that
has persisted since the early days of the internet is the use of algorithms by search engines to
provide the most relevant results to a user’s search query.12 Different search engines may give
different weights to a variety of factors in ordering their results, including user-specific
information like geographic location and past searches as well as general information like how a
search is worded and the popularity of particular websites in response to similar searches.13 In
addition to search engines, which use algorithms to filter third-party content, platforms that host
both first-party and third-party content may use algorithms to sort their content. For example,
online marketplaces may use algorithms to determine how products are displayed, and video
streaming services may use algorithms to determine what programming to suggest to users.
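As a rough illustration of the weighting described above, the following sketch shows how several signals might be combined into a single relevance score. All of the signal names, weights, and functions are hypothetical; they are not drawn from any particular search engine or from the sources cited in this report.

```python
# Hypothetical sketch: combining general signals (query match, popularity) with
# user-specific signals (geographic proximity, similarity to past searches) into
# one ranking score. Real search engines weigh far more factors.
WEIGHTS = {"query_match": 0.5, "popularity": 0.2, "proximity": 0.15, "history": 0.15}

def relevance_score(signals: dict[str, float]) -> float:
    """Combine normalized signals (each between 0.0 and 1.0) into a single score."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def rank_results(candidates: list[dict]) -> list[dict]:
    """Order candidate results from highest to lowest relevance score."""
    return sorted(candidates, key=lambda c: relevance_score(c["signals"]), reverse=True)

# Two candidate pages scored for the same query; the second ranks higher on the
# strength of its user-specific signals.
ranked = rank_results([
    {"url": "https://example.com/a", "signals": {"query_match": 0.9, "popularity": 0.4}},
    {"url": "https://example.com/b", "signals": {"query_match": 0.6, "popularity": 0.9,
                                                 "proximity": 0.8, "history": 0.7}},
])
```

Giving a different weight to, say, geographic proximity than to query wording is the kind of design choice that can produce different orderings of the same third-party content across services.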
7
See Stuart Minor Benjamin,
Algorithms and Speech, 161 U. PA. L. REV. 1445, 1447 n.4 (2013) (observing that “[t]here
is no single accepted definition of ‘algorithm’” and interpreting the term “as instructions or rules implemented by a
computer”).
8 For a discussion of what makes an output “expressive” and why this matters, see
infra “Whether Hosting or
Promoting Third-Party Speech is Protected Speech.”
9
See infra “Section 230 Immunity and Algorithmic Recommendations” (statutory protections for algorithms);
“Whether Hosting or Promoting Third-Party Speech is Protected Speech” (constitutional protections for algorithms).
10 The term “online platform” or “platform” is used in this report to refer to “a digital service that facilitates interactions
between two or more distinct but interdependent sets of users (whether firms or individuals) who interact through the
service via the Internet.”
What Is an “Online Platform”?, OECD LIBRARY, https://www.oecd-ilibrary.org/science-and-
technology/an-introduction-to-online-platforms-and-their-role-in-the-digital-transformation_19e6a0f0-en (last visited
Oct. 11, 2023).
11 For a discussion of regulation of online platforms more broadly, see CRS Report R47662,
Defining and Regulating
Online Platforms, coordinated by Clare Y. Cho.
12
See, e.g.,
How Results Are Automatically Generated, GOOGLE,
https://www.google.com/search/howsearchworks/how-search-works/ranking-results/ (last visited Oct. 11, 2023).
13
E.g.,
id.
Platforms that primarily distribute user-generated content, such as social media platforms, may
use algorithms to determine what content is displayed to a particular user.14 These algorithms can
support various forms of content distribution, ranging from allowing users to sort content by how
often it has been viewed to targeted online advertisement relying on a wide range of factors.
Social media platforms may rely on more complicated algorithms in combination with the use of
specific user-selected criteria to determine how content is displayed. For example, social media
platform Instagram shows its users a “Feed” that prioritizes content from individuals “followed”
by the user.15 Some platforms may place less weight on user-selected criteria: for example,
TikTok’s For You page displays content regardless of whether the user has followed any specific
accounts.16
Social media platforms are seen as operating along a spectrum of algorithmic curation, with some
platforms relying wholly or mostly on more complex processes to determine how content is
displayed and others relying on user-defined inputs such as “follows” or user-selected topics.17
Computer science experts use the term “recommendation system” or “recommender system” to
refer to these collections of algorithms.18 Some commentators have used the term “algorithmic
amplification,” referring to the use of algorithms to increase the prominence of particular
content.19
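The spectrum of curation just described can be sketched in simplified form. The code below is purely illustrative and hypothetical, not a description of any platform’s actual system: the first function limits a feed to accounts the user has chosen to follow, while the second ranks all available posts by a predicted-engagement score regardless of follows.

```python
# Hypothetical sketch of the two ends of the curation spectrum described above.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    predicted_engagement: float  # stand-in for a behavioral model's output, 0.0-1.0

def follow_based_feed(posts: list[Post], followed: set[str]) -> list[Post]:
    """User-defined curation: show only posts from accounts the user follows."""
    return [p for p in posts if p.author in followed]

def engagement_based_feed(posts: list[Post]) -> list[Post]:
    """Behavioral curation: rank every post by predicted engagement,
    whether or not the user follows the author."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```

A feed that prioritizes followed accounts, such as Instagram’s, sits closer to the first approach; a follow-independent feed, such as TikTok’s For You page, sits closer to the second. Most real systems blend elements of both.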
Statutory Background: Section 230
Courts have largely rejected attempts to sue platforms for algorithms that organize and
recommend user content. Section 230 of the Communications Act of 1934, enacted as part of the
Communications Decency Act of 1996, provides legal immunity to providers and users of
interactive computer services (ICS), a defined term discussed below.20 Two provisions of Section
230 provide the primary framework for this immunity. The first of these provisions, 47 U.S.C.
§ 230(c)(1), specifies that ICS providers and users may not “be treated as the publisher or speaker
of any information provided by another information content provider.” The second provision,
Section 230(c)(2), states that ICS providers and users may not “be held liable” for voluntary,
“good faith” actions “to restrict access to or availability of material that the provider or user
14
See, e.g.,
How Facebook Distributes Content, META,
https://www.facebook.com/business/help/718033381901819?id=208060977200861 (last visited Oct. 11, 2023).
15
How Instagram Feed Works, INSTAGRAM, https://help.instagram.com/1986234648360433/(last visited Oct. 11,
2023). Earlier versions of Instagram’s Feed placed even greater emphasis on content from accounts followed by the
user. Adam Mosseri,
Shedding More Light on How Instagram Works, INSTAGRAM (June 8, 2021),
https://about.instagram.com/blog/announcements/shedding-more-light-on-how-instagram-works.
16
See Alex Hern,
How TikTok’s Algorithm Made It a Success: ‘It Pushes the Boundaries’, THE GUARDIAN (Oct. 24,
2022, 1:00 AM), https://www.theguardian.com/technology/2022/oct/23/tiktok-rise-algorithm-popularity (contrasting
TikTok’s For You Page with other platforms that rely on a user’s friends or followed accounts).
17
See generally Arvind Narayanan,
Understanding Social Media Recommendation Algorithms, KNIGHT FIRST
AMENDMENT INST. (Mar. 9, 2023), https://knightcolumbia.org/content/understanding-social-media-recommendation-
algorithms.
18
E.g.,
id. Platforms may use algorithms to organize content in a way that does not directly “recommend” it, such as by
determining where content is placed on a webpage. For a discussion of whether these two uses of algorithms are legally
distinguishable, see
“Judicial Challenges to Section 230’s Scope” infra.
19
See David McCabe,
Lawmakers Target Big Tech ‘Amplification.’ What Does That Mean? N.Y. TIMES (Dec. 1, 2021),
https://www.nytimes.com/2021/12/01/technology/big-tech-amplification.html. For more discussion of these concepts,
see CRS In Focus IF12462,
Social Media Algorithms: Content Recommendation, Moderation, and Congressional
Considerations, by Kristen E. Busch.
20 47 U.S.C. § 230. For a more detailed summary of Section 230 and cases interpreting the law, see CRS Report
R46751,
Section 230: An Overview, by Valerie C. Brannon and Eric N. Holmes.
considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise
objectionable, whether or not such material is constitutionally protected.”
Thus, Section 230(c)(2) more narrowly focuses on actions restricting certain types of
objectionable content, while Section 230(c)(1) provides broader immunity for acting as a
“publisher or speaker” of another’s content.21 Defendants invoke Section 230(c)(1)’s broader
immunity much more frequently, particularly because a number of courts have interpreted Section
230(c)(1) to also encompass actions to remove or restrict content.22 Accordingly, although an
algorithm may be used both to promote and restrict access to content, most cases considering
whether Section 230 protects the use of recommendation algorithms have focused on Section
230(c)(1).
A court’s decision to apply Section 230(c)(1) to bar legal liability depends on the presence of
three conditions.23 First, the defendant must be a user or provider of an interactive computer
service. Second, the liability must arise from the defendant acting as a publisher or speaker.
Third, the liability must arise from information provided by
another person.
Courts around the country have written decisions in Section 230 cases addressing how these
conditions might be satisfied. Without Supreme Court precedent interpreting Section 230, federal
and state courts frequently rely on interpretations and analyses undertaken by federal appellate
courts.24
Interactive Computer Service
Section 230 defines an ICS as “any information service, system, or access software provider that
provides or enables computer access by multiple users to a computer server.”25 This definition is
broad and applies to more than just websites or social media platforms. Courts have considered
various online platforms such as Craigslist,26 Facebook,27 GoDaddy,28 Yahoo!,29 and Zoom30 to be
ICS providers.31 They have also held that providers of broadband services (e.g., AT&T, Verizon)32
21 47 U.S.C. § 230.
22
See, e.g., King v. Facebook, Inc., 572 F.Supp.3d 776, 796 (N.D. Cal. 2021).
23
See, e.g., Universal Commc’n Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 418 (1st Cir. 2007); Jones v. Dirty World Ent.
Recordings LLC, 755 F.3d 398, 409 (6th Cir. 2014).
24
E.g., Zeran v. Am. Online, Inc., 129 F.3d 327 (4th Cir. 1997) (interpreting the scope of Section 230(c)(1)’s
“publisher or speaker” language); Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008)
(interpreting the scope of Section 230(c)(1)’s “information provided by another information content provider”
language).
25 47 U.S.C. § 230(f)(2). The definition includes “specifically a service or system that provides access to the Internet
and such systems operated or services offered by libraries or educational institutions.”
Id.
26 Chicago Laws.’ Comm. for C.R. Under Law, Inc. v. Craigslist, Inc., 519 F.3d 666, 671 (7th Cir. 2008).
27 Klayman v. Zuckerberg, 753 F.3d 1354, 1357 (D.C. Cir. 2014).
28 Ricci v. Teamsters Union Local 456, 781 F.3d 25, 28 (2d Cir. 2015).
29 Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1101 (9th Cir. 2009).
30
In re Zoom Video Commc’ns Privacy Litig., 525 F. Supp. 3d 1017, 1029 (N.D. Cal. 2021).
31
See also Universal Commc’n Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 419 (1st Cir. 2007) (“Providing access to the
Internet is . . . not the only way to be an interactive computer service provider.”).
32 Winter v. Bassett, No. 1:02CV00382, 2003 U.S. Dist. LEXIS 26904, at *21 (M.D.N.C. Aug. 22, 2003),
aff’d,
157 F.
App’x 653, 654 (4th Cir. 2005) (per curiam).
and search engines (e.g., Google)33 qualify as ICS providers.34 Because courts have construed the
definition of interactive computer service broadly, the success of a Section 230(c)(1) defense
more often turns on whether the plaintiff seeks to hold the defendant liable as a publisher or
speaker and whether the plaintiff’s claim arises from information provided by another
information content provider.
Role as Publisher or Speaker
Section 230(c)(1) prohibits courts from treating an ICS provider as a “publisher or speaker” of
third-party content.35 Courts have interpreted this provision to apply broadly to bar any claim
arising from third-party content. Courts have declined to apply Section 230(c)(1) to claims that
rely on a provider’s own unlawful conduct, rather than its publication of third-party information.
Treatment As “Publisher or Speaker”
In determining whether a legal claim treats a service provider as a “publisher or speaker,” courts
often look to
Zeran v. America Online, an early Fourth Circuit case applying Section 230.36 The
Zeran court determined that Section 230(c)(1) bars “lawsuits seeking to hold a service provider
liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to
publish, withdraw, postpone, or alter content.”37 In reaching this conclusion, the
Zeran court
determined that use of the term “publisher” in Section 230(c)(1) was meant to extend protection
to all entities engaged in such functions.38 In
Zeran, the Fourth Circuit held that an ICS provider’s
hosting of defamatory messages and failure to remove them upon receiving notice of their
allegedly defamatory nature was protected activity under Section 230(c)(1).39
Many courts have
used the
Zeran court’s description of “traditional editorial functions”40 to determine whether a
claim would impermissibly treat a service provider or user as a publisher or speaker of another’s
content.41
Although courts have continued to rely on
Zeran, the breadth of this description of “publisher”
activity has concerned some jurists. A number of judges have questioned whether Section
230(c)(1)’s application has expanded beyond its intended scope—although few of these judges
have altered the prevailing legal standard.42 In a statement respecting a denial of certiorari in a
33 Lewis v. Google, Inc., No. 20-1784, 2021 U.S. Dist. LEXIS 11609, at *5–6 (W.D. Pa. Jan. 21, 2021).
34
See Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398, 406 n.2 (6th Cir. 2014) (observing that the term
“interactive computer service” covers “broadband providers, hosting companies, and website operators”).
35 47 U.S.C. § 230(c)(1).
36 129 F.3d 327 (4th Cir. 1997). For purposes of brevity, references to a particular circuit in this report (e.g., the Fourth
Circuit) refer to the U.S. Court of Appeals for that particular circuit (e.g., the U.S. Court of Appeals for the Fourth
Circuit).
37
Zeran, 129 F.3d
at 330.
38
Id. at 332;
see also Force v. Facebook, Inc., 934 F.3d 53, 65 (2d Cir. 2019) (holding that the “generally broad
construction” from
Zeran is consistent with the “ordinary meaning” of the term publisher).
39
Zeran, 129 F.3d at 333.
40
Id. at 330.
41
See, e.g., Hassell v. Bird, 420 P.3d 776, 789 (Cal. 2018); Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398,
407 (6th Cir. 2014); Barnes v. Yahoo! Inc., 570 F.3d 1096, 1102 (9th Cir. 2009).
42
E.g., Force v. Facebook, Inc., 934 F.3d 53, 84 (2d Cir. 2019) (Katzmann, J., concurring in part) (opining that Section
230 as applied creates “extensive immunity . . . for activities that were undreamt of in 1996”); Gonzalez v. Google,
LLC, 2 F.4th 871, 915 (9th Cir. 2021) (Berzon, J., concurring) (arguing that the legislative history of Section 230 does
not support a broad reading of publisher functions).
See infra “Judicial Challenges to Section 230’s Scope” for a more
detailed discussion of these concurring opinions.
Section 230 case, Justice Clarence Thomas suggested that
Zeran and other cases employing its
traditional editorial functions analysis had “extend[ed] § 230 beyond the natural reading of the
text . . . .”43 Justice Thomas’s analysis relied among other things on the use of the terms
“publisher” and “distributor” in
Stratton Oakmont, Inc. v. Prodigy Services Co., a pre-Section 230
case that at least in part provided the impetus for Section 230’s passage.44 A defamation case,
Stratton Oakmont distinguished between “publishers” liable for defamatory statements and
“distributors” liable only if they know or have reason to know of the defamatory statements.45
Zeran explicitly rejected the distinction between publishers and distributors and held that Section
230(c)(1)’s protection encompasses “distributor” activity.46
Apart from these individual judges, one recent federal appeals court opinion appeared to narrow
Zeran’s conception of “publisher” activity. In
Henderson v. Source for Public Data,
the Fourth
Circuit held that to treat a service provider as a publisher or speaker, a claim must hold a service
provider liable based on the improper content of the disseminated information.47 The court drew
this requirement from defamation law, under which a defendant’s liability as a publisher depends
on the improper, “false and defamatory” nature of the material published.48 Under this view,
Section 230 did not bar claims alleging that a website had failed to comply with the Fair Credit
Reporting Act.49 Although the claims would have held the site liable for improperly disseminating
information, they did not depend on the information’s
content being improper.50 The opinion cited
and purported to apply
Zeran, but
Henderson appeared to add a new requirement since
Zeran
made no reference to the content of information.51 Several decisions from state and federal courts
outside of the Fourth Circuit have declined to follow
Henderson, observing that the decision
conflicts with binding precedent in their jurisdiction that reads Section 230(c)(1) more broadly.52
Non-Publisher Activity
Section 230(c)(1) does not bar claims arising from a platform’s own non-publisher conduct,
though courts have disagreed over the exact boundaries between actionable non-publisher
conduct and protected publisher activity. In one case, the Ninth Circuit determined that claims
brought against the maker of Snapchat for negligently designing its platform to include a “speed
filter” that encouraged users to drive at recklessly high speeds would not be barred by Section
43
See Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S.Ct. 13, 18 (2020) (statement of Thomas, J.).
44 No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995);
see S.Rep. No. 104-230, at 86–87 (1996) (“One of
the specific purposes of this section is to overrule Stratton-Oakmont v. Prodigy . . . .”).
45
Stratton Oakmont, 1995 WL 323710, at *3.
46
Zeran, 129 F.3d at 332.
47 53 F.4th 110, 122 (4th Cir. 2022).
48
Id. (citing RESTATEMENT (SECOND) OF TORTS § 558(a) (AM. L. INST. 1965)).
49
Id. at 117.
50
Id. at 123–24.
51
See Zeran, 129 F.3d at 330 (referencing the exercise of “traditional editorial functions” without reference to the
content of information). Because the material at issue in
Zeran was allegedly defamatory, the Fourth Circuit’s decision
in
Henderson does not call into question the outcome of
Zeran.
Id. (“Zeran seeks to hold AOL liable for defamatory
speech initiated by a third party.”).
52
E.g., Divino Grp. LLC v. Google LLC, No. 19-04749, 2023 WL 218966, at *2 (N.D. Cal. Jan. 17, 2023)
(“
Henderson is not binding on this Court; and . . . the Fourth Circuit’s narrow construction of Section 230(c)(1) appears
to be at odds with Ninth Circuit decisions indicating that the scope of the statute’s protection is much broader.”); Prager
Univ. v. Google LLC, 85 Cal. App. 5th 1022, 1033 n.4 (2022) (“
Henderson’s narrow interpretation of section 230(c)(1)
is in tension with the California Supreme Court’s broader view, which we follow, absent a contrary ruling by the
United States Supreme Court.”).
230(c)(1).53 The Ninth Circuit determined that the claims based on Snapchat’s speed filter did not
treat the platform as a “publisher or speaker,” because the claims “treat[ed] Snap as a products
manufacturer, accusing it of negligently designing a product (Snapchat) with a defect . . . .”54 A
state court in Georgia reached a similar conclusion, holding that claims based on Snapchat’s
speed filter “do not seek to hold Snapchat liable for publishing” and therefore could proceed.55
When a product feature determines how user content is displayed or sorted, courts are more likely
to determine that a claim treats a service provider as a “publisher or speaker” entitled to
protection under Section 230(c)(1).56 In an early case on the topic, the Fifth Circuit affirmed the
dismissal of a lawsuit alleging that MySpace acted negligently in failing “to implement basic
safety measures to prevent sexual predators from communicating with minors on its Web site.”57
The court concluded that the plaintiff’s allegations were “merely another way of claiming that
MySpace was liable for publishing [predators’] communications.”58 In the court’s view, the
negligence claims hinged on MySpace’s publisher functions: its decisions relating to the
“monitoring, screening, and deletion” of third-party content.59
Information Provided by Another Information Content Provider
The third criterion for Section 230(c)(1) immunity is that liability arises from information
“provided by another information content provider.”60 Put another way, a user or provider of an
interactive computer service cannot claim Section 230(c)(1)’s protection for its own content.61
Under Section 230, a user or provider is considered an “information content provider” of
particular content if the user or provider is “responsible, in whole or in part, for the creation or
development” of the content.62 A defendant cannot rely on Section 230(c)(1) for claims based on
content it has created or developed, as such content is not provided by “another” information
content provider. Notably, a service provider or user can be merely a publisher of another’s
information in some circumstances but a content provider in others.63 Whether Section 230(c)(1)
applies depends on the particular content being challenged.64
53 Lemmon v. Snap, Inc., 995 F.3d 1085, 1091–94 (9th Cir. 2021).
54
Id. at 1092.
55 Maynard v. Snapchat, Inc., 816 S.E.2d 77, 81 (Ga. Ct. App. 2018).
56
E.g., Doe v. Backpage.com, LLC, 817 F.3d 12, 21 (1st Cir. 2016) (holding that claims based on “overall design and
operation” of a website, when design choices “reflect choices about what content can appear on the website and in what
form,” are protected by Section 230(c)(1)).
57 Doe v. MySpace, 528 F.3d 413, 416 (5th Cir. 2008).
58
Id. at 420.
59
See id. (quoting Green v. Am. Online (AOL), 318 F.3d 465, 471 (3rd Cir. 2003)).
60
See, e.g., Force v. Facebook, Inc., 934 F.3d 53, 68 (2d Cir. 2019) (analyzing whether claims against Facebook for
promoting particular content would make Facebook liable for information provided by another information content
provider).
61
See, e.g., Maffick, LLC v. Facebook, Inc., No. 20-05222, 2020 WL 5257853 (N.D. Cal. Sept. 3, 2020) (ignoring
Section 230 entirely in a case based on Facebook’s labeling of user accounts as “Russia state-controlled media”).
62 47 U.S.C. § 230(f)(3).
63
See Jones v. Dirty World Enter. Recordings LLC, 755 F.3d 398, 408 (6th Cir. 2014) (“A website operator can
simultaneously act as both a service provider and a content provider”).
64
See, e.g., Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157, 1166, 1173–74 (9th Cir. 2008) (holding that
website was an information content provider with respect to user preferences the website helped “develop” through
mandatory questionnaires, but was not an information content provider with respect to information provided in a
freeform text box).
“Material Contribution” and “Neutral Tools”
A foundational case on whether a service provider is responsible for particular content is the
Ninth Circuit’s decision in
Fair Housing Council of San Fernando Valley v. Roommates.com,
LLC (
Roommates).65 That opinion said “a website helps to develop unlawful content [and is
therefore unable to claim protection under Section 230(c)(1)] if it contributes materially to the
alleged illegality of the conduct.”66
In a later Ninth Circuit opinion, the court clarified that this
“material contribution” test “draw[s] the line at ‘the crucial distinction between, on the one hand,
taking actions (traditional to publishers) that are necessary to the display of unwelcome and
actionable content and, on the other hand, responsibility for what makes the displayed content
illegal or actionable.’”67 Even the act of publishing itself may materially contribute to the
unlawfulness of conduct if the content published is private information legally protected from
disclosure.68
In another portion of the Ninth Circuit’s
Roommates decision, the court opined that “passive
conduits” or “
neutral tools,” such as a search engine that filters content only by user-generated
criteria, would not be responsible for developing content where they do not enhance the
unlawfulness of the content.69 By contrast, for example, the court said that where a website edited
user-generated content in a way that made the message libelous, the site would be “directly
involved in the alleged illegality and thus not immune.”70 In one application of this “neutral
tools” analysis, the Ninth Circuit held that customer review aggregator Yelp’s rating system,
which transforms aggregated user input into a 0-5 star rating, did not amount to development.71 In
another case, the D.C. Circuit determined that social media platform Facebook “provides a
neutral means by which third parties can post information of their own independent choosing
online,” and Facebook was therefore not responsible for developing third-party content merely
for failing to remove it.72
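To illustrate why a rating feature of this kind is treated as a “neutral tool,” consider a simplified, hypothetical sketch of star-rating aggregation. It is not Yelp’s actual method; the point is only that the same mechanical transformation is applied to every listing’s user input, regardless of what the underlying reviews say.

```python
# Hypothetical sketch: a star rating derived mechanically from user ratings.
# The identical arithmetic applies to every listing, which is the sense in which
# such a feature is "neutral" toward the content of the underlying reviews.
def star_rating(user_ratings: list[int]) -> float:
    """Average integer ratings (1-5) and round to the nearest half star."""
    if not user_ratings:
        return 0.0
    average = sum(user_ratings) / len(user_ratings)
    return round(average * 2) / 2

print(star_rating([5, 4, 4, 3]))  # prints 4.0
```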
Section 230 Immunity and Algorithmic Recommendations
Individuals have occasionally sought to hold social media platforms and search engines—both of
which federal courts have held to be providers of “interactive computer services” under Section
23073—liable for their use of algorithms or other automated systems to recommend or organize
content. Such claims often cast a website’s use of algorithms either as non-publisher activity to
which Section 230(c)(1) does not apply, or as “development” of third-party content that renders
65 521 F.3d 1157 (9th Cir. 2008).
66
Id. at 1168.
67 Kimzey v. Yelp! Inc., 836 F.3d 1263, 1269 n.4 (9th Cir. 2016) (quoting Jones v. Dirty World Ent. Recordings LLC,
755 F.3d 398, 413–14 (6th Cir. 2014).
68
See FTC v. Accusearch, Inc., 570 F.3d 1187, 1200 (10th Cir. 2009) (holding that website materially contributed to
alleged illegality of conduct when it collected and published confidential telephone records).
69
Roommates, 521 F.3d at 1167–69.
70
Id. at 1169.
71
Kimzey, 836 F.3d at 1270.
72 Klayman v. Zuckerberg, 753 F.3d 1354, 1358 (D.C. Cir. 2014).
73
See Marshall’s Locksmith Serv. v. Google, LLC, 925 F.3d 1263, 1268 (D.C. Cir. 2019) (applying the term
“interactive computer service” to a search engine);
Klayman, 753 F.3d at 1357 (applying the term to a social media
provider).
the service an “information content provider.” Federal courts of appeals that have considered this
issue thus far have mostly rejected these theories.74
The Second Circuit’s decision in
Force v. Facebook75
and the Ninth Circuit’s decision in
Gonzalez v. Google76
each offer detailed analyses of these theories, as discussed in more detail
below. Both
Force and
Gonzalez involved claims seeking to hold social media platforms liable
for terrorist attacks under the Anti-Terrorism Act (ATA), a statute that permits legal recovery
against someone who commits or supports the commission of international terrorism.77 Each case
presented a similar theory: in short, that social media platforms had made friend or content
suggestions to users, and these suggestions helped advance the cause of terrorist groups using the
platforms.78 Though the courts in both
Gonzalez and
Force held that Section 230(c)(1) protects a
social media platform’s use of algorithms to make recommendations or suggestions,79 partial
concurrences and dissents in both cases challenge the reasoning that led to this conclusion.80 The
Supreme Court vacated the Ninth Circuit’s decision in
Gonzalez, but the Court did so without
disagreeing with or otherwise addressing the Ninth Circuit’s Section 230 analysis.81
Algorithmic Recommendations as Non-Publisher Activity
A frequent point of contention in lawsuits brought against platforms is whether claims based on a
platform’s algorithmic recommendation of third-party content treat the platform as a publisher or
speaker.82 As discussed above, website features that determine how content is displayed are more
likely to be considered publisher activity protected by Section 230(c)(1). For example, the Ninth
Circuit held that a message board would be treated as a “publisher or speaker” by claims
challenging the message board’s use of algorithms to recommend and notify users of potential
topics of interest, which allegedly connected a user with a drug dealer.83
The Second Circuit reached the same conclusion in
Force.84 The plaintiffs argued their claims did
not treat Facebook as a publisher or speaker because Facebook’s algorithms matching content
with users went beyond publisher activity.85 However, two of the judges on the Second Circuit’s
three-judge panel in
Force determined that Facebook’s use of algorithms to suggest friends and
content to Facebook users was publisher activity immunized under Section 230(c)(1).86 The
74
E.g.,
Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093, 1098–99 (9th Cir. 2019) (opining that plaintiffs could
not frame “website features as content” and that the site’s recommendation and notification functions did not materially
contribute to alleged unlawfulness of content);
Marshall’s Locksmith Serv., 925 F.3d at 1271 (declining to treat search
engines’ conversion of fraudulent addresses from webpages into “map pinpoints” as developing content).
75 Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
76 2 F.4th 871 (9th Cir. 2021),
vacated, 143 S. Ct. 1191 (2023) (per curiam).
77 18 U.S.C. § 2333.
78
See Force, 934 F.3d at 59;
Gonzalez, 2 F.4th at 881.
79
Force, 934 F.3d at 66–69 (rejecting theories that algorithmic sorting rendered website a non-publisher or materially
contributed to development of content);
Gonzalez, 2 F.4th at 892–94 (same).
80
Force, 934 F.3d at 76 (Katzmann, C.J., concurring in part);
Gonzalez, 2 F.4th at 913 (Berzon, J., concurring);
id. at
918 (Gould, J., concurring in part).
81 Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023) (per curiam). For more discussion, see
infra “Supreme Court
Decisions and Section 230.”
82
See supra “Non-Publisher Activity.”
83 Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093, 1099 (9th Cir. 2019).
84
Force, 934 F.3d at 66 (holding that “arranging and distributing third-party information . . . is an essential result of
publishing,” whether or not algorithms are used).
85
Id. at 65.
86
Id. at 66.
majority analogized Facebook’s use of algorithmic suggestion to more traditional publisher
activities of “arranging and distributing third-party information,” such as placing content on a
homepage.87 The court concluded any act of “arranging and distributing third-party information,”
including by way of algorithms, “inherently forms ‘connections’ and ‘matches’ among speakers,
content, and viewers of content, whether in interactive internet forums or in more traditional
media.”88
The Ninth Circuit reached a similar conclusion in
Gonzalez. The
Gonzalez plaintiffs had argued
that Google was liable for allowing the terrorist group known as ISIS to use and access the video
sharing platform YouTube.89 The court held that these claims “[sought] to impose liability for
allowing ISIS to place content on the YouTube platform” and therefore treated Google as a
publisher.90
Algorithms as Content Development
Plaintiffs may seek to sidestep Section 230(c)(1) by arguing that algorithmically amplifying
content “develops” the content and renders the platform responsible for it.91 Arguments that a
platform is responsible for developing third-party content frequently rely on the material
contribution and neutral tools tests articulated in
Roommates.
The
Roommates court itself cautioned that the use of “an ordinary search engine” should not
constitute “development” under Section 230.92 Other federal courts hearing claims brought
against search engines have agreed. In
O’Kroley v. Fastcase, Inc., the Sixth Circuit held that
Google’s display of allegedly defamatory content in its search results did not “develop” the
content.93 The court added that Google’s alterations to the content did not “materially contribute”
to its unlawfulness.94 A federal district court similarly held that a search engine’s alleged
“manipulation” of search results to promote defamatory content did not develop that content.95
Several courts hearing claims premised on the use of algorithms have adopted the “neutral tools”
analysis from
Roommates. In
Marshall’s Locksmith Service v. Google, a case involving search
engines that automatically converted addresses provided by third parties into “pinpoints”
appearing on the search engines’ mapping websites, the D.C. Circuit emphasized that the search
87
Id. at 66–67.
88
Id. at 66.
89 Gonzalez v. Google, LLC, 2 F.4th 871, 891 (9th Cir. 2021),
vacated, 143 S. Ct. 1191 (2023) (per curiam).
90
Id. at 892.
91 Plaintiffs have also argued that their claims hold platforms liable for the “content” of the platforms’ algorithms,
rather than third-party content. Courts thus far appear unreceptive to this argument.
See, e.g., Prager Univ. v. Google
LLC, 301 Cal. Rptr. 3d 836, 848 (Cal. Ct. App. 2022) (rejecting this theory and holding that claims based on platform’s
use of recommendation algorithms “turn not on the creation of algorithms, but on the defendants’ curation of
[content]”);
cf. Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093, 1098 (9th Cir. 2019) (“recommendations . . . are
tools meant to facilitate the communication and content of others. They are not content in and of themselves.”).
92 Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157, 1169 (9th Cir. 2008).
93 831 F.3d 352, 355 (6th Cir. 2016).
94
Id. Though the plaintiff in
O’Kroley alleged that an ellipsis added by Google altered the meaning of the search result
at issue, the court observed that “Google did not add the ellipsis to the text.”
Id. The Court therefore did not address
whether the addition of the ellipsis, if made by Google, would have materially contributed to the alleged unlawfulness
of the search result.
95 Obado v. Magedson, No. 13-2382, 2014 WL 3778261, at *5 (D.N.J. July 31, 2014),
aff’d, 612 F. App’x 90 (3d Cir.
2015).
engines’ tools did “not distinguish” between different types of user content.96 Instead, the
algorithms translated all types of information in the same manner.97
The Second Circuit reached a similar conclusion in
Force.98 The plaintiffs in
Force argued
Section 230(c)(1) did not apply because Facebook’s algorithms helped create or develop terrorist
content by directing that content to the site’s most interested users.99 Looking to both the material
contribution and neutral tools tests, the
Force majority determined that Facebook’s involvement
in user content was “neutral.”100 The court observed that Facebook’s algorithms matched content
to users “based on objective factors applicable to any content” and did not “augment[] terrorist-
supporting content primarily on the basis of its subject matter.”101 These neutral algorithms were
insufficient to render Facebook a developer of the user content.102 The Ninth Circuit applied this
analysis in
Gonzalez, reasoning that YouTube’s recommendation system, while “more
sophisticated than a traditional search engine,” was still “neutral” in that it treated terrorist-
created content the same as other third-party content.103
Judicial Challenges to Section 230’s Scope
No court has thus far ruled that a platform may be held liable for using algorithms to recommend
content. However, several judges have expressed concern over applying Section 230(c)(1) to
recommendation systems. In both
Force and
Gonzalez, one member of the three-judge panel
partially dissented and argued that Section 230 should not bar lawsuits under the ATA against
platforms for using algorithms to amplify or recommend terrorist content. In
Gonzalez, one of the
members of the panel “reluctantly” joined the majority and wrote separately to explain her
misgivings about the scope of Section 230(c)(1)’s immunity.
Chief Judge Katzmann challenged the
Force majority’s reasoning in a partial dissent. According
to the chief judge, the claims did not treat Facebook as a publisher, because they were based “not
on the content of the information shown but rather on the connections Facebook’s algorithms
make between individuals.”104 Facebook was not merely publishing content, the dissent argued,
but “proactively creating networks of people.”105 Chief Judge Katzmann also argued that
Facebook’s friend and content suggestion algorithms communicate a message: that Facebook
believes the specific individual viewing the suggestions will like the suggested content or be
interested in connecting with the suggested person.106 By analogy, the chief judge opined that
Section 230 would not protect a third party that analyzed Facebook user data using an algorithm
and then sent users messages recommending particular content.107
96 925 F.3d 1263, 1271 (D.C. Cir. 2019).
97 Id.
98 Force v. Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019) (holding that friend and content suggestion algorithms were
“neutral” when suggestions were made “based on objective factors applicable to any content”).
99
Id. at 68.
100
Id. at 70.
101
Id. at 70 & n.4.
102
Id. at 70.
103 Gonzalez v. Google, LLC, 2 F.4th 871, 894–96 (9th Cir. 2021),
vacated, 143 S. Ct. 1191 (2023) (per curiam).
104
Force, 934 F.3d at 77 (Katzmann, C.J., concurring in part).
105
Id. at 83.
106
Id. at 82.
107
Id. But cf. Batzel v. Smith, 333 F.3d 1018, 1031 (9th Cir. 2003) (holding that forwarding an email to a listserv was
protected by Section 230).
Chief Judge Katzmann cabined the reach of his dissent by observing that the claims in
Force are
“atypical” in that defendants are liable under the ATA for providing services to terrorist
organizations.108 He suggested his approach to Section 230 would not render Facebook liable for
“common torts” like defamation in which Facebook’s use of an algorithm to boost or recommend
content would be immaterial to the claim.109 In a defamation claim, “the mere act of publishing . .
. creates liability,” whereas under the ATA, it was the operation of the algorithms that allegedly
provided illegal material support to the terrorist organizations.110
Members of the Ninth Circuit’s panel in
Gonzalez similarly challenged the reasoning behind
extending Section 230’s protections to algorithmic amplification. In a concurring opinion, Judge
Berzon wrote that if Ninth Circuit precedent did not require otherwise, she would hold that
promoting or recommending content is not publisher activity.111 Citing favorably to Chief Judge
Katzmann’s dissent in
Force, Judge Berzon concluded that recommendations by algorithm “are
well outside the scope of traditional publication,” which has never included selecting material to
display to each individual reader.112 Instead, platforms using algorithms to recommend content
communicate their own messages to users about what content they might like.113 Despite reaching
this conclusion, Judge Berzon determined that Ninth Circuit precedent “squarely and irrefutably”
held that recommending content is publisher activity.114 She therefore “reluctantly” joined the
majority opinion, but urged the full Ninth Circuit to reconsider its precedent.115
Judge Gould dissented in part, stating that he would hold that a website’s use of otherwise
“neutral tools” is unprotected by Section 230 if the website “(1) knowingly amplifies a message
designed to recruit individuals for a criminal purpose, and (2) the dissemination of that message
materially contributes to a centralized cause giving rise to a probability of grave harm.”116 Like
Judge Berzon, he expressed support for Chief Judge Katzmann’s
Force dissent.117 Judge Gould
also urged the full Ninth Circuit or the Supreme Court to address Section 230’s applicability to
algorithmic recommendations.118
Considerations for Congress
While few courts have explored Section 230’s application to algorithmic recommendations in
depth, the decisions in
Force and
Gonzalez may indicate judicial reluctance to limit the broad
reach of Section 230 embraced by
Zeran and subsequent decisions. Congress may consider
whether the broad immunity currently recognized by courts should apply to algorithmically sorted
content or, alternatively, whether certain behavior or content should warrant different treatment
108
Force, 934 F.3d at 83–84 (Katzmann, C.J., concurring in part).
109 Id.
110 Id. at 84. Engaging in defamation typically requires only the publication of a false and defamatory statement.
See
RESTATEMENT (SECOND) OF TORTS § 558 (AM. L. INST. 1977) (setting out the elements of defamation).
111 Gonzalez v. Google, LLC, 2 F.4th 871, 913 (9th Cir. 2021) (Berzon, J., concurring).
112
Id. at 914.
113
Id. at 915.
114
Id. (citing Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093, 1098 (9th Cir. 2019)).
115
Id. at 917.
116
Id. at 923 (Gould, J., dissenting in part). Judge Gould would also have held that “a lack of reasonable review of
content posted that can be expected to be harmful to the public” would render an otherwise neutral tool unprotected by
Section 230.
Id.
117
Id. at 920.
118
Id. at 925.
under Section 230. Any changes to Section 230’s protection may also raise concerns under the
First Amendment’s Free Speech Clause.
Supreme Court Decisions and Section 230
In the more than 25 years since Section 230 was enacted, the Supreme Court has never decided
any cases interpreting the law. The Court agreed to hear a Section 230 case for the first time in
Gonzalez. The Supreme Court had the opportunity in
Gonzalez either to narrow Section 230’s
scope as it applies to such recommendations or to ratify the appellate court consensus that
algorithmic recommendations are protected by Section 230. Instead, the Court vacated the Ninth
Circuit’s decision without addressing Section 230.119
Twitter v. Taamneh, a companion case to
Gonzalez, focuses on liability under the ATA
notwithstanding Section 230.120 The claims in
Taamneh rely on a similar theory as those in
Gonzalez: in short, that the defendants aided or abetted acts of international terrorism by allowing
ISIS to recruit individuals and spread their message using social media.121 The Court held in
Taamneh that the claims against social media companies did not give rise to liability under the
ATA.122 Consequently, the Court held in
Gonzalez that its decision in
Taamneh foreclosed liability
for many of the claims.123 The Court “decline[d] to address the application of § 230” and instead
vacated the Ninth Circuit’s opinion and remanded the case for reconsideration in light of
Taamneh.124 As a consequence of the Supreme Court’s decision, the Ninth Circuit’s decision in
Gonzalez is no longer binding precedent in the Ninth Circuit. Because the Supreme Court did not
disagree with (or even address) the Ninth Circuit’s Section 230 analysis, the Ninth Circuit may
choose to reaffirm its analysis on remand.
The outcome in
Taamneh offers a reminder that a platform’s exposure to legal liability for its
recommendations does not depend only on whether the platform receives protection from Section
230. Removing Section 230’s protections for certain activity, such as promoting or amplifying
content, may not subject a platform to liability if the platform’s activity is not legally actionable.
In other words, an individual alleging harm due to a recommendation would have to state a
legally recognized basis for a lawsuit in order to hold the platform liable.
Legislation
Some Members of the 117th Congress introduced several bills that would have addressed Section
230’s relationship with algorithmically sorted or recommended content. These bills generally
would have restricted the availability of Section 230’s protections for platforms that
“recommend” or “promote” certain content. To date, one of these bills, the DISCOURSE Act, has
been reintroduced in the 118th Congress.125
119 Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023) (per curiam).
120 143 S. Ct. 1206 (2023).
121
See Stipulation with Proposed Order, Taamneh v. Twitter, Inc., 343 F. Supp. 3d 904 (N.D. Cal. 2018) (No. 17-
4107), ECF No. 87 (parties agreeing that the claims in
Taamneh are “materially identical” to the claims in
Gonzalez).
122
Taamneh, 143 S. Ct. at 1215.
123
Gonzalez, 143 S. Ct. at 1192.
124 Id.
125 S. 2228, 117th Cong. (2021);
reintroduced as S. 921, 118th Cong. (2023).
Table 1. Proposed Legislation Addressing Section 230 and Algorithmic Recommendations (117th Congress)

H.R. 9695 (Platform Integrity Act): Would have amended Section 230(c)(1) so that it does not apply if a “provider or user has promoted, suggested, amplified, or otherwise recommended” the information at issue.

H.R. 5596 (Justice Against Malicious Algorithms Act): Would have provided that Section 230(c)(1) immunity does not apply to certain service providers who knowingly or recklessly made a “personalized recommendation” of information that “materially contributed to a physical or severe emotional injury to any person.” This exception would not have applied to recommendations “made directly in response to a user-specified search.”

S. 2335 (Don’t Push My Buttons Act): Would have denied immunity when a provider “(i) collects information regarding the habits, preferences, or beliefs of a user of the service; and (ii) uses an automated function to deliver content to the user described in clause (i) that corresponds with the habits, preferences, or beliefs identified as a result of the action taken under that clause with respect to that user.” Would have provided that this new exception does not apply when a user “uses an automated function to deliver content to that user” or “knowingly and intentionally elects to receive the content.”

S. 2228 (DISCOURSE Act): Among other things, would have amended the definition of “information content provider” to include a provider “with a dominant market share” that uses certain algorithms to target information. Reintroduced in the 118th Congress as S. 921.

H.R. 2154 (Protecting Americans from Dangerous Algorithms Act): Would have denied immunity to interactive computer services in specified federal civil actions relating to civil rights and terrorism if the services used algorithms to sort, recommend, or rank third-party content, with certain exceptions.

S. 2448 (Health Misinformation Act): Would have denied immunity when a provider “promotes . . . health misinformation through an algorithm” during a public health emergency.

Source: CRS analysis of bills.
Notes: This table does not include proposals that would have more broadly amended Section 230 or proposals that would have altered Section 230’s applicability to a platform’s restrictions on access to content.
Although these bills take varying approaches, any revisions to Section 230 may raise several
interpretative questions. Select issues are considered below.
Relevance of Existing Legal Doctrine
As discussed above, a robust and largely consistent body of caselaw interprets the scope of Section 230's protections. Some approaches from federal appellate decisions, such as those from Zeran and Roommates, have enjoyed widespread application even in state and federal courts where the decisions have no binding precedential effect.126 One consideration in revising Section 230 is how any changes to the law would interact with these existing legal frameworks. Proposals that explicitly alter Section 230's application might render certain analytical frameworks wholly or partially inoperative. For example, the DISCOURSE Act would have added provisions to
126 E.g., supra note 41 (cases applying Zeran); FTC v. Accusearch Inc., 570 F.3d 1187, 1200 (10th Cir. 2009) (applying analysis similar to Roommates); Hill v. StubHub, Inc., 727 S.E.2d 550, 561 (N.C. Ct. App. 2012) (applying Roommates and Accusearch); Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398, 414 (6th Cir. 2014) (applying Roommates).
Section 230 explaining that platforms engaged in certain activity should be considered information content providers.127 Courts might have concluded that these provisions supplant the "material contribution" analysis most courts now use to determine whether a platform developed challenged content, or they might have tried to integrate the existing analysis into the new statutory framework. Either way, courts would have had to decide which cases decided under the prior framework could still be relied on under the new framework and to flesh out the meaning of the new statutory provisions. Existing frameworks may remain relevant for proposals that condition Section 230's protections but do not otherwise alter Section 230(c)(1) or its definitions.
Congress could also choose to affirm or disavow expressly any existing judicial interpretations of
Section 230. If Congress wishes to enshrine any existing analytical frameworks, it may do so
explicitly in the text of Section 230.
Defining the Covered Activity
Many of the proposals from the 117th Congress focus on “recommendation” or “amplification” of
particular content as a basis for limiting Section 230’s protection, with some variation in the exact
terminology used. Some commentators have suggested that determining whether a platform has
“amplified” content may be difficult because of the complexity in assessing the role
recommendation systems play in directing content to users.128 Defenders of Section 230 have
argued that any act of arranging or publishing content may necessarily “amplify” certain content,
because some pieces of content will be displayed more prominently than others as a matter of
design.129 The courts in Force and Gonzalez relied on similar reasoning in holding that a platform's use of algorithms is publisher activity under Section 230(c)(1).130 A potential consideration is how or whether a Section 230 carveout might apply to internet search functions, which by design "amplify" content that the platform determines is most responsive to a user's search query, or to other functions that promote certain content at a user's request.131 Some past proposals, such as the Protecting Americans from Dangerous Algorithms Act, explicitly excepted search functions from their coverage.132
127 S. 2228, sec. 2(a), 117th Cong. (2021).
128 See Daphne Keller, Amplification and Its Discontents, 1 J. FREE SPEECH L. 227, 232–33 (2021); see also Manoel Horta Ribeiro, Veniamin Veselovsky, and Robert West, The Amplification Paradox in Recommender Systems (arXiv:2302.11225 [cs.CY]), https://arxiv.org/pdf/2302.11225.pdf (arguing that understandings of algorithmic amplification should account for how users interact with recommended content and suggesting that recommendation systems are "not the primary driver of attention toward extreme content").
129 See, e.g., Transcript of Oral Argument at 115, Gonzalez v. Google LLC, No. 21-1333 (U.S. Feb. 21, 2023) (argument of Google that "[a]ll publishing requires organization and inherently conveys [the] implicit message" that the viewer might like the published content); cf. Nabiha Syed, Section 230 Is a Load-Bearing Wall: Is It Coming Down?, THE MARKUP (Feb. 25, 2023), https://themarkup.org/hello-world/2023/02/25/section-230-is-a-load-bearing-wall-is-it-coming-down (commentary from Professor James Grimmelmann that "there's not a sharp dividing line between search and recommendation" and noting that a "truly neutral" search engine would not be functional); Eric Goldman, Search Engine Bias and the Demise of Search Engine Utopianism, 8 YALE J.L. & TECH. 188, 195–96 (2006) (arguing that search engine providers exhibit biases in how they order results and "search engines simply cannot passively and neutrally redistribute third party content").
130 See Force v. Facebook, Inc., 934 F.3d 53, 66 (2d Cir. 2019) (declining to treat "matchmaking" by algorithm as non-publisher activity because organizing and displaying content "inherently forms 'connections' and 'matches' . . . ."); Gonzalez v. Google LLC, 2 F.4th 871, 892 (9th Cir. 2021) (holding that claims against Google for "fail[ing] to prevent ISIS from using its platform" treated Google as a publisher), vacated, 143 S. Ct. 1191 (2023) (per curiam).
131 See supra "Background."
132 H.R. 2154, sec. 2, 117th Cong. (2021). The Don't Push My Buttons Act takes a different approach, excepting situations in which a user makes use of an "automated function" to deliver content to that user or "knowingly and intentionally elects to receive" content based on information collected from the user. S. 2335, 117th Cong. (2021).
Some of the past proposals further limited their covered activity in other ways, such as requiring a
platform to have a particular mental state (for example, the Justice Against Malicious Algorithms
Act’s limitation to reckless or knowing recommendations)133 or describing with more particularity
what types of recommendations are covered (such as the Don’t Push My Buttons Act’s focus on
“automated functions” that rely on user information collected by the platform).134 Defendants
claiming Section 230(c)(1)’s protection currently need only to satisfy the three criteria discussed
above.135 Exceptions to Section 230(c)(1) that would create additional criteria—such as by
requiring that a defendant demonstrate that it has not acted with a particular mental state, or that it
does not use “automated functions” to recommend content—may remove some of the procedural
benefits of Section 230 by making it harder for platforms to claim its protections without
additional judicial factfinding.136 Additionally, proposals that focus on automated functions may
create a situation where a recommendation made with human input would be entitled to Section
230’s protection when the same recommendation made automatically would not be.
Free Speech Considerations
The Free Speech Clause of the First Amendment to the U.S. Constitution limits the government's
ability to regulate speech.137 Reforms to Section 230 generally may raise several free speech concerns.138 Proposals that make Section 230's protections unavailable for certain algorithmic
operations raise at least three questions. One question is whether, if Section 230 is unavailable,
hosting or promoting others’ speech on the internet is itself protected under the First Amendment.
If it is, the First Amendment might restrict liability. The other questions relate more directly to the
constitutionality of reform proposals: the second question is whether modifying an existing
liability regime raises the same First Amendment concerns as enacting a law that directly
prohibits or restricts speech. A final question is, if such a proposal does raise First Amendment
concerns, whether withholding Section 230’s protections for certain algorithmic operations
impacts speech based on its content.
Overview of Free Speech Principles
The Free Speech Clause of the First Amendment limits the government’s ability to regulate
speech.139 Courts have long recognized that this protection may extend beyond written or verbal
communication.140 However, the Free Speech Clause does not provide the same degree of
protection for conduct as it does for what the Supreme Court calls “pure speech.”141 For conduct
133 H.R. 5596, 117th Cong. (2021).
134 S. 2335, 117th Cong. (2021).
135 See generally supra "Statutory Background: Section 230" for discussion of these three criteria.
136 Cf. Eric Goldman, Why Section 230 Is Better Than the First Amendment, 95 NOTRE DAME L. REV. REFLECTION 33, 40 (2019) (arguing that if Section 230 hinged on a defendant's mental state, "plaintiffs could allege that [mental state] and often survive a motion to dismiss, get into discovery, and delay resolution of the case to summary judgment or later").
137 U.S. CONST. amend. I. For more discussion of the Free Speech Clause generally, see Cong. Rsch. Serv., First Amendment, CONSTITUTION ANNOTATED, https://constitution.congress.gov/browse/amendment-1/ (last visited Oct. 11, 2023).
138 See CRS Report R46751, Section 230: An Overview, by Valerie C. Brannon and Eric N. Holmes, for a discussion of these considerations.
139 U.S. CONST. amend. I.
140 See Texas v. Johnson, 491 U.S. 397, 404 (1989).
141 Cox v. Louisiana, 379 U.S. 536, 555 (1965).
to receive First Amendment protection, the conduct must be “expressive.”142 In other words, the
person engaging in the conduct must have “[a]n intent to convey a particularized message.”143
The Free Speech Clause generally prohibits the government from regulating speech based on its content,144 with limited exceptions for certain narrowly defined categories of "unprotected" speech such as defamation.145 Laws that restrict or burden protected speech based on its topic or subject matter (content-based laws) are "presumptively unconstitutional" and subject to strict judicial scrutiny.146 Under strict scrutiny, the government must show that the law is the least restrictive means of serving a compelling governmental interest.147 It is "rare" for a law to survive this test.148 Courts treat laws that restrict or burden speech based on the expression of particular views (viewpoint-based laws) as a particularly egregious subset of content-based laws.149 Unlike other content-based laws, viewpoint-based laws are usually categorically unconstitutional and are not justifiable even under strict scrutiny.150 One instance in which governments may be able to differentiate among viewpoints is when the government itself is acting as a speaker151 or, similarly, when a government chooses to fund certain activities but not others.152
Laws that regulate only the time, place, or manner of speech without regard to its content (content-neutral laws) are subject to a lower bar known as intermediate scrutiny.153 The test for time, place, or manner restrictions requires the government to show that the law is "narrowly tailored" to serve a "significant governmental interest" and "leave[s] open ample alternative channels" to communicate that speech.154
A law is facially content-based if it "applies to particular speech because of the topic discussed or the idea or message expressed."155 A law that is not facially content-based may still be treated as such if there was a content-discriminatory purpose behind the law.156 A law that merely requires
142 See Barnes v. Glen Theatre, Inc., 501 U.S. 560, 564 (1991).
143 Johnson, 491 U.S. at 404 (citing Spence v. Washington, 418 U.S. 405, 410 (1974)).
144 Reed v. Town of Gilbert, 576 U.S. 155, 163 (2015). See generally Cong. Rsch. Serv., Overview of Content-Based and Content-Neutral Regulation of Speech, CONSTITUTION ANNOTATED, https://constitution.congress.gov/browse/essay/amdt1-7-3-1/ALDE_00013695/ (last visited Oct. 11, 2023); CRS In Focus IF12308, Free Speech: When and Why Content-Based Laws Are Presumptively Unconstitutional, by Victoria L. Killion.
145 CRS In Focus IF11072, The First Amendment: Categories of Speech, by Victoria L. Killion.
146 Reed, 576 U.S. at 163; see also City of Austin v. Reagan Nat'l Advert. of Austin, LLC, 142 S. Ct. 1464, 1471 (2022).
147 United States v. Playboy Ent. Grp., 529 U.S. 803, 813 (2000).
148 Williams-Yulee v. Fla. Bar, 575 U.S. 433, 444 (2015).
149 Rosenberger v. Rector & Visitors of the Univ. of Va., 515 U.S. 819, 829 (1995).
150 Id. (citing R.A.V. v. City of Saint Paul, 505 U.S. 377, 391 (1992)); see Pleasant Grove City v. Summum, 555 U.S. 460, 469 (2009) ("any restriction based on the content of speech must satisfy strict scrutiny . . . and restrictions based on viewpoint are prohibited").
151 See Rosenberger, 515 U.S. at 829.
152 Rust v. Sullivan, 500 U.S. 173, 193 (1991).
153 Clark v. Cmty. for Creative Non-Violence, 468 U.S. 288, 293 (1984).
154 Id. Though both strict and intermediate scrutiny require that a law be "narrowly tailored," courts treat this requirement differently under each test. Under strict scrutiny, a law is narrowly tailored if it is the least restrictive means of achieving the law's purpose. United States v. Playboy Ent. Grp., Inc., 529 U.S. 803, 813 (2000). Under intermediate scrutiny, a law may be narrowly tailored even if it is not the least restrictive means "[s]o long as the means chosen are not substantially broader than necessary to achieve the government's interest . . . ." Ward v. Rock Against Racism, 491 U.S. 781, 800 (1989).
155 Reed v. Town of Gilbert, 576 U.S. 155, 163 (2015).
156 Id. at 164.
an enforcer to look to the content of speech is not necessarily content-based. In City of Austin v. Reagan National Advertising of Austin, the Supreme Court held that a city law treating "on-premises" and "off-premises" signs differently was not content-based, even though determining whether a particular sign was on- or off-premises required reference to the message on the sign.157 As the Court explained, a sign's content mattered under the regulation "only to the extent that it informs the sign's relative location" and the law was therefore akin to a content-neutral time, place, or manner restriction.158
Just as a facially neutral law may be content-based, such a law may be viewpoint-based if the law
favors or disfavors a particular point of view in practice. A law that is aimed at particular
speakers, the purpose of which is to suppress a viewpoint associated with those speakers, may be
viewpoint-based even if the law makes no reference to particular viewpoints.159 Similarly, a law
that excepts some speakers from its application may suggest that the law is targeting particular
viewpoints.160
Whether Hosting or Promoting Third-Party Speech is Protected Speech
As discussed above, if Congress were to amend Section 230 by creating exceptions for certain
algorithmic operations, ICS providers would not necessarily be liable for those algorithmic
operations.161 Critically, even if a plaintiff could state a claim under some other source of law, the
First Amendment might limit liability for protected speech activity. Thus, the First Amendment
might provide some immunity even if Section 230 no longer applies.
A number of lower courts have held that websites' decisions about hosting or presenting third-party content are protected by the First Amendment.162 For example, one trial court concluded that a lawsuit challenging search engine results was barred by the First Amendment.163 The plaintiffs argued that Baidu, a Chinese search engine, had violated federal and state civil rights laws by blocking "from its search results . . . information concerning 'the Democracy movement in China' and related topics."164 The trial court said these allegations would "hold Baidu liable for, and thus punish Baidu for, a conscious decision to design its search-engine algorithms to favor certain expression on core political subjects."165 In the court's view, allowing "such a suit to
157 City of Austin v. Reagan Nat'l Advert. of Austin, LLC, 142 S. Ct. 1464, 1472–73 (2022).
158 Id. at 1473. For more background on City of Austin and the Supreme Court's content-based and content-neutral jurisprudence, see CRS In Focus IF12308, Free Speech: When and Why Content-Based Laws Are Presumptively Unconstitutional, by Victoria L. Killion.
159 See Sorrell v. IMS Health, Inc., 564 U.S. 552, 565 (2011) (holding that a law targeted at pharmaceutical marketing professionals was viewpoint-based).
160 See Brown v. Ent. Merchs. Ass'n, 564 U.S. 786, 802 (2011) (holding that a law prohibiting violent video game sales to minors "singled out the purveyors of video games for disfavored treatment," which suggested that the government may be disfavoring particular viewpoints).
161 See supra "Supreme Court Decisions and Section 230."
162 See, e.g., O'Handley v. Padilla, 579 F. Supp. 3d 1163, 1186–87 (N.D. Cal. 2022) (concluding Twitter's decisions "about what content to include, exclude, moderate, filter, label, restrict, or promote . . . are protected by the First Amendment" and collecting cases with similar holdings), aff'd on other grounds sub nom. O'Handley v. Weber, 62 F.4th 1145 (9th Cir. 2023).
163 Zhang v. Baidu.com, Inc., 10 F. Supp. 3d 433, 435 (S.D.N.Y. 2014).
164 Id. at 434–35.
165 Id. at 440.
proceed would plainly 'violate[] the fundamental rule of protection under the First Amendment, that a speaker has the autonomy to choose the content of his own message.'"166
These lower court rulings have built on Supreme Court cases recognizing that "when a private entity provides a forum for speech, . . . . [t]he private entity may . . . exercise editorial discretion over the speech and speakers in the forum."167 This constitutionally protected "editorial discretion" entails the right to choose what material to host and how to present it.168 While the Supreme Court has not specifically weighed in on how protections for editorial discretion apply to online platforms for third-party speech, the Court has the opportunity to consider the question in two cases being argued in the Court's October 2023 term.169 Although both cases dispute platforms' ability to restrict third-party speech, any Supreme Court discussion shedding light on the general right of editorial control could nonetheless be relevant to determining protections for hosting or promoting third-party speech.
The cases involve conflicting rulings from the Fifth and Eleventh Circuits on state laws that limit
platforms’ ability to moderate user content.170 The Eleventh Circuit, in line with the trial court
rulings mentioned above, concluded that when social media platforms “‘disclos[e],’ ‘publish[],’ or
‘disseminat[e]’ information, they engage in ‘speech within the meaning of the First
Amendment.’”171 The court accordingly ruled unconstitutional portions of a Florida law that
unduly “restrict[ed] platforms’ ability to speak through content moderation.”172 In contrast, the
Fifth Circuit upheld a Texas law that, in the court’s description, “generally prohibits large social
media platforms from censoring speech based on the viewpoint of its speaker.”173 That court
concluded that social media platforms “exercise virtually no editorial control or judgment,” and
further ruled that “the Supreme Court’s cases do not carve out ‘editorial discretion’ as a special
category of First-Amendment-protected expression.”174
As mentioned, the Supreme Court has granted certiorari in both cases.175 At least until the
Supreme Court weighs in, lower courts outside the Fifth Circuit may be more likely to conclude
166 Id. (quoting Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Boston, 515 U.S. 557, 573 (1995)). See also, e.g., Search King, Inc. v. Google Tech., Inc., No. CIV-02-1457-M, 2003 U.S. Dist. LEXIS 27193, at *12 (W.D. Okla. May 27, 2003) (holding that Google's PageRanks "are constitutionally protected opinions").
167 Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1930 (2019). For more information on how the Supreme Court's cases on compelled speech and editorial discretion have been applied in current disputes over businesses' rights of editorial control, see CRS Report WPD00041, State Non-Discrimination Laws and the First Amendment, by David Gunter and Valerie C. Brannon.
168 Miami Herald Publ'g Co. v. Tornillo, 418 U.S. 241, 258 (1974) ("The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment.").
169 For more information on the background of these cases, see CRS Legal Sidebar LSB10748, Free Speech Challenges to Florida and Texas Social Media Laws, by Valerie C. Brannon.
170 NetChoice, LLC v. Att'y Gen., Fla., 34 F.4th 1196 (11th Cir. 2022); NetChoice, LLC v. Paxton, 49 F.4th 439 (5th Cir. 2022).
171 NetChoice, LLC, 34 F.4th at 1210 (quoting Sorrell v. IMS Health Inc., 564 U.S. 552, 570 (2011)).
172 Id. at 1210, 1232.
173 NetChoice, LLC, 49 F.4th at 444.
174 Id. at 459, 463.
175 Moody v. NetChoice, LLC, No. 22-277 (U.S. Sept. 29, 2023); NetChoice, LLC v. Paxton, No. 22-555 (U.S. Sept. 29, 2023).
that at least some sorting and promotion of speech qualifies as constitutionally protected editorial activity.176
Even if a platform's editorial control over third-party content qualifies as speech under the First Amendment, another unresolved question is whether certain algorithmic processes might fall outside this First Amendment protection. The district court in Zhang, discussed above, held that a search engine's use of algorithms that suppressed certain content was protected First Amendment activity, but the claims in Zhang arose from the search engine's "conscious decision" to disfavor a particular political message.177 Another district court confronting these issues suggested that search engine rankings would be protected First Amendment activity "no matter the motive."178 As discussed above, algorithms can generate a range of outputs, some more communicative than others.179 Where on the spectrum of expressiveness a particular algorithm falls may inform a court's assessment of how the First Amendment applies.180 Platforms may use algorithms to arrange content without consciously deciding to communicate a particular message. For example, a platform may choose to arrange content in a way that maximizes user engagement with the platform.181 The limited caselaw on how the First Amendment applies to algorithms suggests that using algorithms to arrange or rank speech for any reason may be sufficient to warrant First Amendment protection.182
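As a purely illustrative aside, the following minimal sketch shows, in simplified form, the kind of engagement-driven arrangement described in the preceding paragraph: content is ordered by a predicted-engagement score rather than by any intended message. The item fields, weights, and scoring function are hypothetical and are not drawn from any platform, case, or bill discussed in this report.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A piece of third-party content a platform might display (hypothetical fields)."""
    item_id: str
    predicted_click_rate: float   # hypothetical model estimate in [0, 1]
    predicted_share_rate: float   # hypothetical model estimate in [0, 1]

def engagement_score(item: Item, click_weight: float = 0.7, share_weight: float = 0.3) -> float:
    """Combine predicted interactions into a single engagement score.

    The weights are illustrative placeholders; a production system would tune
    or learn them and would draw on many more behavioral signals.
    """
    return click_weight * item.predicted_click_rate + share_weight * item.predicted_share_rate

def rank_feed(items: list[Item]) -> list[Item]:
    """Order content so the highest predicted engagement appears first.

    Nothing in this ordering refers to the topic or viewpoint of the content;
    it depends only on predicted user behavior.
    """
    return sorted(items, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Item("post-a", predicted_click_rate=0.10, predicted_share_rate=0.40),
        Item("post-b", predicted_click_rate=0.30, predicted_share_rate=0.05),
        Item("post-c", predicted_click_rate=0.20, predicted_share_rate=0.20),
    ]
    for item in rank_feed(feed):
        print(item.item_id, round(engagement_score(item), 3))
```

In this sketch, the ranking turns entirely on behavioral predictions rather than on any subject matter, which is the kind of arrangement courts would have to place on the "spectrum of expressiveness" discussed above.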
Whether Withholding Section 230 Protection Restricts Speech
One question in a First Amendment analysis is whether the government has infringed on
protected speech at all. Section 230 grants a statutory protection against liability for publishing
third-party speech. Accordingly, it does not restrict that speech, and arguably makes publishing that speech less burdensome in terms of legal exposure. In certain contexts the Supreme Court has recognized that the government does not violate the First Amendment simply by giving a benefit to certain speakers and "declining to subsidize [others'] First Amendment activities."183 Under this theory, Congress's decision to extend immunity only to certain speakers might not implicate significant First Amendment concerns.
Nonetheless, amendments to Section 230 that extend its protection in some instances, but not
others, likely could still implicate free speech concerns.184 Certain proposals that grant immunity
176 See, e.g., Zhang v. Baidu.com, Inc., 10 F. Supp. 3d 433, 438–39 (S.D.N.Y. 2014) (ruling that even though search engine results "may be produced algorithmically," the First Amendment still protects the judgments of the company's engineers, encoded in the algorithms, about how to select and arrange third-party speech).
177 Id. at 440.
178 e-Ventures Worldwide, LLC v. Google, Inc., No. 14-646, 2017 WL 2210029, at *4 (M.D. Fla. Feb. 8, 2017).
179 See supra "Background."
180 Cf. U.S. Telecom Ass'n v. FCC, 825 F.3d 674, 742 (D.C. Cir. 2016) (holding that broadband providers are "engaged in indiscriminate, neutral transmission" and therefore do not exercise editorial discretion). Courts in other contexts have held that computer code alone may be sufficiently expressive to warrant some First Amendment protection regardless of its function. See Junger v. Daley, 209 F.3d 481, 485 (6th Cir. 2000); Universal City Studios, Inc. v. Corley, 273 F.3d 429, 445–48 (2d Cir. 2001).
181 See generally Arvind Narayanan, Understanding Social Media Recommendation Algorithms, KNIGHT FIRST AMEND. INST. (Mar. 9, 2023), https://knightcolumbia.org/content/understanding-social-media-recommendation-algorithms ("the primary objective of almost every recommendation on social media platforms is to rank the available content according to how likely it is that the user in question will engage with it").
182 E.g., e-Ventures, 2017 WL 2210029, at *4.
183 Regan v. Taxation with Representation of Wash., 461 U.S. 540, 548 (1983).
184 For more discussion of this issue, see CRS Report R46751, Section 230: An Overview, by Valerie C. Brannon and Eric N. Holmes.
only to particular speakers could transgress a core First Amendment principle that the government
“may not deny a benefit to a person on a basis that infringes . . . his interest in freedom of
speech.”185 Section 230 could be seen to implicate the First Amendment to the extent it denies
immunity to those “who engage in certain forms of speech.”186 Certain types of Section 230
amendments could raise the concern that Congress has “discriminate[d] invidiously in its
subsidies in such a way as to ‘[aim] at the suppression of dangerous ideas.’”187 These “ideas”
could include a platform’s choices about what speech to recommend, as discussed above, as well
as a user’s exercise of speech if a proposal encourages platforms to remove content they
otherwise would not remove.188
Several Supreme Court cases suggest that government may sometimes impose content- or even
viewpoint-based conditions on government benefits.189 A more recent Supreme Court case
suggests that these precedents might not extend to a non-monetary government benefit such as
Section 230’s liability protections.190 In
Matal v. Tam, the Supreme Court struck down a provision
of a federal trademark statute prohibiting the registration of certain “disparag[ing]” marks.191 The
Court held that this provision violated the First Amendment because it was impermissibly
viewpoint-based.192 In defending the provision, the government argued that trademark registration
is a benefit provided by the government and that the government “is not required to subsidize
activities that it does not wish to promote.”193 A plurality of the Court rejected the analogy to the
line of cases allowing viewpoint-based distinctions when providing government benefits,194
reasoning that those cases “all involved cash subsidies or their equivalent.”195 Trademark
registration is different, the plurality reasoned, because it does not involve the payment of money
185 Perry v. Sindermann, 408 U.S. 593, 597 (1972); Speiser v. Randall, 357 U.S. 513, 518 (1958) ("It cannot be gainsaid that a discriminatory denial of a tax exemption for engaging in speech is a limitation on free speech. . . . To deny an exemption to claimants who engage in certain forms of speech is in effect to penalize them for such speech. Its deterrent effect is the same as if the State were to fine them for this speech."). See generally Cong. Rsch. Serv., Overview of Unconstitutional Conditions Doctrine, CONSTITUTION ANNOTATED, https://constitution.congress.gov/browse/essay/amdt1-2-11-2-2-1/ALDE_00000771/ (last visited Oct. 11, 2023).
186 Speiser v. Randall, 357 U.S. 513, 518 (1958).
187 Regan, 461 U.S. at 548 (quoting Cammarano v. United States, 358 U.S. 498, 513 (1959)) (second alteration in original).
188 See, e.g., Zeran v. Am. Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997) (noting that Section 230 was passed to address the danger of a provider "choos[ing] to severely restrict the number and type of messages posted"); Derek E. Bambauer, How Section 230 Reform Endangers Internet Free Speech, BROOKINGS (July 1, 2020), https://www.brookings.edu/techstream/how-section-230-reform-endangers-internet-free-speech/; Adam Thierer & Neil Alan Chilson, FCC's O'Rielly on First Amendment & Fairness Doctrine Dangers, FEDERALIST SOC'Y (Aug. 6, 2020), https://fedsoc.org/commentary/fedsoc-blog/fcc-s-o-rielly-on-first-amendment-fairness-doctrine-dangers.
189 See, e.g., Rust v. Sullivan, 500 U.S. 173, 194–95 (1991) (holding that prohibition on funding recipients from engaging in abortion-related advocacy did not violate First Amendment). But see Legal Servs. Corp. v. Velazquez, 531 U.S. 533, 541–43 (2001) (holding that prohibition on funding recipients from challenging validity of existing welfare laws violated the First Amendment).
190 See Matal v. Tam, 582 U.S. 218, 241 (2017) (plurality opinion).
191 Id. at 227 (plurality opinion) (quoting 15 U.S.C. § 1052(a)).
192 Id. at 247 (majority opinion); see also id. (Kennedy, J., concurring). As discussed supra, viewpoint-based restrictions on speech are categorically unconstitutional. See supra note 150 and accompanying text. The Supreme Court thus "le[ft] open" whether the Court's content-based speech jurisprudence would apply to free speech challenges to trademark registration. Tam, 582 U.S. at 244 n.16.
193 Id. at 240 (plurality opinion).
194 See generally Cong. Rsch. Serv., Conditions on Federal Funding, CONSTITUTION ANNOTATED, https://constitution.congress.gov/browse/essay/amdt1-2-11-2-2-4-1/ALDE_00001276/ (last visited Oct. 11, 2023).
195 Tam, 582 U.S. at 240 (plurality opinion).
by the government to a private party.196 Those cases could not justify viewpoint discrimination in a "government registration scheme" involving non-monetary benefits.197 The plurality suggested that a more appropriate analogy might be to government "program[s]" or "limited public forums" for private speech where some "content- and speaker-based restrictions are allowed."198 Still, the plurality did not resolve whether these cases provided the correct framework because the trademark provision involved viewpoint discrimination, which is "forbidden" even in such forums.199
As with government subsidies, Section 230 provides a type of benefit to private parties, here in the form of a statutory liability shield. However, as with trademark registration, Section 230's liability protection does not involve payments from the government to private parties. Thus, under the reasoning of Tam, cases authorizing conditions on cash subsidies may not authorize conditions on Section 230: Congress may not be able to claim that conditioning Section 230's protection is necessary to avoid "subsidizing" speech that it does not support. Because the law struck down in Tam was viewpoint-based and not merely content-based, it is unclear how a court would analyze content-based modifications to Section 230's protection, including whether a court might find Section 230's immunity analogous to a "government program" or "limited public forum" where some content-based restrictions may be permitted.200 Tam still makes clear that selectively withholding Section 230's immunity based on speech's content or viewpoint could raise First Amendment concerns even if the law does not directly prohibit speech.201
Content-Based vs. Content-Neutral Speech Regulations
Assuming that Section 230’s protections are analogous to neither a monetary subsidy nor a
limited public forum, another question is whether an exception to Section 230—such as one that
prevents service providers from using Section 230 when they have recommended content using
an algorithm—would be considered a content-based or content-neutral regulation of speech. As
discussed above, content-based laws are rarely constitutional, whereas content-neutral laws are
subject to a lower standard of judicial scrutiny.202 While “intermediate scrutiny” is easier to
satisfy than strict scrutiny, courts may still strike down a content-neutral law if the law burdens
more speech than is necessary to achieve its legislative purpose.203
A law that removes Section 230’s protection for certain types of content, but not others, could be
subject to challenges in court as a content-based restriction on speech.204 Some proposals from the
117th Congress would have taken this approach, such as a bill that would have made Section 230’s
196 Id.
197 Id.
198 Id. at 244 & n.16.
199 Id.
200 See supra note 198 and accompanying text.
201 See Tam, 582 U.S. at 223; see also United States v. Playboy Enter. Grp., 529 U.S. 803, 809, 827 (2000) (holding that federal statute restricting the availability of "sexually explicit [cable] channel[s]" discriminated on the basis of content and was unconstitutional); Sorrell v. IMS Health Inc., 564 U.S. 552, 566 (2011) ("Lawmakers may no more silence unwanted speech by burdening its utterance than by censoring its content.").
202 See supra "Overview of Free Speech Principles."
203 See, e.g., Packingham v. North Carolina, 582 U.S. 98, 105–106 (2017) (holding that a state law prohibiting sex offenders from accessing social media websites violates the First Amendment "[e]ven making the assumption that the statute is content neutral").
204 Cf. Playboy, 529 U.S. at 811 (holding that law requiring operators of cable television channels "primarily dedicated to sexually-oriented programming" to scramble channels and limit transmission times was a content-based burden on speech).
protections unavailable for a website that uses algorithms to promote "health misinformation."205 Several proposals from the 117th Congress would have removed Section 230's protection for all claims when a provider uses an algorithm to recommend or amplify content.206 Some commentators have suggested that this approach would more likely be assessed by reviewing courts as "content-neutral" and therefore subject to a more relaxed standard of constitutional scrutiny.207
Although some modifications to Section 230's immunity regime based only on the use of algorithmic recommendation or amplification may be content-neutral with respect to user content, a separate question is whether such a change would be content-neutral with respect to a service provider's speech. As discussed above, courts have generally held that a platform's ranking choices, such as the ordering of search results, are "speech" protected by the First Amendment.208 Determining whether this "speech" has recommended or amplified third-party content requires reference to the content of the platform's ranking choice and therefore may be content-based. As the Supreme Court suggested in City of Austin, a law is content-based when its application depends on the "substantive message" of the material regulated.209 Courts might determine that a law regulating recommendation systems, but not imposing any subject-matter or viewpoint restrictions on the material recommended, does not turn on any substantive message and may be assessed as a content-neutral law under City of Austin.
A similar consideration is whether a law withholding Section 230's protections for algorithmically recommended speech might be viewpoint-based. Even facially neutral laws may be viewpoint-based if they discriminate between viewpoints in operation or are motivated by a discriminatory purpose.210 A law that affects "recommendations" by algorithm but not processes that reduce the visibility of content may also be viewpoint-based because it would penalize messages in support of recommended content, but not messages that disfavor that content.211 A counterargument may be that a law that applies to all algorithmic recommendations, irrespective of the content recommended, does not disfavor any particular viewpoint.212
The Supreme Court has struck down laws that, in the Court's view, target particular speakers for differential treatment, essentially treating these laws as content-based burdens on speech subject to strict constitutional scrutiny.213 Courts may also ask whether a law targeting particular speakers (such as a subset of platforms, like social media platforms) is aimed at suppressing particular viewpoints.214 Challengers to Florida's and Texas's social media laws, discussed above,215 made these arguments, though both the Eleventh and Fifth Circuits rejected the
205 S. 2448, 117th Cong. (2021).
206 See supra Table 1.
207 See Keller, supra note 128, at 254–55.
208 See supra "Whether Hosting or Promoting Third-Party Speech is Protected Speech." For a detailed discussion of the application of First Amendment doctrine to online platforms, see CRS Report R45650, Free Speech and the Regulation of Social Media Content, by Valerie C. Brannon.
209 City of Austin v. Reagan Nat'l Advert. of Austin, LLC, 142 S. Ct. 1464, 1472 (2022).
210 See Sorrell v. IMS Health, Inc., 564 U.S. 552, 565 (2011).
211 Cf. Matal v. Tam, 582 U.S. 218, 249 (2017) (Kennedy, J., concurring) (suggesting that a law expressing a preference for "positive or benign" messages over "derogatory" messages is "the essence of viewpoint discrimination").
212 But see id. ("To prohibit all sides from criticizing their opponents makes a law more viewpoint based, not less so.").
213 See Grosjean v. Am. Press Co., 297 U.S. 233, 250–51 (1936) (voiding a tax aimed at newspapers with a certain number of subscribers); Minneapolis Star & Tribune Co. v. Minn. Comm'r of Revenue, 460 U.S. 575, 592–93 (1983) (invalidating state tax on paper and ink that fell mostly on a small group of newspapers).
214 See Sorrell, 564 U.S. at 565.
215 See supra note 169 and accompanying text.
arguments.216 Platforms might also argue that a law that discourages platforms from ordering results or arranging content based on an automated process, but that does not discourage ordering results based on direct human input, discriminates against speakers that use automated processes and is therefore akin to a content-based law.217 In Turner Broadcasting System v. FCC, the Supreme Court declined to treat a law targeted at cable television operators as content-based, reasoning that the law at issue distinguished among speakers "based only upon the manner in which speakers transmit their messages . . . and not upon the messages they carry."218 Courts might apply this reasoning to any legal provision aimed at algorithmic amplification to determine that such a law is subject only to intermediate scrutiny.
Key Takeaways
Platform liability for recommendation systems is an emerging issue in the American legal system.
Federal appellate courts that have addressed the issue have unanimously held that platforms may
not be held liable for algorithmically recommending third-party content, based on the robust
protections provided by Section 230. The caselaw interpreting Section 230 extends back to the
time of the statute’s original drafting and interprets statutory language that remains largely
unchanged since 1996. If Congress wishes to amend Section 230 to directly address a platform’s
liability for recommendation algorithms, questions may emerge about how to reconcile Section
230 caselaw with any new language. Amendments to Section 230 might be subject to
constitutional challenges, and platforms might also argue that their recommendation choices are
protected by the First Amendment.
Author Information
Eric N. Holmes
Attorney-Advisor
216 See NetChoice, LLC v. Att'y Gen., Fla., 34 F.4th 1196, 1224–25 (11th Cir. 2022); NetChoice, LLC v. Paxton, 49 F.4th 439, 480–82 (5th Cir. 2022).
217 See Citizens United v. Fed. Election Comm'n, 558 U.S. 310, 340–41 (2010) (observing that laws that "identif[y] certain preferred speakers" are constitutionally suspect).
218 Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 641 (1994).
Disclaimer
This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan
shared staff to congressional committees and Members of Congress. It operates solely at the behest of and
under the direction of Congress. Information in a CRS Report should not be relied upon for purposes other
than public understanding of information that has been provided by CRS to Members of Congress in
connection with CRS’s institutional role. CRS Reports, as a work of the United States Government, are not
subject to copyright protection in the United States. Any CRS Report may be reproduced and distributed in
its entirety without permission from CRS. However, as a CRS Report may include copyrighted images or
material from a third party, you may need to obtain the permission of the copyright holder if you wish to
copy or otherwise use copyrighted material.
Congressional Research Service
R47753 · VERSION 1 · NEW