Congressional Research Service
https://crsreports.congress.gov
R46662
Social media platforms disseminate information quickly to billions of global users. One of the main features of social media is the primacy of user-generated content: users can act as producers and consumers of content. Users can create individual profiles; post text, images, or videos; and interact with content by commenting on, reacting to, and sharing it with others. Thus, social media platforms benefit from network effects—an increase in the number of users of a platform increases its perceived value for users.
Social media operators (i.e., companies that operate social media platforms) have economic incentives to increase the number of users on their platforms and to increase user engagement, such as clicking links or commenting on posts. Most operators do not charge users to establish accounts and use at least portions of the platform. Instead, these operators rely on revenue from online advertising (ads). Operators may be able to increase their online advertising revenue by incentivizing users to spend more time on the platform. By increasing user engagement with content, operators can collect more data about each user and offer personalized ads.
Social media operators disseminate and moderate content on their platforms to enhance user engagement, expand their active user base, strengthen their network effects, and increase their revenue through online advertising. Operators manage and distribute the continuous influx of user-generated content through their network structure and algorithms. Users can establish connections to other users of the platform, creating social networks or communities that can be based on common interests, relationships that exist offline, employment, or other factors. While some platforms prioritize content from a user’s network connections, they also typically use algorithms to prioritize content based on its potential relevance to the user’s interests, regardless of whether the content was generated by someone in the user’s network.
Algorithms identify and filter content that violates social media platforms’ policies. Operators balance the goals of prioritizing content that increases user engagement and moderating content that violates their policies, such as content that may be illegal, harmful, or objectionable, including child sexual abuse material, content that may incite violence, misinformation, and spam. Algorithms also prioritize content on social media platforms based on users’ online behavior, such as content that is clicked on or shared with other users.
Operators may choose to moderate content differently across platforms; there is no uniform standard for content moderation. Content that violates social media platforms’ policies is identified by users and automated systems, such as algorithms and machine learning techniques. Some of the content is subsequently reviewed by human content moderators. Automated systems can quickly review large volumes of content but might not always remove content in accordance with stated policies. Some operators have altered their content moderation practices in efforts to balance trade-offs between free expression and removing objectionable content that might be harmful.
Some Members of Congress have considered addressing concerns related to social media platforms’ content moderation practices. Some bills would have incentivized platforms to moderate content and prevent the spread of harmful content, misinformation, or other objectionable content. Other bills would have discouraged or prevented platforms from certain forms of content moderation. Introduced legislation has largely focused on Section 230 of the Communications Act of 1934 (47 U.S.C. §230). Section 230 protects interactive computer service providers and users from liability for publishing—and in some instances, restricting access to or availability of—another user’s content.
Congress might consider various options to address content moderation practices on social media platforms. Congress might choose to take no action, in which case social media operators may continue to voluntarily adjust their algorithms and content moderation practices. Options to address content moderation practices could include urging operators to implement changes (e.g., by holding hearings or sending letters to operators), which may or may not lead operators to implement changes sufficient to address congressional concerns. Legislative actions could include amending Section 230, requiring operators to increase transparency about their content moderation practices, regulating operators’ content moderation practices, or implementing federal advisory or regulatory oversight of social media platforms. Any legislative efforts might raise a range of legal, social, and economic considerations.
January 8, 2025
Clare Y. Cho, Specialist in Industrial Organization and Business Policy
Ling Zhu, Analyst in Telecommunications Policy
Social Media: Content Dissemination and Moderation Practices
Contents

Introduction
Overview of Social Media
    U.S. Social Media Use
    Social Media Revenue: Online Advertising
Content Dissemination and Moderation
    Social Media Network Structure
    Algorithmic Filtering and Prioritization
    Content Moderation
Context for Congressional Consideration
    Section 230
        Federal Proposals to Amend Section 230
Policy Considerations for Congress
    Potential Options for Congress
    Concluding Thoughts

Figures
Figure 1. Social Media Users in the United States by Platform
Figure 2. Social Media Advertising Revenue in the United States

Tables
Table A-1. Selected Legislation Related to Content Moderation Practices of Social Media Platforms

Appendixes
Appendix. Legislation Related to Content Moderation Practices of Social Media Platforms

Author Information
Social media platforms have become major channels for the dissemination, exchange, and circulation of information to billions of users around the world over the internet. For years, Congress has been concerned with the use of the internet to host, distribute, and exchange content that may be illegal, harmful, or objectionable, including child sexual abuse material, content that may incite violence, and foreign propaganda. Attention has often focused on social media platforms’ ability to disseminate information quickly and widely, their use of algorithms to identify and amplify content that is likely to generate high levels of user engagement, and their practice of restricting certain content.1
Social media platforms have received scrutiny for their content moderation practices, specifically for removing certain content and allowing harmful content to spread. For example, some policymakers have expressed concern about censorship of conservative viewpoints,2 while others have expressed concern about the spread of misinformation and material harmful to minors.3 Some studies and internal documents suggest that some minors, particularly girls, may be harmed by using social media platforms, although others may benefit from using the platforms.4
Some Members of Congress have considered addressing concerns related to social media platforms’ content moderation practices. Some bills would have incentivized platforms to moderate content and prevent the spread of misinformation and other harmful or objectionable content.5 Other bills would have discouraged or prevented platforms from certain forms of content moderation.6 Introduced legislation has largely focused on Section 230 of the Communications Act of 1934 (47 U.S.C. §230, hereinafter Section 230), enacted as part of the Communications
1 Algorithms are computer processes that set rules for the data social media platforms receive. They help operators sort and prioritize content and can be used to tailor what a user sees at a particular time.
2 For example, see U.S. Congress, House Committee on Energy and Commerce, Subcommittee on Communications and Technology, Preserving Free Speech and Reining in Big Tech Censorship, hearing, 118th Cong., 1st sess., March 28, 2023, https://docs.house.gov/Committee/Calendar/ByEvent.aspx?EventID=115561.
3 For example, see U.S. Congress, House Committee on Energy and Commerce, Subcommittee on Communications and Technology, Fanning the Flames: Disinformation and Extremism in the Media, hearing, 117th Cong., 2nd sess., February 24, 2021, https://docs.house.gov/Committee/Calendar/ByEvent.aspx?EventID=111229; and U.S. Congress, Senate Committee on the Judiciary, Big Tech and the Online Child Sexual Exploitation Crisis, hearing, 118th Cong., 2nd sess., January 31, 2024, https://www.judiciary.senate.gov/committee-activity/hearings/big-tech-and-the-online-child- sexual-exploitation-crisis.
4 For example, see National Academies of Sciences, Engineering, and Medicine, Social Media and Adolescent Health (Washington, DC: The National Academies Press, 2024), https://doi.org/10.17226/27396; and Georgia Wells et al., “Facebook Knows Instagram is Toxic for Teen Girls, Company Documents Show,” Wall Street Journal, September 14, 2021, https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show- 11631620739.
5 For example, see CASE-IT Act (H.R. 573, 118th Congress). In this report, “misinformation” refers to incorrect or inaccurate information, regardless of its origin or the intent of the individual who disseminates it. Others sometimes use misinformation to mean incorrect or inaccurate information spread by someone believing it to be true, as distinct from disinformation, a term reserved for false information deliberately spread to gain some advantage. For additional information on the definitions of misinformation and disinformation, see CRS In Focus IF10771, Defense Primer: Operations in the Information Environment, by Catherine A. Theohary; and Caroline Jack, Lexicon of Lies: Terms for Problematic Information, Data & Society Research Institute, August 9, 2017, https://datasociety.net/pubs/oh/ DataAndSociety_LexiconofLies.pdf.
6 For example, see DISCOURSE Act (S. 921, 118th Congress).
Decency Act of 1996.7 Section 230 protects interactive computer service providers,8 including social media platforms, and their users from liability for publishing—and in some instances, restricting access to or availability of—another user’s content.
This report provides an overview of social media platforms and their content moderation practices. It provides a brief overview of social media use and online advertising—currently the main source of revenue for many social media platforms. It also discusses how content is disseminated on the platforms, specifically discussing social media network structures and the use of algorithms to filter and prioritize content, as well as how content is moderated on the platforms. The report concludes with a discussion of Section 230 and potential options for Congress.
In the Trafficking Victims Prevention and Protection Reauthorization Act of 2022,9 Congress defined social media platform as
a website or internet medium that—
(A) permits a person to become a registered user, establish an account, or create a profile for the purpose of allowing users to create, share, and view user-generated content through such an account or profile;
(B) enables 1 or more users to generate content that can be viewed by other users of the medium; and
(C) primarily serves as a medium for users to interact with content generated by other users of the medium.
This definition includes one of the main features of social media—the primacy of user-generated content.10 The definition might also include platforms that host user-generated content but typically are not considered to be social media (e.g., Roblox, an online platform hosting multiplayer video games,11 and Tinder, an online dating app).
Social media users can act as both producers and consumers of online content. They can post text, images, and videos as content producers and may also view, share, or react to others’ content as consumers.12 Users can include individuals, organizations, government agencies, and business
7 47 U.S.C. §230. While this provision is often referred to as “Section 230” of the Communications Decency Act of 1996 (P.L. 104-104), it was enacted as Section 509 of the Telecommunications Act of 1996, which amended Section 230 of the Communications Act of 1934. For more information about Section 230, see CRS In Focus IF12584, Section 230: A Brief Overview, by Peter J. Benson and Valerie C. Brannon.
8 47 U.S.C. §230(f)(2) defines an interactive computer service as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”
9 42 U.S.C. §1862w; P.L. 117-348, Title I, §124.
10 Jonathan Obar and Steve Wildman, “Social Media Definition and the Governance Challenge: An Introduction to the Special Issue,” Telecommunications Policy, vol. 39, no. 9 (2015), pp. 745-750, https://doi.org/10.1016/ j.telpol.2015.07.014 (hereinafter Obar and Wildman, “Social Media Definition and the Governance Challenge,” 2015).
11 Fortnite and Roblox are also considered to be “proto-metaverses” (e.g., Edd Gent, “What Can the Metaverse Learn from Second Life?,” IEEE Spectrum, November 29, 2021, https://spectrum.ieee.org/metaverse-second-life). For more information about the metaverse, see CRS Report R47224, The Metaverse: Concepts and Issues for Congress, by Ling Zhu.
12 Users can react to content by commenting on it or by “liking” it, indicating that the user supports or “likes” the post. (continued...)
entities, including traditional news media (e.g., Washington Post, Fox News, and New York Times). A 2024 Pew Research Center survey found that 25% of U.S. adult respondents often get their news from social media and an additional 29% sometimes get their news from social media; 46% of U.S. adult respondents rarely or never get their news from social media.13
Users typically access social media platforms through websites and mobile apps. Social media operators—that is, companies that operate social media platforms—host user-generated content on their platforms and “organize it, make it searchable, and [ ... ] algorithmically select some subset of it to deliver as front-page offerings, news feeds, subscribed channels, or personalized recommendations.”14 Many social media platforms enable connections to other sites and apps and allow third-party developers to build apps and services that integrate with platforms. This practice could provide third parties access to some user data and potentially increase traffic between a platform and third-party websites.15
Social media platforms benefit from network effects; that is, an increasing number of users increases the value of a platform as perceived by users.16 This means that as the number of active users on the platform increases, existing users are more willing to stay on the platform, and more individuals are willing to start using it. Many operators strive to achieve network effects, which often results in one or a small number of operators gaining a competitive advantage. Some experts argue that when network effects are present, “they are among the most important reasons” users will pick one platform over another.17
A social media platform can strengthen its network effects by facilitating the exchange of information and user engagement. Expanding the number of users increases the number of possible connections between users and content recommendations, which can encourage more individuals to join and provide more opportunities to deliver advertisements (ads) that generate revenue for the operator. A greater number of users might also result in more user-generated content that can be shared with other users. A user can have accounts with multiple social media platforms, which means increased usage of one platform may reduce the amount of time the user spends on another, although some users may use different platforms for different purposes.
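As a rough, hypothetical illustration of why each additional user can raise a platform’s perceived value, the sketch below counts the potential pairwise connections among users. The quadratic growth is a Metcalfe’s-law-style approximation commonly used to reason about network effects, not a figure drawn from this report.

```python
# Hypothetical illustration of network effects: the number of possible
# user-to-user connections grows roughly with the square of the number of
# users, so each new user adds more potential connections than the last.

def potential_connections(users: int) -> int:
    """Distinct pairs of users who could connect on a platform with `users` users."""
    return users * (users - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} users -> {potential_connections(n):>12,} potential connections")
```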
Operators have economic incentives to increase the number of users and their engagement with social media platforms. Most social media operators do not charge users to establish accounts and access at least portions of the platform. Instead, these operators rely on revenue from ads that are
Some social media sites allow users to express different reactions as well. For example, Facebook allows users to select an emoji (an icon expressing the emotion of the user), including a thumbs-up, smiling face, frowning face, and a heart.
13 Pew Research Center, “Social Media and News Fact Sheet,” September 17, 2024, https://www.pewresearch.org/ journalism/fact-sheet/social-media-and-news-fact-sheet/. For more information on the relationship between newspapers and social media platforms, see CRS Report R47018, Stop the Presses? Newspapers in the Digital Age, by Dana A. Scherer and Clare Y. Cho.
14 Tarleton Gillespie, “Platforms Are Not Intermediaries,” Georgetown Technology Law Review, vol. 2, no. 2 (2018), pp. 198-216, https://georgetownlawtechreview.org/wp-content/uploads/2018/07/2.2-Gilespie-pp-198-216.pdf.
15 L. DeNardis and A.M. Hackl, “Internet Governance by Social Media Platforms,” Telecommunications Policy, vol. 39, no. 9 (October 2015), pp. 761-770, https://doi.org/10.1016/j.telpol.2015.04.003; Tarleton Gillespie, “The Politics of ‘Platforms,’” New Media & Society, vol. 12, no. 3 (May 1, 2010), pp. 347-364, https://doi.org/10.1177/ 1461444809342738; and Anne Helmond, “The Platformization of the Web: Making Web Data Platform Ready,” Social Media + Society, July 2015, https://doi.org/10.1177/2056305115603080.
16 Arun Sundararajan, “Network Effects,” author’s website, New York University (NYU) Stern School of Business, http://oz.stern.nyu.edu/io/network.html. For more information on the evolution of online content and the characteristics of online platforms, see “Online Platform Concepts and Characteristics” in CRS Report R47662, Defining and Regulating Online Platforms, coordinated by Clare Y. Cho.
17 John Gallaugher, Information Systems: A Manager’s Guide to Harnessing Technology, 10th ed. (Boston, MA: FlatWorld, 2024), p. 329.
targeted to certain users based on a user’s data, as discussed under “Social Media Revenue: Online Advertising.”
Some operators offer their platforms or additional features on their platforms through subscription services. For example, X (formerly Twitter) offers three paid subscription options: (1) Basic ($3/month), which includes features such as allowing users to edit posts and upload longer posts and videos; (2) Premium ($8/month), which includes the Basic features in addition to others, such as placing a checkmark next to the user’s name and showing fewer ads; and (3) Premium+ ($16/month), which includes the Premium features in addition to others, such as providing the largest reply prioritization and showing no ads, although occasional promoted content may appear.18 In November 2024, Meta Platforms announced that users of its social media platforms—Facebook and Instagram—who are located in the European Union would have the option to choose between a paid subscription “for an ad-free experience” and continued access to the platforms with personalized ads at no cost.19
The majority of Americans use social media, according to estimates from various firms. In May 2024, the market research firm eMarketer forecast that about 232 million Americans (about 68% of the U.S. population) would use social media during 2024.20 The firm estimated that of those users, about 178 million Americans (52%) would use Facebook, 143 million (42%) would use Instagram, and 112 million (33%) would use TikTok (Figure 1).21 A 2024 Pew Research Center survey of 5,626 U.S. adults revealed that 85% of respondents reportedly use YouTube, 70% use Facebook, 50% use Instagram, and 36% use Pinterest.22 The results showed that platform usage varied based on the user’s age. For example, a greater percentage of U.S. adults ages 30-49 reported using Facebook (78%) compared with adults ages 18-29 (68%), 50-64 (70%), and over 65 (59%).23 In contrast, a greater percentage of U.S. adults ages 18-29 reported using Instagram (76%) compared with adults ages 30-49 (66%), 50-64 (36%), and over 65 (19%).24
18 X, “About X Premium,” X Help Center, https://help.x.com/en/using-x/x-premium.
19 Meta Platforms, “Facebook and Instagram to Offer Subscription for No Ads in Europe,” Meta Newsroom, November 12, 2024, https://about.fb.com/news/2024/11/facebook-and-instagram-to-offer-subscription-for-no-ads-in-europe/.
20 EMarketer Forecast, “Social Network Users, US,” May 2024.
21 The percentages were calculated by CRS using the number and percentage of U.S. social media users reported by eMarketer. Specifically, based on eMarketer’s estimate that 232,149,715 U.S. social media users made up 67.92% of the U.S. population in 2024, CRS determined that eMarketer estimates the U.S. population in 2024 to be 341,798,756. CRS used this value to estimate the percentages for each platform.
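The arithmetic described in this footnote can be reproduced directly; the short sketch below is illustrative only and uses the approximate platform user counts cited in the text.

```python
# Reproducing the CRS calculation described in footnote 21: the implied U.S.
# population follows from eMarketer's user count and population share, and
# platform percentages are computed against that population.

social_media_users = 232_149_715   # eMarketer estimate of U.S. social media users, 2024
share_of_population = 0.6792       # reported as 67.92% of the U.S. population

implied_population = social_media_users / share_of_population
print(f"Implied U.S. population estimate: {implied_population:,.0f}")  # ~341.8 million

platform_users = {"Facebook": 178_000_000, "Instagram": 143_000_000, "TikTok": 112_000_000}
for platform, users in platform_users.items():
    print(f"{platform}: about {users / implied_population:.0%} of the U.S. population")
```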
22 Pew Research Center, “Social Media Fact Sheet,” November 13, 2024, https://www.pewresearch.org/internet/fact- sheet/social-media/.
23 Ibid.
24 Ibid.
Figure 1. Social Media Users in the United States by Platform
in millions, 2024
Source: eMarketer Forecast, “Social Network Users, by Platform, US,” May 2024. Notes: eMarketer reports that the “estimates are based on the analysis of survey and traffic data from research firms and regulatory agencies; the growth trajectory of major social networks; historical trends; internet and mobile adoption trends; and country-specific demographic and socioeconomic factors.” The estimates indicate the number of “internet users of any age who use social networks via any device at least once a month.”
Some social media operators are publicly traded companies that report estimates for the number of users of their platforms in their annual filings with the Securities and Exchange Commission; examples include the following:
• Meta Platforms, Inc., reported an average of 205 million daily active users and 272 million monthly active users on Facebook or Messenger in the United States and Canada in December 2023.25
• Snap Inc. reported an average of 100 million daily active users on Snapchat in North America during the third quarter of 2024.26
• Pinterest, Inc., reported an average of 99 million monthly active users on its namesake platform in the U.S. and Canada during the third quarter of 2024.27
Companies use different methods to estimate the number of active users; a uniform industry standard does not exist. For example, Meta Platforms reports the number of registered users who visit Facebook or Messenger through a website or mobile app; it does not include duplicate and
25 Meta Platforms, Inc., U.S. Securities and Exchange Commission (SEC) Form 10-K for the year ending December 31, 2023, pp. 67-68. Meta Platforms does not report monthly or daily active users for Facebook and Messenger in its 2024 SEC quarterly reports; it reports estimates for daily active users for its “Family” of products, which includes Facebook, Instagram, Messenger, and WhatsApp.
26 Snap Inc., SEC Form 10-Q for the quarter ending September 30, 2024, p. 29. North America includes Mexico, the Caribbean, and Central America.
27 Pinterest, Inc., SEC Form 10-Q for the quarter ending September 30, 2024, p. 23.
false accounts identified by the user’s data (e.g., identical IP addresses, similar usernames) and behaviors that appear to be inauthentic.28 Snap reports the number of registered users who visit its namesake platform through a website or mobile app; it has implemented technical measures to prevent, detect, and suppress the creation of accounts for malicious purposes but does not estimate the number of these accounts.29 Pinterest reports the number of authenticated users that visit the website, open the mobile app, or interact with one of the Pinterest browser or site extensions, such as the save button.30
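The kinds of signals mentioned above (identical IP addresses, similar usernames) can be illustrated with a simple, hypothetical de-duplication heuristic; operators’ actual methods are proprietary and far more sophisticated, and the thresholds below are arbitrary.

```python
# Hypothetical sketch of duplicate-account signals: same IP address plus a
# very similar username flags a pair of accounts for further review.

from difflib import SequenceMatcher

def possible_duplicate(ip_a: str, ip_b: str, username_a: str, username_b: str) -> bool:
    """Flag two accounts as possible duplicates belonging to the same user."""
    same_ip = ip_a == ip_b
    name_similarity = SequenceMatcher(None, username_a.lower(), username_b.lower()).ratio()
    return same_ip and name_similarity > 0.8

print(possible_duplicate("203.0.113.5", "203.0.113.5", "jane_doe", "jane.doe1"))   # True
print(possible_duplicate("203.0.113.5", "198.51.100.7", "jane_doe", "sportsfan"))  # False
```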
Online advertising has been the primary source of revenue for most social media operators. In 2023, global online advertising provided about 98% ($132 billion) of Meta Platforms’ annual revenue,31 77% ($238 billion) of Alphabet’s,32 and all of Snap’s ($5 billion) and Pinterest’s ($3 billion).33 A report from the Interactive Advertising Bureau, an industry trade association, estimates that total revenue from advertising on social media in the United States increased from $35.6 billion in 2019 to $64.9 billion in 2023 (Figure 2). Based on data provided in the report, social media made up about 29% of U.S. internet advertising revenue in 2023.34 EMarketer estimated that spending on social media ads would be about $90 billion in 2024.35
28 Meta Platforms, Inc., SEC Form 10-K for the year ending December 31, 2023, pp. 5, 67-68.
29 Snap Inc., SEC Form 10-Q for the quarter ending September 30, 2024, pp. 5, 28.
30 Pinterest, Inc., SEC Form 10-Q for the quarter ending September 30, 2024, pp. 7, 23.
31 Meta Platforms, Inc., SEC Form 10-K for the year ending December 31, 2023, p. 75. Other sources of revenue include the sale of consumer hardware products (e.g., Meta’s virtual reality headset), revenue from its WhatsApp Business Platform, and fees from developers using Meta’s payments infrastructure.
32 Alphabet Inc., SEC Form 10-K for the year ending December 31, 2023, p. 35. Alphabet Inc. is the parent company of Google LLC. Other sources of revenue include Google subscriptions, the sale of consumer devices, and Google Cloud.
33 Snap Inc., SEC Form 10-K for the year ending December 31, 2023, pp. 59, 61; and Pinterest, Inc., SEC Form 10-K for the year ending December 31, 2023, pp. 51-52.
34 This estimate was calculated by CRS using the estimate for total internet advertising revenue reported on p. 13 and the estimate for social media advertising revenue reported on p. 20 in Interactive Advertising Bureau, Internet Advertising Revenue Report, April 2024, prepared by PwC, https://www.iab.com/wp-content/uploads/2024/04/ IAB_PwC_Internet_Ad_Revenue_Report_2024.pdf.
35 EMarketer Forecast, “Social Network Ad Spending, US,” November 2024.
Figure 2. Social Media Advertising Revenue in the United States
in billions
Source: Interactive Advertising Bureau, Internet Advertising Revenue Report, April 2024, prepared by PwC, https://www.iab.com/wp-content/uploads/2024/04/IAB_PwC_Internet_Ad_Revenue_Report_2024.pdf, p. 20. Note: Revenue includes advertisements that reach targeted audiences through social media platforms, messaging apps, and social media news feeds. CRS calculated the total for the year by adding together the estimate for the first six months and the estimate for the last six months.
Ads on social media platforms are often displayed as posts, generally distinguishable through labels such as “sponsored.” Social media operators can use various pricing models, including cost-per-click (CPC) and cost-per-impression (CPM) models.36 Many text-based ads are billed under the CPC model—advertisers pay the operator each time a user clicks on the ad. Most graphical display ads are billed under the CPM model—advertisers pay a specific rate for every 1,000 impressions of the ad, that is, every 1,000 times the ad appears on users’ screens, regardless of whether the users click on the ad.
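A brief sketch of the two pricing models described above follows; the rates and volumes are hypothetical and not drawn from any particular platform.

```python
# Illustrative comparison of the CPC and CPM billing models described above.

def cpc_cost(clicks: int, price_per_click: float) -> float:
    """Advertiser cost under a cost-per-click model."""
    return clicks * price_per_click

def cpm_cost(impressions: int, price_per_thousand: float) -> float:
    """Advertiser cost under a cost-per-impression model (billed per 1,000 impressions)."""
    return (impressions / 1_000) * price_per_thousand

# Example: 100,000 impressions with a 1% click-through rate.
impressions = 100_000
clicks = int(impressions * 0.01)
print(f"CPC at $0.50 per click:          ${cpc_cost(clicks, 0.50):,.2f}")        # $500.00
print(f"CPM at $5.00 per 1,000 displays: ${cpm_cost(impressions, 5.00):,.2f}")   # $500.00
```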
To provide users with online ads, operators run instantaneous auctions through services such as Meta Ads and Snapchat Ads. Advertisers provide information such as their budget and target audience; operators provide information such as how many people are expected to view the ad and metrics about the ad’s performance.37 Based on the auction results and user profiles, different users may receive different ads. Targeted advertising has made it possible for advertisers to customize their messages and reach potential consumers more easily and quickly, potentially advertising products differently to different individuals.38 Some advertisers may also partner with “influencers” (i.e., users with a large number of followers) to endorse their products.
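Actual ad auction mechanics are proprietary and weigh many factors (bids, predicted engagement, ad quality); the sketch below shows only the general idea, using hypothetical advertisers and a simple highest-bid rule among ads whose targeting matches the user.

```python
# Minimal, hypothetical sketch of an ad auction: among ads whose targeting
# criteria overlap the user's interests, the highest bid wins the impression.

from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str
    bid: float                  # amount offered per impression
    target_interests: set[str]  # audience the advertiser wants to reach

def run_auction(user_interests: set[str], ads: list[Ad]) -> Ad | None:
    eligible = [ad for ad in ads if ad.target_interests & user_interests]
    return max(eligible, key=lambda ad: ad.bid, default=None)

ads = [
    Ad("OutdoorCo", bid=1.20, target_interests={"hiking", "camping"}),
    Ad("GadgetInc", bid=0.90, target_interests={"tech", "gaming"}),
]
winner = run_auction({"hiking", "cooking"}, ads)
print(winner.advertiser if winner else "no ad shown")  # OutdoorCo
```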
Social media operators may be able to increase their online advertising revenue by incentivizing users to spend more time on the platform. By amplifying content that increases the amount of time a user spends on the platform, operators can increase the time during which a user is able to view ads through the platform. Operators may be able to better predict content that is of interest
36 John Gallaugher, Information Systems: A Manager’s Guide to Harnessing Technology, 10th ed. (Boston, MA: FlatWorld, 2024), pp. 298-299.
37 For more information, see Meta, “Meta Ads,” https://www.facebook.com/business/ads; and Snapchat, “Reach Gen Z and Millennials with Snapchat Ads,” https://forbusiness.snapchat.com/.
38 Todd Powers et al., “Digital and Social Media in the Purchase Decision Process,” Journal of Advertising Research, vol. 52, no. 4 (December 2012), pp. 479-489, https://doi.org/10.2501/JAR-52-4-479-489.
to each user if they can increase user engagement, such as when users comment on, react to, or share content. Increasing user engagement allows operators to collect more data about each user.
Collecting user data allows operators to personalize ads, which means offering different ads to different users based on potential relevance to the specific user.39 User data can include personally identifiable information provided by users when setting up accounts and information about an individual’s characteristics, preferences, and opinions based on posted content and online behaviors. The data amassed by social media operators enable them to build complex profiles for each user’s characteristics and revealed preferences and sell advertising spaces targeting specific user categories to companies, organizations, and political campaigns.40 This can increase the likelihood that the user will click on the ads. It also gives established operators an advantage over market entrants, as entrants are likely to have less user data and therefore may be less effective with their targeted advertising.
Some social media platforms allow users to promote their posts for a fee. For example, Facebook and Snapchat allow users, including commercial entities, to “boost” or “promote” a post by turning it into an ad that can be spread to those who do not follow their accounts, increasing the likelihood that the post is shared, liked, or commented on.41 Some platforms—including Facebook and Snapchat—allow users to adjust their ad preferences, including opting out of targeted ads.42 While this option means that users may not see targeted ads, it does not change the number of ads the user sees and does not ensure that a social media operator is no longer collecting the user’s data.
A user’s experience on a social media platform is shaped by the structure of the platform’s user networks and content dissemination techniques, such as algorithmic filtering, which often drive user engagement. Through network structure and algorithms, operators manage the continuous influx of user-generated content and its distribution to other users.
Social media operators disseminate and moderate content to enhance user engagement, expand the active user base, achieve network effects, and ultimately increase revenue, often through online ads. Enabling users to post, comment on, and share content, sometimes virally (i.e., through rapid and widespread dissemination of information), may increase the risk of spreading harmful content and misinformation online. Operators may strive to balance the goals of prioritizing content that increases user engagement and revenue and moderating harmful content, particularly when they receive scrutiny from the public and policymakers, such as in hearings, comments to the press, and letters to the companies.
39 Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (New Haven & London: Yale University Press, 2018).
40 Brian O’Connell, “How Does Facebook Make Money? Six Primary Revenue Streams,” The Street, October 23, 2018 https://www.thestreet.com/technology/how-does-facebook-make-money-14754098; Johannes Knoll, “Advertising in Social Media: A Review of Empirical Evidence,” International Journal of Advertising, vol. 35, no. 2 (2016), pp. 266- 300, http://dx.doi.org/10.1080/02650487.2015.1021898.
41 Meta Platforms, “About Boosted Posts,” Business Help Center, https://www.facebook.com/business/help/ 240208966080581; and Snapchat, “Grow Your Following with Snap Promote,” https://forbusiness.snapchat.com/ advertising/snap-promote.
42 Meta Platforms, “About Ad Preferences and How You Can Adjust Them on Facebook,” https://www.facebook.com/ help/247395082112892; and Snapchat, “How Do I Change My Advertising and Interest Preferences on Snapchat?,” Snapchat Support, https://help.snapchat.com/hc/en-us/articles/7012345515796-How-do-I-change-my-advertising-and- interest-preferences-on-Snapchat.
A social media network structure refers to the ways in which users connect with one another and information spreads on a social media platform. Users can establish connections to other users of the platform, creating social networks or communities that can be based on common interests, relationships that exist offline, employment, or other factors. The structure of these networks affects how individuals search for one another and how connections are initiated and established.43 Operators may provide users with various levels of privacy control, allowing them to choose how much personal information to share. For example, some social media platforms allow users to choose whether to make their profiles open to the public or only to those who have established connections by mutual consent.
On some social media platforms, users can control the content they see through the networks they choose to build. Each user can choose to follow or unfollow other users; some users might choose to unfollow those who post or share content with which they disagree. This networking feature enables users to “quickly find like-minded people and perspectives,” which facilitates an information exchange phenomenon called “echo chambers,” where users predominantly encounter “information or opinions that reflect and reinforce their own.”44 Some research has shown that the overlap in network connections between two users increases the likelihood that one user will share content from the other user through the network.45 Echo chambers can therefore enhance the spread of information, including misinformation.46
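One simple way to quantify the “network overlap” the cited research refers to is the share of connections two users have in common; the sketch below uses a Jaccard similarity of hypothetical follow sets, which is an assumption for illustration rather than the measure used in that research.

```python
# Hypothetical sketch: measure how much two users' connection sets overlap.
# Higher overlap is associated in the cited research with a greater chance
# that content shared by one user reaches the other.

def network_overlap(connections_a: set[str], connections_b: set[str]) -> float:
    """Jaccard similarity of two users' connection sets (0 = disjoint, 1 = identical)."""
    if not connections_a and not connections_b:
        return 0.0
    return len(connections_a & connections_b) / len(connections_a | connections_b)

user_a = {"alice", "bob", "carol", "dave"}
user_b = {"bob", "carol", "erin"}
print(f"Overlap: {network_overlap(user_a, user_b):.2f}")  # 0.40
```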
Due to the benefits of network effects and potential to increase revenue, social media operators are often incentivized to facilitate the expansion of users’ network connections. For example, some social media platforms recommend new connections based on peripheral relationships (e.g., someone on a network connection’s contact list) and allow users to search names, email addresses, occupations, or other personal or demographic information to find new connections.47
A user’s network connections were a fundamental aspect of social media platforms, particularly in the 2000s and early 2010s.48 During that period, users generally saw only content posted or shared by their network connections. Today, while some platforms continue to prioritize content from a user’s network connections, platforms typically use algorithms to prioritize content based on its potential relevance to the user’s interests, regardless of whether the content was generated by someone in the user’s network.49
43 Michael Bossetta, “The Digital Architectures of Social Media: Comparing Political Campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 U.S. Election,” Journalism & Mass Communication Quarterly, vol. 95, no. 2 (2018), pp. 471-496, https://doi.org/10.1177/1077699018763307 (hereinafter Bossetta, “The Digital Architectures of Social Media,” 2018); and Danah Boyd, “Social Network Sites as Networked Publics: Affordances, Dynamics, and Implications,” in A Networked Self: Identity, Community, and Culture on Social Network Sites, ed. Zizi Papacharissi (New York, NY: Routledge, 2011) (hereinafter Boyd, “Social Network Sites as Networked Publics,” 2011).
44 “Digital Media Literacy—What Is an Echo Chamber?,” Goodwill Community Foundation Inc., https://edu.gcfglobal.org/en/digital-media-literacy/what-is-an-echo-chamber/1/.
45 Jing Peng et al., “Network Overlap and Content Sharing on Social Media Platforms,” Journal of Marketing Research, vol. 55 (August 2018), pp. 571-585, https://journals.sagepub.com/doi/10.1509/jmr.14.0643.
46 Petter Törnberg, “Echo Chambers and Viral Misinformation: Modeling Fake News as Complex Contagion,” PLOS ONE, vol. 13, no. 9 (2018); Michela Del Vicario et al., “The Spreading of Misinformation Online,” Proceedings of the National Academy of Sciences, vol. 113, no. 3 (January 19, 2016), pp. 554-559, https://www.pnas.org/content/113/3/ 554.
47 Boyd, “Social Network Sites as Networked Publics,” 2011.
48 Obar and Wildman, “Social Media Definition and the Governance Challenge,” 2015, pp. 745-750.
49 For example, see Ramya Sethuraman, “Why Am I Seeing This? We Have an Answer for You,” March 31, 2019, https://about.fb.com/news/2019/03/why-am-i-seeing-this/.
Social media platforms host vast amounts of user-generated content.50 Operators use algorithmic filtering to determine what content to deliver to users.51 Specifically, operators use algorithms to sort, index, curate, and prioritize content, as well as to identify and moderate illegal and other content that operators do not wish to publish.52 These algorithms rely on data such as a user’s online behavior and revealed preferences (e.g., a user’s profile, clicks, likes, shares, and search history). Operators can modify or fine-tune their algorithms to meet evolving business goals driven by internal incentives (e.g., maximizing engagement and advertising revenue) and external pressures (e.g., user complaints and stakeholder demands). As a result, these algorithms affect what content is promoted and removed, as well as what rapidly spreads across the platform (i.e., “goes viral”).
While detailed information about these algorithms and their parameters is considered proprietary and not publicly disclosed, academic research, industry analyses, and information released by operators provide a general understanding of how they work.53 For example, a social media platform can measure its users’ online activities and use algorithms to analyze the associated quantitative data and customize the selection, order, and visibility of posts for each user to increase user engagement.54 Some studies suggest that social media platforms prioritize content, regardless of its veracity, that is likely to prompt user engagement by eliciting strong emotions, which may contribute to divisiveness and polarization.55 In a 2018 presentation, a Facebook team reportedly told senior executives that its algorithms “exploit the human brain’s attraction to divisiveness” and that these algorithms could promote “more and more divisive content in an effort to gain user attention and increase time on the platform.”56 Meta Platforms states that its Facebook News Feed prioritizes recent, relevant content for the user, based on factors such as the user’s previous engagement with the content provider.57
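Because the actual ranking systems are proprietary, the sketch below is only a hypothetical illustration of the kind of engagement-based prioritization described above; the signals (recency, predicted relevance, past engagement) and weights are assumptions, not any platform’s formula.

```python
# Hypothetical feed-ranking sketch: score each post by a weighted mix of
# recency, predicted relevance to the user, and engagement so far, then sort.

from dataclasses import dataclass
import time

@dataclass
class Post:
    post_id: str
    created_at: float           # Unix timestamp
    predicted_relevance: float  # 0..1, e.g., inferred from the user's clicks and likes
    engagement_count: int       # comments, shares, and reactions so far

def rank_feed(posts: list[Post], now: float | None = None) -> list[Post]:
    now = now or time.time()

    def score(p: Post) -> float:
        recency = 1.0 / (1.0 + (now - p.created_at) / 3600.0)  # decays by the hour
        engagement = min(p.engagement_count / 100, 1.0)
        return 0.4 * recency + 0.4 * p.predicted_relevance + 0.2 * engagement

    return sorted(posts, key=score, reverse=True)

posts = [
    Post("a", created_at=time.time() - 7200, predicted_relevance=0.9, engagement_count=250),
    Post("b", created_at=time.time() - 600, predicted_relevance=0.3, engagement_count=5),
]
print([p.post_id for p in rank_feed(posts)])  # ['a', 'b']: older but more relevant post ranks first
```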
50 Obar and Wildman, “Social Media Definition and the Governance Challenge,” 2015, pp. 745-750.
51 For more information on the use of algorithms to filter or moderate content, see Giovanni Sartor and Andrea Loreggia, The Impact of Algorithms for Online Content Filtering or Moderation, European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs, September 2020, https://www.europarl.europa.eu/RegData/ etudes/STUD/2020/657101/IPOL_STU(2020)657101_EN.pdf.
52 For more information on social media algorithms, see CRS In Focus IF12462, Social Media Algorithms: Content Recommendation, Moderation, and Congressional Considerations, by Laurie Harris and Clare Y. Cho.
53 See, for example, Jose van Dijck and Thomas Poell, “Understanding Social Media Logic,” Media and Communication, vol. 1, no. 1 (2013), pp. 2-14, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2309065; and Hannah Trivette, “A Guide to Social Media Algorithms and SEO,” Forbes, October 14, 2022, https://www.forbes.com/ councils/forbesagencycouncil/2022/10/14/a-guide-to-social-media-algorithms-and-seo/.
54 Taina Bucher, “Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook,” New Media & Society, vol. 14, no. 7 (2012), pp. 1164-1180, https://journals.sagepub.com/doi/abs/10.1177/1461444812440159; Bossetta, “The Digital Architectures of Social Media,” 2018, pp. 471-496, https://doi.org/10.1177/1077699018763307.
55 For example, see Daniel Mochan and Janet Schwartz, “The Confrontation Effect: When Users Engage More with Ideology-Inconsistent Content Online,” Organizational Behavior and Human Decision Processes, vol. 185 (November 2024), https://doi.org/10.1016/j.obhdp.2024.104366; Paul M. Barrett, Spreading the Big Lie: How Social Media Sites Have Amplified False Claims of U.S. Election Fraud, NYU Stern Center for Business and Human Rights, September 2022, https://bhr.stern.nyu.edu/publication/spreading-the-big-lie-how-social-media-sites-have-amplified-false-claims- of-u-s-election-fraud/; and Ahmed Al-Rawi, “Viral News on Social Media,” Digital Journalism, vol. 7, no. 1 (2019), pp. 63-79, https://www.tandfonline.com/doi/full/10.1080/21670811.2017.1387062.
56 Jeff Horwitz and Deepa Seetharaman, “Facebook Executives Shut Down Efforts to Make the Site Less Divisive,” Wall Street Journal, May 26, 2020, https://www.wsj.com/articles/facebook-knows-it-encourages-division-top- executives-nixed-solutions-11590507499.
57 Facebook, “How Feed Works,” Facebook Help Center, https://www.facebook.com/help/1155510281178725.
Some operators have incorporated users’ preferences or choices into their algorithms. For example, in 2018, Meta Platforms announced that it was prioritizing “meaningful posts,” or those shared by the user’s family and friends, in its Facebook News Feed.58 In 2021, Meta announced a new filter bar tool for users to adjust their preferences, such as prioritizing posts from specific people or pages.59
Internet bots—software applications that can automate tasks such as rapid posting, liking, and recirculating content through inauthentic accounts on social media platforms—can affect content prioritization by algorithms and may be used to spread harmful content.60 To amplify misinformation, for example, a bot can be programmed to search for and respond to relevant posts containing specific words or phrases. Users and operators can identify certain internet bots by the syntax and user profiles used by the bot or other abnormal account activity.61 Users may opt not to engage with content created by bots (e.g., avoid sharing or reposting it), and some operators may seek to remove this content. Bots are becoming increasingly sophisticated, making it more difficult for users and content moderators to recognize them, particularly if a post has gone viral. Users may inadvertently engage with content created or shared by an internet bot.62 Some studies have shown that bots can contribute to the long-term spread of misinformation.63
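The kinds of signals described above (abnormal posting rates, near-identical posts, new accounts) can be illustrated with a simple, hypothetical heuristic; the thresholds are arbitrary assumptions, and operators’ actual detection systems are far more sophisticated.

```python
# Hypothetical bot-flagging sketch based on the activity signals described above.

def looks_like_bot(posts_per_hour: float, account_age_days: int, duplicate_post_ratio: float) -> bool:
    """Flag an account for review if its activity pattern is implausible for a human."""
    if posts_per_hour > 30:  # sustained rapid posting
        return True
    if account_age_days < 2 and duplicate_post_ratio > 0.8:  # new account reposting the same text
        return True
    return False

print(looks_like_bot(posts_per_hour=45, account_age_days=1, duplicate_post_ratio=0.9))   # True
print(looks_like_bot(posts_per_hour=2, account_age_days=400, duplicate_post_ratio=0.1))  # False
```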
Social media operators maintain policies that prohibit users from posting certain content, such as graphic violence, nudity and sexual content, and hateful speech.64 An operator may temporarily or permanently ban users who violate its policies, depending on the operator’s perspective on the severity of the user’s violation(s). There is no uniform standard for content moderation; operators may choose to moderate content differently across platforms. For example, Meta Platforms states that it prohibits bullying and content that promotes eating disorders on Facebook and Instagram.65 This content is not prohibited on Parler, a privately owned social media platform that is marketed as a promoter of free speech with minimal content moderation.66 Certain content—such as spam and pornographic content—is prohibited on both
58 Adam Mosseri, “Bringing People Closer Together,” Facebook Newsroom, January 11, 2018, https://about.fb.com/ news/2018/01/news-feed-fyi-bringing-people-closer-together/.
59 Ramya Sethuraman, “More Control and Context in News Feed,” Facebook Newsroom, March 31, 2021, https://about.fb.com/news/2021/03/more-control-and-context-in-news-feed/.
60 Fake, or inauthentic, accounts are profiles impersonating other individuals or organizations. An internet bot is software that runs automated computer programs over the internet, generally capable of performing simple, repetitive tasks faster than an individual can. Some websites use a “Completely Automated Public Turing test to tell Computers and Humans Apart,” or CAPTCHA test, to try to identify internet bots. More information on CAPTCHA tests is available at https://www.cloudflare.com/learning/bots/how-captchas-work/.
61 Will Knight, “How to Tell if You’re Talking to a Bot,” MIT Technology Review, July 18, 2018, https://www.technologyreview.com/2018/07/18/141414/how-to-tell-if-youre-talking-to-a-bot/; and Ryan Detert, “Bot or Not: Seven Ways to Detect an Online Bot,” Forbes, August 6, 2018, https://www.forbes.com/sites/ forbesagencycouncil/2018/08/06/bot-or-not-seven-ways-to-detect-an-online-bot/.
62 Kate Starbird, “Disinformation’s Spread: Bots, Trolls and All of Us,” Nature, vol. 571 (July 25, 2019), p. 449, https://www.nature.com/articles/d41586-019-02235-x.
63 For example, see Marina Azzimonti and Marcos Fernandes, “Social Media Networks, Fake News, and Polarization,” European Journal of Political Economy, vol. 76 (January 2023), https://doi.org/10.1016/j.ejpoleco.2022.102256.
64 For example, see Meta Platforms, “Community Standards,” Transparency Center, https://transparency.meta.com/ policies/community-standards/; and YouTube, “Community Guidelines,” Rules and Policies, https://www.youtube.com/howyoutubeworks/policies/community-guidelines/.
65 Meta Platforms, “Community Standards,” Transparency Center, https://transparency.meta.com/policies/community- standards/.
66 Parler, “Community Guidelines,” May 8, 2024, https://www.parler.com/community-guidelines.
Parler and the platforms operated by Meta Platforms.67 Some operators disclose information on their content moderation practices, such as the amount of content removed and the number of appeals;68 operators are not required to publish this information.
Social media operators rely on several sources to identify content that violates their policies: (1) users, (2) operator-designated human content moderators, and (3) automated systems, such as those using algorithms and machine learning techniques.69 Users and automated systems can flag or mark inappropriate posts for content moderators to review and remove when applicable. Some automated systems may also remove content without review by a content moderator unless the user appeals the removal. Content moderators, primarily contractors for the platform, may be able to identify nuanced violations of content policy, such as by taking into account the context of a statement.
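The review flow described above can be sketched in simplified form: content flagged by users or by an automated classifier is either removed automatically (subject to appeal) or queued for human review. The scores, thresholds, and routing rules below are hypothetical, and practices differ across operators.

```python
# Hypothetical sketch of routing flagged content through the moderation flow
# described above: high-confidence automated hits are removed outright, and
# lower-confidence or user-reported items are queued for human moderators.

def route_content(auto_score: float, user_reports: int) -> str:
    """Return a moderation action for one piece of flagged content.

    auto_score: classifier confidence (0..1) that the content violates policy.
    user_reports: number of users who reported the content.
    """
    if auto_score >= 0.95:
        return "remove automatically (user may appeal)"
    if auto_score >= 0.60 or user_reports >= 3:
        return "queue for human moderator review"
    return "leave up"

print(route_content(auto_score=0.98, user_reports=0))  # remove automatically (user may appeal)
print(route_content(auto_score=0.70, user_reports=1))  # queue for human moderator review
print(route_content(auto_score=0.20, user_reports=5))  # queue for human moderator review
```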
Automated systems may be better at identifying certain types of objectionable content, although data limitations make it difficult to conduct an assessment. For example, Meta Platforms reports that, of the content that was removed for violating its policies, automated systems removed 94% of violent and graphic content, 86% of bullying and harassment, and 4% of child nudity and physical abuse on Instagram in the European Union between April 1, 2024, and September 30, 2024.70 This may be the result of several factors, including (1) automated systems may be better at identifying violent and graphic content, bullying, and harassment than identifying child nudity and physical abuse; (2) content that is flagged as child nudity and physical abuse requires additional review from content moderators; or (3) fewer users are reporting violent and graphic content, bullying, and harassment, resulting in a higher percentage that are removed by automated systems rather than content moderators. There may also be content that violates the platform’s policies that is never identified and removed.71
To moderate content on their platforms, some social media operators may rely more on automated systems than human content moderators. For example, Reddit reports that of the content removed by moderators from January 2024 through June 2024, about 72% was removed by automated systems and about 28% was removed manually.72 Automated systems can quickly review large volumes of content “when scale problems make manual curation or intervention unfeasible.”73 Additionally, repeatedly reviewing graphic, explicit, and violent material may harm content
67 Ibid.
68 For example, see Meta Platforms, “Community Standards Enforcement Report,” Transparency Center, https://transparency.meta.com/reports/community-standards-enforcement/; and Reddit, “Transparency Reports,” Transparency, https://redditinc.com/policies/transparency.
69 For example, see Reddit, “Content Moderation, Enforcement, and Appeals,” updated September 2024, https://support.reddithelp.com/hc/en-us/articles/23511059871252-Content-Moderation-Enforcement-and-Appeals; and YouTube, “How Does YouTube Enforce its Community Guidelines?,” Community Guidelines, https://www.youtube.com/howyoutubeworks/policies/community-guidelines/#enforcing-community-guidelines.
70 Meta Platforms, Regulation (EU) 2022/2065 Digital Services Act Transparency Report for Instagram, October 25, 2024, pp. 11-12. The automated systems flagged 1,213,764 out of 1,351,522 items of violent and graphic content; 1,015,909 out of 1,176,634 items of bullying and harassment content; and 5,927 out of 133,229 items of child nudity and physical abuse content. CRS calculated the percentages based on these estimates.
71 The percentages are similar to the proactive rates that are reported by Meta Platforms (for more information, see Meta Platforms, “Proactive Rate,” updated February 22, 2023, https://transparency.meta.com/policies/improving/ proactive-rate-metric/).
72 Reddit, “Transparency Report: January to June 2024,” https://redditinc.com/policies/transparency-report-january-to- june-2024.
73 Robert Gorwa et al., “Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance,” Big Data & Society, vol. 1, no. 15 (January-June 2020), p. 3, https://journals.sagepub.com/doi/ 10.1177/2053951719897945 (hereinafter Gorwa et al., “Algorithmic Content Moderation,” 2020).
moderators’ mental health.74 Some content moderators have filed class action lawsuits against operators for psychological trauma and post-traumatic stress disorder from reviewing disturbing content, such as child sexual abuse, rape, and torture.75
Automated systems used by social media operators might misidentify content as violating their policies. For example, Facebook’s automated systems have reportedly removed ads from small businesses, improperly identifying them as content that violates its policies and causing the businesses to lose money during the appeals process.76 A wide range of small businesses reportedly have been affected by these automated removals, including a seed company that shared a photo of Walla Walla onions, which was flagged as being overtly sexual, and a solar roof company that used acronyms that are similar to cryptocurrency tokens.77 Executives at Meta Platforms have reportedly stated that the company has mistakenly removed too much content on its platforms.78 Increased reliance on automated systems might exacerbate, rather than alleviate, some concerns related to operators’ content moderation practices, including the lack of transparency and fairness of what content is removed.79
Some social media operators have altered their content moderation practices in efforts to balance trade-offs between free expression and removing objectionable content that may cause harms. For example, in October 2023, Meta Platforms initially responded to an increase in violent and graphic content depicting the Israel-Hamas conflict by lowering the threshold for its automated tools—that is, by using its automated tools more aggressively—to remove the content from its platforms for violating its policies.80 Meta subsequently restored the posts with a warning screen.81 Some organizations criticized Meta’s removal of content as censoring documentation of human rights violations.82 Meta’s CEO Mark Zuckerberg announced on January 7, 2025, that Meta would be replacing its fact-checking program, which began in December 2016 to “identify and address
74 Paul M. Barrett, “Who Moderates the Social Media Giants? A Call to End Outsourcing,” NYU Stern Center for Business and Human Rights, June 4, 2020, https://bhr.stern.nyu.edu/publication/who-moderates-the-social-media- giants-a-call-to-end-outsourcing/.
75 For example, see Bobby Allyn, “In Settlement, Facebook to Pay $52 Million to Content Moderators with PTSD,” NPR, May 12, 2020, https://www.npr.org/2020/05/12/854998616/in-settlement-facebook-to-pay-52-million-to-content- moderators-with-ptsd; and Maia Spoto, “Reddit Agrees to Pay California Workers $525,000 Settlement,” Bloomberg Law News, April 30, 2024, https://news.bloomberglaw.com/litigation/reddit-agrees-to-pay-california-workers-525-000- settlement.
76 Sarah Frier, “Facebook’s AI Mistakenly Bans Ads for Struggling Businesses,” Bloomberg, November 27, 2020, https://www.bloomberg.com/news/articles/2020-11-27/facebook-s-ai-mistakenly-bans-ads-for-struggling-businesses.
77 Ibid.
78 Alex Heath, “Meta Says It’s Mistakenly Moderating Too Much,” Verge, December 3, 2024, https://www.theverge.com/2024/12/3/24311513/meta-content-moderation-mistakes-nick-clegg.
79 Gorwa et al., “Algorithmic Content Moderation,” 2020, p. 3.
80 Oversight Board, “Oversight Board Issues First Expedited Decisions About Israel-Hamas Conflict,” December 19, 2023, https://www.oversightboard.com/news/1109713833718200-oversight-board-issues-first-expedited-decisions- about-israel-hamas-conflict/. For more information about the Israel-Hamas conflict, see CRS Report R47754, Israel and Hamas October 2023 Conflict: Frequently Asked Questions (FAQs), coordinated by Jim Zanotti, Jeremy M. Sharp, and Christopher M. Blanchard.
81 Oversight Board, “Oversight Board Issues First Expedited Decisions About Israel-Hamas Conflict,” December 19, 2023, https://www.oversightboard.com/news/1109713833718200-oversight-board-issues-first-expedited-decisions- about-israel-hamas-conflict/.
82 For example, see Human Rights Watch, “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook,” December 21, 2023, https://www.hrw.org/report/2023/12/21/metas-broken-promises/ systemic-censorship-palestine-content-instagram-and.
viral misinformation,”83 with a community notes system and adjust its content filters to focus on “illegal and high-severity violations” of its policies, relying on users to report minor violations.84 As another example, after Elon Musk acquired Twitter (now X) in 2022, the platform scaled back content moderation and emphasized free speech, prompting several advertisers to pull their ads.85
Despite social media operators’ content moderation efforts, harmful content can spread before it is discovered, reviewed, and removed. Additionally, users can repost or share harmful content across platforms, meaning content can spread on another platform after the original content is removed, particularly if platforms use moderation practices that vary in scope and efficacy. Conversely, operators may remove content that most users do not consider to be objectionable, including content that some users find valuable.86 As some social media platforms have grown in popularity, their ability to determine what speech is allowed on a platform has created some unease among policymakers.87 As private entities, social media operators have certain legal protections that apply to decisions about what content is available on their platforms.88
Companies that provide content, apps, and services over the internet, including social media operators, are not broadly regulated as an industry. However, some federal agencies enforce laws and regulations that apply to social media platforms as well as to entities in other industries.89 Congress has also enacted legislation that specifically addresses certain websites and mobile apps.90 For example, the 118th Congress enacted the Protecting Americans from Foreign Adversary Controlled Applications Act,91 which prohibits foreign adversary controlled apps, such as TikTok, from being distributed, maintained, or updated in the United States.92 TikTok and its
83 Meta Platforms, “Understanding Meta’s Fact-Checking Program,” October 20, 2023, https://www.facebook.com/ government-nonprofits/blog/misinformation-resources.
84 Joel Kaplan, “More Speech and Fewer Mistakes,” Meta Newsroom, January 7, 2025, https://about.fb.com/news/ 2025/01/meta-more-speech-fewer-mistakes/.
85 Ryan Mac and Kate Conger, “X May Lose Up to $75 Million in Revenue as More Advertisers Pull Out,” New York Times, November 24, 2023, https://www.nytimes.com/2023/11/24/business/x-elon-musk-advertisers.html; and Brad Adgate, “With Concerns About Brand Safety, More Advertisers Have Left X,” Forbes, December 7, 2023, https://www.forbes.com/sites/bradadgate/2023/12/07/with-concerns-about-brand-safety-more-advertisers-have-left-x/.
86 For example, see “Social Media’s Struggle with Self-Censorship,” The Economist, October 22, 2020, https://www.economist.com/briefing/2020/10/22/social-medias-struggle-with-self-censorship.
87 Ibid.; Zeynep Tufecki, “Twitter Has Officially Replaced the Town Square,” Wired, December 27, 2017, https://www.wired.com/story/twitter-has-officially-replaced-the-town-square/.
88 CRS Report R47986, Freedom of Speech: An Overview, by Victoria L. Killion; and CRS Legal Sidebar LSB11224, Moody v. NetChoice, LLC: The Supreme Court Addresses Facial Challenges to State Social Media Laws, by Peter J. Benson.
89 For example, the Federal Trade Commission (FTC) protects consumers from deceptive and unfair acts or practices in or affecting commerce (15 U.S.C. §45). The FTC has conducted investigations and filed charges against companies for conducting deceptive practices on the internet.
90 For example, the 105th Congress enacted the Children’s Online Privacy Protection Act (15 U.S.C. §§6501-6506), which sets requirements for operators that are directed to or collect data from children under age 13.
91 P.L. 118-50, Division H.
92 For more information about TikTok and the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACCA), see CRS Report R48023, TikTok: Frequently Asked Questions and Issues for Congress, coordinated by Michael D. Sutherland.
Chinese parent company ByteDance have challenged the law and its enforcement in federal courts.93
Some Members of Congress have proposed amending Section 230 to address their concerns about social media operators’ content moderation practices. This section provides a brief discussion of Section 230 and some federal proposals to amend Section 230. For a more in-depth discussion of Section 230, see CRS Report R46751, Section 230: An Overview, by Valerie C. Brannon and Eric N. Holmes.
Section 230 broadly protects social media operators from liability for publishing—and in some instances, restricting access to or availability of—another user’s content.94 Specifically, Section 230(c)(1) states that interactive computer service providers and users may not “be treated as the publisher or speaker of any information provided by another” person. Section 230(c)(2)(A) states that interactive computer service providers and users may not be “held liable” for any “good faith” action “to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” The term interactive computer service is defined as any “information service, system, or access software provider that provides or enables computer access by multiple users to a computer server,” which includes social media platforms.95
Former Representative Chris Cox and former Representative and current Senator Ron Wyden, who drafted Section 230, have each stated that their intent was to enable free speech and allow interactive computer service providers to moderate content without government intervention.96 Social media operators may also have constitutionally protected rights to moderate content on their platforms.97
In May 2020, then-President Trump issued an executive order instructing federal agencies to take certain actions with respect to Section 230, such as clarifying the scope of the immunity provision for online platforms.98 In accordance with the executive order, in July 2020, the National Telecommunications and Information Administration (NTIA) filed a petition with the Federal Communications Commission (FCC) requesting rulemaking to clarify provisions of Section 230, including the circumstances under which an interactive computer service provider restricting
93 CRS Legal Sidebar LSB11252, TikTok v. Garland: Constitutional Challenges to the Protecting Americans from Foreign Adversary Controlled Applications Act, by Peter J. Benson, Valerie C. Brannon, and Joanna R. Lampe.
94 47 U.S.C. §230.
95 47 U.S.C. §230(f)(2).
96 Testimony of Christopher Cox in U.S. Congress, Senate Committee on Commerce, Science, and Transportation, Communications, Technology, Innovation, and the Internet, The PACT Act and Section 230: The Impact of the Law that Helped Create the Internet and an Examination of Proposed Reforms for Today’s Online World, 116th Cong., 2nd sess., July 28, 2020, https://www.commerce.senate.gov/services/files/BD6A508B-E95C-4659-8E6D-106CDE546D71; Christopher Cox, “Policing the Internet: A Bad Idea in 1996–and Today,” RealClear Politics, June 25, 2020, https://www.realclearpolitics.com/articles/2020/06/25/policing_the_internet_a_bad_idea_in_1996_—_and_today.html; and Ron Wyden, “I wrote this law to protect free speech. Now Trump wants to revoke it,” CNN Business Perspectives, June 9, 2020, https://www.cnn.com/2020/06/09/perspectives/ron-wyden-section-230/index.html.
97 CRS Legal Sidebar LSB11224, Moody v. NetChoice, LLC: The Supreme Court Addresses Facial Challenges to State Social Media Laws, by Peter J. Benson.
98 White House, “Executive Order on Preventing Online Censorship,” May 28, 2020, https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-preventing-online-censorship/.
access to content would not receive immunity.99 In addition, in September 2020, the Department of Justice sent draft legislation to Congress that would have reformed Section 230 by narrowing the scope of liability protection.100 In October 2020, FCC Chairman Ajit Pai released a statement that the agency would be moving forward with rulemaking to clarify the meaning of Section 230, after the FCC’s general counsel concluded that the FCC has the legal authority to interpret Section 230;101 the FCC has not proceeded with rulemaking on Section 230 since then.
Commissioner Brendan Carr, President-elect Trump’s nominee to chair the FCC beginning in 2025, has stated that the FCC should issue an order that interprets Section 230.102 If the FCC were to take such action, it might face legal challenges; for example, some organizations argued in response to NTIA’s 2020 petition that the FCC lacks the authority to interpret Section 230.103
Congress has held hearings on Section 230, and Members have introduced bills to amend it (see Appendix).104 Some bills would have removed liability protection for interactive computer service providers that promote or suppress certain content or use automated processes to target and amplify content.105 Other bills would have allowed providers to be held liable for not removing objectionable content.106
Amending Section 230 may incentivize social media platforms to alter their content moderation practices, potentially addressing some commentators’ concerns. Some commentators have argued for amending Section 230 to remove liability protection for dominant technology firms that censor content,107 as well as to give individuals who are harmed on a platform leverage against operators.108
99 National Telecommunications and Information Administration, In the Matter of Section 230 of the Communications Act of 1934, July 27, 2020, https://www.ntia.gov/files/ntia/publications/ntia_petition_for_rulemaking_7.27.20.pdf.
100 Department of Justice (DOJ), “Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996,” press release, September 23, 2020, https://www.justice.gov/opa/pr/justice-department-unveils-proposed- section-230-legislation; DOJ, “Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996,” https://www.justice.gov/ag/department-justice-s-review-section-230-communications-decency-act-1996.
101 Federal Communications Commission (FCC), “Statement of Chairman Pai on Section 230,” October 15, 2020, https://docs.fcc.gov/public/attachments/DOC-367567A1.pdf; and Thomas Johnson Jr., “The FCC’s Authority to Interpret Section 230 of the Communications Act,” FCC, October 21, 2020, https://www.fcc.gov/news-events/blog/ 2020/10/21/fccs-authority-interpret-section-230-communications-act.
102 Brendan Carr, “Federal Communication Commission,” in Project 2025 Presidential Transition Project: Mandate for Leadership: The Conservative Promise, ed. Paul Dans and Steven Groves (Washington, DC: The Heritage Foundation, 2023), pp. 845-860, https://static.project2025.org/2025_MandateForLeadership_FULL.pdf.
103 For example, see John Bergmayer and Harold Feld, “Comments of Public Knowledge,” in In the Matter of National Telecommunications and Information Administration Petition to ‘Clarify’ Provisions of Section 230 of the Communications Act of 1934, as Amended, RM-11862, September 2, 2020, https://www.fcc.gov/ecfs/document/ 109020607125130/1; and Emma Llanso et al., “Comments of the Center for Democracy & Technology Opposing the National telecommunications and Information Administration’s Petition for Rulemaking,” in In the Matter of Section 230 of the Communications Act, RM-11862, August 31, 2020, https://www.fcc.gov/ecfs/document/10831957605823/1.
104 For example, see U.S. Congress, House Committee on Energy and Commerce, Subcommittee on Communications and Technology, Where Are We Now: Section 230 of the Communication Decency Act of 1996, hearings, 118th Cong., 2nd sess., April 11, 2024, https://docs.house.gov/Committee/Calendar/ByEvent.aspx?EventID=117099.
105 For example, DISCOURSE Act (S. 921, 118th Congress), COLLUDE Act (S. 1525, 118th Congress).
106 For example, see CASE-IT Act (H.R. 573, 118th Congress).
107 For example, see Craig Parshall, “Big Tech and The Whole First Amendment,” Federalist Society, August 14, 2020, https://fedsoc.org/commentary/fedsoc-blog/big-tech-and-the-whole-first-amendment; and Jonathan Tepper, “Facebook and Google Must Be Regulated Now,” The American Conservative, May 13, 2019, https://www.theamericanconservative.com/articles/facebook-and-google-must-be-regulated-now/.
108 For example, see Danielle Keats Citron and Benjamin Wittes, “The Internet Will Not Break: Denying Bad Samaritans §230 Immunity,” Fordham Law Review, vol. 86, no. 2 (2017), https://ir.lawnet.fordham.edu/cgi/ viewcontent.cgi?article=5435&context=flr; and Center for Countering Digital Hate, “Understanding Section 230 – Social Media Companies’ Get Out of Jail Free Card,” May 17, 2024, https://counterhate.com/blog/understanding- section-230-social-media-companies-get-out-of-jail-free-card/.
Others highlight the general lack of transparency regarding operators’ content moderation decisions.109 One study recommends pairing Section 230 liability protections with new public obligations for social media operators, including transparency and moderation standards and advisory oversight from regulators.110
Some commentators have argued against amending Section 230, raising concerns about potential unintended consequences.111 Amending Section 230 to encourage moderation of objectionable content or to limit liability protections for removing content would affect all interactive computer services (e.g., search engines, internet service providers) and their users, unless the legislative language explicitly specified a subset of interactive computer service providers and users. Social media operators might adjust their content moderation practices in either direction, from aggressively screening content to removing only illegal content, leaving up content that most users may find objectionable or obscene. Increased exposure to liability might also limit competition, as nascent firms may not have sufficient resources to address regulatory compliance and potential litigation.112
Some Members of Congress have introduced bills to address their concerns about social media operators’ content moderation practices, including some that would amend Section 230 (see Appendix). Some states have enacted legislation related to social media, although challenges to the validity of many of these laws are being litigated in federal courts as of the date of this report.113
This section provides a selection of potential options for Congress. For additional legislative considerations, see CRS Report R47662, Defining and Regulating Online Platforms, coordinated by Clare Y. Cho.
Congress might choose to take no action to address social media operators’ content moderation practices, potentially in light of free speech concerns. Operators have voluntarily adjusted their algorithms and content moderation practices and may continue to do so in response to pressure
109 For example, see Joan Donovan, “Why Social Media Can’t Keep Moderating Content in the Shadows,” MIT Technology Review, November 6, 2020, https://www.technologyreview.com/2020/11/06/1011769/social-media- moderation-transparency-censorship/; and Mark MacCarthy, Transparency Recommendations for Regulatory Regimes of Digital Platforms, Centre for International Governance Innovation, March 8, 2022, https://www.cigionline.org/ publications/transparency-recommendations-for-regulatory-regimes-of-digital-platforms/.
110 Tarleton Gillespie, “Platforms Are Not Intermediaries,” Georgetown Technology Law Review, vol. 2, no. 2 (2018), pp. 198-216, https://georgetownlawtechreview.org/wp-content/uploads/2018/07/2.2-Gilespie-pp-198-216.pdf.
111 For example, see Daniel Lyons, “Beyond 230: Reframing the Conservative Debate Over Social Media Regulation,” AEIdeas, American Enterprise Institute, November 4, 2020, https://www.aei.org/technology-and-innovation/beyond- 230-reframing-the-conservative-debate-over-social-media-regulation/; and Kate Ruane, “Dear Congress: Platform Accountability Should Not Threaten Online Expression,” American Civil Liberties Union, October 27, 2020, https://www.aclu.org/news/free-speech/dear-congress-platform-accountability-should-not-threaten-online-expression/.
112 Jeff Kosseff, “The Gradual Erosion of the Law That Shaped the Internet,” Columbia Science & Technology Law Review, vol. 18, no. 1 (2016), pp. 1-41, https://heinonline.org/HOL/Page?handle=hein.journals/cstlr18&collection= journals&id=1&startid=&endid=41; and Jennifer Huddleston, Competition and Content Moderation: How Section 230 Enables Increased Tech Marketplace Entry, Cato Institute Policy Analysis, no. 922, January 31, 2022, https://www.cato.org/sites/cato.org/files/2022-01/policy-analysis-922.pdf.
113 For example, see CRS Legal Sidebar LSB11224, Moody v. NetChoice, LLC: The Supreme Court Addresses Facial Challenges to State Social Media Laws, by Peter J. Benson.
from their users, advertisers, government bodies, and other external stakeholders. It is unclear whether operators will continue to do so and whether the changes implemented by operators would always align with the public interest and be sufficient to address congressional concerns.
Congress might seek to incentivize operators to implement changes by, for example, holding hearings or conducting investigations. Some operators have voluntarily joined an industry group—the Tech Coalition—to take coordinated action to combat child sexual exploitation and abuse.114 Operators could similarly create a coalition to determine what types of content should be allowed on their platforms. Different operators have established different priorities and approaches to balancing free expression and removing objectionable content; it is unclear whether operators would be able to reach a consensus and whether Congress would agree with guidance or standards recommended by the industry coalition.
If Congress chooses to take legislative action, it might consider amending Section 230, as discussed in the previous section. It might also consider requiring social media operators to provide information about their content moderation practices. Some operators voluntarily publish reports about their content moderation practices, which include estimates for the amount of content removed for violating the platforms’ policies.115 Congress might consider whether these reports provide sufficient information, whether additional information would be beneficial, and the potential costs associated with obtaining the information required in the reports, particularly for operators with limited resources. Legislation requiring private entities to disclose certain information could raise First Amendment concerns.116
Another legislative option might be regulating content moderation practices, particularly for platforms with many users. For example, legislation could focus on a platform’s use of algorithms or other platform features (e.g., autoplay, notifications). Congress might also determine that certain objectionable content or moderation practices are sufficiently detrimental to the public to warrant legislative action. Legislation addressing specific types of content or regulating content moderation could raise First Amendment concerns.117
Congress might consider implementing federal advisory or regulatory oversight of social media platforms. Some commentators have proposed oversight that would provide the regulatory authority with access to algorithms and data used by operators and allow it to establish disclosure requirements, such as requiring operators to disclose the data they collect, tests they conduct, prevalence of objectionable content, and actions taken to moderate content.118 For this option, Congress may need to determine the regulatory authority’s jurisdiction, specific objectives, and the authorities it would exercise.
114 For more information about the Tech Coalition, see Tech Coalition, “Working Together to End Online Child Sexual Exploitation and Abuse,” https://www.technologycoalition.org/.
115 For example, see Meta Platforms, “Community Standards Enforcement Report,” https://transparency.meta.com/ reports/community-standards-enforcement/; and Reddit, “Transparency Reports,” https://redditinc.com/policies/ transparency.
116 CRS Report R47986, Freedom of Speech: An Overview, by Victoria L. Killion.
117 Ibid.; and CRS Legal Sidebar LSB11224, Moody v. NetChoice, LLC: The Supreme Court Addresses Facial Challenges to State Social Media Laws, by Peter J. Benson.
118 Transatlantic Working Group, Freedom and Accountability: A Transatlantic Framework for Moderating Speech Online, University of Pennsylvania Annenberg Public Policy Center, 2020, https://www.annenbergpublicpolicycenter.org/feature/transatlantic-working-group-freedom-and-accountability/; and Paul M. Barrett, Regulating Social Media: The Fight Over Section 230—and Beyond, NYU Stern Center for Business and Human Rights, September 2020, https://bhr.stern.nyu.edu/publication/regulating-social-media-the-fight-over- section-230-and-beyond/.
Legislation could indirectly affect content moderation. Some commentators, for example, have focused their concerns on the scope and reach of large social media platforms and proposed legislative options to increase competition.119 One article, for instance, proposes solutions that include defining rules for operators based on their size and requiring dominant platforms to allow others to build customizable content feeds that users may choose from.120 Such an approach could allow users displeased with one platform’s content moderation practices to move to another, particularly if numerous interoperable platforms existed. Its effectiveness would depend on technical feasibility, on whether operators would still invest in the underlying infrastructure, and on whether network effects and economies of scale would make it difficult for new operators to compete.
Congress might also consider legislation unrelated to content moderation. For example, some Members have introduced bills seeking to promote digital literacy, which might empower users to make informed decisions about their use of social media platforms.121 This might improve users’ interactions on social media platforms, but platforms might continue to promote harmful content or impede free expression due to their use of algorithms or their content moderation practices.
Some overarching questions regarding content moderation practices on social media platforms include the following:
• What actions, if any, might Congress or the executive branch take to address social media operators’ content moderation practices?
• What is the appropriate balance between free expression and preventing objectionable content that might cause harm?
• If action to address the spread of objectionable content and promote free expression is deemed necessary, which institutions—public or private—are to bear the primary responsibility for it?
• Who is to determine whether certain content is objectionable?
If Congress chooses to address social media operators’ content moderation practices, it might consider the intended scope of proposed actions; under what conditions they would be applied; and the range of potential legal, social, and economic consequences, both intended and unintended, that may result. It might consider whether any potential action would impose costs, monetary or otherwise, that further entrench the market power of incumbent operators. It might also consider how U.S. actions, such as regulating social media companies’ content moderation practices, would align with an international legal and regulatory framework.
119 For example, see Tom Wheeler et al., New Digital Realities, New Oversight Solutions in the U.S.: The Case for a Digital Platform Agency and a New Approach to Regulatory Oversight, Harvard Kennedy School Shorenstein Center on Media, Politics, and Public Policy, August 2020, https://shorensteincenter.org/wp-content/uploads/2020/08/New- Digital-Realities_August-2020.pdf; and Daphne Keller, “Who Do You Sue? State and Platform Hybrid Power Over Online Speech,” Hoover Institution, Aegis Series Paper no. 1902, January 29, 2019, https://assets.documentcloud.org/ documents/5735692/Who-Do-You-Sue-State-and-Platform-Hybrid-Power.pdf (hereinafter Keller, “Who Do You Sue?,” 2019).
120 Keller, “Who Do You Sue?,” 2019.
121 For example, see Digital Citizenship and Media Literacy Act (H.R. 9584, 118th Congress) and Investing in Digital Skills Act (S. 4391, 118th Congress).
Table A-1. Selected Legislation Related to Content Moderation Practices of Social Media Platforms
Introduced in the 118th Congress
Bill No. Title Summary
H.R. 573 CASE-IT Act
This bill would have limited Section 230 immunity for a user or provider of an interactive computer service based on certain content moderation decisions. The bill would have removed Section 230 immunity from being treated as the publisher of information provided by another content provider for one year if a user or provider facilitates (1) illegal online content; (2) certain exploitive contact between adults and minors; or (3) content that is indecent, obscene, or otherwise harmful to minors. Further, to retain Section 230 immunity, an interactive computer service that is dominant in its market (i.e., has gained substantial, sustained market power over any competitors) would have had to make content moderation decisions pursuant to policies or practices that are consistent with the First Amendment. However, the bill would not have limited the application of Section 230(c)(2)(B) immunity for actions taken to enable or make available the technical means to restrict access to objectionable material.
H.R. 1231; S. 560 SAFE TECH Act
This bill would have limited Section 230 immunity to claims arising from third-party speech rather than third-party information. Additionally, Section 230 immunity would not have applied if a user or provider (1) accepts payment to make the speech available, or (2) creates or funds (in whole or in part) the speech. The bill would have changed legal procedures concerning Section 230 by (1) requiring a defendant in a lawsuit to raise Section 230 as an affirmative defense, and (2) placing the burden of proving that the defense applies on the defendant. Some courts have held that Section 230 bars claims for civil penalties and injunctive relief. The bill would have expressly excluded requests for injunctive relief arising from a provider’s failure to remove, restrict access to, or prevent dissemination of material likely to cause irreparable harm. However, the bill would have protected a provider from liability for actions taken to comply with such injunctions. Under current law, Section 230 does not apply to federal criminal law, intellectual property law, and other designated areas of law. The bill would have added exceptions for civil rights law; antitrust law; stalking, harassment, or intimidation laws; international human rights law; and civil actions for wrongful death.
H.R. 2635 The Big-Tech Accountability Act of 2023
This bill would have removed Section 230 immunity for providers of social media services (e.g., Facebook and TikTok). Additionally, the bill would have prohibited a provider of social media services from suspending or otherwise restricting the account of a U.S. citizen based on that citizen’s social, political, or religious status. The prohibition would have applied even if the citizen clearly violated the provider’s policies related to hate speech, sexual harassment, discrimination, or violent or threatening speech. A provider that violated the prohibition would have been subject to civil penalties.
H.R. 4624; S. 2325 Algorithmic Justice and Online Platform Transparency Act
This bill would have established requirements for certain commercial online platforms (e.g., social media sites) that withhold or promote content through algorithms and related computational processes that use personal information. The platforms would have been required to
• make disclosures about their collection and use of personal information and their content moderation practices;
• retain specified records that describe how the algorithms use personal information and assess whether the algorithms produce disparate outcomes based on race and other demographic factors in terms of access to housing, employment, financial services, and related matters;
• employ algorithms safely and effectively; and
• allow users to access and transfer their personal information.
If a platform uses algorithms to publish or sell advertising, it would have been required to maintain a library of the advertisements. The Federal Trade Commission would have also been required to adopt rules concerning deceptive advertising. A platform’s chief executive officer or other senior officer would have been required to certify compliance with disclosure requirements. Additionally, platforms would have been prohibited from (1) employing algorithms or other design features that result in discrimination or similar harms based on demographic or biometric factors, or (2) processing information such that it impairs voting rights. Further, users of a platform would have been prohibited from using the platform’s algorithms to violate civil rights laws. The bill would have prohibited waivers or other methods that limit rights under the bill; provided whistleblower protections for individuals who report violations; and authorized enforcement by specified federal agencies, states, and private individuals. The bill would have also provided funding for an interagency task force to study the discriminatory use of personal information by platforms’ algorithms.
H.R. 4910 Deplatform Drug Dealers Act
This bill would have specified that Section 230 immunity does not apply to the illegal advertisement or distribution of controlled substances on the internet.
H.R. 7891; S. 1409 Kids Online Safety Act
This bill would have set out requirements to protect minors from online harms. The requirements would have applied to covered platforms, which are applications or services (e.g., social networks) that connect to the internet and are likely to be used by minors. However, the bill would have exempted internet service providers, email services, educational institutions, and other specified entities from the requirements. Covered platforms would have been required to take reasonable measures in the design and operation of products or services used by minors to prevent and mitigate certain harms that may arise from that use (e.g., sexual exploitation and online bullying). Additionally, covered platforms would have been required to provide (1) minors with certain safeguards, such as settings that restrict access to minors’ personal data; and (2) parents or guardians with tools to supervise minors’ use of a platform, such as control of privacy and account settings. Covered platforms would have also been required to
• disclose specified information, including details regarding the use of personalized recommendation systems and individual-specific advertising to minors;
• allow parents, guardians, minors, and schools to report certain harms;
• refrain from facilitating advertising of age-restricted products or services (e.g., tobacco and gambling) to minors; and
• annually report on foreseeable risks of harm to minors from using the platform.
Additionally, the bill would have required large websites, internet applications, and search engines (including social network sites), as determined by specified revenue, employment, or user criteria, to meet certain requirements before using algorithms that prioritize information furnished to the user based on user-specific data. For example, such platforms would have been required to (1) provide users with notice that the website uses such algorithms, and (2) make available a version of the platform that uses algorithms that do not prioritize information based on user data. The bill would have provided for enforcement through the Federal Trade Commission and states. Further, the bill would have required the commission to seek to contract with the National Academy of Sciences to study the risks of harm to minors from the use of social media and other online platforms. The bill would have established a council to advise on implementing the bill. It would have also required guidance for market and product research focused on minors and an evaluation of options to verify a user’s age.
S. 147 See Something, Say Something Online Act of 2023
This bill would have required a provider of an interactive computer service to submit an activity report to the Department of Justice if it detects the transmission of any post, message, comment, tag, or other user-generated content or transmission that commits, facilitates, incites, promotes, or otherwise assists the commission of a major crime. The activity report describing the transmission would have been required to contain (1) the name, location, and other identification information submitted by the user; (2) the date and nature of the user-generated content or transmission detected for suspicious activity; and (3) any relevant text, information, and metadata related to the suspicious transmission. If a provider fails to report a known suspicious transmission, the bill would have eliminated Section 230 immunity for claims related to that transmission.
S. 483 Internet PACT Act
This bill would have required providers of interactive computer services to publish their policies explaining the types of content that are permissible on their services and provide a system for users to submit complaints about content that may violate the policies or involve illegal content. Further, providers would have been required to establish a process for removing certain content that violates their policies and notifying the information content provider about the removal, including a mechanism to appeal the removal. Providers would have also been required to publish a report every six months that details the instances in which the company took action with respect to content, including removing content, deprioritizing content, and suspending content provider accounts. The bill would have removed Section 230 immunity for providers if the provider has actual knowledge of illegal content on its service and does not remove the content within specified time frames. The bill would have provided for enforcement of these requirements by the Federal Trade Commission.
S. 921 DISCOURSE Act
This bill would have limited Section 230 protections for a user or provider of an interactive computer service related to content provided by third parties. It would have also required a provider that offers its service through a mass-market offering to the public to disclose information about its content moderation activities. The bill would have removed Section 230 protection for a provider with a dominant market share if the provider
• promotes or suppresses a viewpoint through its content moderation, including by affecting a content creator’s revenue;
• uses automated processes (e.g., algorithms) to target and amplify content provided to a user who has not requested or searched for the content; or
• solicits, funds, modifies, or otherwise contributes to content.
Currently, a provider retains Section 230 immunity even when it restricts access to materials that it considers objectionable. Under this bill, a provider would have retained protections if restricted materials fall, based on an objectively reasonable belief, into a prescribed list of harmful or unlawful categories. Additionally, the Section 230 immunity would not have applied to providers that (1) restrict access to content in a manner that burdens the exercise of religion, or (2) fail to comply with an existing requirement to notify customers of options for limiting a minor’s access to harmful online content (e.g., parental controls). The bill would have also changed legal procedures related to the liability protections, including by specifying that the protection serves as an affirmative defense.
S. 941 Removing Section 230 Immunity for Official Accounts of Censoring Foreign Adversaries Act
This bill would have eliminated Section 230 immunity for particular social media platforms related to content generated or shared by adversarial foreign governments that restrict access to or censor social media platforms. The Department of State would have been required to compile a list of such governments, and the list must include China, Cuba, Iran, North Korea, Russia, Syria, and Venezuela. Under Section 230, a social media platform is generally not liable for content generated by third parties. Under this bill, if a social media platform knowingly hosts or distributes the content of a verified account controlled by or working on behalf of a listed government, the social media platform would have lost Section 230 immunity for that content. (A verified account is one that displays a badge or other identifier that indicates the authenticity or validity of the account holder or has more than 500,000 followers.) The provisions of the bill would have applied to any domestically headquartered internet website, application, or platform that (1) is open to the public, including citizens from any country; (2) primarily enables users to communicate with each other by posting information, comments, messages, or images; and (3) has more than 50 million monthly users in the United States. The provisions would not have applied to email services or services where content is preselected by the provider (i.e., not user generated) and any chat, comments, or interactive features that depend on the preselected content.
S. 1525 COLLUDE Act
This bill would have removed Section 230 immunity if a provider restricts access to or availability of content containing political speech because of a governmental request unless the request serves a legitimate law enforcement or national security purpose. In addition, the bill would have changed legal procedures for applying Section 230. Currently, Section 230 serves as broad immunity that typically allows the early dismissal of lawsuits, thereby preempting lawsuits and statutes that impose liability for third-party content. This bill would have made the protection an affirmative defense, which would have meant the provider or user must prove that the protection applies before the lawsuit may be dismissed.
S. 1671 Digital Platform Commission Act
This bill would have established a commission to regulate digital platforms. These are online services that facilitate interactions between users and between users and entities (including online services) that offer goods and services. The bill would have provided the commission with rulemaking, investigative, and related authorities to regulate access to, competition among, and consumer protections for digital platforms. This would have included setting standards for age verification and age-appropriate design. The bill would have also provided for administrative and judicial enforcement of the regulations. The commission would have been required to establish a council of technical experts, representatives of digital platforms, and other experts (e.g., representatives of nonprofit public interest groups and academics) to recommend standards for algorithmic processes and other policies. Additionally, the commission would have had the option to designate systemically important digital platforms. The bill would have included criteria for the commission to use when designating a platform as systemically important (e.g., whether its operations have significant nationwide economic, social, or political impacts). The bill would have also required that the commission receive pre-merger notifications concerning designated platforms. The commission would have been allowed to provide recommendations about such mergers to the Department of Justice and the Federal Trade Commission, and those agencies would have been required to give the recommendations substantial weight when reviewing such mergers. The bill would have also required the commission and any relevant federal agency to consult each other when investigating or regulating the effects of digital platforms on certain matters, including competition and consumer protection. The President would have been required to appoint an independent panel to evaluate the commission after five years and recommend whether to extend the commission.
S. 2314 PRESERVE Online Speech Act of 2023
This bill would have required interactive computer services (e.g., social media companies) to issue a public disclosure containing specified information related to a request or recommendation by a government entity that the service moderate content on its platform. Examples of such moderation include eliminating the ability of a user to comment upon information or terminating or limiting a user’s account. Failure to comply with this requirement would have resulted in a fine of $50,000 per day, which would have been deposited in the Rural Digital Opportunity Fund. The Federal Communications Commission would have been required to submit an annual report that included the contents of each such public disclosure.
S. 4213 Kids Off Social Media Act
This bill would have limited children’s access to social media platforms and required both platforms and schools to implement certain restrictions on children’s social media usage and screen time. Specifically, the bill would have prohibited social media platforms from knowingly allowing children under the age of 13 to create or maintain accounts. Platforms would have been required to delete existing accounts held by children and any personal data collected from child users. Platforms also would have generally been prohibited from using automated systems to suggest or promote content based on personal data collected from users under the age of 17. The bill would have directed the Federal Trade Commission to enforce these provisions. States would have been allowed to bring civil actions against platforms whose violations of these provisions had adversely affected residents of the state. Further, as a condition of receiving discounted telecommunications service under the Schools and Libraries Universal Service Support (E-Rate) program, schools would have been required to use blocking or filtering technology to prevent students from accessing social media platforms on school networks and devices. Schools receiving E-Rate support would have also been required to implement policies that specify permitted device usage and screen time by grade. Schools would have been required to submit copies of their internet safety and screen time policies to the Federal Communications Commission, and the commission would have been required to make those policies publicly available in a database. Under the bill, social media platforms would have been defined as consumer-facing sites that function primarily as forums for user-generated content. Some categories of online platforms would have been explicitly excluded, including sites that provide primarily videoconferencing, emailing, and educational services.
S. 4977 Digital Integrity in Democracy Act
This bill would have required large social media platforms to promptly remove from their sites false information about election logistics and voter eligibility. Specifically, platforms notified of potential false election information would have been required to investigate the veracity of the flagged information and, if it were false, remove it. Covered information includes false information about the time and place of, or voter eligibility for, an election. Platforms generally would have been required to remove false information within 48 hours of receipt of notification of its existence. If notification is received on the day of an election, including during an early or absentee voting period, platforms would have been required to remove the information within 24 hours. The Department of Justice would have been allowed to bring a civil suit against a social media platform that violated the timely removal requirement. States would have been allowed to bring suit against a platform if the false information at issue related to an election in the state, and candidates would have been allowed to bring suit against a platform if the candidate were aggrieved by the false information. Such suits would have been allowed to seek money damages and injunctive relief. The bill would have also specified that Section 230 immunity does not apply to false election information that is knowingly hosted on a social media platform. However, platforms that comply with the timely removal requirements with respect to false election information would have retained Section 230 immunity.
Source: Congress.gov. Notes: Bills are ordered by bill number, with the House bills listed first. The summaries are adapted from the summaries provided on Congress.gov, specifically by changing the tense used to discuss the bill and the phrasing used to discuss amendments to Section 230 for greater clarity (e.g., rewording “federal liability protection” as “Section 230 immunity”).
Clare Y. Cho
Specialist in Industrial Organization and Business Policy
Ling Zhu
Analyst in Telecommunications Policy
Jason Gallo, former CRS section manager, co-authored the original version of this product.
This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan shared staff to congressional committees and Members of Congress. It operates solely at the behest of and under the direction of Congress. Information in a CRS Report should not be relied upon for purposes other than public understanding of information that has been provided by CRS to Members of Congress in connection with CRS’s institutional role. CRS Reports, as a work of the United States Government, are not subject to copyright protection in the United States. Any CRS Report may be reproduced and distributed in its entirety without permission from CRS. However, as a CRS Report may include copyrighted images or material from a third party, you may need to obtain the permission of the copyright holder if you wish to copy or otherwise use copyrighted material.