We Must Regulate Social Media Content Moderation, But We Can't Simply Eliminate Section 230(c)
The dangerous facilitation of disinformation and misinformation online is now a defining characteristic of social media platforms. Facebook, Twitter, TikTok, YouTube, and other platforms have faced controversy for abetting the rapid proliferation of political and Covid-19-related misinformation, climate denialism, and hate speech. While many countries have enacted or are working to pass legal frameworks to regulate the content moderation practices of social media companies, the United States has maintained a laissez-faire status quo, relying on platforms to act independently in accordance with their community guidelines. Yet in the wake of multiplying real-world consequences, self-regulation is no longer a tenable solution. The United States must explore possibilities for government regulation to strengthen and standardize content moderation practices.
Popular support for stronger social media content moderation emerged most notably in 2018, after a rapid increase in rumors and misinformation on Facebook prompted attacks on ethnic minorities in Myanmar and Sri Lanka. Public criticism of Facebook’s role in catalyzing real-world violence pressured the company to create more robust policies for removing misinformation and hate speech from its platform. More recently in the United States, the proliferation of disinformation and false claims concerning the 2020 presidential election and Covid-19 vaccinations has produced significant offline consequences. The January 6 Capitol insurrection—in which a violent mob of Donald Trump supporters seeking to delegitimize the 2020 election results clashed with police, trapped legislators, and vandalized the US Capitol Building—has been widely attributed to conspiracy theories, false claims of election fraud, and inflammatory content that surged uncontrollably on Facebook ahead of the attack. Likewise, Covid-19 conspiracy theories, anti-Asian hate speech, and falsehoods about the vaccine circulating on social media platforms like Facebook, TikTok, and Twitter have undermined vaccination uptake and fueled anti-Asian harassment and hate crimes both online and offline.
Social media companies have taken significant voluntary actions to crack down on the spread of such content. Facebook implemented “emergency” measures to remove misinformation before the 2020 presidential election and strengthened its moderation efforts to curb escalating calls for violent protests of the election outcome. Twitter and Facebook not only banned President Trump, but they also ramped up the stringency of their content moderation practices to address Covid-19-related disinformation by adding more extensive fact-checking and health warning labels; connecting users with credible, science-based information; and applying their community guidelines more rigorously. Yet these measures have been inconsistently enforced and remain riddled with loopholes that enable the continued spread of disinformation. While Twitter and Facebook have defended their moderation practices to lawmakers, those practices neither averted the Capitol insurrection nor effectively tamped down Covid-19 disinformation.
Social media companies’ shortcomings underscore the urgent need for government infrastructure that regulates content moderation practices. Moreover, widespread democratic support for government regulation materialized in the wake of the pandemic and has continued to grow. Yet particularly in the United States, where objectionable speech is legally protected, government regulation of content moderation confronts an almost irresolvable tension that legislation across the globe has failed to reconcile without controversy. How does one define and codify the parameters of “removable” or “impermissible” speech without encroaching upon freedom of expression? And who (or what entity) should be tasked with determining whether a specific piece of content falls within such boundaries?
Part of what makes regulation so difficult is the nature of social media platforms and the content itself. Social media companies differ from traditional media companies in that the content circulated on their platforms is user-generated. Because their platforms merely facilitate the spread of user-generated content, social media companies do not exercise the editorial oversight required of entities that publish their own content. Accordingly, Section 230 of the US Communications Decency Act generously shields social media companies from legal liability for the content that users post on their platforms. Since social media companies are not legally responsible for the speech of their users, they have no legal obligation to moderate content.
Section 230’s intermediary liability protections have made it the primary battleground for United States regulatory debate regarding social media content moderation. Lawmakers have introduced numerous proposals aimed at tackling online content moderation, many of which seek to erode or eliminate Section 230’s intermediary liability protections. Yet, importantly, while Section 230(c) provides social media companies with legal immunity from responsibility for their users’ content, it also protects them when they choose to moderate content. Under the second provision, Section 230(c)(2), social media companies are shielded from liability for engaging in content moderation aligned with their community standards and terms of service. Indeed, social media companies made extensive use of these protections following the Capitol insurrection and amid the Covid-19 pandemic, as noted above. Accordingly, eliminating Section 230 protections would not only undermine innovation and free discourse; it could also disincentivize social media companies from moderating content at all.
While Congress has held several hearings with Big Tech and social media company executives to investigate paths forward for regulation, these hearings have mainly revealed legislators’ lack of understanding of the industry. Furthermore, even as social media companies have defended their moderation practices, the technical difficulties of effectively moderating content have forced companies to outsource the task to algorithms and third parties. It is tough business to be the “custodians of the internet.”
Given these hurdles, US lawmakers, social media companies, and experts on today’s ubiquitous information environment must make a concerted effort to collaborate and think creatively about how to balance protections for online free discourse with the public interest in curbing dangerous information. Federal courts have established, and continue to establish, exceptions to Section 230’s liability shield, which may begin to determine which types of content social media companies must meaningfully moderate. Furthermore, lawmakers could develop standards of transparency and digital privacy that all social media platforms must meet to qualify for Section 230(c) protection. Ultimately, online discourse can no longer be governed by the regulatory frameworks of a non-digital age—our laws must adapt to our increasingly virtual world.