Gonzalez v. Google Might Forever Change Content Regulation

This week, the Biden administration filed a brief siding with the plaintiffs, and against big tech companies, in the high-profile Supreme Court case Gonzalez v. Google. The highest court in the country is expected to rule on Section 230 of the Communications Decency Act for the first time ever, sometime in February 2023, and its decision could have massive consequences for content platforms and aggregators industry-wide.

The 1996 Communications Decency Act (or CDA) was among Congress’ first attempts to regulate internet material, and it established a number of important definitions and precedents that have essentially guided the development of online technologies ever since. In particular, Section 230 of the CDA shields online platforms from liability for content that isn’t posted by their own staff or in-house teams (such as user-generated submissions). It’s been interpreted to mean that internet companies relying on content from users – like Twitter, YouTube, TikTok, and Pinterest – aren’t themselves publishers, and therefore can’t be held legally responsible for the ways that individuals use their platforms, or for the content that they post.

In 2020, the famously tech industry-averse Justice Thomas specifically recommended that the Court revisit Section 230 and consider narrowing its scope. Last year, while ruling on the question of whether President Trump violated Americans’ First Amendment rights by blocking them on Twitter, Thomas once more appeared to attack Section 230 and the current state of tech industry regulation more broadly, arguing that “applying old doctrines to new digital platforms is rarely straightforward” and questioning whether social media companies should be federally regulated in the fashion of telephone carriers.

What does this mean for internet content?

The Court now looks ready to take on these issues specifically, and potentially alter the ways in which internet content is published and regulated on a fundamental level. Gonzalez v. Google specifically delves into the question of whether Section 230 protects platforms when their own internal algorithms recommend third-party content. The plaintiffs are the family members of American citizen Nohemi Gonzalez, who was killed in an ISIS terrorist attack on a Paris bistro in 2015. Lawyers for the Gonzalez family argue that YouTube, and its parent company Google, recommended ISIS recruitment videos via its algorithms, and thus specifically helped to promote the terrorist organization and its cause, making them responsible for the violent consequences. In this week’s brief, Biden’s solicitor general further made the case that, by sharing ISIS videos algorithmically, Google violated the 1990 Antiterrorism Act, directly aiding a foreign adversary of the US government.

What have the courts said in the past?

Thus far, the courts have maintained that Section 230 does indeed shield Google from liability, provided its algorithms treated ISIS videos in exactly the same fashion as other user-generated content. Some 9th Circuit judges dissented, arguing that curation is a fundamentally different activity from publishing alone, and that destinations such as YouTube and Twitter should be forced to consider the potential harm of content they help to distribute.

So far, most judges and court-watchers have fallen short of calling for a complete re-evaluation of Section 230. The rule has been so potent a force in the development of the tech and media industries that author and law professor Jeff Kosseff calls the language of Section 230 “the 26 words that created the internet.” Understandably, courts have been hesitant to alter its interpretation, or revoke it entirely, due to the potentially far-reaching consequences of such a decision.

Still, both sides of the political aisle currently have concerns about how speech is regulated and potentially censored online, making the present moment ripe for revisiting Section 230. Conservatives are chiefly concerned with alleged censorship by left-leaning platforms, which could be addressed by fundamentally altering the expectations for how content sites and social media apps curate and present content. Liberals and progressives, on the other hand, largely want more tools to limit the spread of disinformation and protect online users from harassment and trolling campaigns.

How might the decision impact tech companies?

The potential impact of changing our interpretation of Section 230, or of eliminating the provision entirely, could go much further. Many tech companies have argued that, should the rules change, they’d invariably become more conservative and restrictive about what kinds of content they allow on their sites, out of fear of potential legal repercussions. Some kinds of now-permissible content – such as health and medical advice regarding birth control or vaccinations, for example – could also become legal footballs moving forward, with special interest and activist groups trying to combat political messaging they don’t like via the courts.

Further, if the Supreme Court rules that algorithmic promotion of content isn’t protected by Section 230, platforms might theoretically have to abandon automated curation entirely, or risk the systems promoting content for which they’re now legally liable. For some types of sites that rely on potentially divisive or controversial user submissions – such as the “employer review” aggregator Glassdoor – a change to Section 230 could impact their entire business model. Could a company host negative reviews of an individual boss if those reviews must meet a journalistic publication’s legal standard for accuracy? As of February, questions like these may no longer be purely rhetorical. — Lon Harris