Clarence Thomas and the enigma of social media

America owes Clarence Thomas a debt of gratitude.

In a 12-page concurrence with an order dismissing a case that arose from Donald Trump’s decision to block certain people from his Twitter account while he was president, Thomas raised a number of broad legal and philosophical questions about the nature of social media and how the law should treat them. His answers to those questions are mostly wrong. But that does not mean his arguments should be dismissed. On the contrary, he deserves our thanks for clarifying just how novel social media really are – and how imperfectly our habits of thought and the existing law apply to them and to the dilemmas they pose for our body politic.

Since the shock of the 2016 election and the realization that online disinformation and radicalization played a significant role in its outcome, both Washington and Silicon Valley have recognized the need for digital platforms to more actively monitor and control the content published and promoted on them. That conviction strengthened in the run-up to the 2020 election and reached a peak in the turmoil of the weeks that followed, culminating in the insurrectionary violence at the Capitol on January 6 and the subsequent decisions by Twitter and Facebook to ban the social media accounts of the outgoing president.

In other words, the emerging consensus seems to be that social media companies need to see themselves as editors, making millions upon millions of decisions every day about what to accept for publication. That process currently goes by the name of content moderation. Section 230 (of Title 47 of the U.S. Code) has always drawn a line between publishers and moderators of content, so that, with few exceptions, online companies are not legally liable for what third parties post on their sites. It also shields those companies from legal liability for removing third-party material they deem offensive or obscene. What is happening now is that the categories of the offensive and the obscene are being expanded dramatically to encompass much broader forms of speech and expression, including fake news, lies, hatred, gaslighting, and incitement, with companies like Twitter, Facebook, Google, and Amazon empowered to make the countless judgment calls about how, when, and against whom the restrictions should be applied.

This is where Clarence Thomas comes in.

In his concurrence, Thomas acknowledges that, in addition to the special immunity Section 230 grants websites and digital platforms, private companies generally enjoy a freedom, grounded in the First Amendment, to disassociate themselves from speech they disapprove of. The government usually cannot force social media companies to publish ideas or host people they do not want to be associated with, any more than it can force a newspaper or magazine to publish particular ideas or writers against its will.

There are important exceptions to this rule, however – and Thomas suggests there is a strong argument that social media platforms should be treated as such an exception, mainly because of their extraordinary size and power and the crucial role they play in our public life. As Thomas points out, social media companies provide “avenues for historically unprecedented amounts of speech, including speech by government actors.”

This enormous control over speech, Thomas argues, makes these private corporations look very much like political gatekeepers, defining what can be said and who can say it at the national level. That is a power an order of magnitude greater than what The New York Times exercises when it decides whether to run an op-ed in its pages or on its website, or what Fox News exercises when it decides whether to invite a guest onto one of its prime-time programs. A more apt analogy would be a company that hands every citizen a microphone that serves as a prerequisite for full citizenship and political participation – and then selectively exercises the power to switch it off for certain people or groups whenever the company decides doing so is warranted.

We saw an example of that power wielded in the weeks leading up to the 2020 election, when Twitter and then Facebook blocked the New York Post’s story about Hunter Biden, the troubled son of the Democratic candidate for president, on the grounds that it was a sleazy hit job. We saw another example when, following the January 6 riot, several social media companies banned Trump from their platforms in the final weeks of his presidency. (That restriction on the now-former president remains in effect today, three months later.)

In these cases, social media companies exercised powers of control over political speech that far exceed those of the New York Post or even the president of the United States. Like many liberals and progressives, I genuinely enjoyed not having to endure a torrent of Trump tweets filled with incendiary provocations and lies about election fraud during his administration’s final ten days, and I think muting him in that post-insurrection period, as the nation prepared to inaugurate his successor, was very good for the country. The decision to do so, however, was made solely by a small handful of private companies acting with virtually no political oversight or democratic accountability.

That those on my side of the country’s deepest political divide enjoyed the exercise of this tremendous power in these particular cases should not lead us to ignore its ominous implications. Who exactly is running the political show in this country (and the world)? And where are the limits of their power?

In his concurrence, Thomas suggests a number of different ways we might begin thinking about social media companies, all of which point in the same general direction. In some passages, he claims such companies resemble businesses that provide essential public services – a pipeline, a communications network, a utility. What those businesses have in common is that government regulates them strictly in a range of ways, because their activities directly affect the public interest. In other sections of the concurrence, however, Thomas suggests social media companies are more like hotels or restaurants, businesses expected to serve all comers equally and without discrimination. In legal terms, the first group of companies are called common carriers; the second are places of public accommodation.

I find all of Thomas’ analogies rather strained. (I’ll explain why in a moment.) But I think it is possible to distill from his various comments a metaphor that captures how he sees social media companies and their role in our politics. He treats Twitter, Facebook, Google, and Amazon as gigantic public bulletin boards on which the vast majority of Americans regularly communicate, share information, conduct commerce, and express political opinions. On this model, the boards’ owners currently decide for themselves who may post on them and what they may say when they do – which amounts to giving those owners a government-like power to set the rules of the political game. Rather than granting them that kind of power over our public life, Thomas suggests regulating the bulletin boards along the lines of anti-discrimination law, so that no one is banned and everyone is welcome to compete in overlapping marketplaces of information, commerce, ideas, and opinions.

There are a number of problems with this hyper-libertarian vision of online life, but I want to focus on just one: social media platforms are very different from gigantic bulletin boards (and from pipelines, utilities, hotels, and restaurants). What we see on the platforms is not a neutral reflection of what users post to them. It is the product of an interaction between what people post and the complex, proprietary algorithms the companies design and control.

It is not even accurate to say that each platform amounts to a single bulletin board. On the contrary, each of us sees a board of our own, curated just for us. Its precise content, placement, and ranking are determined by our previous interactions with the platform and by the algorithm’s best efforts to anticipate our wants and desires, hopes and needs.

To protest the prospect of more deliberate (and deliberately political) content moderation is to wake up rather late to the problem. Twitter, Facebook, and the other social media companies could swear tomorrow never to deplatform another user or delete another politically controversial post, and they would still be massively involved in moderating, tailoring, and manipulating every bit of content we see when we visit their sites. Our news feeds, searches, scrolling, and potential purchases would continue to be curated individually for each of us, to keep us engaged and clicking as long as possible.

The problem with Thomas’s analysis is that it is not radical enough. He is right to see the political danger social media poses – and right to notice the many ways the megabusinesses of this sector differ categorically from other kinds of companies – but he misjudges the true character and scope of the threat. These companies are not merely enormous common carriers, and they cannot simply be treated as nationwide places of public accommodation. They are something different and new, they pose unique and novel challenges to democracies around the world, and we will need new kinds of laws and regulations to deal with them.

Whether we determine that the right response requires developing new, technology-specific forms of regulation, or ultimately decide that the companies need to be broken up, Clarence Thomas deserves credit for being the one of the nine Supreme Court justices to bring pointed and fruitful questions to bear on one of the most pressing problems of our time.