Broadcast indecency could provide a path forward for social media regulation

The troubling revelations from Facebook whistleblower Frances Haugen continue to provide new insights into the platform's failures to police bad actors and moderate harmful content. As a result, the broader question of whether and how government intervention in platform governance should proceed remains an ongoing topic of discussion, with the past week bringing a series of Congressional hearings on the matter.

In the US, however, given our robust First Amendment case law, these deliberations are unlikely to lead to action, since both disinformation and hate speech retain full constitutional protection. Furthermore, the unprecedented hostility the Trump administration displayed toward the news media serves as a haunting reminder of why we should be wary of any new form of government intervention in the media sector. Still, if the ultimate cost-benefit analysis shows that our commitment to First Amendment absolutism is actually undermining the democracy the amendment is designed to protect, then perhaps some rethinking of how speech on social media is treated is warranted. One possible starting point for this rethinking is how we regulate indecency in broadcasting.

Why indecency? And why broadcasting? Because broadcast indecency represents the only instance in which the Federal Communications Commission (FCC) and the Supreme Court have sanctioned the creation of a speech category that, from a regulatory and legal standpoint, exists only within the confines of a particular medium. Unlike obscenity, for example, which is an unprotected category of speech regardless of its means of dissemination, indecency receives diminished protection only in the context of broadcasting. In no other communicative context are there federal restrictions on indecency. This is reflected in the FCC's definition of indecency: "language or material that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards for the broadcast medium, sexual or excretory organs or activities" (emphasis added).

What relevance could a speech category developed for an aging and increasingly marginal medium have for the question of social media regulation? In terms of content, not much. Indecency, as defined above, sits at best on the periphery of the concerns surrounding social media, where hate speech and disinformation have emerged as the most pressing and consequential problem areas.

What is potentially relevant, however – and perhaps worth considering – is the underlying idea that one or more distinct speech categories could be carved out exclusively for the context of social media, just as indecency is a regulable speech category exclusively within the broadcast context. Could we imagine a regulatory environment in which hate speech and disinformation remain largely unregulated, except in the specific and narrow context of social media? This is a question I have explored as part of a larger research program examining whether regulatory frameworks and rationales developed for traditional media can offer useful insights to guide our approach to social media platforms.

Under the model of media regulation developed in the US, such differential treatment of speech within the exclusive context of a particular medium must be accompanied by a valid rationale. In other words, what makes a medium so distinctive that it merits different treatment from a regulatory standpoint? In the broadcasting context, the two primary rationales for regulating indecency were: 1) that broadcasters use a scarce public resource (the broadcast spectrum) and, as public trustees of that resource, can be subject to a diminished level of First Amendment protection; and 2) that broadcasting is "uniquely pervasive," and that this pervasiveness justifies treating broadcasting differently from other media. It is worth noting that the Supreme Court rejected subsequent efforts by regulators to apply the indecency standard to telephony, cable television, and the Internet.

I have argued at length elsewhere that social media platforms, like broadcasters, use a public resource (in this case, aggregated user data) and should therefore be treated as public trustees – much as broadcasters must adhere to certain conditions in return for their access to the publicly owned broadcast spectrum. Whether social media platforms are "uniquely pervasive" in the way the FCC and the Supreme Court once held broadcasting to be is another question. The two media do share the key criteria the Supreme Court relied on in characterizing broadcasting as uniquely pervasive: both are free, widely available, easily accessible, and operate in such a way that unexpected, inadvertent exposure to harmful content is a genuine possibility.

Imagine for a moment that one of these rationales for treating social media differently from other media gained traction. Might it then make sense, in this narrow but deeply consequential context, to give government a more active oversight role over certain speech categories? That more active role need not involve the kind of case-by-case adjudication and intervention associated with broadcast indecency, but rather, as advocated by scholars such as Mark MacCarthy, some degree of accountability to a federal agency for compliance with independently measured effectiveness thresholds for platforms' systems for moderating disinformation and hate speech.

Certainly, the problems of hate speech and disinformation are not confined to social media. Hate speech and disinformation come from traditional and partisan news outlets, politicians and political organizations, activists, and bad actors of all kinds. Still, social media is a key mechanism through which the voices of all these actors are amplified, as platforms like Facebook, Instagram, YouTube, and Twitter have emerged as some of the most pervasive, unfiltered, and engaging media outlets we have ever known.

The bottom line is that a precedent exists for constructing less protected speech categories confined solely to the boundaries of a particular medium. As we continue to grapple with the destabilizing effects of disinformation and hate speech, should policymakers and courts consider crafting explicit definitions of these speech categories that apply only in the context of social media? Could this provide a path forward for some form of intervention that would better enable the government to hold these platforms accountable for the content they host and distribute? This could certainly be viewed as an extreme and ultimately misguided approach to the problems of hate speech and disinformation, but given the scale of the challenges we currently face, it may at least be a conversation worth having.

Meta, the parent company of Facebook and Instagram, and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions in this piece are solely those of the author and are not influenced by any donation.
