History explains why global content moderation cannot work
Messaging and social-media applications displayed on a smartphone on April 24, 2020 (Nicolas Economou/NurPhoto)
Social media platforms face an all but impossible challenge: generating standards for acceptable speech that transcend borders and apply universally. From nudity and sexual content to hate speech and violent material, digital platforms have tried to write rules and build content-moderation regimes that apply around the world. That these regimes have struggled to meet their goals, however, should come as no surprise: The global speech standards authored by online platforms are not the first attempt by tech innovators to write global rules for speech. Unfortunately, the history of such attempts does not bode well for contemporary efforts to build global content-moderation regimes. From telegraphic codes to the censorship of prurient material, the promise of globally consistent standards has long been plagued by important—and to some extent inevitable—linguistic and contextual differences.
The challenge of telegraphic codes
Conflicts regarding global content regulation are as old as electronic media. In 1865, the International Telegraph Union was founded to provide technical standards for the world’s growing network of telegraphs. In addition to its technical work, the ITU embarked on a project that would last six decades to create telegraph codes—short messages in which one word could convey an entire phrase of meaning. Mired in controversy over which languages and alphabets to include, the ITU’s grand ambitions to create a global language ultimately failed. This failure has important lessons for today’s social-media firms attempting to build similarly global content-moderation regimes.
From the 1870s until the interwar period, the ITU attempted to create standards for telegraphic codes—a vital project for a global communications system of highly limited bandwidth. Over decades of wrangling, delegates sought to create a system that could work for all languages. They tried everything from a syllable-based system to creating standard code words. But none could hold up to universal scrutiny. Every proposal rested on problematic exclusions of certain languages from the purportedly global codes—including all languages that did not use the Roman alphabet.
One proposed code in the early 1900s, for example, tried to create a code using syllables that would add up to pronounceable code words. But the code only included syllables from Romance and Germanic languages (as well as Latin). Other languages were excluded because some delegates asserted that they did not contain pronounceable syllables. One French representative gave the example of a Polish tongue-twister—“Chrząszcz brzmi w trzcinie” (“the beetle buzzes in the reed”)—which seemed unpronounceable to non-Slavic speakers and which justified excluding Slavic languages from a supposedly international code of pronounceable language. (This is a linguistically absurd statement, because, obviously, Polish speakers can pronounce Polish. Pronounceability is in the eye of the beholder!)
By 1928, the ITU abandoned its attempt to create a universal code language and allowed senders to use whatever codes they wished. The code just had to adhere to the technical standard of using words with a maximum of five letters. A global linguistic standard had proven impossible.
Censorship of sexual material
Though tech companies are now trying to formulate global content standards, at various points in history policymakers have pushed in the opposite direction, enshrining in law different standards for content in different places. Consider the historically disparate approaches European and American governments have taken toward censoring content related to sex.
While different European countries have taken different regulatory attitudes toward speech, their governments codified press regulation and constrained freedom of expression in certain ways. France and Germany promulgated new press laws in the 1880s that considerably expanded freedom of the press but still deemed some speech illegal, such as blasphemy in the case of Germany. After World War II and into the 1990s, such laws were amended to cover hate speech and denial of crimes against humanity. For understandable historical reasons, European regulation has focused more on incitement to violence.
Historically, the United States has taken a different approach. While the United States is seen today as something of a free-speech extremist, America has not always had the most liberal content policies, particularly when it comes to sex. Much of that attitude stemmed from legislators, but it also seeped into public attitudes and private businesses like publishing. In the 1870s, a U.S. Postal Inspector, Anthony Comstock, spearheaded an effort to allow the Post Office to seize and suppress obscene and immoral materials as well as prosecute authors and publications’ owners. In 1873, Congress passed the Comstock Act, which enabled such moral censorship. Under the Comstock Act, the U.S. Post Office could no longer deliver “obscene, lewd, or lascivious” material. That included information on abortion, contraception, and venereal disease. In some cases, even anatomy textbooks no longer made it through the mail.
Such suppression of content continued into the interwar period. In 1928, Mary Ware Dennett, an advocate for women’s suffrage and co-founder of the Voluntary Parenthood League, was fined for distributing “The Sex Side of Life.” Dennett’s fine was overturned after an appeal from the ACLU. Others, like Ida Craddock, the first female undergraduate at University of Pennsylvania, suffered worse fates. Craddock committed suicide in 1902 after multiple jail stints for her work that included sex manuals for married couples. Craddock’s suicide note named Comstock.
These content policies meant that a century ago, the United States was seen as the opposite of a bastion for free speech. In 1905, George Bernard Shaw opined that censorship under the Comstock Act was “the world’s standing joke at the expense of the United States. Europe likes to hear of such things. It confirms the deep-seated conviction of the Old World that America is a provincial place, a second-rate country-town civilization after all.” All this is a reminder that American legislators have actively intervened in speech for a large stretch of American history.
These were issues well into recent memory. Only in 1964 did the Supreme Court overturn the ban on publishing Henry Miller’s Tropic of Cancer, deeming the book to have some literary merits. Fred Kaplan has argued that this decision, amongst others, ended American obscenity restrictions and “set off an explosion of free speech.” But Miller’s work had been published in France thirty years earlier in 1934.
It was no coincidence that books like Miller’s were published in continental Europe. Historically, the United States has harbored greater concerns about sexual content, while Europe has been more concerned with hate speech or violence. While some of this remains regulated by law, cultural norms also play a role. Rare is the play in a Berlin theater that does not involve an actor getting naked.
Even in the era of television, differences continued. Like most European countries, the United States established rules regarding “indecent” programming on television. Unlike in Europe, however, contestations over indecency in the United States generated broad public outrage. In the early 2000s, the Federal Communications Commission (FCC) became increasingly concerned with indecency regulations, particularly after Janet Jackson’s “wardrobe malfunction” during 2004’s Super Bowl halftime show. Following that incident, the public submitted more than 1.4 million comments to the FCC, with many asking for harsher regulation. Such cries spurred the Broadcast Decency Enforcement Act of 2005, which raised the penalties for obscene, indecent, and profane language.
Today, such prudish attitudes regarding sex are replicated in content-moderation documents authored by American companies and applied globally. Women’s nipples are still censored on platforms like Instagram, a policy that continues to draw criticism from women who have had their posts removed on this basis. As recently as November, Madonna criticized Instagram for taking down a picture of her partially-exposed nipple, decrying the policy as “sexism… ageism and misogyny.” Various celebrity and activist campaigns organized around the #freethenipple hashtag have sought to raise awareness about this over the past decade.
In light of this history, it is unsurprising that the first chip at the protections of Section 230 of the Communications Decency Act involved sex. In 2018, Congress passed FOSTA/SESTA (the Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act). This amended the liability protections of Section 230, which mostly exempt online platforms from liability for content posted by users. FOSTA/SESTA established an exception that holds websites legally liable for any content that enables sex trafficking or prostitution. Websites like Backpage.com, which had been used by sex workers to advertise their services and screen clients, swiftly ceased to exist. Experts argue that measures like FOSTA/SESTA have made sex work more dangerous by forcing more of it onto the street without an online screening mechanism. Regardless of its merits, a measure like FOSTA/SESTA illustrates the basic impossibility of standardized global content regulation. The measure would make no sense in countries like the Netherlands, where consensual prostitution is legal.
By contrast, European speech regulation has focused on issues like hate speech and a doctrine of proportionality. Back in 2016, the European Union created a voluntary Code of Conduct on Hate Speech in conjunction with social media platforms. Frustration with such voluntary measures and concerns around “fake news” after the U.S. election of 2016 led Germany to promulgate its Network Enforcement Law (NetzDG) in 2017. In force since 2018, the law requires social media companies with more than two million unique users in Germany to adjudicate complaints about potentially illegal social media posts within 24 hours. Although NetzDG is often known as a hate speech law, it actually encompasses 22 statutes of German speech law, many of which had lain dormant since their creation in the 1880s. None involved sexually explicit content.
The challenge of global content moderation today
Historical attitudes to speech help to explain the foundations of social media companies’ global terms of service. When companies like Facebook first emerged, they did not think about content regulation in advance. Terms of service emerged in an ad hoc fashion, often in response to user complaints or controversies. One origin story lies in American sensitivities around nudity: Facebook only started to create more systematic terms of service in 2008 after disputes about whether it would allow photographs of breast-feeding mothers on the platform.
In the following years, social media platforms’ terms of service and content-moderation rules have grown into lengthy documents with thousands of content moderators around the world attempting to enforce the same rules everywhere. Such an approach has proven complicated. Companies often do not employ enough content moderators to filter content in many users’ languages. In countries like Myanmar and Ethiopia, Facebook’s content-moderation regime has failed to prevent the platform from being used to facilitate violence.
While problems in places like Ethiopia and Myanmar have often led to calls for deleting content, a lack of context can also lead to over-deletion. Since 2017, May 5 has been commemorated as a National Day of Awareness for Missing and Murdered Indigenous Women and Girls (MMIWG) in Canada. This May, Instagram was forced to apologize for deleting posts about MMIWG, apparently due to a processing error. Despite the contrition, many Indigenous women felt that this replicated their experiences of being long ignored and pushed aside. Emily Henderson, an Inuk writer based in Toronto, said that Instagram had not “adequately addressed that feeling of silence and erasure.”
Although the MMIWG issue may seem particular to Canada, it raises broader questions about how social media companies deal with content related to persecuted groups. In 2019, a national inquiry found that Canada had committed a “genocide” in perpetuating violence against Indigenous peoples over centuries. Understanding issues particular to certain places requires context. I certainly know far more about Indigenous history after living in Canada than I did before. If U.S.-based social media companies do not have the tools to deal with content related to a genocide in their neighbor to the north, it is perhaps not so surprising that they have not developed the tools to deal with content elsewhere.
To globalize U.S.-based speech norms, then, is to privilege a certain type of English and a certain very recent understanding of free speech. Some critics like Michael Kwet have decried the spread of these norms as a form of “digital colonialism.”
Partly as a response to the failure of global content-moderation regimes and out of a desire to exercise greater control over the online world, states—both democratic and authoritarian—are now intervening in speech as well. Russia has threatened to ban YouTube from the country if the company does not comply with its demands. The Indian government has requested take-downs from major social media companies, seemingly to cover up the truth about Covid-19 deaths, even leading to a raid on Twitter’s Delhi office. Germany’s NetzDG requires social media companies like Facebook to delete posts that contravene 22 statutes of German speech law within 24 hours or face financial penalties. Almost every country seems to be steaming ahead with its own content regulation. It is time to think more concretely about what this will mean for online speech in the coming decades as states create very different speech regimes.
Given these historical and contemporary struggles to globalize speech regulation, it is no wonder social media companies are having a hard time. Meta’s new Oversight Board, for example, is using international human rights law to try to create a global framework, but local implementation seems inevitable and, given history, predictable. Esperanto proved a fleeting dream of a universal world language in the late nineteenth century. So too will Big Tech’s aspirations for global terms of service prove fleeting.
Heidi Tworek is an associate professor at the University of British Columbia and a non-resident fellow at the German Marshall Fund of the United States.