How Trump and covid-19 made social media “censorship” a partisan concern

Online content moderation wasn’t always a partisan issue. That began to change in 2016.

October 9, 2022 at 7:00 a.m. EDT

How social media ‘censorship’ became a front line in the culture war (Lucy Naland/The Washington Post)

Early last year, amid mounting criticism that social media was spreading disinformation about covid-19, Facebook expanded an unprecedented campaign to police falsehoods by banning what it called “debunked claims” about the virus. Among them: The claim that covid was “man-made” and had leaked from a lab in Wuhan, China.

To the Biden administration and the scientific establishment, Facebook’s efforts to rein in misinformation were too little, too late, given how its network had helped false and damaging claims to go viral in the first place. But others complained that the crackdowns squelched legitimate debate about the frustrating performance of public health authorities — a view that was partly vindicated when China’s lack of transparency prompted prominent scientists to declare the lab-leak theory “viable” and demand further investigation.

In May 2021, barely three months after it imposed the ban, Facebook backtracked: “In light of ongoing investigations into the origin of Covid-19 and in consultation with public health experts,” the company said, “we will no longer remove the claim that Covid-19 is man-made from our apps.”

What people can and can’t say online — and the role of Big Tech in making those calls — has emerged as a critical fault line in American politics. The left calls for content moderation to tamp down disinformation, racism and misogyny. The right decries that as censorship and demands unfettered free speech.

In recent months, several flash points have brought this battle to the fore. Last week, the Supreme Court agreed to hear a case that accuses YouTube of abetting terrorism by recommending content from the Islamic State. Last month, a federal court upheld a Texas law that would prevent social media platforms from removing or limiting posts on the basis of political viewpoint.

Meanwhile, the world’s richest man, Elon Musk, is pushing to close a deal that would give him sole control of Twitter, whose decision to ban President Donald Trump after the attack on the U.S. Capitol on Jan. 6, 2021, has reverberated as perhaps the single most divisive act of content moderation in internet history. Musk has said he would reinstate Trump.

“We’re approaching a pivotal moment for online speech,” said Daphne Keller, who directs the Program on Platform Regulation at Stanford University’s Cyber Policy Center. “The political pressures on content moderation have increased tremendously.”

How online forums set and enforce rules for what users can post wasn’t always so divisive. When the consumer internet was born in the mid-1990s, lawmakers in both parties shared a desire to see American tech firms thrive. That consensus survived early battles over pornography, copyright infringement, breastfeeding photos and terrorist propaganda.


But as in so many realms of American society, the 2016 election marked the beginning of the end of that bipartisan comity.

Christopher Cox, a former Republican congressman from California, now serves on the board of NetChoice, a tech industry lobbying group that is fighting the Texas law. Cox said he can understand conservatives’ frustration with some of the platforms’ decisions, which he called an “abuse of power.”

But the remedy is not to give more power over speech to the state, he argued: “Politicians exercising control over the political speech of others is a very dangerous recipe.”

Protecting ‘the little guy’

In 1995, Cox helped craft the provision that paved the legal path for today’s internet giants to moderate online speech. At the time, the political stakes seemed so low that the national media barely noticed.

The consumer internet was just blossoming, with millions of Americans beginning to log on to services such as CompuServe, Prodigy and AOL. To the extent most lawmakers considered online speech at all, their chief concern was limiting the availability of pornography to minors.

Cox and Rep. Ron Wyden (D-Ore.) had a different concern. Earlier that year, a New York court had ruled in a libel case that Prodigy’s efforts to police its forums made it legally responsible for its users’ posts. Cox and Wyden worried the ruling would stifle the fledgling internet.


So they hashed out a statute that gave online service providers broad latitude to host, distribute and moderate content posted by users, without being held liable when users posted something unlawful. Part of a broader bill called the Communications Decency Act, it came to be known simply by its location in the statute: Section 230.

In a recent interview with The Washington Post, Wyden, now a senator, recalled that he saw the internet companies as “the little guy,” and wanted to give them leeway to develop their innovative technologies without being squashed by heavy-handed regulations. He thought empowering them to moderate their own sites would lead to a cleaner, safer internet without the need for government censorship of online speech.

Cox, in a separate interview, added: “The question is who’s in charge. There are going to be decisions made about what content is on these websites. Should the government be in charge of it? There are all sorts of reasons that would be a bad idea. It’s subject to all sorts of abuse.”

Early court decisions went on to interpret Section 230 even more broadly than Cox and Wyden had anticipated, establishing sweeping immunity for user-posted content. That set the stage for the rise of sites such as Yahoo, Google and MSN. Later came YouTube, which is owned by Google, and Facebook. They could host, aggregate and organize vast pools of user content without having to worry much, from a legal standpoint, about whether it might be false, hurtful or even dangerous.

The result was a potent business model that, compared with traditional media, dispensed with paid content creators in favor of unpaid ordinary users, and replaced paid editors with software algorithms designed to surface the most relevant, engaging or tantalizing content.

Yet the consumer internet was never an unfettered free-speech zone. The most successful online platforms discovered early that they had to make and enforce basic rules or they’d be overrun by pornography, spam, scams, harassment and hate speech — and that would be bad for business.

Even when an internet forum starts with a goal of allowing freewheeling discourse, “they quickly run into the inevitable fact that you have to moderate in order to have a commercially viable and user-friendly product,” said Evelyn Douek, a Stanford law professor who researches online speech regulations.


The need to screen and review millions of posts per day on sites like YouTube and Facebook gave rise to a shadow industry of commercial content moderation involving huge teams of workers spending their days making rapid-fire calls about whether to take down posts that users have flagged as offensive or obscene. To preserve the illusion of a “free-speech zone,” tech companies tend to distance themselves from that work, often outsourcing it to poorly compensated contractors in far-flung locales, said Sarah T. Roberts, author of “Behind the Screen: Content Moderation in the Shadows of Social Media.”

Even so, some decisions proved too thorny or consequential for tech companies to sweep under the rug.

In 2006, a shocking video appeared on the then-new YouTube: Grainy and shaky, the amateur footage showed deposed Iraqi president Saddam Hussein being hanged by members of the new Iraqi government, some of whom shouted insults in his final moments. The hanging had been closed to the media; the video exposed a vengeful and undignified execution at odds with official reports.

The decision of whether to leave the video up or take it down fell to Google’s deputy general counsel, a young lawyer named Nicole Wong. “What we ended up deciding was that the video of the execution was actually a historic moment, and it was actually important that it be shared and seen,” Wong said at a 2018 conference.

Two years later, an angry group of moms protested outside the Palo Alto offices of the young social media site Facebook, which had been taking down breastfeeding photos for violating its rule against nudity. The furor spurred Facebook to develop its first internal rule book for what users could and couldn’t post, drawing fine-grained, if somewhat arbitrary, distinctions between wholesome and prurient images, among other things.

Previous content policies had amounted to “If it makes you feel bad, take it down,” former safety lead Charlotte Willner said. In the absence of regulation of online content, she recalled, a guiding motivation was executives’ desire not to run afoul of powerful people, especially public officials who might try to sue or regulate them.


Despite occasional flare-ups, the big platforms cultivated an image as guardians of free speech abroad — one Twitter official boasted in 2012 that his firm was “the free-speech wing of the free-speech party” — while maintaining a studied political neutrality at home.

But as social media’s influence on politics and social mores grew, it became clearer that free speech for some users could mean real harm for others.

In 2014, large subcultures of angry, mostly male, gamers targeted a handful of women in the video-game industry and media with vicious, coordinated online threats, which at times spilled into real-world attacks. That movement, known as GamerGate, challenged tech companies’ claims to neutrality, because it pitted the free-speech claims of one group of users against the privacy and safety of others, said Tarleton Gillespie, a principal researcher at Microsoft and author of the book “Custodians of the Internet.” Neutrality, in this case, meant allowing the harassment to continue.

The illusion of social media’s neutrality with respect to partisan politics began to crumble two years later, with effects that are still reverberating.

In May 2016, the tech blog Gizmodo ran a story alleging that liberal Facebook employees were secretly suppressing news stories from right-leaning outlets in the social network’s influential “Trending” news section. While prioritizing mainstream news sources over overtly partisan outlets might seem reasonable to some, conservatives saw it as a “gotcha” moment that proved Silicon Valley tech giants were imposing their liberal values on their users.

Facebook CEO Mark Zuckerberg embarked on a high-profile apology tour, meeting personally with top conservative politicians and pundits, ordering bias training for his employees, and laying off the journalists in charge of the Trending section.


While the company’s leader was busy doing damage control, however, his platform was being exploited in troubling new ways in the run-up to the November 2016 U.S. presidential election.

A cottage industry of fake news sites, some run by teenagers in Macedonia, was booming on Facebook as its fabricated articles — which often had a pro-Trump bent — sometimes received more likes and clicks than factual news reports.

It emerged after Trump’s election that profit wasn’t the only motive for the flood of manipulative political content on Facebook. Russian operatives had also been using fake accounts, groups and pages on the social network to spread polarizing content aimed at turning Americans against one another.

Attempts by Facebook employees to address both the fake news problem and Russian information operations were undermined, The Post later reported, by its leaders’ fear of further angering conservatives.

By 2017, many on the left had come to blame Facebook and social media for helping to elect Trump, and they pressed tech companies to take tougher stands against not only fake news but also the president’s own frequent falsehoods and racial provocations.


In response, tech companies that once prided themselves on their lean workforces, tacitly accepting some ugliness as the cost of doing business, began spending heavily on human content moderators. They developed software to help automate the process of flagging posts that might violate their increasingly complex rule books.

While their efforts swept up inflammatory posts by Trump’s more fervent supporters, Facebook and Twitter were loath to take action against Trump himself. Instead, they concocted various ad hoc exemptions to allow him to remain on their platforms.

Social media also shared blame for the rise of a more vocal and visible white supremacist movement, which used online forums to radicalize and recruit followers and to organize events such as the deadly “Unite the Right” rally in Charlottesville in 2017. To liberals, that reinforced the link between online speech and real-world violence, making content moderation literally a matter of life and death. That link would be driven home in 2018, as hate speech and lies about Muslims that spread on Facebook helped fuel a genocide in Myanmar against the country’s Rohingya minority.

At the same time, the right became increasingly suspicious of tech companies’ efforts to tackle those problems domestically, viewing them as censorial and politically motivated. The platforms’ actions against right-wing accounts and groups involved in the Charlottesville violence galvanized the far right to begin setting up its own “free speech” social networks, such as Andrew Torba’s Gab.

In what has since become a common rallying cry on the right, Sen. Ted Cruz (R-Tex.) criticized “large tech companies putting their thumb on the scales and skewing political and public discourse.”

By the end of 2017, an industry that had previously enjoyed widespread trust and popularity among Americans — the same industry Wyden had seen as “the little guy” in need of protection two decades earlier — had come to be known by left and right alike as “Big Tech.” The epithet, an echo of past crusades against Big Business, Big Banks, Big Tobacco and Big Pharma, conjured not only power but corruption, a force that needed to be reined in.

The first blow to Section 230 came in 2018, when Congress passed and Trump signed a bipartisan bill to fight online sex trafficking by removing the liability shield for sites that facilitated it, whether knowingly or not. Sites that hosted adult “personals” ads shut down altogether rather than face lawsuits, a change that many sex workers said made them less safe. No longer able to advertise and screen clients online, they returned to streetwalking to drum up business.


Since then, Congress has struggled to find a path forward. In 2018, Republicans held hearings investigating Facebook’s alleged suppression of pro-Trump influencers Diamond and Silk, while the left railed at social media’s role in the rise of conspiracy theorist Alex Jones. A month after Facebook said that banning Jones would run “contrary to the basic principles of free speech,” it did just that, responding to mounting pressure that also led Apple, Spotify, YouTube and, eventually, Twitter to ban him over his false claims that the 2012 school shooting in Newtown, Conn., was a hoax.

By that point, nobody believed that tech company policies were being enforced consistently — or objectively. So Facebook came up with a novel solution: a semi-independent, nonprofit review panel, called the Oversight Board, staffed with experts on freedom of expression and human rights from around the world.

By 2019, Trump allies such as Sen. Josh Hawley (R-Mo.) were calling for changes to Section 230 that would require platforms to be politically neutral to receive legal protections. “Google and Facebook should not be a law unto themselves,” Hawley said. “They should not be able to discriminate against conservatives.”

The outbreak of covid-19 in 2020 brought new tests for the platforms. Large swaths of the right, including Trump, rejected scientific guidance on how to stop the spread.

Again, the companies managed to infuriate both left and right. Their algorithms rewarded enticing yet unsubstantiated conspiracy theories, such as “Plandemic,” a viral video that advanced a slew of false claims about the virus’s origins, its spread and the safety of masks and vaccines. At the same time, their moderation systems — by now partly automated, with human moderators sent home because of covid restrictions — scrambled to remove such content under new policies prohibiting misinformation about the virus.


Meanwhile, the platforms were slowly getting tougher on Trump as he began to predict a “rigged election” and deride mail-in ballots as “fraudulent,” laying the groundwork for his attempt to dispute the results of the coming presidential election. The tech companies’ reluctance to penalize a sitting president was colliding with their policies against election misinformation — one of the few, narrow categories of falsehoods, along with covid-19 and vaccine misinformation, that they had vowed to police.

In May 2020, Twitter hid a Trump tweet behind a fact-checking label for the first time. The White House retaliated with an executive order directing the Federal Communications Commission to reinterpret Section 230 to weaken or remove tech platforms’ liability shield. (It didn’t.) Seven months later, Trump threatened to veto a bipartisan defense spending bill unless Congress first repealed Section 230. (It didn’t, and Congress later overrode his veto.)

A galvanizing moment for the right came in October 2020, just weeks before the election in which Democrat Joe Biden unseated Trump.

The New York Post, a right-leaning tabloid, published a story about illicit materials found on a laptop that had reportedly belonged to Biden’s son Hunter. Facebook and Twitter later said they had been warned just days earlier by federal authorities to be on the alert for foreign influence operations related to the election, including possible “hack-and-leak” maneuvers. Both reacted swiftly and aggressively, with Facebook using its algorithms to limit sharing of the Post’s story on its network. Twitter banned all links to the article and suspended the Post’s account.

The moves drew outrage from the right, whose leaders saw Silicon Valley tech companies wielding their power to bury a journalistic report from a major newspaper in what smelled like an attempt to help the Democratic candidate in the upcoming election. Even some critics from the left wondered if the platforms had overstepped by substituting their judgment for the editorial judgment of an established news organization — albeit a tabloid with some infamous missteps on its record.

While questions remain about the laptop story, a later investigation by The Washington Post appeared to validate at least part of the New York Post’s reporting. Twitter’s then-CEO, Jack Dorsey, eventually apologized for what he described as an honest mistake, while Facebook’s Mark Zuckerberg recently acknowledged his company got it wrong as well.

Content moderation in the crosshairs

The platforms finally suspended Trump after the Jan. 6, 2021, attack on the U.S. Capitol, on the grounds that his continued posts disputing the election risked inciting further violence. Twitter banned him permanently, and Facebook and YouTube issued indefinite suspensions.

The moves, which came only after Trump had lost his grip on power, reinforced the sense among critics on both sides that the tech companies were making up the rules as they went, with one finger held to the political winds and one eye on their rivals as each jostled to avoid sticking its neck out alone.


Between the Hunter Biden story and the Trump ban, some conservative leaders had seen enough. They were ready for the government to take back some of the power it had previously entrusted to internet companies.

In May 2021, Florida Gov. Ron DeSantis (R) signed a state law banning large social media platforms from “censoring” posts by elected officials, candidates for office or major news organizations. Texas followed with a law that went even further, preventing platforms from limiting the online speech of any Texan — not just politicians or news outlets — on the basis of political viewpoint. Both laws also required online platforms to be more transparent about their rules and the justifications behind their moderation decisions.

Numerous other states have drawn up similar bills, which could take effect if the Texas and Florida laws survive their ongoing legal challenges.

In a ruling that swept aside decades of precedent, the U.S. Court of Appeals for the 5th Circuit in September upheld Texas’s social media law, setting the stage for courts to reinterpret the First Amendment for the digital age.
