Social networks battle to shut down racist abuse after England’s Euro Cup final loss

Bukayo Saka from England is comforted by head coach Gareth Southgate.

Laurence Griffiths / Getty Images

Sunday night was a moment of national heartbreak for England’s football fans, as the country’s team stood on the verge of winning its first major international tournament in over half a century before losing to Italy on penalties. It also became another ugly display of racism on social media, with some supporters hurling their anger and frustration at the three players who missed their penalties, all of whom happened to be black.

While England’s national team and manager Gareth Southgate made it clear the loss was one the entire squad shouldered together, some disgruntled supporters took to Twitter and Instagram to target Marcus Rashford, Jadon Sancho and Bukayo Saka specifically.


The vitriol posed a direct challenge to the social networks: an event-specific surge in hate speech that forced them to refocus their moderation efforts to contain the damage. It is just the latest incident requiring the platforms to be on alert around high-profile political or cultural events. While these companies have an established process that combines automated tools and human moderators to remove such content, the episode is another source of frustration for those who believe social networks aren’t responding quickly enough.

People shouldn’t have to “report” the racists on these footballers’ Instagram. Someone at Instagram should be scrolling through the accounts and just deleting any that leave hideous comments. Why are they always only reactive, expecting normal users to do the job?

– Dr. Panti Bliss-Cabrera (@PantiBliss) July 12, 2021

To fill the gaps, the companies rely on users to report content that violates their guidelines. After Sunday’s game, many users shared tips and guidance on how best to report abusive content, both to the platforms and to the police. It was disheartening for those same users to then learn that a company’s moderation technology had found nothing wrong with the racist abuse they flagged.

Many users also wondered why a multibillion-dollar company like Facebook was unprepared and ill-equipped to deal with an easily anticipated influx of racist content, leaving it instead to unpaid, concerned users to report.

No gray areas when it comes to racism

For social media companies, moderation can fall into a gray area between protecting freedom of expression and protecting users from hate speech. In these cases, they need to assess whether user content violates their own platform guidelines. But that wasn’t one of those gray areas.

Racial abuse is classified as a hate crime in the UK, and London’s Met Police said in a statement it would investigate incidents that happened online after the game. In a follow-up email, a Met spokesman said the abuse cases would be examined by the Home Office and then passed on to local police forces to deal with.

Twitter “quickly” removed over 1,000 tweets through a combination of machine automation and human review, a spokesperson said in a statement. The company also permanently suspended “a number” of accounts, “the vast majority” of which it detected proactively itself. “The abhorrent racist abuse directed at England players last night has absolutely no place on Twitter,” the spokesperson said.

Meanwhile, frustration built among Instagram users who had identified and reported abusive content posted on the accounts of black players.

Woke up in Glasgow to a series of responses from @instagram to the posts I reported last night.

Every single one of them contained racist slurs or clearly offensive emojis. Computer said they were fine. pic.twitter.com/4pOQUMbv5Q

– lism. (@lastyearsgirl_) July 12, 2021

Under Instagram’s guidelines, using emojis to attack people based on protected characteristics, including race, violates the company’s hate speech policies. Human moderators who work for the company take context into account when reviewing emoji usage.

But in many of the instances Instagram users reported, in which the platform didn’t remove monkey emojis, the reviews apparently weren’t conducted by human moderators. Instead, the reports were processed by the company’s automated software, which told users: “Our technology has determined that this comment is unlikely to violate our community guidelines.”

An Instagram spokeswoman said in a statement that “no one should have to experience racist abuse anywhere, and we don’t want it on Instagram.”

“We quickly removed comments and accounts directing abuse at England’s footballers last night, and we will continue to take action against those who break our rules,” she added. “In addition to our work to remove this content, we recommend that all players enable Hidden Words, a tool that means no one has to see abuse in their comments or DMs. No one is going to fix this challenge overnight, but we have a duty to protect our community from abuse.”

The problem of racism in football meets the problem of moderation in technology

The social media companies shouldn’t have been surprised by the reaction.

Football professionals had felt the strain of online racial abuse well before Sunday’s game. In April, England’s Football Association organized a social media boycott “in response to the ongoing and sustained discriminatory abuse received online by players and many others connected to football.”

The racism problem in English football is not new. In 1993, the issue pushed the Football Association, the Premier League and the Professional Footballers’ Association to launch Kick It Out, an anti-racism campaign that became a full-fledged organization in 1997. Under Southgate’s leadership, the current England squad has embraced anti-racism more vocally than ever, taking the knee before games in support of the Black Lives Matter movement. Nevertheless, racism persists in the sport, both online and offline.

On Monday, the Football Association strongly condemned the online abuse after Sunday’s game, saying it was “appalled” by the racism against players. “We couldn’t be more clear that anyone behind such disgusting behavior is not welcome to follow the team,” it said. “We will do everything we can to support the players affected and at the same time demand the toughest penalties for all those responsible.”

Social media users, politicians and human rights organizations are calling for tougher measures to combat online abuse, and for perpetrators of racist abuse to be prosecuted just as they would be offline. As part of its “No Yellow Cards” campaign, the Center for Countering Digital Hate is calling on platforms to impose lifetime bans on users who post racist abuse.

In the UK, the government has pursued regulation, in the form of the Online Safety Bill, that would force tech companies to take more vigorous action against harmful content, including racist abuse. But it has also been criticized for moving too slowly to bring the law into effect.

We couldn’t be more proud of this @England team.

It is absolutely appalling and totally unacceptable to see some of the England players subjected to yet more racist abuse on social media.

Our full statement below pic.twitter.com/ET2EJrF9Mu

– Kick It Out (@kickitout) July 12, 2021

Tony Burnett, CEO of Kick It Out (an organization publicly backed by both Facebook and Twitter), said in a statement Monday that both social media companies and the government must step up to prevent racist abuse online. His words were echoed by Julian Knight, Member of Parliament and chair of the Digital, Culture, Media and Sport Committee.

“The government must get on with legislating for the tech giants,” Knight said in a statement. “Enough of dragging their feet; everyone suffering at the hands of racists, not just England players, deserves better protection now.”

Under mounting pressure to act, the social networks have stepped up their own moderation efforts and developed new tools, with varying degrees of success. The companies track and measure their own progress; Facebook uses its independent Oversight Board to evaluate its performance.

However, critics of the social networks also point out that the way their business models are structured gives them little incentive to discourage racism. Any engagement boosts advertising revenue, they argue, even if that engagement consists of people liking and commenting on racist posts.

“Facebook has made content moderation hard by setting up and then ignoring its opaque rules, and by amplifying harassment and hate to pump up its stock price,” former Reddit CEO Ellen Pao said on Twitter on Monday. “Negative PR is forcing them to fight racism that has been on their platform from the start. I hope they actually fix it.”
