IDing the issue of racial abuse on social media

In the age of surveillance capitalism, handing your driver’s license to big tech could become the new normal. Is this the dystopian future we are headed for, or a hard reminder of the bigotry we have yet to address?

Last week the stark reality of online abuse came into focus for those of us privileged enough to escape the wrath of those emboldened behind a keyboard.

The Euro 2020 final should have been a happy occasion. When England lost the game, many fans instead turned on the three players they held responsible for the defeat: Marcus Rashford, Jadon Sancho and Bukayo Saka, who received a series of racist messages online (not for the first time).

Despite previously criticizing anti-racist gestures by the England team, Priti Patel tweeted that there was “no room” for racist abuse in the UK. Even Boris Johnson made a statement in support of the players. The hypocritical remarks made by these leaders only pay lip service to the problem. Patel and Johnson have faced no real accountability for the racist remarks they have made in the past. If there are no consequences for them, why should the average person feel they are doing anything wrong?

The world has gotten a glimpse of what it means to be black online, and it has reinvigorated longstanding debates about how social media platforms can and should combat online harassment. One of the more extreme solutions gaining popularity is requiring users to submit photo ID when setting up a social media account.

In recent months the Irish government and opposition TDs have been calling for such a rule to be implemented. In the UK, Katie Price launched a petition on the matter which sparked a response from the UK government saying the solution could “disproportionately affect vulnerable users and affect freedom of expression”.

The general idea is this: when you sign up for a social media account, be it Twitter, Facebook, or even TikTok, you are required by law to submit photo ID of yourself.

This means that your account is inextricably linked to your identity. If your account harasses people, you can be permanently banned from the platform and you will not be able to open a new account.

“I think that’s a bad solution,” says Dr. Francesca Sobande, lecturer in digital media at Cardiff University.

Dr. Sobande believes that such a regulation could ultimately harm the very people it is trying to protect.

“When we think of the different groups of people who are most likely to be monitored or surveilled, or whose existence is seen as ‘deviant’ in society due to racism, sexism, ableism, xenophobia and transphobia, and we think about the different forms of surveillance that already exist, how might this ID information be used to further the surveillance of the most discriminated-against people?”

Whistleblowers, activists, political refugees and survivors of domestic violence are among those who often rely on anonymity on the internet. Forcing them to link their identity to their account could put them at risk.

“There’s an idea that efforts like this are only used to combat online abuse,” she says, “but the big tech success stories and pursuit of profit suggest otherwise.”

Matt Navarra, social media advisor and host of the Geek Out podcast, agrees.

“We have had this with Russia and other less trusted governments and bodies around the world who are likely to want to get this type of information,” explains Navarra.

“When everyone’s ID and information is stored on a social network, it’s just ripe for someone to hack into and take over those details and possibly identify people who were previously anonymous.”

“Again, it’s a question of practicality and feasibility,” adds Navarra. “Big platforms will definitely push that back.”

“Platforms like Facebook with two or three billion users – can you imagine the logistical task they have to go through in order to collect, store and manage the ID of each individual user?”

From a financial point of view, such measures may not be popular with platforms, as the solution entails high costs in terms of time, resources, data storage and more data security.

“Additionally, they will realize that many people will feel uncomfortable giving their photo ID to a large private company, and that will limit their growth potential for new users. This will have a massive impact on their bottom line in terms of advertising revenue, as there are fewer users and fewer new users,” says Navarra.

Other industry experts believe that profit is at the heart of the problem and that large platforms are actively disincentivized from addressing it.

“Racism creates user engagement. This increases advertising revenue through longer user sessions on the platform. It doesn’t matter whether the user likes or dislikes the racist content as long as they click and comment on it,” said Christopher Wylie, author, Canadian data consultant and Cambridge Analytica whistleblower, in a recent tweet.

Navarra sees it differently: he believes the political pressure faced by these massively profitable, influential platforms is enough to change them.

“Facebook, Twitter, any of these major platforms are not going to enjoy being the focus of online harassment discussions. There is nothing to be gained in this position,” he explains.

Navarra acknowledges that there has been a reluctance to adopt stricter rules and regulations in this regard.

“Apparently nobody wants to take responsibility. Governments do not want to make the rules and dictate to citizens what they can and cannot do or say online, nor do the platforms see it as their job to be what they call ‘the arbiter of truth’,” he says.

In February of this year, Twitter issued a statement that there was “no room for racist behavior” on the platform.

By that point, 11 million tweets about the championship had been posted by people in the UK. Twitter claimed to have deleted over 5,000 of them for content violations. After England lost the final, Twitter deleted over 1,000 tweets and suspended a number of accounts.

The question remains: surely social media platforms should proactively prevent online harassment, not just delete posts after the fact? Ordinary people have to repeatedly report harassment and abuse to the apps, which rarely impose severe consequences on perpetrators. Studies have shown that, for inexplicable reasons, black people and people of color on Instagram face significantly more shadow bans and penalties, while racial abuse is treated with a lighter hand.

“The problem is that the technology they are using, which is a combination of machine learning, natural language processing and artificial intelligence (AI), is not mature enough to be 100% accurate 100% of the time,” says Navarra, though he maintains the platforms do “have great technology”.

“The nuances of languages around the world, the context in which they are posted, and a whole host of very subtle factors related to language make it very difficult for this technology to tell whether someone is harassing or abusing someone, or whether it is in jest, or in the context of someone describing something or asking a question,” explains Navarra.

The inherent bias of technologies like AI needs to be considered here too. AI algorithms are often created by white, able-bodied, cisgender straight men. This inevitably embeds bias in these algorithms, and many have been shown to be racist.

A prime example came in 2017, when Deborah Raji, a 21-year-old black woman from Ottawa, was working at Clarifai, a start-up building technology to automatically recognize objects in digital images and sell it to businesses, law enforcement and government agencies. She found that over 80% of the faces the company trained its facial recognition software on were white, and the majority were male. The people who chose the training data were mostly white men who did not realize their data was skewed.

Any attempt to develop AI to address this problem must focus on people of color, trans people, the disabled and queer people if it is to be effective.

Defining what is and what is not hate speech has also proven difficult terrain. The meaning of “free speech” is often skewed by members of the alt-right, or by people who are racist, anti-LGBTQ or anti-immigrant, who consider it a matter of “free speech” to be able to spread hatred against these groups.

Debates have arisen over the UK’s flagship Online Safety Bill and where the line falls between freedom of expression and hate speech. In a recent Guardian article, Gaby Hinsliff points out the complexity of regulating such things, and how dependent these definitions are on your (possibly bigoted) beliefs: “To say that biological sex is real and immutable would be viewed in some circles as transphobic hate speech and in others as a perfectly reasonable statement of fact.”

Simply increasing the capacity of social media platforms to punish such behavior is far from the end of the story. This problem has a cause that we must confront.

“Big tech should do more, but it’s not just about big tech,” says Dr. Sobande. “Racism is not something specific to social media.

“It’s systemic, it’s structural, and it needs to be approached appropriately in society – any work to combat online racism has to be part of a wider and sustained work to combat racism, period.”

It is important to remember that last week’s outpouring of abuse is not news to many people.

“What we see aimed at these soccer players is part of the daily lives of many black people participating in various digital spaces, and black women stand at the intersection of sexism and racism.

“This is an important moment in order to reckon with the reality of what it means to be black and visible online,” explains Dr. Sobande.

“I am glad that more of these conversations are happening now, but it is also very frustrating to know how many people have tried to address these issues and hold various institutions and public figures accountable, and how it takes something like this for questions to be asked and for politicians to make statements, when we know how many black people have faced this abuse on a daily basis for many, many years,” she says.

So it seems that the first and most important step in ending online racism is to acknowledge that it is an extension of offline racism, not a unique form of discrimination.

As Navarra points out, “racism was not introduced when Facebook was launched”.
