Are the social media platforms doing enough to stop misinformation?
LAST WEEK, FORMER US President Barack Obama gave a speech at Stanford University, California, where he outlined what he believed were the biggest threats to democracy.
Obama spoke about the need to address income inequality, the negative effects brought about by globalization, and the rising power of autocratic regimes across the world.
But the former president said that one of the “biggest reasons for weakening democracies is the profound change that’s taking place in how we communicate and consume information”.
He went on to talk about the deluge of instant information that is easily accessed by everyone with an internet connection, and how eventually “we lose our capacity to distinguish between fact, opinion and wholesale fiction. Or maybe we just stop caring”.
This constant stream of information can be exploited by bad actors:
“You just have to flood a country’s public square with enough raw sewage. You just have to raise enough questions, spread enough dirt, plan enough conspiracy theorizing that citizens no longer know what to believe.”
At the center of this flood are the world’s social media companies.
Misinformation on social media
While misinformation is nothing new, the sheer amount of false information being shared and consumed on social media is unparalleled in human history.
Information is shared across the world instantly thanks to the innovations of the world’s main social media platforms: Facebook, Instagram and WhatsApp (all owned by the recently rebranded Meta), YouTube (owned by Google) and Twitter.
Over the past number of years, it has become clear that these platforms are hotbeds of misinformation and disinformation, with serious real-world consequences.
(Disinformation is defined as the deliberate distribution of false information that intends to cause harm; misinformation is defined as when false information is shared but no harm is meant).
Some examples include Russian interference in the 2016 US election, during which, according to US intelligence, thousands of fake accounts managed to spread disinformation in an attempt to tilt the election in Donald Trump’s favour.
More recently, last year’s 6 January Capitol riots in the US, in which supporters loyal to Donald Trump stormed the US Capitol in an attempt to overturn the results of the 2020 election, came about in large part due to unfounded allegations that the election was rigged.
In Ireland, the Covid-19 pandemic saw an upsurge in misinformation being spread about the virus and, later, the efficacy and safety of vaccines. Last year, Deputy Chief Medical Officer Dr Ronan Glynn told The Journal that public health officials were battling an “avalanche of conspiracy theories and misinformation”.
“We know that uncertainty is something that breeds scope for misinformation. We saw that during the early stages of the pandemic,” says Shane Timmons, Research Officer with the Behavioural Research Unit (BRU) of the ESRI.
“When people are highly uncertain about something they just seek out information and anything that comes is going to fill that gap.”
In response to all this, there have been calls from concerned policymakers and citizens in Ireland, Europe and the US for social media companies to stop the rapid growth and spread of misinformation online. So what are they doing to stop it? And is it enough?
Although the number of active users fell for the first time in its history last year, Facebook still boasts close to 3 billion monthly active users (MAUs), making it by far the most popular social media platform.
Combined with Instagram (1.4 billion MAUs) and WhatsApp (2 billion MAUs), Meta is by a long way the most popular and powerful social media company in the world.
It is no surprise, then, that Meta and Facebook have been in the firing line in recent years when it comes to the spread of misinformation. Last year, before it was due to testify before a US House committee, the company released an update and a timeline of its efforts to combat misinformation on its platform.
Among the many listed initiatives were measures aimed at clamping down on fake accounts, labeling content that references Covid-19, and blocking Donald Trump’s account after the Capitol attack.
The company states that it has also built a global network of factcheckers – of which The Journal is a part – to help it stop the spread of false information. However, critics argue the company isn’t doing nearly enough to tackle misinformation on its platforms.
This was recently highlighted by former employee and whistleblower Frances Haugen, who alleged the company knowingly wasn’t doing enough to combat misinformation, claims which Facebook has denied.
Frances Haugen testifying to the United States Senate
Source: Alamy Stock Photo
“I do not believe Facebook, as currently structured, has the capability to stop vaccine misinformation,” Haugen said, stating that current efforts were only likely to remove “10% to 20% of content”.
YouTube, the video sharing and content platform, is wildly popular across the world, with 30,000 hours of footage uploaded every hour.
At the start of the year, a global alliance of factcheckers wrote an open letter to YouTube chief executive Susan Wojcicki, saying that the platform was “one of the major conduits of online disinformation and misinformation worldwide”.
“What we do not see is much effort by YouTube to implement policies that address the problem,” the letter reads.
“On the contrary, YouTube is allowing its platform to be weaponized by unscrupulous actors to manipulate and exploit others, and to organize and fundraise themselves.”
In response, a spokesperson for YouTube said the company had “invested heavily in policies and products in all countries we operate to connect people to authoritative content, reduce the spread of borderline misinformation, and remove violative videos”.
Last year, YouTube started to ban all videos that shared Covid-19 vaccine misinformation, as part of its efforts to tackle the spread of misinformation on its platform. In February of this year, it released a detailed breakdown of the challenges it faces and the efforts it is making to tackle the issue.
These include attempts to tackle misinformation before it goes viral and ramping up efforts to tackle the spread of false information across the world.
Though it has significantly fewer MAUs than Facebook and YouTube, Twitter’s regular use by political leaders in countries across the world makes it a highly important platform.
In a statement to the US House of Representatives Committee last year, founder and former CEO Jack Dorsey said Twitter and other tech companies “had work to do to earn trust from those who use our services”.
Dorsey said that for Twitter this means tackling concerns about the transparency of the company, the fairness in its decision-making, the reason its algorithms behave the way they do, and users’ concerns about privacy and the use of their data.
“Our efforts to combat misinformation, however, must be linked to earning trust. Without trust, we know the public will continue to question our enforcement actions,” Dorsey said.
Twitter is also trialling a new innovation known as “Birdwatch”, which allows people to identify information in Tweets they believe is misleading and write notes that provide informative context.
“We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable,” Twitter says on its website.
“Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.”
Birdwatch is still in its pilot phase.
In recent weeks, concerns have been expressed about the future of the platform after it was announced that billionaire Elon Musk is to buy the company, with the aim of making Twitter more committed to free speech.
Experts have raised concerns that fewer regulations could increase the spread of hate speech and disinformation on Twitter.
Is it enough?
According to Dr Eileen Culloty, assistant professor in the School of Communications at DCU, a key issue in combating the spread of misinformation is the lack of accountability that social media companies have.
“I think the big issue thus far has been that what the platforms do is entirely voluntary,” she said.
“For example, it’s easy for a platform to say that they are countering disinformation by labeling content, or that they have a new content policy that says that this type of content is not allowed anymore.
“But we’ve no idea whether any of that works in practice, and if you don’t do experiments to test whether it’s effective and if you don’t provide information about who’s actually seeing those labels, they’re truly just a gesture.”
Culloty states that more well-funded independent oversight and research is needed in order to ensure companies are implementing the measures that they say they are, and that these measures are having an actual positive effect.
Last week, the EU reached an agreement on the Digital Services Act, a landmark piece of legislation aimed at ensuring “a safer digital space where the fundamental rights of users are protected and to establish a level playing field for businesses.”
European commissioner Margrethe Vestager tweeted that the act “will make sure that what is illegal offline is also seen & dealt with as illegal online – not as a slogan, as reality!”
The final text and details of the act still need to be decided, but among the many measures are provisions which will make the larger social media companies more responsible for checking the spread of misinformation on their platforms.
“They’re calling it co-regulation instead of self-regulation,” says Culloty.
“But that again sounds good and like something progressive is happening on paper, but the question is will the regulators have the capacity and the resources to go and question the platforms, to go and ask for data?”
According to Culloty, focusing too much on individual issues with social media platforms, like misinformation, risks obscuring the wider, systemic issues at the heart of the companies.
“They’ve all done slightly different things [to address misinformation] and it’s quite ad hoc and idiosyncratic, and there’s an expectation now that they should be doing more,” she says.
“But I think we’re kind of stuck in this trap of thinking there’s a misinformation problem, we must do more about that, then there’s a harassment problem, there’s a hate speech problem, but they all kind of stem from the same systematic issues around the platforms and how they’re designed.
“And regulators need to address that systematic issue rather than… once you start trying to define what all these different problems are, I think it becomes much harder to do it.”
In his speech last week, former US President Barack Obama echoed this point, saying that it is the design of the platforms themselves that may be the issue:
“It’s that in the competition between truth and falsehood, cooperation and conflict, the very design of these platforms seems to be tilting us in the wrong direction.”
This work is co-funded by Journal Media and a grant program from the European Parliament. Any opinions or conclusions expressed in this work are the author’s own. The European Parliament has no involvement in nor responsibility for the editorial content published by the project. For more information, see here.