Time to curb the fake information running riot on social media
A year ago, on January 20th, an anonymous blog in France falsely linked 5G technology to the emergence of a mysterious coronavirus strain in China.
In the days and weeks that followed, the conspiracy theory took hold on social media and spread to the UK. It was the first of many.
Over the past year, the UK has battled an onslaught of misinformation about Covid-19 and vaccines, which spread amid confusion over how the virus emerged and what was being done to stop it.
Social media platforms provided fertile ground for fake news to flourish at the start of the pandemic, the consequences of which could be seen both online and in the real world.
In the UK, telecommunications towers have been set on fire and engineers have been attacked. NHS staff and relatives of patients have faced Covid deniers protesting in front of hospitals that online conspiracies claimed were “empty”.
Attempts by technology platforms to elevate authoritative voices such as the World Health Organization and the NHS fail to address the structural flaws in how people consume information online.
For answers to questions about vaccines, tests, and lockdowns, more and more people are turning to Facebook, Twitter, YouTube, and other social media sites, where individuals can easily build powerful platforms by pushing emotionally charged conspiracies to millions.
Social media companies have largely left users to fend for themselves amid this onslaught of unsubstantiated claims, dubious sources of health information, and coordinated anti-vax campaigns. Fortunately, the age of self-regulation on social media is coming to an end.
The UK government has tabled a new Online Harms Bill, which for the first time gives Ofcom powers to oversee and regulate social media companies. This is a welcome and much-needed step forward. But the bill could go further.
First, we need to address the systemic flaws in platform design that allow falsehoods to run wild. Algorithms steer us all towards increasingly outrageous, highly engaging content.
A Facebook user who “likes” an anti-mask page is encouraged to join an anti-vax page. An Instagram user watching a “Covid hoax” video is steered towards claims that the pandemic is a “plot” engineered by governments to strip away our personal freedoms.
YouTube’s recommendation features are known for exposing viewers to increasingly extreme and sensational videos. Without regulation, algorithmic amplification creates a distorted information ecosystem that subjects users to a barrage of bizarre claims.
These recommendation systems must be regulated transparently, with user safety taking priority. Users deserve the opportunity to customise the content that is served to them and to know how these algorithms curate the information they see.
Second, users need better tools to help them identify misinformation. With families relying on the internet more than ever, the government should ensure that all social media feeds and search results include tools that flag content from unreliable sources, including hoaxes that undermine public health advice.
Blue ticks and verification badges often convey a false sense of trustworthiness when in many cases the opposite is true, with large accounts citing untrustworthy news sites to spread falsehoods.
We need to improve digital media literacy and ensure that users are encouraged to judge content on its credibility rather than its virality.
Finally, the platforms’ community guidelines and enforceable terms and conditions should be much more accessible to users. We’ve all seen the ability to “report” content, but how many of us actually know what happens when we press that button?
Users should be able to report all kinds of harmful content, including misinformation, and once content is reported, we should know exactly how the decision to remove it or keep it online is made.
Misinformation has real consequences. At this critical moment, the government has the opportunity to push technology platforms to become more transparent and to strengthen safeguards in the social media ecosystems that dominate today’s news traffic.
Digital resilience depends on our ability to regain control and help users make better decisions online. We cannot underestimate the threat we face if we fail to do so.