Tackling COVID-19 vaccine misinformation on social media
The extent of the problem is evident in Facebook data, which shows that between March and October this year, more than 12 million pieces of content were removed from Facebook and Instagram (which it also owns) for containing misinformation that could lead to imminent physical harm, such as content about bogus prevention measures or exaggerated cures.
During the same period, the social media giant placed warnings on around 167 million pieces of content on Facebook. The warnings are based on fact-checking articles written by its partners.
The recent approval of COVID-19 vaccines has resulted in social media users focusing on misinformation about the vaccines’ effectiveness and safety.
This includes misinformation about side effects, as well as conspiracy theories, including baseless and false claims that the vaccines are used to insert microchips into people and that Microsoft founder and philanthropist Bill Gates is spreading COVID-19 to benefit from the vaccine.
The Age and The Sydney Morning Herald have been tracking three anti-vaccination Facebook groups in Australia using Facebook’s own tool, CrowdTangle, and found that the groups have gained 22,000 likes over the past 12 months, an increase of 57 percent.
The social media platforms know that the emergence of misinformation is a problem and are making efforts to address the problem.
Facebook has made it difficult to find anti-vaccination groups like those covered by The Age and The Sydney Morning Herald by removing them from search results on the platform.
It’s a practice known as a “shadow ban”, where platforms attempt to limit the spread of misinformation about COVID-19 online by making content difficult to find rather than deplatforming it entirely.
The platforms have also restricted the use of hashtags used by misinformation spreaders like #covidisahoax and #vaccinescauseautism.
If you enter these hashtags on Facebook, TikTok or Instagram, you will only see a message that posts using the hashtag have been temporarily hidden because “some of the content in these posts violates our community standards”.
Ninety-year-old Margaret Keenan became the first patient in the world to receive the Pfizer-BioNTech COVID-19 vaccine outside of a medical trial after it was approved by UK regulators. Credit: Getty Images
Facebook started sending notifications this week to users who shared, commented on, or liked posts that contain misinformation about COVID-19. The notifications provide these users with links to trusted sources of information about the virus.
“Our position on vaccine misinformation is clear – we are removing false claims about the vaccine’s safety, effectiveness, ingredients, or side effects, including conspiracy theories, and we are continuing to remove COVID-19 misinformation that could lead to imminent physical harm,” says Josh Machin, Public Policy Director at Facebook Australia.
Twitter announced on Friday that users in Australia who search for specific vaccine-related keywords on its platform will be directed to the Department of Health’s vaccination information resources and Twitter account.
Starting Monday, Twitter will begin removing “the most harmful information” and flagging tweets that may contain misleading information about the vaccines as well as political tweets that are factually inaccurate.
TikTok also released new guidelines this week outlining how users will be directed to relevant and trustworthy information from public health experts when they search for COVID-19 misinformation.
Arjun Narayan, TikTok’s head of trust and safety for the Asia-Pacific region, says misinformation itself is not new and has been around for centuries.
“It just goes without saying that everything is on social media these days. Everything is digital. So many of the social fault lines are now manifesting on social media,” he says. “Misinformation survives and thrives in an information vacuum, and the best antidote … is to counter it with accurate information.”
TikTok’s proactive detection algorithms and team of 1,000+ content moderators around the world are also removing misinformation about COVID-19 from the video platform in Australia, according to Narayan.
“We do not allow any medical misinformation that poses a threat to the public interest and a health risk on the platform,” he says. “So when it comes to dangerous conspiracy theories, we have no tolerance for them.”
The efforts of the social media platforms are not a simple matter of altruism. The UK government has announced that it will enact laws next year under which Facebook, Instagram, Twitter and TikTok can be fined more than £18 million ($31.6 million) for allowing users to view child exploitation material, terrorist content or anti-vaccination disinformation.
Disinformation differs from misinformation in that it is spread deliberately.
In Australia, separate agencies deal with each of these issues: the eSafety Commissioner oversees cyberbullying material, image-based abuse and child exploitation material, while the Therapeutic Goods Administration is empowered to crack down on illegal advertising of therapeutic products.
“There is currently no comprehensive regulation of online misinformation in Australia,” said a spokesman for the Australian Communications and Media Authority (ACMA).
The government has asked ACMA to oversee the development of a voluntary code of conduct on disinformation and news quality for digital platforms. However, this will probably only be introduced next year.
Between March and October 2020, Facebook removed more than 12 million pieces of content from Facebook and Instagram for containing misinformation.
Dr. Belinda Barnet, professor at Swinburne University of Technology, says social media platforms need to do more.
“If any content containing information about vaccinations, for example, goes viral, it needs to be reviewed immediately,” she says. “It’s within their power to do this – they know instantly what content is going viral and has been shared a thousand times.”
According to Barnet, misinformation is increasing in Australia and the shadow ban is of limited effectiveness.
“The people this particular policy doesn’t catch are the people who are already in those groups. So if you are already part of an anti-vaccination group, you can see that [misinformation] content and all related content right away,” she says.
Barnet is also concerned that the platforms’ strategy does not stop high-profile social media users from spreading misinformation, such as celebrity chef and anti-vaxxer Pete Evans, who described sunlight as the best vaccine, and politician Mark Latham, who last week posted on Twitter that the University of Queensland’s COVID-19 vaccine had been intentionally implanted with the HIV-AIDS virus.
“We’re going to have a problem that isn’t as big as America’s but, as the government has pointed out, when vaccination is introduced there will be people who believe this misinformation,” says Barnet.
The risk in calling out misinformation is that we inadvertently amplify something that might otherwise have gone unnoticed and been ignored.
University of Sydney Associate Professor Adam Dunn has researched vaccination-related misinformation on social media for the past five years and published research in the American Journal of Public Health in July that examined 21.7 million vaccine-related tweets.
The research found that for typical Twitter users, the vast majority of the content they see or engage with is neither critical of vaccination nor promoting misinformation. Only about 5 percent of social media users belong to communities in which vaccine-critical content appears more frequently, and only a small proportion of users post or share vaccine-critical content.
“Misinformation is a tiny part of what most people see. So it seems like a huge stretch to suggest that it could change their beliefs and decisions,” says Dunn. “We worry too much that people are against vaccines. We have to make sure that everyone who needs access to the vaccines has access to the vaccines first.”
However, Mrozinski believes it is important that misinformation is weeded out and its reach limited: “By the time hundreds of millions of people see it, the damage is done – the word has spread.”
Mrozinski admits it is sometimes a struggle, as he and other healthcare professionals face attacks from anti-vaxxers over their posts on social media, but he is determined to keep going.
“People who are against things always seem to scream louder and make the most noise,” he says.
Cara is the small business editor for The Age and The Sydney Morning Herald in Melbourne