Opinion: One strategy to stop the harmful spread of vaccine myths
White House officials have expressed outrage that Facebook and other platforms have not cracked down on false vaccine claims in recent days amid rising coronavirus cases, hospitalizations and deaths. Last week, President Joe Biden claimed that social networks “kill” people by allowing health misinformation to spread, although he walked back those words on Tuesday and instead blamed the authors of such misinformation. Facebook, for its part, disputes claims that it is responsible for promoting misinformation. A spokesman told CNN that Biden’s assertions that technology companies were responsible for spreading vaccine misinformation “are not supported by the facts. The fact is, more than 2 billion people have viewed authoritative information about COVID-19 and vaccines on Facebook, which is more than any other place on the internet.”
As the coronavirus continues to sicken Americans and so many others around the world, people (especially those fortunate enough to live in a country with strong access to Covid-19 vaccines) need to be encouraged to get vaccines, which not only protect them against serious illness and death but also help keep them from passing the virus on to those around them who cannot be vaccinated, such as children and people with certain medical conditions. The spread of misinformation about vaccines must stop.
It is for this reason that the White House rightly questions Section 230, the law that shields online platforms from legal liability for content their users post. It needs to be updated. However, the exceptions to this law must be extremely narrow, focused on widespread misinformation that clearly threatens lives.
According to the Center for Countering Digital Hate, just 12 people are responsible for 65% of the anti-vaccine misinformation circulating on the internet. The organization found 812,000 instances of anti-vaccine content on Facebook and Twitter between February 1 and March 16, 2021, which it said was only a “sample” of the much broader misinformation out there.
The failure of tech companies to stop this is unacceptable. Far from stopping it, Instagram (owned by Facebook) has at times actively recommended misinformation to its users, according to the center’s report. And even when this bogus content was reported to social media companies, they overwhelmingly declined to take action. While the center faults Facebook, Twitter and Google alike for failing to identify and remove anti-vaccine content, it notes that “the extent of the misinformation on Facebook, and therefore the impact of its failure, is greater.”

Many internet activists oppose amending Section 230, because removing legal liability for the online intermediaries that host or republish content could limit our ability to have the wide-ranging conversations we carry on via social media every day, including those on controversial topics. But unchecked online speech has victims, too: Zoë Quinn was the target of false online claims that she slept with a reviewer to get him to write a glowing review of a game she had developed, and was flooded with death threats and other abuse as a result, an ordeal she recounted in her book Crash Override: How Gamergate (Nearly) Destroyed My Life, and How We Can Win the Fight Against Online Hate.
But there is a way to protect the openness of the internet and the functioning of social networks while still acting against falsehoods that cause mass harm. Congress should pass a law making technology companies responsible for removing content that directly endangers lives and achieves mass reach, such as more than 10,000 likes, comments or shares. The definition of endangering lives should also be narrow. It should cover serious public health threats, such as misinformation about vaccines, and other direct incitements to serious harm to ourselves or others.
This requirement in updated legislation of this kind would let tech companies focus their monitoring efforts on content that is widely distributed (and that, incidentally, also makes them the most money, since social networks rely on popular content to keep people on their sites so they can generate advertising revenue). The content with the greatest reach and engagement is, of course, the most influential, and therefore potentially the most harmful.
There are, of course, legal precedents for this. As I have noted before, it is constitutional to restrict speech in limited cases, such as when it threatens to facilitate crime or poses immediate, genuine threats. Information fueling a deadly pandemic clearly qualifies.

Such a law would also pose a serious danger that cannot be dismissed and must be addressed: politicians could use it to try to block the spread of information they simply do not like. (Recall how former President Trump routinely dismissed accurate reporting he disliked as “fake news.”) That is why the arbiters of truth in such cases would have to be federal judges, who are nominated by the president but confirmed by the Senate and are expected to be impartial. The Justice Department and state attorneys general could bring suits against social networks for failing to remove deadly misinformation that is prevalent on their platforms. Such cases could be decided by a panel of judges (as a further safeguard against any single activist judge), and technology companies found to have violated the law would face fines.
The idea, of course, is that the prospect of fines, and of the PR damage that accompanies lawsuits, would lead social networks to step up their policing of misinformation so as to avoid litigation in the first place. That would put the onus squarely on the companies to detect and stop dangerous, widely shared fake news.
This, of course, is exactly what already happens with copyrighted material. Copyright infringement is not protected under Section 230, so if a user shares copyrighted material on a social network without permission, the copyright owner can sue the platform for damages. This is why social networks are so adept at removing such content, and how we have ended up in absurd situations in the past, such as when Twitter removed a clip former President Trump posted featuring the band Nickelback over a copyright complaint, while separately merely weighing whether his tweets could provoke a war.
If tech companies can figure out how to remove clips that harm people’s commercial interests, then surely they can also figure out how to remove posts that threaten our lives.
Social networks could have headed off this kind of regulation by taking stronger action against misinformation from the outset. But they have long tried to dodge responsibility for the social impact of the misinformation spread on their platforms. In 2017, Facebook CEO Mark Zuckerberg used his controlling voting power to block a shareholder resolution that would merely have required the company to report publicly on how it handles misinformation and on the effects of its misinformation policies.
Like the virus the vaccines protect us against, misinformation on social media is explosively contagious and deadly. Congress should vaccinate us against some of the worst of it while preserving the viability of broad, unrestricted speech on social media that does not threaten lives.