Social media’s misinformation cleanup act

Social media giants have taken a number of steps to remove misinformation from their platforms, but those efforts are unlikely to appease angry lawmakers on either side of the aisle.

What’s happening: When they testify virtually before House lawmakers on Thursday, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey will point to recent changes in company policy to argue they are doing everything they can to stem the tide of misinformation and extremism online.

Yes, but: Policy changes are not the same as effective results.

  • “The performance they have shown us so far is largely unacceptable,” Rep. Jan Schakowsky, chair of the House Energy & Commerce consumer protection subcommittee, said at an event Monday. “We’re moving ahead with legislation and regulation … it’s going to happen.”

Flashback: Democratic lawmakers have long been angry about misinformation on social platforms and previously questioned CEOs about the issue.

  • Anger peaked again after pro-Trump insurgents stormed the U.S. Capitol on Jan. 6. Lawmakers pointed to extremists organizing in Facebook groups, broadcasting their exploits on Instagram Live, and following former President Trump’s tweets urging supporters to come to the Capitol.
  • Shortly afterward, Twitter permanently banned Trump, while Facebook and YouTube suspended his accounts indefinitely. Trump’s appeal of the suspension is currently before Facebook’s independent Oversight Board.
  • Conservative lawmakers have argued that the platforms’ decisions to suspend Trump and groups that support him are examples of censorship and political bias.

Facebook outlined its work on fighting misinformation in a blog post Monday, noting that warning screens on posts flagged as false deter people from clicking through 95% of the time.

  • This month, Facebook expanded its limits on recommending civic and political groups to users around the world, after previously restricting such recommendations in the United States.
  • Other changes include penalizing groups that violate Facebook’s rules by recommending them less often and warning users before they join a group that has violated Facebook’s standards.
  • In February, the company announced a crackdown on pandemic misinformation, saying it would ban posts with debunked claims about vaccines.

Twitter has suspended more than 150,000 accounts for sharing QAnon content since the Capitol attack, a spokesperson told Axios.

  • The company also announced earlier this month that it would label tweets containing potentially misleading information about COVID-19 vaccines and introduce a strike system that could result in accounts being permanently banned.
  • Twitter is also reviewing its policies toward politicians and government officials, soliciting public input on whether world leaders should be held to the same rules as everyone else.

YouTube said this month that it had removed more than 30,000 videos over the past six months for making misleading or false claims about COVID-19 vaccines.

  • Parent company Google barred advertisers from running ads that reference the 2020 election or the Jan. 6 Capitol riot.

The other side: Facebook could have prevented an estimated 10.1 billion views of pages that repeatedly shared misinformation had it acted in March 2020, according to a new study by the progressive nonprofit Avaaz.

  • Prominent pages that repeatedly shared misinformation tripled their views on Facebook from October 2019 to October 2020, Avaaz researchers found, and the 100 most popular false or misleading stories on Facebook received around 162 million views.

What’s next: Lawmakers stand ready to legislate to combat the spread of misinformation and disinformation.
