To Battle Media Disinformation, Look at Algorithms, Experts Say

In the national discussion about updating social media regulations, what users are allowed to say may matter less than what platforms do with that content once it is said, according to panelists at the recent Social Media Summit @ MIT.

There was debate over whether the government should amend Section 230 of the Communications Decency Act, which shields platforms from liability for content posted by their users. But officials keen to curb the spread of inaccurate information and the growth of violent movements on these sites might be more effective if they addressed the ways the platforms' algorithms amplify content and recommend connections with extremist users, said the researchers, journalists and legal experts speaking at the summit.

“These platforms … they like to make sure we keep talking about this issue of freedom of speech versus censorship, because it’s good for them — then they don’t really have to discuss the more difficult issues,” said panelist and Future of Democracy Fellow Yaël Eisenstat, who headed the Global Elections Integrity team within Facebook’s Business Integrity unit in 2018.

Should anyone attempt to sue Facebook for complicity in the Jan. 6 Capitol attack, Eisenstat said, “I suspect Facebook will try to use the Section 230 argument to say, ‘We are not responsible for the conspiracy theories these users publish.’ … Yes, but did your recommendation engines, your targeting tools, your curation do any of the things that actually helped facilitate this crime?”

A congressional hearing on April 27 will take up this question of algorithms. The MIT panelists set out their own views on whether and how the government can intervene in social media regulation, and the challenges and consequences such intervention could pose.

What’s at stake

An internal Facebook task force recently documented the site’s role as a breeding ground for the falsehoods that fueled the Jan. 6 attack on the Capitol and as an organizational tool for the insurrectionists. These and other high-stakes events could accelerate discussions about government intervention in how the platforms function.

Social media platforms did not invent falsehoods, but they have enabled false information — whether spread in error (misinformation) or as deliberate lies (disinformation) — to travel quickly, and common features like personalized newsfeeds can leave users in echo chambers where false information is repeated until it is normalized.

The platforms are designed to encourage high levels of user engagement, not thoughtful public discourse, and the algorithms that determine which items appear in users’ newsfeeds do not distinguish fact from falsehood, several panelists said. The result is that users may see inaccurate stories repeated without challenge, and individuals may struggle to know about — let alone rebut — news that never appears in their own feeds.

Ali Velshi, NBC correspondent and host of MSNBC’s “Velshi,” said that as a result, residents often cannot agree on what is real, so debates on political and social issues stall on establishing basic facts. The conversation never progresses to a productive debate about how to respond to the realities those facts describe.

“You never get to the discussion: ‘What should good police work look like?’ … [or] ‘How should we deal with health care in this country?’” Velshi said.

Above (left to right): Clint Watts, Sinan Aral, Ali Velshi. Below: Maria Ressa, Camille Francois

The Section 230 question

The government has a variety of tools available to combat the spread of harmful inaccuracies through social media.

The platforms — like all private companies — are not bound by free speech protections, and current law already prohibits certain kinds of speech, said Richard Stengel, former U.S. Under Secretary of State for Public Diplomacy and Public Affairs. He noted that these sites already actively monitor content to remove child pornography and copyright-infringing material. The government would only need to update or repeal Section 230, or pass new laws, to similarly incentivize the removal of conspiracy theories or other harmful content.

“Facebook can adopt any [content] policy it wants, but if you don’t start giving them more liability, they won’t remove any content,” Stengel said.

However, that added liability could have unintended consequences, said Jeff Kosseff, professor of cybersecurity law at the United States Naval Academy and author of a history of Section 230. Social media platforms’ legal advisors would likely recommend removing even truthful posts that draw complaints or prove controversial, in order to minimize the risk of legal action, he said.

Part of the complication is that some content is clearly problematic, while other posts are more subjective, and platforms are wary of making the wrong call. Stengel said government regulation could relieve platforms of this risk by shifting that decision-making, and the responsibility for it, to public officials.

“They [platforms] want to be more regulated; they don’t like being in the gray area of subjective decisions. They want to be able to say, ‘Well, the government made me do it,’” Stengel said.

Speech vs. amplification

Flagging posts for removal and banning users who violate policies is often an uphill battle, given the speed at which content is created and shared. Summit host Sinan Aral, a professor at MIT’s Sloan School of Management, pointed to the findings of a 2018 study he co-authored showing that false stories are retweeted faster than true ones, reaching 1,500 users six times sooner. Real people, not bots, played the larger role in spreading the falsehoods.

This may point to the crux of the problem: not that some users and bots post disinformation, but that the platforms’ algorithms then proactively recommend those posts to a wide audience and encourage resharing.

“There has always been a distinction between your right to speak and your right to a megaphone that reaches hundreds of millions of people,” said Renée DiResta, research manager at the Stanford Internet Observatory. “It is valuable that you can publish your … views, but [the] platform doesn’t have to amplify them.”

Renée DiResta

Eisenstat also said violent movements planned through Facebook, like the 2020 plot to kidnap Michigan Gov. Gretchen Whitmer, take off in part because the social media platforms’ recommendation engines can proactively connect those who later become perpetrators. The platforms encourage users to connect with certain other users and groups that they might otherwise never have sought out. The recommendation engines are also behind the curation of inaccurate “news” posts in users’ feeds, spreading the falsehoods, she said.

There may already be tools on the books to address this, with Eisenstat arguing that Section 230 is most accurately read as protecting platforms from liability only for users’ speech, not for the actions of the platforms’ own engines.

A variety of approaches

Platforms could revise their designs and strategies to slow the spread of disinformation — if public regulation pushes them to do so.

Social media companies could put fact-checkers on standby to flag or remove inaccurate posts ahead of important events like elections, DiResta suggested, and platforms could redesign the user experience to prompt account holders to consider whether they have actually read a piece of content, or know it to be accurate, before resharing it. Social media companies could also be required to vet paid political ads, which users could reasonably assume have been reviewed by the platforms, Eisenstat said.

Stengel also suggested a broader reconsideration of speech regulations, proposing that state and local governments address certain harmful content by adopting rules against hate speech.

An ideal strategy may not emerge quickly, but Eisenstat said the government should focus on passing something that at least makes things better.

“Everyone thinks we need the alpha and omega, a silver bullet, a law that suddenly makes social media a nice, healthy place for democracy. That won’t happen,” she said. “But we can all agree that the status quo cannot continue.”

Government Technology is a sister company of Governing. Both are departments of e.Republic.
