Report: Social media manipulation impacts even US senators

A huge, globalized industry of low-cost social media manipulation services continues to thrive, distorting both commerce and politics – reaching even the verified social media accounts of two U.S. senators.

December 21, 2020, 5:01 am


Researchers at the NATO Strategic Communications Centre of Excellence, a NATO-accredited research group based in Riga, Latvia, paid three Russian companies 300 euros to buy 337,768 fake likes, views and shares on posts on Facebook, Instagram, Twitter, YouTube and TikTok, including content from the verified accounts of Sens. Chuck Grassley and Chris Murphy.

Grassley’s office confirmed the Iowa Republican had participated in the experiment. Murphy, a Connecticut Democrat, said in a statement that he agreed to participate because it was important to understand the vulnerability of even verified accounts.

At a time when much public debate has moved online, widespread manipulation of social media not only distorts commercial markets but also poses a threat to national security, Janis Sarts, director of NATO StratCom, told The Associated Press.

“These types of bogus accounts are hired to get the algorithm to believe this is very popular information, and to make things that divide more popular and accessible to more people. That in turn deepens the divisions and weakens us as a society,” he said.

Researchers found that more than 98% of fake engagements stayed active after four weeks, and 97% of the accounts they reported for inauthentic activity were still active five days later.

NATO StratCom conducted a similar exercise in 2019 using posts from European officials. Since then, the researchers found, Twitter has been removing inauthentic content faster, and Facebook has made it harder to create fake accounts, pushing manipulators to use real people instead of bots – an approach that is more expensive and less scalable.

“We’ve spent years strengthening our detection systems against fake engagement, focusing on stopping the accounts that are potentially causing the most damage,” a Facebook spokesman said in an email.

But YouTube and Facebook-owned Instagram remain vulnerable, researchers said, and TikTok appeared to be “defenseless.”

“The amount of resources they spend depends heavily on how vulnerable they are,” said Sebastian Bay, the report’s lead author. “That means that you are unequally protected across social media platforms. This makes the case for regulation stronger. It’s like having cars with and without seat belts.”

The researchers said they promoted non-political content, including pictures of dogs and food, for the purposes of the experiment in order to avoid any real-world impact during the U.S. election season.

Ben Scott, executive director of Reset.tech, a London-based initiative to combat digital threats to democracy, said the research showed how easy it is to manipulate political communication and how little the platforms have done to fix long-standing problems.

“What is most upsetting is the ease with which it can be manipulated,” he said. “Fundamental democratic principles of how societies make decisions are corrupted when you have organized manipulation that is so widespread and so easy to do.”

Twitter said it was proactively addressing platform manipulation and was working to mitigate it on a large scale.

“This is an evolving challenge, and this study reflects the immense effort that Twitter has made to improve the health of the public conversation,” Yoel Roth, Twitter’s head of site integrity, said in an email.

YouTube said it was taking steps to root out fake activity on its platform, noting that in the third quarter of 2020 it removed more than 2 million videos for violating its spam policies.

“We will continue to deal with attempts to abuse our systems and share relevant information with industry partners,” the company said in a statement.

TikTok said it has zero tolerance for inauthentic behavior on its platform and that it removes content or accounts that promote spam or fake engagement, impersonation, or misleading information that can cause harm.

“We are also investing in third-party testing, automated technology and comprehensive guidelines to stay one step ahead of the ever-evolving tactics of people and organizations trying to mislead others,” a company spokesman said in an email.

———

Associated Press writer David Klepper in Providence, Rhode Island, contributed to this report.
