Democracies are at the mercy of the social media industry


By František Vrabel

Published: Sunday, January 23, 2022, at 10:27 p.m.

In the war on disinformation, identifying the enemy can be difficult. Journalists, politicians, governments and even grandparents have been accused of enabling the spread of falsehoods online.

While none of these groups is entirely innocent, the true adversary is more mundane. As Facebook whistleblower Frances Haugen testified late last year, it is social media’s own algorithms that give disinformation its reach.

Since its launch in 2004, Facebook has evolved from a student social network into a surveillance monster that is destroying social cohesion and democracy around the world. Facebook collects vast amounts of user data – including intimate facts like body weight and pregnancy status – to map its users’ social DNA. The company then sells this information to anyone who wants to “micro-target” its 2.9 billion users, from shampoo manufacturers to Russian and Chinese intelligence agencies. In this way, Facebook enables third parties to manipulate minds and trade in “human futures”: predictive models of the decisions individuals are likely to make.
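To make the idea of a “human future” concrete, consider a deliberately simplified, hypothetical sketch of a propensity model: a score predicting how likely a given user is to act on a targeted ad. Every feature name and weight below is invented for illustration; Facebook’s actual models are proprietary and trained on billions of events, but the underlying logic is this kind of prediction.

```python
# Hypothetical sketch of a "human futures" model: a propensity score
# predicting whether a user will act on a targeted ad. All feature
# names and weights are invented for illustration only.

import math

# Toy user profile, mimicking the kinds of attributes the article describes.
user = {
    "age": 34,
    "recently_searched_baby_products": 1,   # proxy for pregnancy status
    "engages_with_political_content": 1,
    "late_night_usage_hours": 2.5,
}

# Invented weights; a real platform would learn these from behavioral data.
weights = {
    "age": -0.01,
    "recently_searched_baby_products": 1.2,
    "engages_with_political_content": 0.8,
    "late_night_usage_hours": 0.3,
}
bias = -1.5

def propensity(profile: dict, weights: dict, bias: float) -> float:
    """Logistic score in [0, 1]: predicted probability of the target action."""
    z = bias + sum(weights[k] * v for k, v in profile.items())
    return 1.0 / (1.0 + math.exp(-z))

score = propensity(user, weights, bias)
print(f"Predicted probability of responding to the ad: {score:.2f}")
# Advertisers then buy access to precisely the users whose scores are highest.
```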

Facebook has been used around the world to sow distrust in democratic institutions. Its algorithms have enabled real-world violence, from the genocide in Myanmar to the recruitment of terrorists in South America, West Africa, and the Middle East. Lies about voter fraud in the United States, promoted by former President Donald Trump, flooded Facebook in the run-up to the Jan. 6 riots. Meanwhile, in Europe, Facebook enabled Belarusian strongman Aleksandr Lukashenko’s perverse efforts to use migrants as weapons against the European Union.

In the Czech Republic, disinformation originating from Russia and shared on the platform has flooded Czech cyberspace, owing to Facebook’s malign algorithms. An analysis conducted by my company found that the average Czech citizen is exposed to 25 times more disinformation about Covid-19 vaccines than the average American. The situation is so dire, and the government’s response so inept, that Czechs are relying on civil society – including volunteers known as Czech Elves – to monitor and counter this influence.

Efforts to contain Facebook’s threat to democracy have failed miserably. In the Czech Republic, Facebook has partnered with Agence France-Presse (AFP) to identify malicious content. But with only one part-time employee and a monthly quota of just ten suspect posts, these efforts are a drop in the ocean of disinformation. The “Facebook Files” published by The Wall Street Journal confirm that Facebook takes action against “only 3-5 percent of hate speech.”

Facebook has given users the ability to opt out of personalized and political ads, but this is a token gesture. Some organizations, such as Ranking Digital Rights, have asked the platform to disable ad targeting by default. Even that is not enough. Micro-targeting, the root of Facebook’s business model, relies on artificial intelligence to capture users’ attention, maximize engagement, and disable critical thinking.
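What engagement maximization means in practice can be illustrated with a toy ranking function. The posts, signals, and weights below are entirely hypothetical; real feed-ranking systems are vastly more complex and not public. The point is structural: when a feed is sorted purely by predicted engagement, inflammatory content wins by construction.

```python
# Hypothetical sketch of engagement-maximizing feed ranking. Field names
# and the scoring formula are invented for illustration; actual ranking
# systems are proprietary and far more complex.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click probability
    predicted_shares: float   # model's estimate of share probability
    predicted_outrage: float  # emotional-arousal signal; outrage travels well

def engagement_score(post: Post) -> float:
    # Note what is absent: no term for accuracy, context, or social cost.
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_shares
            + 1.5 * post.predicted_outrage)

feed = [
    Post("Calm, sourced explainer on vaccine safety", 0.10, 0.02, 0.01),
    Post("Shocking claim: vaccines are a secret plot!", 0.30, 0.25, 0.60),
]

# Sorting purely by predicted engagement pushes the inflammatory post first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.text}")
```

Nothing in this toy scorer rewards truthfulness; that omission, not any single weight, is the design choice at issue.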

In many ways, micro-targeting is the digital equivalent of the opioid crisis. But the US Congress has moved aggressively to protect people from opioids, with legislation aimed at increasing access to treatment, education, and alternative medications. To stop the world’s addiction to fake news and lies, lawmakers must recognize the disinformation crisis for what it is and take similar action, beginning with proper regulation of micro-targeting.

The problem is that nobody outside of Facebook knows how the company’s complex algorithms work, and it could take months, if not years, to decipher them. That means regulators have no choice but to rely on Facebook’s own people to guide them through the factory. To encourage this cooperation, Congress must offer insiders comprehensive civil and criminal immunity, as well as financial rewards.

Regulating social media algorithms seems complicated, but compared with the even greater digital threats lurking on the horizon, it is low-hanging fruit. “Deepfakes” – the large-scale, AI-based manipulation of videos and images to sway opinion – are hardly a topic of conversation in Congress. Yet while lawmakers fret over threats from conventional content, deepfakes pose an even greater challenge to individual privacy, democracy, and national security.

Meanwhile, Facebook is becoming increasingly dangerous. A recent investigation by MIT Technology Review found that Facebook funds misinformation by paying “millions of advertising dollars to clickbait actors” through its advertising platform. And CEO Mark Zuckerberg’s plans to build a metaverse, “a convergence of physical, augmented, and virtual reality,” should spook regulators everywhere. Just imagine the potential harm these unregulated AI algorithms could do if they were allowed to create a new immersive reality for billions of people.

In a statement following the recent Washington, DC hearings, Zuckerberg reiterated an offer he previously made: Regulate us. “I don’t think private companies should make all decisions on their own,” he wrote on Facebook. “We strive to do the best job we can, but at some level our democratically elected Congress is the right body to assess trade-offs between social equities.”

Zuckerberg is right: Congress has a responsibility to act. But Facebook also has a responsibility to act. It can show Congress what social injustices it continues to create, and how it creates them. Until Facebook opens its algorithms to scrutiny – guided by its own experts – the war on disinformation will be unwinnable, and democracies around the world will remain at the mercy of a ruthless, rogue industry.

František Vrabel is CEO and Founder of Semantic Visions, a Prague-based analytics company that collects and analyzes 90% of the world’s online news content.
