Opinion | What I Learned on My Quest to Fix America’s Social Media Problem

To counteract this, Fukuyama and his colleagues proposed “middleware” as a reform of Section 230 of the Communications Decency Act of 1996 – the law that exempts digital platforms from the common-law duty of care that other industries bear for the foreseeable harms they cause. (Unlike newspapers and broadcasters, internet companies cannot be sued for the falsehoods and misinformation they publish and promote.) They envisioned middleware as third-party software that would assess the relevance and accuracy of the information presented on social media and by search engines, and would adjust news feeds and search results accordingly. Fukuyama and his colleagues argued that the platforms should give their users the ability to filter their content through middleware – an approach that, unlike some other proposed Section 230 reforms, would avoid government control over content.

“A competitive layer of new companies with transparent algorithms would take over the editorial gateway functions that are currently being filled by dominant technology platforms with opaque algorithms,” wrote Fukuyama and his colleagues.

Until we saw Fukuyama’s work, we had never thought of calling NewsGuard “middleware.” We had developed a journalistic solution to a technology problem: giving people information to help them decide whether a source they come across online is generally trustworthy. We rate the news and information sources that account for 95 percent of engagement in the US, UK, Germany, France and Italy. NewsGuard analysts apply nine basic, apolitical criteria of standard journalistic practice – such as whether a source discloses its ownership or publishes corrections. Each source receives a weighted score between 0 and 100, a red or green rating, and a “nutrition label” describing the type of website. Consumers can subscribe to a browser extension or a mobile version, but they are more likely to get access through companies and other institutions that license the ratings and labels and make them available to people on their networks.

Although we did not think of our ratings as middleware, that is how our licensees deploy them. Microsoft was the first technology company to offer our ratings and labels, integrating them into its Edge mobile browser and giving users of the Edge desktop browser free access. Internet providers such as British Telecom in the UK, health systems such as Mount Sinai in New York, and more than 800 public libraries and schools in the US and Europe offer our ratings and labels through a browser extension that inserts red or green labels next to news sources in social media feeds and search results. Research shows that access to these ratings produces a dramatic decrease in the number of people who believe or share false content, and a boost in trust in high-quality news sites. Rating websites is more effective than fact-checking, which often doesn’t catch up with falsehoods until after they have gone viral.

A middleware solution makes sense for the platforms: it is hardly surprising that an industry which declared at its birth 25 years ago that it would not be held responsible for its actions would now need help operating responsibly. But based on our experience at NewsGuard, it will take reform of Section 230 – or other threats to platform immunity – to force Silicon Valley to give its users the option of safety tools.

It is instructive to compare the willingness of Microsoft – which faced its own regulatory reckoning a generation ago – to give its users a choice with Silicon Valley’s closed, no-choice approach.

Microsoft operates a corporate Defending Democracy Program. Microsoft’s president, Brad Smith, wrote a book in 2019 called Tools and Weapons, calling on his industry to do better. “If your technology changes the world, you have a responsibility to deal with the world you helped create,” Smith wrote, faulting an industry that too often treats “disruption as an end in itself.”

Jack Dorsey, Twitter’s chief executive officer, testifies during a House Committee on Energy and Commerce hearing on Twitter’s transparency and accountability on Capitol Hill, September 5, 2018, in Washington, DC. Earlier that day, Dorsey faced questions from the Senate Intelligence Committee on how foreign actors use the platforms to influence and manipulate public opinion. | Drew Angerer / Getty Images

Without betraying confidences from our discussions with the Silicon Valley platforms, I can say that, unlike Microsoft, they are not yet ready to give their users information about the trustworthiness of the sources promoted in their products. There are executives at these companies who privately admit they want to do more and would be relieved to rely on accountable third parties with transparent criteria. Jack Dorsey, Twitter’s chief executive, has even mused about “a more market-driven approach to algorithms.” The official line, however, is that the platforms must completely control the user experience: why let users adjust the algorithms with safety tools when those algorithms have made these companies among the most valuable in the world? That is why Congress must reform Section 230 to force the platforms to open up to middleware.

The platforms would certainly need help if they were forced to make their products less toxic. Facebook, Google/YouTube, and Twitter each keep secret internal ratings of news publishers based on trustworthiness, but they don’t tell publishers how they rank or how the algorithms use the ratings. Whatever mix of human judgment and artificial intelligence the social media companies use to create those rankings, Russia’s RT – Vladimir Putin’s propaganda arm, formerly known as Russia Today – is hugely successful. When RT became the first news channel to reach a billion views on Google’s YouTube in 2013, a YouTube vice president appeared on RT’s celebratory broadcast to praise RT’s coverage as “authentic” and free of “agendas or propaganda.” He was sincere.

NewsGuard gives RT a red rating and states in a long label that Putin is funding RT to spread Kremlin lies and to promote division in democracies. RT must now register as a foreign agent in the US.

Without using the term “middleware,” this is the approach taken by a new British plan to reform the platforms. The UK government’s “Online Harms” proposal would regulate major social media platforms and search engines operating in the UK to “make companies take responsibility for the safety of their users” while “defending freedom of expression.”

The UK regulations would require “companies to have appropriate systems and processes in place to combat harmful content and activities.” The plan states that “trustworthy content can be clearly identified and tools are provided to users to manage the content displayed.” The European Commission’s Code of Practice on Disinformation likewise requires platforms to “empower” their users with indicators of source trustworthiness, based on journalistic principles and obtained from third parties.

In anticipation of the UK regulations, a homegrown industry of safety tools has sprung up in the UK that the platforms could offer to meet this new duty of care. The UK government has published a list of more than 80 “safety tech providers” that Silicon Valley could draw on to provide a layer of protection between the platforms and their consumers. SafetoNet, for example, tracks the keystrokes that children and teenagers make online to identify signs of stress and prevent bullying.

Marketing gurus probably wouldn’t suggest “middleware” as an exciting name for a new industry, but it could nonetheless change the way we interact with the internet for the better. These tools would help the platforms meet new legal responsibilities to avoid harm – not by trusting them to keep running their own secret, inadequate algorithms, and not by having the government police content, but by giving their users a choice about their online experience.

We tried the alternative, and 25 years after Section 230, most people get their news from social media platforms that are optimized not for accuracy or safety but for maximizing engagement and revenue, even at the cost of spreading misinformation and hoaxes. Requiring the platforms to provide tools that let users choose what to trust in their news feeds would finally counter the infodemic of online misinformation.
