It’s not just a social media problem – how search engines spread misinformation

Introduction by Croakey: As governments around the world grapple with the complexities of regulating technology giants, and given the wide-ranging implications for health, it is important that the health sector engages with these policy debates and developments.

A useful rundown of some of the concerns and possible solutions comes from a recent #LongRead at The Atlantic: “How to Put Out the Dumpster Fire of Democracy: Our democratic habits have been killed off by an internet kleptocracy profiting from disinformation, polarization and anger. Here’s how to fix that.”

Authors Anne Applebaum and Peter Pomerantsev argue that democracy becomes impossible as the algorithms of digital platforms encourage hate speech, radicalization, conspiracy and propaganda, with an “online system controlled by a tiny number of secret companies in Silicon Valley”.

They write:

In this sense, the Internet has taken us back to the 1890s: We have a small class of enormously rich and economically powerful people again, whose obligations are to themselves and perhaps their shareholders, but not to the common good.

We need to change the design and structure of online spaces so that citizens, businesses and political actors have better incentives, more choices and more rights.

Applebaum and Pomerantsev argue that breaking up the big corporations could help diversify the online economy, but that this alone will not help democracy unless efforts are also made to address the problem of algorithms, through greater transparency and public control over their use.

Their article suggests that the most appropriate historical model for regulating algorithms is not monopoly-busting but environmental protection: “To improve the ecology around a river, it is not enough just to regulate corporate pollution. Nor will it help simply to break up the polluting companies …”

Meanwhile, a law professor at the University of Ottawa, Vivek Krishnamurthy, has raised concerns that Canadian plans to regulate social media content are unlikely to be effective and could have unintended consequences for countries “that do not share our commitment to human rights.”

Authoritarian governments are passing social media laws similar to Canada’s, imposing draconian penalties on social media companies that fail to remove content that is illegal under national law, writes Krishnamurthy in the Canadian edition of The Conversation.

“The problem, however, is that the laws of many authoritarian countries criminalize expression that is protected by international human rights law, from dissent against the ruling regime to the cultural and religious expression of minority communities,” he says.

The answer, says Krishnamurthy, is for democracies that respect human rights to work together to develop a multilateral approach to tackling harmful content online.

In the following article, Associate Professor Chirag Shah of the University of Washington Information School explains the “vicious cycle” through which search engine algorithms spread misinformation.

Chirag Shah writes:

Search engines are one of society’s main gateways to information and people, but they are also conduits of misinformation. Like problematic social media algorithms, search engines learn to serve you what you and others have clicked on before. Because people are drawn to sensational content, this dance between algorithms and human nature can encourage the spread of misinformation.

Search engine companies, like most online services, make money not only from selling ads, but also from tracking users and selling their data through real-time bidding. People are often led to misinformation by their desire for sensational and entertaining news, as well as information that is either controversial or confirms their views. One study found that more popular YouTube videos about diabetes are less likely to contain medically valid information than less popular videos on the subject, for example.

Ad-driven search engines, like social media platforms, are designed to reward clicks on enticing links, because those clicks help the search companies improve their business metrics. As a researcher who studies search and recommendation systems, I have shown with my colleagues that this dangerous combination of corporate profit motive and individual vulnerability makes the problem difficult to fix.

How search results go wrong

When you click on a search result, the search algorithm learns that the link you clicked is relevant to your search query. This is known as relevance feedback. This feedback helps the search engine to give this link a higher weight for this query in the future. If enough people click this link enough times to provide strong relevance feedback, this website will appear higher in search results for this and related queries.

People are more likely to click links that appear near the top of the search results list. This creates a positive feedback loop: the higher a website appears, the more clicks it gets, and those clicks in turn move it higher or keep it at the top. Search engine optimization techniques exploit this to increase the visibility of websites.
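
As a rough illustration of this loop, here is a toy sketch (an assumption-laden simplification, not any search engine’s actual algorithm): each click adds weight to the clicked link for a query, and links shown nearer the top are more likely to be clicked. Over many simulated searches, the more enticing link climbs to the top and stays there.

```python
import random

# Toy illustration (assumed, simplified) of the feedback loop described above:
# a click adds weight to the clicked link for this query, and links shown
# nearer the top of the results are more likely to be seen and clicked.

weights = {"verified-news.example": 1.0, "sensational-post.example": 1.0}
appeal = {"verified-news.example": 0.2, "sensational-post.example": 0.8}

def ranked_results():
    # Higher accumulated weight -> higher position in the results list.
    return sorted(weights, key=weights.get, reverse=True)

def simulate_one_search():
    for position, link in enumerate(ranked_results()):
        p_click = appeal[link] * (0.7 ** position)  # position bias
        if random.random() < p_click:
            weights[link] += 1.0  # relevance feedback from the click
            return

for _ in range(10_000):
    simulate_one_search()

print(ranked_results())  # the more enticing link climbs to the top and stays there
```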

This misinformation problem has two aspects: how a search algorithm is evaluated, and how people react to headlines, titles and snippets. Search engines, like most online services, are judged on a number of metrics, one of which is user engagement. It is in the search engine companies’ interest to give you things to read, watch or simply click on. So when a search engine or recommendation engine compiles a list of items to display, it calculates the probability that you will click on each item.
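
To make that incentive concrete, here is a hypothetical comparison in which the items and scores are invented purely for illustration: a ranker that orders results by predicted click probability can place a sensational but misleading item above an accurate one, while ordering by relevance would not.

```python
# Hypothetical candidates for a query; the relevance and click-probability
# scores are invented purely for illustration.
candidates = [
    {"title": "Accurate, well-sourced report",    "relevance": 0.9, "p_click": 0.2},
    {"title": "Sensational, misleading headline", "relevance": 0.1, "p_click": 0.6},
]

# A ranker judged on engagement orders items by predicted click probability...
by_engagement = sorted(candidates, key=lambda c: c["p_click"], reverse=True)
# ...while a ranker judged on relevance would order them by relevance.
by_relevance = sorted(candidates, key=lambda c: c["relevance"], reverse=True)

print([c["title"] for c in by_engagement])  # misleading headline shown first
print([c["title"] for c in by_relevance])   # accurate report shown first
```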

Traditionally, this should surface the information that is most relevant. However, the notion of relevance has become fuzzy, because people use search to find entertaining results as well as genuinely relevant information.

Imagine you are looking for a piano tuner. If someone showed you a video of a cat playing the piano, would you click on it? Many people would, even though it has nothing to do with piano tuning. The search service takes that click as positive relevance feedback and learns that it is acceptable to show a cat playing the piano when people search for piano tuners.

In many cases, it even learns that this is better than showing only the relevant results: people enjoy watching funny cat videos, and the search system gets more clicks and more user engagement.

This might seem harmless. So what if people occasionally get distracted and click on results that are not relevant to their query? The problem is that people are drawn to striking images and sensational headlines. They tend to click on conspiracy theories and sensational news, not just cats playing the piano, and they do so more readily than they click on real news or genuinely relevant information.

Famous but fake spiders

In 2018, Google searches spiked in response to a Facebook post claiming that a new deadly spider had killed several people in multiple states. During the first week of this trending query, my colleagues and I analyzed the top 100 results of Google searches for “new deadly spider”.

The story turned out to be fake, but people who searched for it were exposed mostly to misinformation related to the original fake post. As users continued to click on and share this misinformation, Google kept serving those pages at the top of the search results.

This pattern of exciting but unverified stories surfacing and people clicking on them continues, with people apparently either uninterested in the truth or believing that if a trusted service such as Google Search shows them these stories, they must be true. More recently, a debunked report claiming that the coronavirus had leaked from a laboratory in China gained traction on search engines because of this vicious cycle.

Spotting the misinformation

To test how well people differentiate between accurate information and misinformation, we developed a simple game called “Google Or Not”. This online game shows two sets of results for the same query. The goal is simple: pick the set that is reliable, trustworthy or most relevant.

One of the two sets contains one or two results that have been verified and flagged as misinformation or as debunked stories. We made the game publicly available and promoted it through various social media channels. In total, we collected 2,100 responses from more than 30 countries.

When we analyzed the results, we found that about half of the people mistakenly selected as trustworthy the set containing one or two misinformation results. Our experiments with hundreds of other users over many iterations produced similar results. In other words, roughly half the time people select results that contain conspiracy theories and fake news. And the more people select these inaccurate and misleading results, the more the search engines learn that this is what people want.

Setting aside questions of Big Tech regulation and self-regulation, it is important that people understand how these systems work and how they make money. Otherwise, market forces and people’s natural tendency to be drawn to flashy links will keep the vicious cycle going.

Chirag Shah is an Associate Professor at the University of Washington Information School (iSchool). He is the founding director of the InfoSeeking Lab, which focuses on issues of information retrieval, human-computer interaction (HCI) and social media, and is supported by grants from the National Science Foundation (NSF), the National Institutes of Health (NIH), the Institute of Museum and Library Services (IMLS), Amazon, Google and Yahoo.

The article above was first published in the US edition of The Conversation.

Further reading

From The Atlantic: How to put out the dumpster fire of democracy. Our democratic habits have been killed off by an internet kleptocracy profiting from disinformation, polarization and anger. Here’s how to fix that.

From Canada: Planned social media regulations set a dangerous precedent.
