Algorithms: How algorithmic recommendations can push web users toward more radical views and opinions

Social media company logos. Jaap Arriens

The promise of social media was to enable better connections among people and to expand the speed, scope and reach of digital activism.

Before social media, public figures and organizations relied on mass channels such as television to get their message across to large audiences. News media acted as gatekeepers, deciding which information reached a mass audience: fixed editorial criteria determined which stories took precedence and how they were treated. Alongside this sat citizen-to-citizen – or peer – communication, which was more informal and organic. Social media blurs the line between the two and gives well-connected individuals a role in shaping opinion.

Twenty years ago we did not have the means to raise awareness or mobilize grassroots causes at the speed and scale that social media makes possible. A hashtag like #DeleteUber can go viral and cause 200,000 Uber accounts to be deleted in a single day. In the pre-social era, successful citizen activism (such as that sparked by the Exxon Valdez oil spill) involved years of negotiations between businesses and activists. In today’s world, by contrast, a single viral tweet can wipe out millions of dollars in a company’s stock valuation or push governments to change their policies.

Polarization, misinformation and filter bubbles

While this opinion-shaping role enables unrestricted civic discourse that can be conducive to political activism, it also leaves individuals more susceptible to misinformation and manipulation.

The algorithms behind social media newsfeeds are designed to keep users interacting and to maximize engagement. Most “big tech” platforms operate without the gatekeepers or filters that govern traditional news and information sources. Combined with the vast amounts of user data these companies hold, this gives them tremendous control over the flow of information to individuals.
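To make this concrete, here is a minimal sketch of engagement-weighted feed ranking. It is an illustration, not any platform’s actual code; the post fields and the weights are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # hypothetical model outputs, scaled 0-1
    predicted_shares: float
    predicted_dwell: float   # expected seconds of attention

def engagement_score(post: Post) -> float:
    """Score a post purely by expected engagement -- no accuracy check,
    no editorial gatekeeping, just predicted interaction."""
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            + 0.05 * post.predicted_dwell)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed is simply the candidate pool sorted by predicted engagement.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", 0.20, 0.05, 40.0),
    Post("Outrage-bait conspiracy claim", 0.60, 0.45, 25.0),
])
print([p.text for p in feed])  # the emotive post ranks first
```

Nothing in such a ranking rewards accuracy; whatever provokes clicks and shares rises to the top.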

Studies show that falsehoods spread faster than the truth on social media. Often this is because emotionally charged messages are more stimulating, which makes us more likely to share them; algorithmic recommendations then amplify them further. What we see in our social media feeds, including paid advertising, is tailored to our individual preferences and our political and religious views. Such personalization can have negative consequences for society – from digital voter suppression to the targeting of minorities with disinformation or discriminatory advertising.

The algorithmic design of big tech platforms prioritizes novel, micro-targeted content, which enables an almost unchecked spread of misinformation. Apple CEO Tim Cook acknowledged as much when he recently said: “At a moment of rampant disinformation and conspiracy theories juiced by algorithms, we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement – the longer the better – and all with the goal of collecting as much data as possible.”

Voter suppression campaigns are a prominent example of misinformation disseminated through social media. The Senate investigation into disinformation campaigns surrounding the 2016 election found that “these operatives used targeted advertisements, intentionally falsified news articles, self-generated content, and social media platform tools” to deliberately manipulate the perceptions of millions of Americans.

The dark side of these engagement-driven models is online radicalization and political polarization. While social media can provide a sense of identity, purpose and connection, those who post conspiracy theories and spread misinformation online also understand the virality of social media, where divisive content generates more engagement.

We are now contending with coordinated, social-media-fueled actions that can disrupt the collective functioning of society, from financial markets to voting processes – whether the meme stocks of r/WallStreetBets or the #StopTheSteal meme wars behind what has been called “a coup for the ’gram.”

The danger is that such viral phenomena, combined with algorithmic recommendations and echo-chamber effects, create an intensifying cycle of filter bubbles in which users are pushed toward ever more radical views and opinions.
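The dynamic can be illustrated with a toy simulation – a sketch under simplified assumptions, not a model of any real platform. A recommender whose engagement model slightly rewards extremity keeps serving content just beyond the user’s current position, and the user’s views drift toward what they consume:

```python
import math
import random

random.seed(0)

def predicted_engagement(item: float, preference: float) -> float:
    """Toy engagement model: interest peaks near the user's current
    position, but more extreme content gets an engagement bonus."""
    closeness = math.exp(-(item - preference) ** 2 / 0.1)
    return closeness * (1.0 + item)

def recommend(preference: float, catalog: list[float]) -> float:
    # Serve whatever the model predicts the user will engage with most.
    return max(catalog, key=lambda item: predicted_engagement(item, preference))

catalog = [random.random() for _ in range(1000)]  # item "extremity", 0 to 1
preference = 0.1  # user starts near the moderate end of the scale

for _ in range(50):
    shown = recommend(preference, catalog)
    preference += 0.3 * (shown - preference)  # views drift toward what is consumed

print(f"preference after 50 recommendations: {preference:.2f}")  # well above 0.1
```

Each recommendation is only marginally more extreme than the last, yet the loop compounds: no single step looks radical, but the endpoint is far from where the user began.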

Algorithmic awareness and media literacy are key

Correcting algorithmic bias and better informing the users of technology platforms would, by itself, go a long way toward promoting better societal outcomes.

Some of these problems could be addressed through a mix of government policy and self-regulation by technology companies: better curating and labeling misleading information, and working with news organizations to detect misinformation using a combination of AI and crowdsourcing. Better bias-detection strategies and greater transparency about how algorithmic recommendations are delivered to users would also help.
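As a rough illustration of what such AI-plus-crowdsourcing curation might look like – a hypothetical sketch, not any platform’s actual policy; all thresholds and names are invented for the example:

```python
def label_post(model_score: float, crowd_flags: int, views: int) -> str:
    """Combine an ML classifier's misinformation probability (0-1)
    with crowdsourced user reports. Thresholds are illustrative only."""
    flag_rate = crowd_flags / max(views, 1)
    if model_score > 0.9 or (model_score > 0.6 and flag_rate > 0.01):
        return "remove-and-review"   # strong signal from model and/or crowd
    if model_score > 0.6 or flag_rate > 0.005:
        return "label-and-downrank"  # warn users, reduce algorithmic reach
    return "no-action"

# A post the model finds dubious but few users have flagged:
print(label_post(model_score=0.65, crowd_flags=50, views=10_000))
# -> "label-and-downrank"
```

The design point is that neither signal alone decides the outcome: the model catches scale, the crowd catches context, and the combination allows graduated responses instead of a blunt remove-or-ignore choice.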

What is also needed is more media literacy, including algorithmic awareness of how personalization and recommendations from big tech companies shape our information ecosystem.

Most people do not have enough algorithmic awareness to understand how algorithms shape their information ecosystem. For example, a Pew survey in the US found that adults who get their news primarily through social media know less about politics and current events. And in the Covid-19 era, the World Health Organization labeled the flood of misinformation an “infodemic.”

It is important to understand how platforms exacerbate pre-existing digital divides, which can actively harm users of search and social media. In my own research, I have found that, because of the way digital platforms surface information in search results, a user with greater health literacy is more likely to reach useful medical advice from a reputable source such as the Mayo Clinic, while the same platform will steer a less literate user toward dubious healing practices or misleading medical advice.

Social media has evolved from its initial promise as a “utopia” of rich online democratic debate into a landscape of filter bubbles and spreading hate speech. Big tech companies wield social power on an unprecedented scale. Their decisions about which behaviors, words and accounts are allowed govern billions of private interactions, shape public opinion and erode people’s trust in democratic institutions. It is time to recognize that technology platforms can no longer be viewed merely as for-profit entities; they bear a responsibility to the public. We need a conversation about how algorithmic amplification affects society, and greater awareness of the algorithmic harms that come from over-reliance on big tech.

Anjana Susarla is an Endowed Professor of Responsible AI at Michigan State University’s Eli Broad College of Business.
