Views: Should social media platforms be regulated to stop hate speech?

YES: Lawmakers and regulators should enact policies to mitigate harmful, inaccurate content

By Yosef Getachew

On January 6, a mob of insurrectionists stormed the U.S. Capitol to overturn our country’s 2020 presidential election. The attack, which left five people dead, was fueled by a stream of disinformation and hate speech that flooded social media platforms before, during and after the election. Despite their civic integrity and content moderation policies, platforms have been slow or unwilling to act to limit the spread of content intended to disrupt our democracy.

This failure is closely tied to the platforms’ business models and practices, which create incentives for the spread of harmful speech. The content that generates the most engagement on social media tends to be disinformation, hate speech and conspiracy theories. Platforms have built their business models to maximize user engagement, prioritizing profits over tackling harmful content.

While the First Amendment prevents our government from regulating speech, the government has legislative and regulatory tools at its disposal to restrict the social media business practices that bad actors exploit to spread and amplify speech that harms our democracy.

The core component of the business model of every major social media platform is to collect as much user data as possible, including traits such as age, gender, location, income and political beliefs. Platforms then share relevant data points with advertisers for targeted advertising. It should come as no surprise that disinformation agents have leveraged the data collection practices and targeted advertising capabilities of social media platforms to micro-target users, especially marginalized communities, with malicious content.

Comprehensive data privacy legislation, if passed, could require data minimization standards that limit the collection and disclosure of personal information to what is necessary to provide services to the user. Legislation could also prohibit the use of personal data for discriminatory practices that spread harmful content, such as online voter suppression. Without the vast amounts of data that platforms collect on their users, bad actors would face higher barriers to targeting users with disinformation.

In addition to data collection methods, platforms use algorithms that determine what content users see. Algorithms track user preferences through clicks, likes, and other forms of engagement. Platforms optimize their algorithms to maximize user interaction. This can mean users are led into a rabbit hole full of hate speech, disinformation, and conspiracy theories.

Unfortunately, platform algorithms are a “black box” whose inner workings are little understood. Congress should pass legislation that holds platform algorithms accountable. Platforms should be required to disclose how their algorithms process personal data. Algorithms should also undergo third-party audits to mitigate the risk of algorithmic decisions that spread and amplify harmful content.

Federal agencies with enforcement and regulatory authority can use their powers to limit the spread of harmful online speech that results from platforms’ business practices. For example, the Federal Trade Commission could use its enforcement powers against unfair and deceptive practices to investigate platforms that serve election disinformation ads despite having policies that prohibit such content. The Federal Election Commission can finalize its long-pending rulemaking to require greater disclosure of online political advertising and create more transparency about which entities are trying to influence our elections.

Outside of the legislative and regulatory process, the Biden administration should create an internet task force composed of representatives from federal, state and local government, companies, workers, public interest organizations, universities and journalism. The task force would identify tools to combat harmful speech online and make long-term recommendations for an internet that better serves the public interest.

There is no one-size-fits-all solution to eradicating disinformation, hate speech and other harmful content online. In addition to these policy ideas, federal lawmakers must also provide more support for local journalism to meet communities’ information needs.

However, social media companies have shown that profits matter more to them than the security of our democracy. Federal lawmakers and regulators must enact policies as part of a holistic approach to holding social media platforms accountable for the spread of harmful and inaccurate content.

Yosef Getachew is director of the Media & Democracy Program for Common Cause. He wrote this for InsideSources.com.

Tribune Content Agency

NO: Control over online speech should be in the hands of the users, not the government

By Jillian C. York and Karen Gullo

The U.S. elections and their dramatic aftermath have fueled the debate over how to deal with online misinformation, disinformation, lies and extremism. We have seen social media companies permanently toss the president, some of his allies, and conspiracy groups off their platforms for election misinformation, a move that raised eyebrows around the world and led to claims that those banned had been robbed of their First Amendment rights. At the same time, people used social media to coordinate plans for violence at the Capitol, prompting complaints that platforms are not doing enough to remove extremism.

This has intensified calls from politicians and others to regulate online speech by imposing rules on Facebook, Twitter and other social media platforms. Lawmakers have floated various misguided proposals to do so. One would amend Section 230 of the Communications Decency Act to make tech companies legally liable for the speech they host; the idea is that platforms would remove harmful speech to avoid a flood of lawsuits. Another would give state lawmakers the power to regulate internet speech. Finally, former President Donald Trump issued an executive order in May that was essentially designed to inject the federal government into private internet speech, letting government agencies second-guess platforms’ decisions to remove a post or ban a user. The Biden administration can rescind the order but has not yet done so.

It’s important to note that current law both gives platforms the right to curate their content at their own discretion (thanks to the First Amendment) and protects them from liability for their choices about what to remove or leave up. Without this protection, it is unlikely that we would have seen these platforms grow at all, or that competition in the space would continue to flourish.

The supposed remedies being considered by lawmakers are deeply and dangerously flawed and would violate the First Amendment’s protections for speech. They would promote government censorship, which is antithetical to democracy. Big tech companies would gain even more control over online speech than they already have, because only they can afford the litigation that scares off new entrants. These proposals would also push lawful, protected speech offline and silence the voices of marginalized and less powerful people who rely on the internet to speak out, a diverse group that includes activists, journalists, LGBTQ people and many others.

Instead, users should have more power to control what they see in their feeds. They should be able to take their data and move freely from one platform to another if they don’t like what they see. There should be more competition and more choice among platforms, so users can pick the one that’s right for them. Mergers and acquisitions between social media companies should be scrutinized, and our antitrust laws better enforced, to encourage competition. Instead of one huge platform like Facebook, with Instagram and WhatsApp, devouring its competitors, we need many different platforms for people to choose from.

Facebook, Twitter and Google have far too much control over public discourse and do a mostly terrible job of moderating speech on their platforms. The decisions they make to take down posts or close accounts are inconsistent, vague and opaque. That needs to change. Platforms should adopt standards such as the Santa Clara Principles on Transparency and Accountability in Content Moderation (developed by civil society and endorsed by numerous companies), which align content moderation practices with human rights principles, including the right to appeal decisions and to have humans, not algorithms, review removals.

Tech companies have a First Amendment right to edit and curate the content on their platforms without government intervention. The government cannot force websites to display, promote or remove speech against their wishes. We support this right. The government should not have the power to dictate what people can or cannot say online.

But until platforms embrace fairness, consistency and transparency in their curation practices, give users more power over their social media accounts, and adopt interoperability so that users don’t lose their data when they decide to switch platforms, and until policymakers find ways to encourage competition, we will continue to see misguided calls for the government to step in and regulate online speech.

Jillian C. York, director of international freedom of expression at the Electronic Frontier Foundation, is the author of “Silicon Values: The Future of Free Speech Under Surveillance Capitalism.” Karen Gullo is an analyst and senior media relations specialist at EFF. They wrote this for InsideSources.com.

Tribune Content Agency
