An ‘eraser button’? Modest ideas might help rein in Big Tech

WASHINGTON – Break up Big Tech? How about shrinking tech companies’ shield against liability in cases where the content they push to users causes harm? Or creating a new regulator to strictly oversee the industry?

These ideas have attracted official attention in the US, Europe, the UK, and Australia as controversy has enveloped Facebook – which has renamed itself Meta – along with Google, Amazon, and other giants. Revelations of deep-seated problems by former Facebook product manager Frances Haugen, backed up by a trove of internal company documents, have given new impetus to legislative and regulatory efforts.

But while regulators are still weighing bigger moves such as breaking up some companies, the most realistic changes may be less ambitious – and also the kind of things people might actually notice in their social feeds.

So lawmakers are getting creative, introducing a slew of bills designed to rein in Big Tech. One bill proposes an “eraser button” that would let parents instantly delete all personal information collected from their children or teens. Another proposal would ban certain features for children under the age of 16. Also on the table: a prohibition on collecting personal data from anyone aged 13 to 15 without their consent, and a digital “bill of rights” for minors that would likewise restrict the collection of teenagers’ personal information.

Personal data is the commodity at stake for online users of all ages. It sits at the heart of the social platforms’ lucrative business model: harvesting data from their users and using it to sell personalized ads targeted at specific groups of consumers. Data is the financial lifeblood of a roughly $1 trillion social network giant like Facebook; ad sales make up almost all of its revenue, which reached approximately $86 billion last year.

That means the proposed legislation targeting young people’s personal data could hit the bottom line of social media companies. At a congressional hearing on child safety on October 26, executives from YouTube, TikTok, and Snapchat voiced support in principle but refused to commit to endorsing the proposed bills. Instead, they offered the familiar Washington lobbyist-speak of saying they would be happy to work with Congress on the matter. Translation: they want to influence the proposals.

When it comes to children, Republican and Democratic lawmakers – hopelessly divided over perceived political bias and hate speech on social media – agree that something needs to be done, and quickly. “One thing that unites Democrats and Republicans is, ‘Won’t somebody please think of the children,’” said Gautam Hans, a technology lawyer, free speech expert, and professor at Vanderbilt University. “It’s very sellable on a bipartisan basis.”

In the UK, efforts to establish stricter rules to protect social media users, especially younger ones, are further along. Members of the UK Parliament have asked Haugen for guidance on how to improve the country’s online safety legislation. Appearing before a parliamentary committee in London on October 25, she warned members that time is running out to regulate social media companies that use artificial intelligence to push “engaging” content to users.

The European Union’s data privacy and competition authorities have levied multibillion-dollar fines on some of the companies and introduced far-reaching new regulations in recent years. Last spring, the UK set up a new regulator dedicated to overseeing Facebook and Google.

US regulators did not act until 2019, when the Federal Trade Commission fined Facebook $5 billion and YouTube $170 million in separate cases over alleged privacy violations. Late last year, the US Department of Justice and a number of states filed landmark antitrust lawsuits against Google over its dominance in online search. In a parallel case, the FTC and several states brought an antitrust suit against Facebook, accusing it of abusing its market power to crush smaller competitors.

In addition to child protection measures, US lawmakers from both parties have put forward proposals aimed at cracking down on social media; targeting anticompetitive practices by Big Tech companies, possibly ordering breakups; and getting at the algorithms the platforms deploy to determine what shows up in users’ feeds.

All of these proposals still await final action.

The Justice Against Malicious Algorithms Act, for example, was introduced by senior House Democrats about a week after Haugen testified about how social media algorithms push extreme content to users and stoke anger in order to boost “engagement.” The bill would hold social media companies accountable by removing their shield against liability, known as Section 230, for personalized recommendations to users that are deemed to cause harm.

Some experts who support stricter regulation of social media say the legislation could have unintended consequences. It does not spell out clearly enough which specific algorithmic behaviors would trigger the loss of liability protection, they suggest, making it hard to see how the bill would work in practice and producing wide disagreement over what it would actually do.

For example, Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights, calls the bill “very broad” in ways its authors may not fully understand and suggests it could gut liability protection almost entirely. But Jared Schroeder, a First Amendment scholar at Southern Methodist University, said that while there is “a noble purpose” behind the bill, constitutional guarantees of free expression would likely thwart any attempts to sue social media platforms under it.

A spokesman for Meta, which owns the Facebook service, declined to comment on the proposed legislation. In a statement, the company said it has long advocated for updated regulations, but it did not provide details.

Meta CEO Mark Zuckerberg has suggested changes that would grant internet platforms legal protection only if they can prove that their systems for identifying illegal content are up to snuff. That requirement, however, could be harder for smaller tech companies and startups to meet, leading critics to argue it would ultimately work in Facebook’s favor.

Marcy Gordon, The Associated Press
