US Supreme Court to weigh recommender algorithms in key web protections case

The US Supreme Court agreed to hear a case centered on Section 230, a legal shield that protects internet platforms from civil and criminal liability for user content.
Source: US Supreme Court

The US Supreme Court is preparing to take up a case with significant implications for how the internet functions.

In Gonzalez v. Google LLC, the court will determine whether Google LLC's YouTube is liable for content the platform algorithmically recommends to users. It is the first time the US high court has agreed to hear a challenge to Section 230 of the Communications Decency Act, a landmark piece of legislation that protects internet platforms from civil and criminal liability for user-created content.

In the case, the Gonzalez family argues Google should be liable for the promotion of an Islamic State recruitment video by its algorithms. The video is allegedly tied to a 2015 terror attack in Paris that killed 130 people, including 23-year-old Nohemi Gonzalez.


Recommender algorithms drive much of the traffic on today's internet platforms by, for example, suggesting items to buy on e-commerce platforms, videos to watch on streaming platforms or sites to visit in search engine results. In Google's case, a recommender algorithm drives the functionality of a Google or YouTube search by finding an optimal URL or video for users based on what is typed into the search bar.
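As a rough illustration of the concept (not Google's or YouTube's actual system), the sketch below ranks a hypothetical catalog of videos against a typed query using simple keyword similarity; the item names, keyword weights and the `recommend` function are invented for this example.

```python
# Minimal content-based recommender sketch. Real search and recommendation
# systems use large learned models; this toy version only illustrates the
# idea of ranking candidates against what the user typed.
from math import sqrt

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse keyword-weight vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(query_terms: list, catalog: dict, k: int = 3) -> list:
    """Return up to k catalog items most similar to the typed query."""
    query_vec = {t: 1.0 for t in query_terms}
    scored = [(cosine(query_vec, item_vec), item) for item, item_vec in catalog.items()]
    return [item for score, item in sorted(scored, reverse=True)[:k] if score > 0]

# Hypothetical catalog of videos described by weighted keywords.
catalog = {
    "video_a": {"cooking": 0.9, "pasta": 0.7},
    "video_b": {"travel": 0.8, "paris": 0.6},
    "video_c": {"pasta": 0.9, "recipe": 0.8},
}
print(recommend(["pasta", "recipe"], catalog))  # e.g. ['video_c', 'video_a']
```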

If the court rules that recommender algorithms are not protected under Section 230, the decision would force platforms to rethink how content is pushed to consumers, policy experts said.

“These are really complex issues, and they’re going to require complex regulation and thoughtful interactions by the government,” said Patrick Hall, principal scientist at law firm bnh.ai and professor of data ethics at The George Washington University.

Scope considerations

Two years ago, conservative Justice Clarence Thomas said the court should consider whether the text of Section 230 "aligns with the current state of immunity enjoyed by internet platforms" after the court dismissed a separate case regarding Section 230 authority.

In its earlier review of Gonzalez v. Google, the US Court of Appeals for the 9th Circuit ruled that Section 230 protects recommendation engines. However, the majority said Section 230 "shelters more activity" than Congress may have previously envisioned, and it encouraged federal legislators to clarify the scope of the law.

Hall compared the propagation of harmful content on social platforms to a newscaster talking to children on-air about suicide ideation. “The FCC would be all over that,” Hall said. “What is the reach of news versus some of these social media recommenders? In some cases, I bet social media reaches more people.”

Platform transparency has become a tech policy talking point as lawmakers debate legislation that would force companies to reveal how their algorithms behave to researchers and others. Recommender algorithms received renewed scrutiny over the past year following reports about Instagram LLC, TikTok Inc. and other platforms recommending content that harmed some users’ mental health, spread misinformation or eroded democracy.

But opening up an algorithm to the public or researchers is a double-edged sword, said Michael Schrage, a research fellow at the Massachusetts Institute of Technology and author of the book Recommendation Engines.

Recommender algorithms are best understood when they are transparent, interpretable and explainable, but that level of transparency has commercial and competitive implications, Schrage said.

“Google has been secretive about its algorithms because there’s an incentive to game the algorithm. If I understand the Google algorithm, I can get my content higher in the recommendation,” the researcher said.

Technical concerns

Algorithms are designed mainly to optimize engagement. For instance, a recommender engine could be directed to maximize ad revenue on certain content or to minimize "likes" and "shares" on low-quality content. If engagement with specific content is deemed valuable to a platform, its algorithm calculates the probability of a user liking, commenting on or sharing it. If that calculated value meets a certain threshold, the content is displayed in the user's feed.
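A minimal sketch of that threshold logic, assuming a simple logistic scoring model, is shown below; the feature names, weights and the `should_show_in_feed` function are hypothetical stand-ins for the much larger learned models platforms actually deploy.

```python
# Toy engagement-threshold gate: predict the probability a user engages with
# an item, then show it in the feed only if the prediction clears a cutoff.
# Weights and features here are hand-set for illustration only.
from math import exp

def engagement_probability(features: dict, weights: dict) -> float:
    """Logistic model: predicted probability the user likes, comments or shares."""
    score = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + exp(-score))

def should_show_in_feed(features: dict, weights: dict, threshold: float = 0.5) -> bool:
    """Display the item only if predicted engagement meets the threshold."""
    return engagement_probability(features, weights) >= threshold

# Hypothetical features for one user/item pair.
weights = {"topic_match": 1.2, "creator_followed": 0.8, "past_watch_time": 0.5}
features = {"topic_match": 0.9, "creator_followed": 1.0, "past_watch_time": 0.4}
print(should_show_in_feed(features, weights))  # True if probability >= 0.5
```

Raising or lowering the threshold is one of the levers a platform could tune: a higher cutoff surfaces less content overall, while a lower one pushes more borderline items into feeds.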

In Google’s case, the only way to know whether YouTube deliberately optimized for the recommendation of terrorist content would be for the tech giant to open up its platforms to inform the court’s decision, Schrage said. That would likely require either legislative or legal action to force Google’s cooperation. One option would be for the Supreme Court to appoint a special master to review how Google labeled the metadata for the video in question.

Ari Cohn, free speech counsel at TechFreedom, views the recommendation of the ISIS content as a likely error on Google’s part. How the court responds to it could have significant implications, Cohn noted.

"Sometimes [platforms] are going to fall short, but to impose liability because sometimes they didn't succeed doesn't make sense if you actually want the internet to be any safer," Cohn said in an interview.

Federal legislative efforts aimed at requiring more platform transparency have stalled in recent months as congressional leadership focuses more on issues like inflation and abortion. As Gonzalez v. Google proceeds, it might motivate lawmakers to move on Section 230 reforms, said Jesse Lehrich, who heads Accountable Tech, a group advocating for regulations on tech platforms.

Barring preemptive legislative reforms, policy experts said the US justices are likely to urge congressional action.

"It's not that the company explicitly wants to build tools to empower [bad] things to happen," Lehrich said. "But it's the byproduct of their business model … with very few checks or bounds."

Representatives of the Supreme Court and Google did not return requests for comment.

Gonzalez v. Google is scheduled for argument during the Supreme Court's term that begins in October 2022. An exact argument date has yet to be set.
