Malinowski bill would make social media accountable

Congressman Tom Malinowski (NJ-7) and Congresswoman Anna G. Eshoo (CA-18) reintroduced the Protecting Americans from Dangerous Algorithms Act, legislation to hold large social media platforms accountable for their algorithmic amplification of harmful, radicalizing content that leads to offline violence.

They were joined by Representatives Sean Casten, Jason Crow, Suzan DelBene, Mark DeSaulnier, Ted Deutch, Sara Jacobs, Barbara Lee, Joe Neguse, Dean Phillips, Haley Stevens, Debbie Wasserman Schultz, and Peter Welch as cosponsors.

The bill narrowly amends Section 230 of the Communications Decency Act to remove a platform's liability immunity if its algorithm is used to amplify or recommend content directly relevant to a case involving interference with civil rights (42 U.S.C. 1985), neglect to prevent interference with civil rights (42 U.S.C. 1986), or acts of international terrorism (18 U.S.C. 2333).

Those laws (42 U.S.C. 1985-1986) are Reconstruction-era statutes originally designed to reach Ku Klux Klan conspirators.

They have been invoked in recent lawsuits against the Proud Boys, the Oath Keepers, and others following the attack on the Capitol on January 6th.

The third statute (18 U.S.C. 2333) is at issue in several lawsuits, including one against Facebook alleging that its algorithm connected Hamas terrorists with one another and enabled physical violence against Americans. The bill applies only to platform companies with 10 million or more users.


Section 230 of the Communications Decency Act (47 U.S.C. 230) immunizes online platforms from legal liability for user-generated content (with some exceptions, such as for federal crimes).

While the law has helped to enable the growth of the modern internet economy, it was enacted 25 years ago when many of the challenges we currently face could not have been predicted.

Today, large internet platforms use sophisticated, opaque algorithms to determine the content their users see, leveraging users’ personal and behavioral data to deliver content designed to maximize engagement and the amount of time spent on platforms.

These engagement-based algorithms often amplify and recommend white supremacist, anti-Semitic, and other conspiracy-oriented material that can intensify fringe beliefs and lead to offline violence.

The Protecting Americans from Dangerous Algorithms Act establishes the principle that platforms should be accountable for content they proactively promote when doing so leads to specific offline violence.

The bill preserves the core elements of Section 230 that protect the speech of users, is narrowly targeted at the algorithmic promotion of content that leads to some of the worst types of offline harms, and does not seek to mandate political “neutrality” as a condition for Section 230 protections.

“Social media companies have been playing whack-a-mole trying to take down QAnon conspiracies and other extremist content, but they aren’t changing the design of a social network that is built to amplify extremism,” said Malinowski. “Their algorithms are based on exploiting primal human emotions — fear, anger, and anxiety — to keep users glued to their screens, and thus regularly promote and recommend white supremacist, anti-Semitic, and other forms of conspiracy-oriented content. In other words, they feed us more fearful versions of what we fear, and more hateful versions of what we hate. This legislation puts into place the first legal incentive these huge companies have ever felt to fix the underlying architecture of their services — something they’ve shown they are capable of doing but are consciously choosing not to.”

“When social media companies amplify extreme and misleading content on their platforms, the consequences can be deadly, as we saw on January 6th. It’s time for Congress to step in and hold these platforms accountable. That’s why I’m proud to partner with Rep. Malinowski to narrowly amend Section 230 of the Communications Decency Act, the law that immunizes tech companies from legal liability associated with user generated content, so that companies are liable if their algorithms amplify misinformation that leads to offline violence,” said Eshoo.

“Technology platforms are often wired to optimize for user engagement and frequently amplify hate and bias-motivated violence to generate ad revenue. Tech companies shouldn’t have blanket immunity from liability when their algorithms contribute to civil rights abuses and terrorist violence. In fact, a new ADL survey found that 77% of Americans think laws need to be made to hold social media platforms accountable for recommending users join extremist groups. The ‘Protecting Americans from Dangerous Algorithms Act,’ introduced by Reps. Malinowski and Eshoo, is an important step to holding tech companies accountable when their products encourage civil rights abuses or foster terrorism.” —Jonathan Greenblatt, CEO, ADL (Anti-Defamation League)

“The Protecting Americans from Dangerous Algorithms Act continues a crucial discussion about the type of responsibility platforms should bear for the ways that their design and business model impact people and the modern information ecosystem. CR thanks Representatives Eshoo and Malinowski for their urgent and nuanced attention to algorithmic amplification as policymakers continue to evaluate the appropriate role for Section 230 and other tools to ensure that the law makes life online for people better, not worse, in the years to come.” —Laurel Lehman, Policy Analyst, Consumer Reports

“This bill identifies a problem, suggests a sensible solution and will unite all lawmakers in opposition to the unfettered use of platforms by violent extremists. Its passage would undeniably make our societies safer from hate and violence.” —Imran Ahmed, CEO, Center for Countering Digital Hate

“Conspiracy theories and extremist material online have been good for the bottom line of social media platforms, but this has come at great expense to families’ digital well being. It’s also corrosive to our society. Not enough attention has been paid to the algorithms that platforms design and deploy that prioritize user engagement above all else. Fixing this means placing safeguards on how information is amplified and recommended to people, and Representatives Eshoo and Malinowski are proposing a measured response that strikes at some of the most toxic material online.” —James P. Steyer, Founder and CEO, Common Sense

“The Malinowski-Eshoo bill’s focus on platforms’ underlying infrastructure is a helpful approach to curtailing online harms, going beyond a somewhat one dimensional focus on whether an individual piece of content should be removed or not. Addressing the models invisibly driving public discourse is an important step in the urgent task of updating the rules for the internet.” —Rose Jackson, Director of the Policy Initiative at the Atlantic Council’s Digital Forensic Research Lab; Graham Brookie, Director of the Atlantic Council’s Digital Forensic Research Lab

“The Protecting Americans from Dangerous Algorithms Act is an important step forward in addressing the dangers of the AI-driven business models that have been built on the special liability protections of Section 230. We need to focus on the choices made by the platforms in deciding which content they promote and the responsibilities that should be tied to those choices, and we compliment Reps. Eshoo and Malinowski for their longstanding leadership on this issue.” —David Chavern, President and CEO, News Media Alliance

“Section 230 works. But if reform is coming, we hope that policymakers do their due diligence to ensure that proposed legislation doesn’t harm the critical properties of the Internet. This includes conducting an Internet Impact Assessment and ensuring explicit carve outs for the Internet’s infrastructure intermediaries. We appreciate the care that Reps. Eshoo and Malinowski took by using the Internet Impact Assessment in the reintroduction of the Protecting Americans from Dangerous Algorithms Act to ensure the protection of the Internet’s critical properties.” —Andrew Sullivan, President and CEO, Internet Society

“The future of combating antisemitism, in many ways, is combating the digitization of the problem. In AJC’s October 2020 report on The State of Antisemitism in America, one out of five American Jews (22%) reported being targeted with an antisemitic remark online or through social media in the last five years. Nearly half (46%) said they reported online antisemitism to a social media platform but no steps were taken to address the incident. Tech companies and social media platforms must do more. Reforming Section 230 of the Communications Decency Act to hold tech companies liable if their algorithms promote harmful content is an important step.” —Holly R. Huffnagle, U.S. Director for Combating Antisemitism, AJC (American Jewish Committee)

“Representatives Malinowski and Eshoo’s proposed legislation is an important measure that will work to hold the technology sector accountable for irresponsibly deploying algorithms that amplify dangerous and extremist content. The titans of tech have long relied on these algorithms to maximize engagement and profit at the expense of users, and this must change. This bill will help to encourage better behavior from the industry in the interest of public safety.” —Dr. Hany Farid, Senior Advisor, Counter Extremism Project; Professor, UC Berkeley

“The design of internet platforms and choices of their executives have played a central role in amplifying disinformation, to the detriment of all Americans. They have exploited an outdated law to protect their short term profits at enormous cost to the nation. By refusing to address a catastrophe of their own making, internet platforms have made this bill a necessity.” —Roger McNamee, tech investor; co-founder of Elevation Partners, Silver Lake Partners, Integral Capital Partners; and author of Zucked: Waking Up to the Facebook Catastrophe

“If there is to be Section 230 reform, it has to be done by Congress. Representatives Malinowski and Eshoo’s bill is a measured, incremental step insofar as it would hold the largest platforms responsible when they choose to algorithmically promote violations of federal civil rights and anti-terrorism laws.” —Ellen P. Goodman, Professor, Rutgers Law School and Co-Director, Rutgers Institute for Information Policy & Law

“I commend Representatives Malinowski and Eshoo for their leadership in recognizing that the dominant digital platforms now curate social content not in the ways that consumers ideally want, but, rather, in ways that maximize engagement and profit. We need to raise new standards for the digital platforms that incentivize the promotion of positive content and disincentivize the promotion of extremist and conspiratorial content that could lead to offline violence. This innovative legislation to address the blanket liability shield provided by Section 230 of the C.D.A. will induce these much-needed media reforms.” —Dipayan Ghosh, Co-Director of the Digital Platforms & Democracy Project at the Harvard Kennedy School, author of Terms of Disservice, and former technology policy advisor at the White House

“The Malinowski-Eshoo bill is for our times. Twenty-five years ago, the drafters of Section 230 created the safe harbor to promote self-regulation and innovation by online publishers of user-generated content like electronic bulletin boards. But the world has changed dramatically since then. The most popular intermediaries are far from simple publishers or distributors of user-generated content. Today, they assertively design almost all aspects of their consumers’ online experiences; they make recommendations and deliver deeply engaging news stories, videos, and advertisements to jealously hold their users’ attention. Nor have these companies done enough to regulate themselves or the content that reaches consumers. They tinker with and refine their algorithms, even as they know that the content is sometimes discriminatory, harmful, misleading, fraudulent, or deeply divisive. For the past twenty-five years, under the prevailing Section 230 doctrine, victims could never get a hearing in court on whether these companies’ algorithmic systems facilitate or generate unlawful conduct online. The Malinowski-Eshoo bill directly redresses this problem by removing the safe harbor for companies that use algorithms in ways that deliver or otherwise amplify potentially unlawful content by design. In this way, it will help make powerful intermediaries accountable and finally incentivize them to abide by law.”—Olivier Sylvain, Professor at Fordham Law School and Director of the McGannon Center

“Recruitment is easier for extremist groups now than ever before. Large social media platforms enable coordination and their algorithms too often trap users in echo chambers that intensify fringe views. I commend Representatives Malinowski and Eshoo for their efforts to address online radicalization and the threats posed by rising extremism.” —Elizabeth Neumann, Co-Director, the Republican Accountability Project; former Assistant Secretary for Counterterrorism and Threat Prevention, U.S. Department of Homeland Security

“Online content providers have maximized their valuation and profitability by behaviorally hijacking users’ attention. Their algorithmic instruments learn from our every move, and too often serve radical and conspiratorial content to keep us tuned in. I applaud Representatives Malinowski and Eshoo for their proposed amendment to Section 230, a step toward a safer digital and physical world.” —Ramesh Srinivasan, Professor and Director, UC Digital Cultures Lab, author of Beyond the Valley

“The Malinowski-Eshoo bill is the only piece of legislation that I am aware of which would have a significant impact on harmful misinformation on social media platforms without harming free speech by users. Social media algorithms elevate the most inflammatory, shocking, or surprising content regardless of its truthfulness or potential harm. By privileging this information over less provocative content, they pour gasoline on the fire of every conflict or conspiracy theory in our society. Every other publisher is liable for potential harms caused by their editorial decisions, but social media companies exploit a loophole in the current law to avoid accountability. If a social media company harms Americans by choosing to serve harmful content on their platforms to millions of people, those harmed should be able to seek recourse in civil court. The platforms would have to think twice before designing an algorithm that encourages our worst angels. Instead, they could create a system where users talk to other users without an artificial intelligence putting their thumb on the scale. The Malinowski-Eshoo bill would return the internet to the vibrant, healthy public square that it was before algorithms began to tear us apart.” —E.J. Fagan, Assistant Professor, Department of Political Science, University of Illinois at Chicago

In January, following the attack on the U.S. Capitol, Representatives Malinowski and Eshoo sent letters to the CEOs of Facebook, YouTube, and Twitter urging the companies to address the fundamental design features of their social networks that facilitate the spread of extreme, radicalizing content to their users. Representatives Malinowski and Eshoo, along with dozens of their colleagues, called on the companies to reexamine their policy of maximizing user engagement as the basis for the algorithmic sorting and promotion of news and information, and to make permanent, platform-wide design changes to limit the spread of harmful, conspiratorial content.

Representative Malinowski represents New Jersey’s 7th congressional district. Last year, his bipartisan resolution to condemn QAnon and the dangerous conspiracy theories it promotes passed in the House of Representatives 371-18. He has led the effort in the House to restore funding for the U.S. Department of Homeland Security’s program to combat domestic terrorism and targeted violence.

Representative Eshoo represents California’s 18th congressional district, which includes much of Silicon Valley. She is a senior member of the Energy and Commerce Committee’s Subcommittee on Communications and Technology, which has jurisdiction over Section 230. She was a conferee for the Telecommunications Act of 1996, which included enactment of Section 230.
