Internet Users at Risk from State-by-State Regulation of Internet Content Moderation
In the midst of an ongoing federal debate over content moderation by digital services, several state lawmakers are pursuing content moderation legislation of their own. Last week, for example, Florida Governor DeSantis held a press conference endorsing legislative proposals that would penalize Internet companies on the basis of their content policies. Because content removal is one of the means by which digital services protect the trust and safety of their users and the public at large, these proposals can harm the very users their sponsors claim to protect. While scholars and courts have generally considered such proposals preempted by federal law, including Section 230 and the First Amendment, policymakers, users, and digital services alike should be concerned about their impact.
As Kentucky outlets recently reported, some of this state legislative activity may have stemmed from services enforcing their acceptable-use policies against former President Trump in response to his instigation of the riot at the U.S. Capitol last month. Whatever the impetus for this flurry of legislative proposals, their impact would extend well beyond the events of January 6th.
Some of the legislative proposals, including bills in Kentucky, Utah, Oklahoma, Florida, and Arkansas, appear to be based on text that has circulated in state capitals for several years under the name "Stop Social Media Censorship Act". As local reporters in Arkansas and Utah have investigated, these proposals appear to be linked to a controversial activist whose other claims to fame include an attempt to marry a laptop, ostensibly in protest of marriage equality. These bills would give Internet users the right to sue a "social media website" that deletes or suppresses their political or religious speech (oddly, not if the website belongs to a political party or religion, a carve-out that itself raises notable First Amendment flags). The consequences of these bills would reach far beyond political or religious speech. For example, content that could be labeled political or religious speech might promote practices detrimental to public health, putting people at risk during a pandemic and forcing digital services to choose between avoiding litigation and protecting users from potential harm.
Other recent bills take different approaches. North Dakota House Bill 1144, for example, repurposes language from Section 230's federal "Good Samaritan" provision and would allow civil lawsuits for damages and attorney's fees against social media sites that restrict content, brought not only by the person who posted the information but also by any "person who would otherwise have received the writing, speech, or publication". This type of legislation could force services to second-guess every removal decision and prevent quick responses to emerging threats.
Arizona's House Bill 2180 goes in a different direction. Under this proposal, a person who enables online users to upload publicly available content to the Internet, and who exercises some degree of control over the uploaded content for "politically biased reasons", would (1) be considered a "publisher", (2) not count as a "platform", and (3) be liable for damages suffered by an online user as a result of that person's actions, in suits that could be brought by the attorney general or by the user. In addition, such a "publisher" would have to pay the attorney general an annual fee for every user in the state authorized to upload publicly available content to its service.
This bill rests on the misconception that online sites must be either "publishers" or "platforms", a misconception so widespread that it has become a form of pseudolaw. Many online sites are, of course, both: newspapers are undoubtedly publishers, but "platforms" with respect to their comment sections. Many websites viewed as "platforms" also create their own content, independent of user-generated content. Neither term appears in the relevant federal law, Section 230(f), which refers only to "interactive computer services" and "information content providers". Even granting the false publisher/platform distinction, the Arizona bill would produce some bizarre results, such as Arizona news publishers having to pay fees in order to run moderated comment sections on their articles.
These state efforts are not realistically workable and are likely preempted by federal law. Their broader effect, should they take force, would be to chill digital services' efforts to maintain the trust and safety of Internet users by creating a patchwork of state regulatory obligations for companies that try to suppress clearly objectionable content. Such breadth and vagueness in a law would also spark an avalanche of frivolous litigation, exactly the burden Section 230 was meant to reduce so that American digital innovation could flourish.
These misdirected efforts would also introduce varying degrees of state-level legal risk into companies' efforts to restrict content that is likely lawful but potentially harmful, such as content promoting self-harm or religious intolerance, or foreign-origin misinformation about vaccines or the ongoing pandemic. It is for reasons like these that regulation of Internet content has long been handled at the federal level.