Purging Social Media

By Ashit Kumar Srivastava

Even as we have witnessed a pandemic of epic proportions, we have also witnessed what is called an infodemic. And human civilization faces a major threat from it.

An infodemic is the rapid, widespread circulation of information, accurate or inaccurate, across the globe, though the term mostly refers to the spread of misinformation and disinformation. In the digital age we live in, its impact cannot be belittled. Access to the internet and various digital platforms has made it easier for an individual to broadcast his thoughts and even spread rumours.

The idea of content-neutrality, or content-regulation, is to purge content of its directional characteristics and non-factual basis. As social media platforms gain more traction and become central to human expression, they are becoming an alternative to conventional platforms of information, be it news channels or radio shows. People are increasingly using these platforms as a source of news. This is an alarming situation for any democratic country.

While the platforms have continuously maintained that they are neutral and do not interfere with the content posted, the fact that they process and rank information qualifies them as curators of information. Thus, the polity demands that a much larger role be played by these platforms in handling “fake news” or “hate speech”.

Content-neutrality in relation to social media platforms refers to keeping the content posted there unbiased. And it is no surprise that most social media platforms handle cases of fake news and hate speech at their administrative levels. However, the question that always pops up is whether the measures for purging the platforms of harmful content are good enough.

There are allegations against the social media giant Facebook of uneven application of its community standards. It is alleged that Facebook may not have sufficient resources and skills to moderate content in all 22 official languages of India and thus may be unable to control the spread of misinformation.

In fact, a whistle-blower, Frances Haugen, leaked documents, now termed the “Facebook Papers”, revealing that the company had been aware of the spread of misinformation for years. In 2020, Facebook’s India Policy Head Ankhi Das quit amid allegations of bias towards right-wing content.

It is not that no steps are being taken at the platform level; rather, sophisticated artificial intelligence-driven mechanisms are being introduced to counter the growth of misinformation. However, the sheer size of social media and micro-blogging websites makes meticulous scrutiny imperative. Thus, it calls for better regulation to ensure content neutrality on the platforms.

This is exactly the path most countries are following. However, enforcing content-neutrality will always raise certain fundamental questions about its societal impact, its directional capacity and whether a piece of speech is opinion or news.

Thus, any attempt to enforce content neutrality will lead to questions about the quality of the content and its inherent nature. Germany has already taken a step in this direction by enacting the Network Enforcement Act (the NetzDG law), which requires removal of hate speech from a platform within a set deadline. The deadline is as tough as 24 hours of a complaint being received, with a penalty that may extend to EUR 50 million if the platform fails to comply. Several free speech advocacy groups have called out the NetzDG law for its stringent penal regime.

India too has come up with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021. The core principle underlying the IT Rules 2021 is to revoke the safe harbour clause, which social media platforms in India have time and again invoked to escape liability for third-party content posted on their platforms. In fact, the preamble of the new IT Rules 2021 makes no mention of the safe harbour clause (Section 79 of the Information Technology Act, 2000).

The preamble of the new IT Rules 2021 quite clearly states that the new rules are in supersession of the old IT Rules 2011, which were based on the safe harbour clause.

Rule 3 of the IT Rules 2021 lists, as one of its grounds, information that is patently false or misleading. Not only are government bodies authorized under Rule 3 to report such content to the platforms; individual grievances can also be addressed to a platform under Rule 3(2).

Thus, content-neutrality is becoming part of the regulatory mechanism, but what is interesting to observe is that content cannot be regulated by a single entity alone. It requires more hands and more eyes, and the task of building a mechanism to ensure content neutrality cannot be left to the platforms alone.

Interestingly, the Supreme Court of India in Shreya Singhal vs Union of India (2015) said that it will not be possible for an intermediary to exhaustively scrutinize its platform for all unlawful content. Its duty lies in removing unlawful content on receiving actual knowledge through a court order or an appropriate government direction.

Even Facebook (now Meta) has reiterated that it cannot act as a “Super-Censor” in violation of the Supreme Court ruling in Shreya Singhal (submission of Meta in Maatr Foundation vs Union of India, pending before the MP High Court), given that billions of pieces of content are posted on its platform every day.

There has to be a synchronization of efforts by the platform, consumers and the government to ensure content neutrality.

—The writer is Assistant Professor of Law, Dharmashastra National Law University, Jabalpur
