Are Tech Giants the 4th Tier of Authorities?


In January 2021, US President Donald Trump's Twitter account was "permanently suspended due to the risk of further incitement of violence", Twitter stated. The decision followed a broader Big Tech purge of the online platforms used by Trump and his supporters, demonstrating the power social media platforms wield over public discourse and the ability of tech giants to exercise public power with little or no public accountability. Lawmakers and the public had been calling for his banning for years, and former First Lady Michelle Obama tweeted that the Silicon Valley giants should stop enabling Mr Trump's "monstrous behaviour" and permanently expel him. Similarly, local South African celebrities and politicians declared their intention to deactivate Twitter and other social media accounts. The action taken by these platforms may certainly be viewed as evidence that self-regulation can correct transgressions that occur on various platforms. But is this a dangerous precedent, considering the legal and public instruments in place to safeguard and regulate freedom of expression? Should this superpower be delegated to private companies? These actions have amplified lingering questions regarding the regulation of social media content.

Account enforcement within Twitter is a delicate balancing act, in which Twitter's rules often flout or insult the notion of free speech while simultaneously inviting racists and chauvinists to present their toxic views publicly on the platform. The question we as the general public should ask is: who has the power to decide what content is left up or taken down?

As campaigners with a keen interest in the ever-evolving discipline of internet governance, and as avid social media users, we regard debates on internet governance and the regulation of social media platforms as a critical narrative to follow. The search for appropriate regulatory instruments to safeguard the public interest through regulation of Over The Top (OTT) platforms, such as Video on Demand (VOD) services and, more controversially, social media platforms, is central to preserving freedom of speech while curtailing hate speech, incitement to war, prejudice, violence and misogyny. New platforms are added daily: dating sites, Amazon, Netflix, eBay, Facebook, YouTube and many others. These platforms are in fact business models that create marketplaces relying on what economists call 'indirect network effects.'
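To make the idea concrete, here is a minimal sketch of how cross-side value in such a marketplace might be modelled; the linear form and coefficients are our own illustrative assumptions, not any platform's actual economics.

```python
# A minimal, hypothetical sketch of indirect network effects in a
# two-sided marketplace. The linear form and coefficients are invented
# for illustration only.

def value_to_buyer(num_sellers: int, standalone: float = 1.0,
                   cross_side_benefit: float = 0.5) -> float:
    """A buyer's value from the platform grows with the number of sellers."""
    return standalone + cross_side_benefit * num_sellers

def value_to_seller(num_buyers: int, standalone: float = 1.0,
                    cross_side_benefit: float = 0.2) -> float:
    """A seller's value from the platform grows with the number of buyers."""
    return standalone + cross_side_benefit * num_buyers

# Growth on one side attracts the other side, which attracts the first
# side again: the feedback loop that drives platforms toward scale.
print(value_to_buyer(num_sellers=10))    # 6.0
print(value_to_buyer(num_sellers=1000))  # 501.0
```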

Why regulate?

Having regard to our recent past of thought control, censorship and enforced conformity to governmental theories, freedom of expression, "the free and open exchange of ideas", is no less important here than it is in the USA or globally. It could in fact be contended with much more force that the public interest in the open marketplace of ideas is all the more important to us as South Africans, because our democracy is not yet deeply embedded. We should therefore be particularly astute in outlawing any form of thought control, however respectably dressed, should it fall outside the constitutional parameters that govern free speech. However, technology has made inappropriate media easy to share and access, and this aspect is extremely difficult to regulate and mitigate. Parents and caregivers therefore need access to appropriate information so that they can make informed decisions about the media content that they, and the children in their care, consume.

Our Constitution recognises that people must be able to hear, form and express opinions freely, for freedom of expression is the cornerstone of democracy. It is valuable both for its intrinsic importance and because it is instrumentally useful. "It is useful in protecting democracy, by informing citizens, encouraging debate and enabling folly and misgovernance to be exposed. It also helps the search for truth by both individuals and society generally. If society represses views it considers unacceptable, they may never be exposed as wrong. Open debate enhances truth-finding and enables us to scrutinise political argument and deliberate social values."

What is more, being able to speak freely recognises and protects "the moral agency of individuals in our society". We are entitled to speak out not just to be good citizens, but to fulfil our capacity to be individually human. "Being able to speak out freely is closely connected to the right to vote and to stand for public office. That right lay at the core of the struggle for democracy in our country." Shamefully, it was for centuries denied to the majority of our people. In celebrating the democracy we have created, we rejoice as much in the right to vote as in the freedom to speak that makes that right meaningful.

The right to freedom of expression is one of a "web of mutually supporting rights" the Constitution affords. Apart from its close connection to the right to vote, it is related to freedom of religion, belief and opinion, the right to dignity, as well as the right to freedom of association and the right to assembly.

So, what is Internet Governance?

Social media platforms connect consumers with similar interests to one another (buyers and sellers) or, in the case of a dating site, to a potential future life partner. Platforms do not generally create their own content; they rely on the broader public to upload it, creating community content or what we refer to as "User Generated Content" (UGC). It is for this reason that platform owners argue they are not responsible for what users produce and are thus exempt from the libel and defamation laws and regulations that govern traditional media like newspapers and television. In other words, social media provides a platform created for "free speech", and owners assume limited responsibility for the content their users generate.

Internet governance refers to protocols designed to curb violations in cyberspace while safeguarding a safer internet experience. It includes data protection, cyber security and content regulation dealing with prohibited and harmful content; it involves measures to eliminate the distribution of child sexual abuse material (CSAM), fake news and extremist propaganda, or any form of propaganda that could be harmful to modern democracies. There are two key, largely polarised, schools of thought on how social media platforms should be governed. One view is that governments have no role to play in the governance of the internet, citing the risk of authoritarian governments curtailing free speech and the other liberties enjoyed in democracies.

On the other hand, it is incorrect to claim that platform owners do not exercise editorial control over their content. Traditional television and newspapers practise what we call broadcast journalism, meaning they provide the same content to a broad, general audience. Social media platforms, by contrast, are 'narrowcasters'. Given their ability to be directive and to pinpoint who you are, their algorithms choose content exclusively for what they think you want to hear and see, making frequent, personalised editorial decisions based on your browsing behaviour on their platforms as well as on other websites (e.g. when you use Facebook or Google to log in), and on geolocation information taken from your cell phone.

Social media platforms have also been referred to as 'natural monopolies'. Explicit rules about what is or is not allowed on these platforms are implemented only when necessary, as they can constrain expansion and are expensive to enforce. You may recall the much earlier YouTube era, when users were allowed to post any type of music, TV show or film content. Or, as recently as 2019, the gruesome live streaming of the New Zealand mosque massacre, which became an example of how extremists abuse online platforms left unregulated. Social media sites, including Facebook, faced a massive backlash after failing to remove the live stream of this attack.

“We cannot simply sit back and accept that these platforms just exist and what is said is not the responsibility of the place where they are published,” said Prime Minister of New Zealand Jacinda Ardern. “They are the publisher, not just the postman. There cannot be a case of all profit and no responsibility. It is unacceptable to treat the internet as an ungoverned space”.

Viewers and consumers were immersed in this video, yet according to reports, Facebook's deputy general counsel, Chris Sonderby, said that none of the approximately 200 people who watched the live video flagged it to moderators; the first user report was filed 12 minutes after the broadcast ended. Similarly, in the earlier YouTube example, only after significant legal threats from the media industry did the platform begin to impose restrictions on copyrighted material.

Stickiness and the amplification effect

Platforms choose content based on maximising user time on their site (i.e. the content's 'stickiness'). In the PR and media world, the adage 'If it bleeds, it leads' refers to the fact that sensationalist, violent or scandalous content provokes more emotion and sells more newspapers or advertising. This 'stickiness economy' encourages users to create and share content within their networks in exchange for likes and additional shares as a 'currency' of self-affirmation. Some have termed the hyper-personalisation bias of the platforms' algorithms 'filter bubbles' or 'echo chambers', and the fact that users are more likely to like and share the most polarising topics has been called the 'amplification effect'.
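To see why an engagement objective quietly rewards outrage, consider this minimal sketch of a feed ranker. The fields, weights and scoring formula are hypothetical assumptions of ours, and real ranking systems are vastly more complex, but the incentive structure is the same: content predicted to hold attention rises to the top.

```python
# Hypothetical sketch of an engagement-maximising feed ranker.
# The fields and weights are invented for illustration; this is not
# any real platform's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_watch_seconds: float  # how long the model thinks you will dwell
    predicted_share_rate: float     # likelihood you will share (0..1)
    outrage_score: float            # emotional-arousal estimate (0..1)

def stickiness(post: Post) -> float:
    # Each share exposes the post to a new network (the 'amplification
    # effect'), and high-arousal content tends to be both watched and
    # shared more, so a purely engagement-driven objective quietly
    # rewards outrage.
    return (post.predicted_watch_seconds
            + 30.0 * post.predicted_share_rate
            + 20.0 * post.outrage_score)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by predicted engagement, not by accuracy or civility."""
    return sorted(posts, key=stickiness, reverse=True)
```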

The rise of populism and recent activism such as the #BlackLivesMatter campaign, as well as the divisive behaviour often perpetuated online, is a topic of concern for sociologists, sociolinguists and political scientists. Hate speech is one example that concerns regulatory proponents: it is argued that being subjected to constant online attacks on the basis of identity, e.g. cybermisogyny, can have seriously negative consequences, as witnessed by the recent suicide of a young schoolgirl in Limpopo.

Such attacks can violate the rights of members of a certain race or place of origin, or of those with a particular sexual orientation, amongst the many other identities in our diverse societies. Whilst diverse opinions are to be celebrated in our fragile democracies, when social media platforms are not held responsible for the accuracy of the content they present, there is no incentive, nor any algorithmic firewall, to stop them from showing you the most outrageous content or deepfake news.

Excessive social polarisation is undesirable because it erodes the democratic institutions that protect free speech and other basic rights. Without some basic consensus on the common objectives of social welfare, democracies weaken and become dysfunctional or corrupt. Just as we are compelled to adhere to banking regulations that protect and safeguard our life savings, social media platforms should be regulated to mitigate their worst effects. The self-regulatory controls already in place have been effective in certain circumstances, yet they can only gain traction if consumers are made aware of them and encouraged to use them.

A relatively new space of debate is the collection and monetising of big data. Yours and mine. These platform owners collect so much demographic and behavioural data from our online activities that they can create a very precise digital footprint, a model of who we are with significant predictive accuracy. They then sell these profiles, our digital twins or avatars, to advertisers both inside and outside their platforms. They do this with little explicit knowledge or consent from their users. Moreover, users have no rights over their metadata. "It is a completely asymmetric relationship; a Faustian bargain where, in exchange for carrying out searches, networking and taking advantage of geolocation services, we as users allow these platforms into the most intimate corners of our lives with little understanding of how or which of our secrets they sell".
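A crude sketch of how such a 'digital twin' might be assembled from behavioural signals follows; the event fields and inference rules are entirely hypothetical, but the principle is that mundane clicks aggregate into a saleable profile.

```python
# Hypothetical sketch of assembling an advertising profile ('digital twin')
# from behavioural signals. Field names and inference rules are invented.
from collections import Counter

events = [
    {"type": "search", "topic": "running shoes",      "location": "Johannesburg"},
    {"type": "like",   "topic": "marathon training",  "location": "Johannesburg"},
    {"type": "view",   "topic": "baby strollers",     "location": "Sandton"},
]

profile = {
    "likely_interests": [t for t, _ in
                         Counter(e["topic"] for e in events).most_common(3)],
    "likely_home_area": Counter(e["location"] for e in events).most_common(1)[0][0],
    # Real systems infer far more: age band, income band, life events, etc.
}
print(profile)  # a saleable model of you, built without meaningful consent
```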

The consequences of social media platforms functioning at a quasi-monopolistic scale are only recently being understood. Given their expansive service offering, we could not imagine life without the internet today. Whom do you hold responsible for posting your private nude selfies, and whom do you sue when revenge pornography lands on Pornhub or some Google platform? This has led to a push for greater state-led interventions to ensure greater public accountability from the major tech giants who have become global monopoly operators, a quasi "fourth tier of government".

Ergo, the need for robust public discourse on the role of internet regulation. We cannot ignore the economic and social benefits of these platforms, especially in emerging markets like ours; as we consider greater public accountability, we need to ensure that regulatory interventions do not hinder the agility and innovation that set these platforms apart. But, as in many industries, there are undesirable consequences that work against the greater social and economic good. Serious conversations on how social media platforms should be regulated to minimise their social costs are critically needed.

Is there a middle ground?

David Kaye, the UN Special Rapporteur on freedom of expression, wrote a book advocating for a middle ground that requires collaboration between private companies and state entities. Critical to this debate is ensuring that public services are not outsourced to private companies with little or no accountability or recourse for members of the public. The global nature of these companies' operations has also created a compliance nightmare for them, as they learn to appreciate that different jurisdictions have different legal instruments to regulate free speech as well as data protection. The founder, chairman and CEO of Facebook, Mark Zuckerberg, has often articulated Facebook's ambition to create a global community, but as Africans we know that there is no universal set of global values and norms. It is these very values and norms that determine what is deemed harmful content and that inform the laws regulating free speech in various countries.

All major social media platforms have 'community standards', with clear reporting mechanisms that are activated should members of the public feel their rights have been violated. So an important question is: who informs those standards? What recourse do we as South Africans have should our rights under South African law be violated? What recourse is available to you when the platform's response to your complaint states that its community standards have not been violated, yet you strongly believe your personal rights have been? A case in point is the video of Adam Catzavelos's racist rant, which sparked outrage on Twitter.

The strides that social media platforms have made in establishing internal governance mechanisms that clearly outline the rules of engagement on their platforms should be commended. They have 'Community Standards' and 'Terms and Conditions' outlining what is permissible. In the case of Twitter, there is a Trust and Safety unit staffed with content moderators whose job it is to moderate or curate content. They review a tweet that has been flagged to determine whether it violates Twitter's rules. If it does, moderators can usually enforce punishment at this stage, but Twitter requires a second layer of review for offenders who are considered public figures, in this case a verified politician with more than 100,000 followers.
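As a sketch, and only as we read the process described above, the two-layer flow might be expressed as follows; the threshold and enforcement actions are illustrative, not Twitter's actual implementation.

```python
# Sketch of the two-layer review process described above, as we read it.
# Thresholds and enforcement actions are illustrative only.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    verified: bool
    followers: int

def is_public_figure(account: Account) -> bool:
    # Per the description above: e.g. a verified politician with
    # more than 100,000 followers.
    return account.verified and account.followers > 100_000

def review_flagged_tweet(account: Account, violates_rules: bool) -> str:
    """First-layer moderator decision on a flagged tweet."""
    if not violates_rules:
        return "no action"
    if is_public_figure(account):
        # Public figures trigger a second, senior layer of review
        # before any enforcement is applied.
        return "escalate to second-layer review"
    return "enforce (warn, restrict or suspend)"
```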

Social media platform owners claim that their community standards are reviewed on a regular basis and inform algorithms that can immediately identify prohibited content once it has been posted. Where there is a universally clear standard on prohibited content, as in the case of CSAM, the content is immediately blocked from these platforms. Some platforms have developed an escalation system in which human beings review and moderate content that has been flagged for violating the platform's rules. Others use artificial intelligence and recognition tools. But are the tools in place sufficient when the rules themselves are not universally understood and applied by users across the globe? Will content moderators always understand the context within which statements are made when they are not exposed to the social context, values and norms of the region where a post originates? Are we not handing over too much public power to private hands, with limited checks and balances for the public to hold them accountable?
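For universally prohibited content such as CSAM, the standard industry approach is hash matching against databases of known material (Microsoft's PhotoDNA is one widely used example). The sketch below shows the general shape of such a pipeline using an exact cryptographic hash; real systems use perceptual hashes that survive resizing and re-encoding, and the function names here are our own.

```python
# Minimal sketch of a hash-matching-plus-escalation moderation pipeline.
# Real systems use perceptual hashes (e.g. PhotoDNA) that tolerate
# resizing and re-encoding; SHA-256 here only matches exact copies.
import hashlib

KNOWN_PROHIBITED_HASHES: set[str] = set()  # maintained from trusted databases

def moderate_upload(image_bytes: bytes, flagged_by_users: bool) -> str:
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_PROHIBITED_HASHES:
        return "block immediately and report"  # clear universal standard
    if flagged_by_users:
        return "queue for human review"        # rules need context and judgement
    return "allow"
```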

These are by no means exhaustive questions, and they are certainly not questions to which we hold all the answers. We do, however, need to host a coherent debate as a country and a continent on how to develop effective governance mechanisms for these platforms without limiting the innovative models of tech companies and the convenience they provide society. Africa is the new frontier for extensive growth of these companies, given the potential to connect many more people online, which means greater investment will be made to grow revenue in African markets. As Africans we should be exploring regulatory instruments that allow for greater public accountability by all stakeholders. A co-regulatory system in which both private and public players play a critical role in safeguarding the public interest seems a viable option in the digital world.


Regulatory coherence is one possible avenue for safeguarding the public interest: aligning the existing regulatory frameworks so that tech companies do not find themselves chained to arrangements that are simply far too bureaucratic. The multiplicity of layers of red tape may prove too burdensome for robust public discourse and the marketplace of free ideas, and the administrative burden could stifle the innovation and agility of these tech companies.

Digital Literacy – A Critical Path

Any regulatory system (self-regulatory or co-regulatory) that is not aligned to a clear digital literacy initiative will enjoy little success. All social partners need to embark on extensive digital literacy campaigns, educating users on the dos and don'ts of social media and focusing on the consequences delinquent users may face should the agreed-upon rules be transgressed. The Department of Basic Education can reach a critical mass of our youth and has an important role to play in ensuring young people are equipped with the tools necessary to safeguard themselves online. A number of social partners are already implementing initiatives aimed at improving digital literacy among youth, parents and children. These efforts should be commended, and better coordinated to ensure consistency and coherence in messaging.

For now, all we can rely on is platform self-regulation, though lawmakers across the globe are starting to question this practice, given its inherent deficiencies. The courts are a second option, where citizens can pursue civil or criminal cases, as we have seen with the recent cases of Penny Sparrow and Adam Catzavelos, but this has proven to be an expensive and lengthy process. Poorer countries cannot be as litigious as those in the global North, because the cost of going the legal route is prohibitive. Looking after Gogo Dlamini's interests, and those of her grandchildren, relies heavily on the public's ability and willingness to engage tech giants who might from time to time trample her rights. The alternative is to throw our hands in the air and retreat from public engagement, or to face internet trolls head on and continue to allow those who hide behind the veil of anonymity on these platforms to polarise us further as a nation.
