On March 16, 2021, Facebook announced a Human Rights Policy which would supposedly govern its practices worldwide. According to this policy, the social media company would “remove content, disable accounts and work with law enforcement when […] there is a genuine risk of physical harm or direct threats to public safety”. It also details the types of content that amount to a violation of Facebook’s Community Standards and can, therefore, be removed from the platform.

Of course, whether this policy can make any significant difference in the way the social media platform moderates online content, especially online hate speech, is a different question altogether. The mere formulation of a document means nothing if it is not implemented uniformly across the world. According to a Wall Street Journal report published in August 2020, a top public policy executive at Facebook India refused to remove a member of the ruling party from the platform despite a violation of Facebook’s hate speech rules. In contrast, Facebook has been relatively prompt in removing hateful content and content creators from its platform in countries such as the United States and Germany. Former US President Donald Trump was banned from Facebook (and Twitter) after the Capitol Hill riots which followed his election defeat last year. That said, during the four years of his presidency, Trump used social media to propagate misinformation without any resistance. Despite explicitly flouting the rules of Facebook and Twitter, he continued to use these platforms to incite violence. He was banned only after it became abundantly clear that he would be vacating the White House.

When it comes to hosting unregulated hateful content, Facebook is hardly the only offender. YouTube has also provided a safe haven to people who regularly use threatening and abusive language. The online video-sharing platform even awarded its Silver Creator Award — given to channels that reach or surpass 100,000 subscribers — to YouTuber Hindustani Bhau, who threatened a female comedian with violence and sexual abuse and incited violence against various other individuals. Bhau’s channel was later removed, but thousands of similar channels remain active on YouTube. The content they share undoubtedly violates the platform’s Community Standards, but little or no action is taken to regulate them.

Part of the reason is that every engagement is good engagement for social media platforms because their advertisement revenue depends on it. With an eye on increasing engagement rates, these platforms use recommender algorithms that determine the content users view and engage with. These systems create extremist rabbit holes and can push a person who already holds certain biases towards complete radicalization. The algorithms were originally designed to maximize revenue, but they now make it easier to promote extremist content. The code tracks user preferences through clicks and hovers and then fills users’ feeds and timelines with content that matches their views and tastes, and it gets more efficient as usage increases. Auto-fill suggestions in search bars, recommended videos and posts, and polarizing political advertisements are all products of this system. They nudge users towards radicalization by exposing them to conspiracy theories, abusive content, and unscientific propaganda. The Christchurch massacre, for instance, was a direct consequence of the radicalization of the white nationalist shooter through an online forum called 8chan. So far, Facebook, Twitter, and YouTube have revealed little about how their algorithms work; only a small number of people are privy to their exact functioning, which makes these systems all the more dangerous. The fundamental principle underlying them, however, remains the same – maximizing profit through greater user engagement. The code does not have a conscience and is unable to make a distinction between personalization and polarization.
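
To make that feedback loop concrete, here is a deliberately simplified sketch, in Python, of an engagement-maximizing recommender. The catalogue, the “outrage_level” signal, and every number and name in it (Item, score, affinity) are invented for illustration; the actual ranking systems of Facebook, Twitter, and YouTube are proprietary and far more sophisticated. The toy model only shows how optimizing for engagement, combined with click feedback, can walk a user from mild to extreme content.

```python
# Toy model of an engagement-driven feed, illustrating how the loop between
# "what the user clicked" and "what gets ranked next" can drift towards ever
# more inflammatory content. All items, numbers, and the "outrage_level"
# signal are invented; this is not any platform's actual ranking system.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    outrage_level: float  # 0.0 = neutral, 1.0 = highly inflammatory (made up)

CATALOG = [
    Item("Local news roundup", 0.1),
    Item("Angry op-ed", 0.3),
    Item("Heated debate clip", 0.5),
    Item("Us-vs-them rant", 0.7),
    Item("Conspiracy 'expose'", 0.9),
]

def score(item: Item, affinity: float) -> float:
    """Rank by predicted engagement: content close to the user's current
    taste scores high (personalization), and inflammatory content gets a
    small extra boost because it reliably drives clicks (an assumption)."""
    personalization = 1.0 - abs(item.outrage_level - affinity)
    outrage_boost = 0.3 * item.outrage_level
    return personalization + outrage_boost

def next_recommendation(affinity: float) -> Item:
    # The objective is engagement alone; nothing here distinguishes
    # personalization from polarization.
    return max(CATALOG, key=lambda item: score(item, affinity))

affinity = 0.1  # inferred taste of a fairly moderate user
for step in range(5):
    item = next_recommendation(affinity)
    print(f"step {step}: '{item.title}' (inferred affinity {affinity:.1f})")
    # Simulated click feedback: each exposure ratchets the inferred taste
    # slightly towards more inflammatory content.
    affinity = min(1.0, max(affinity, item.outrage_level) + 0.1)
```

Run step by step, the sketch recommends the mild item first, then progressively more inflammatory ones, because each click shifts the inferred taste that the next ranking is optimized against.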

As mentioned earlier, social media platforms have been relatively diligent about moderating hateful content in countries where they are legally obligated to remove and regulate it. In other words, they only implement their Community Standards in countries that already have specific laws on these subjects. Online hate speech was one of the factors that led to the February 2020 riots in New Delhi. In March this year, I reported on a vast network of Facebook pages and YouTube channels that platformed several far-right actors. Not only did these pages and channels incite users on these platforms, they also mobilized many of them on the day of the riots. This is not an isolated example but symptomatic of online spaces in India. According to a report released by Microsoft in February 2020, the incidence of online hate speech faced by users in India doubled to 26% in 2020 compared to 2016. Another report, published by the Observer Research Foundation in 2018, stated that religion and associated practices were the most significant basis of hate, and that religion-based violence rose from 19% to 30% during the one-year timeframe of the study. Such incidents have become more widespread with the rapid increase in internet penetration.

Currently, India does not have any laws specifically targeting online hate speech. Instead, sections 153A, 153B, 295A, and 505(2) of the Indian Penal Code place “reasonable” restrictions on freedom of speech and expression. Under these provisions, people can be penalized for promoting enmity, hatred or disharmony on the basis of religion, caste, color, race, language, residence, region or community. These sections also criminalize insulting a religion or outraging the religious feelings of a community. In addition, statutes such as the Protection of Civil Rights Act, 1955 and the Scheduled Castes and Scheduled Tribes (Prevention of Atrocities) Act, 1989 protect historically marginalized communities from hate speech.

In 2008, Section 66A was introduced into the Information Technology Act, 2000 to curb offensive online speech, including hate speech. However, the Supreme Court struck down the provision as unconstitutional in 2015. The provision vaguely mentioned terms like “hatred”, “enmity”, “annoying” and “ill will”, which made it susceptible to misuse. The Supreme Court held that it violated the right to freedom of speech because it was vague, undefined and overbroad. Laws which can affect the fundamental rights of citizens must be unambiguous and framed after thorough consultations with the concerned stakeholders.

These consultations were missing when the Indian government brought in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereon referred to as Intermediary Rules). These rules seek to impose a multilayered regulatory system on social media companies as well as online news portals in India, making it mandatory for these companies to appoint India-based officers to fix “liability” (these employees can face criminal charges if they fail to comply with content-removal orders) and giving sweeping powers to the government to remove content and limit its reach. The government circumvented constitutional and parliamentary procedures while introducing these rules and held no public consultation about them. Again, the Intermediary Rules vaguely mention terms like “decency” and “half-truths” without defining them. The rules are also ultra vires the parent act (the Information Technology Act, 2000) because they go beyond its scope with regard to content blocking and takedown, and the regulation of digital media platforms.

A regulatory regime is, therefore, necessary to compel social media platforms in India to enforce anti-hate speech policies and make online spaces safer. But online content has to be regulated without infringing upon the free speech rights of users. Many policy experts reasonably fear that such regulations can descend into a regressive clampdown on dissidents and lead to undue censorship. The Intermediary Rules illustrate precisely these apprehensions.

An instance of a more suitable legal framework can be found in the United Kingdom’s Audio Visual Media Services Regulations, 2020, which lay down rules for video-sharing platforms (VSPs). The regulations define a VSP as an entity which “does not have general control over what videos are available on it, but does have general control over the manner in which videos are organized on it”. As such, the definition covers platforms such as Facebook, YouTube and Twitter which, unlike OTT services, do not exercise editorial control over the content they host. This is significant because poor implementation of Community Standards has allowed hateful content targeted at vulnerable communities to spread on precisely these platforms. The regulations do not police the content itself but rather the moderation systems implemented by the VSPs, and they stipulate specific measures that VSPs must take to protect their users from harmful content.

How Does a Content Moderation System Work?

A content moderation system consists of various elements whose implementation, while tricky, is quite achievable. A report by The Alan Turing Institute on online hate details four key elements of an ideal content moderation system. Presented with the caveat that defining online hate is no easy task, the report is useful for making sense of the challenges of implementing such a system in India.

1) Characterizing hate speech: It is difficult to define hate speech in a way that allows a content moderation system to be applied universally; context and subjectivity make a single system, implemented uniformly across the world, impractical. For example, the pejoratives used against women in the US may be different from those used in India. Similarly, a moderation system intended to regulate hateful content in France may not be able to regulate or even identify hate speech against vulnerable communities in India. While the basic structure of a moderation system is the same, it has to be adapted to the needs of different markets. Defining hate speech thus requires a thorough understanding of the culture and history of a particular area of operation.

2) Identifying hate speech: VSPs usually rely on user reports, artificial intelligence, and human reviewers to identify hateful content. However, each of these gatekeeping tools has its own limitations. User reports cannot always be trusted since there have been numerous instances of targeted mass reporting against individuals, even when their content did not violate any rules. Such targeted reporting, also known as “brigading”, is usually carried out by groups looking to silence those with whom they have ideological differences. AI may be unable to identify sarcasm or understand the context of a message. For example, AI may not understand the context of a Facebook post that decries racism but includes a racial slur, and flag it as hate speech. The nuance of a community reclaiming a slur that was once used to oppress it is usually lost on AI systems. Human reviewers can be psychologically affected by triggering content, and may also lack the training required to identify hate speech as such. Many content reviewers suffer a deterioration in mental health, including conditions such as post-traumatic stress disorder and acute stress disorder, because of this work. Given that each of these tools has its limitations, it makes sense to combine them to increase the efficacy of the monitoring process (a sketch of how such a combined pipeline might look follows this list).

3) Regulatory steps to curb hate speech: Any VSP can deploy a range of regulatory steps to deal with hate speech. According to the Audio Visual Media Services Regulations, 2020, these steps must be proportional to the level of danger that the content poses. A VSP may ban or suspend users, remove the content permanently or temporarily, and/or ask for a user’s consent before letting them view the content. Content may also be regulated by displaying warnings, providing fact checks and counter-speech, making certain kinds of content unsearchable, and stopping people from sharing it. However, these restraints are to be used with great care to avoid trampling on the free-speech rights of users. A safe internet cannot mean a space that compromises rights and turns into a Faustian bargain.

4) Appealing decisions: Content moderation mechanisms are still being developed and, therefore, are bound to make mistakes. To remedy this, online platforms must allow users to appeal decisions on content removal and reach constraints. They must also inform users why their content was removed. This will make the moderation process more democratic, fair, and transparent.
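
To tie these elements together, here is a deliberately simplified sketch, in Python, of how the signals from point 2 might be combined, mapped to the proportionate actions of point 3, and recorded with a reason and an appeal flag as point 4 demands. Every threshold, label, class, and function name in it (Post, Decision, needs_human_review, and so on) is invented for illustration; it is not a description of any platform’s actual moderation pipeline.

```python
# Illustrative moderation pipeline: combine user reports, an automated
# classifier, and human review; choose a proportionate action; and record
# the reason so the user can be informed and can appeal. All thresholds,
# labels, and names here are hypothetical.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Action(Enum):
    NO_ACTION = "no action"
    WARNING_LABEL = "display warning / fact check"
    LIMIT_REACH = "make unsearchable, disable sharing"
    REMOVE = "remove content"

@dataclass
class Post:
    post_id: str
    text: str
    report_count: int = 0                 # user reports (vulnerable to brigading)
    classifier_score: float = 0.0         # 0..1 from an AI model (misses context)
    human_verdict: Optional[str] = None   # "hate" or "not_hate" once reviewed

@dataclass
class Decision:
    action: Action
    reason: str          # shown to the user, so the decision can be appealed
    appealable: bool = True

def needs_human_review(post: Post) -> bool:
    """Escalate ambiguous cases rather than trusting any single signal:
    a pile of reports alone may be brigading, and a mid-range classifier
    score may be sarcasm, counter-speech, or a reclaimed slur."""
    return post.report_count >= 5 or 0.4 <= post.classifier_score < 0.9

def decide(post: Post) -> Decision:
    if post.classifier_score >= 0.9:
        # High-confidence automated detection of severe content: act quickly,
        # but keep the decision appealable.
        return Decision(Action.REMOVE, "high-confidence automated detection")
    if needs_human_review(post):
        if post.human_verdict is None:
            # Interim, lighter-touch measure while a reviewer looks at it.
            return Decision(Action.WARNING_LABEL, "pending human review")
        if post.human_verdict == "hate":
            return Decision(Action.LIMIT_REACH, "confirmed by human reviewer")
        return Decision(Action.NO_ACTION, "cleared by human reviewer")
    return Decision(Action.NO_ACTION, "no credible signal of hate speech")

# Example: a heavily reported post awaiting review, later confirmed as hateful.
post = Post("p1", "<flagged text>", report_count=12, classifier_score=0.55)
print(decide(post))        # interim warning label, pending human review
post.human_verdict = "hate"
print(decide(post))        # proportionate action: limit reach
```

The design choices mirror what the report recommends: no single signal is decisive, responses escalate with the severity and certainty of the finding, and every decision carries a recorded reason that the affected user can contest.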

Who Should Enforce these Regulations?

Under the Audio Visual Media Services Regulations, 2020, Ofcom (the Office of Communications) is the official body mandated to regulate VSPs in the United Kingdom. Ofcom already regulates the telecommunications, broadcasting (radio and television), and postal industries. It is financed by the government but works independently. Ofcom performs a range of activities, including licensing TV and radio services and holding public consultations that inform its decisions on various policies. It is noteworthy that Ofcom was established through the Office of Communications Act 2002 and derives its powers from primary legislation, principally the Communications Act 2003.

Governments have often used enforcement agencies to target their rivals and critics. In India, the apex level of the three-layered regulatory mechanism under the Intermediary Rules is an interdepartmental committee consisting of representatives from various ministries. Ministries and their associated departments have already ordered content takedowns under various sections of the IT Act, and these orders, often arbitrary, have largely served incumbent regimes. Against this backdrop, an autonomous regulatory authority like Ofcom would be far better placed to regulate VSPs without political influence.

Besides, an independent enforcement agency has room to work strictly according to the rules laid down by a legal instrument. The Audio Visual Media Services Regulations, 2020 enable Ofcom to impose fines and issue enforcement notifications if a VSP fails to comply with the rules. A direction may be issued to suspend or restrict a VSP if it continues to flout the rules despite a notification and/or fines, and a breach of such a direction constitutes a criminal offense.

This is not to say that India should simply replicate these regulations, but laws and regulations on digital governance can certainly draw on other countries’ experience. This is clear from India’s Personal Data Protection (PDP) Bill which, even as it includes some very problematic clauses in its current iteration, derives several others from the European Union’s General Data Protection Regulation (GDPR). Where countries have put in resources, through organizations such as Ofcom, to hold public consultations and undertake research in the field, stakeholders such as the Indian government, civil society, consumers, and researchers must take this work into account while proposing context-specific laws and regulations for a safer internet.

Conclusion

It has become amply clear that, in driving user engagement, social media platforms prioritize profits over the safety of users. Content encouraging violence and hatred against various communities continues to flourish on the internet. As internet penetration increases in India, more and more people are being exposed to such content, and the lack of media literacy and the absence of anti-hate speech laws (like Germany’s Network Enforcement Act, or NetzDG) have exacerbated the situation. We need a law that mandates strict content moderation systems to curb hate speech on the internet, particularly on social media platforms. The Audio Visual Media Services Regulations, 2020 are a good example to look at while proposing such a law, and any such regulations should be enforced by an autonomous authority that is independent of government control. Targeted trolling of individuals is part of this hate-speech ecosystem, which needs to be dismantled promptly.