As the 67th UN Commission on the Status of Women (UN CSW67) gears up to tackle ‘innovation, technological change, and education in the digital age’ as its priority theme, we turn our attention to social media and its increasingly toxic gender dynamics, a site that demands urgent political and systematic intervention. Indeed, digital spaces have become a significant part of our public sphere – central not only to self-expression and dialogue, but also to the formation of culture, popular sentiment, and political will.

Barriers to equal political participation, such as shadow banning, trolling, cyberbullying, doxxing, non-consensual use and dissemination of images, cyberstalking, mob attacks, and the coordinated flagging of women’s content in online spaces, have real-life ramifications for women and historically oppressed groups, whose voices are further marginalized, silenced, and invisibilized by platforms whose content policies are too weak to address such harm. Non-binding frameworks such as ‘Recommendation CM/Rec (2019) on Preventing and Combating Sexism’ (of the Committee of Ministers, Council of Europe) have not managed to lower the threshold for the criminalization of sexist hate speech. Consequently, harmful practices such as gender trolling continue under the legal radar. Platforms have also been found lacking in their responses to user complaints about misogynistic hate. The inconsistent implementation of rules on hate speech and the absence of gender-disaggregated information on content moderation decisions in transparency reporting also remain serious lacunae in the policies and practices of platform companies. The stifling of women’s online existence is an assault on their substantive autonomy, and a core, though insufficiently acknowledged, dimension of the digital and gender divide today.

As we are confronted with misogynistic, hateful, and sexist content on an unprecedented global scale, developing a feminist ethic of social media governance to counter this dangerous trend urgently needs to be part of our collective political agenda. Evidence increasingly suggests that treating platform companies as neutral content conduits enables them to evade all accountability to users for building safe and inclusive communication spaces. It is imperative that platforms’ duty of care to prevent sexism and misogyny be framed as a binding obligation. As part of efforts to challenge and check the normalization of algorithmically propelled misogyny, sexism, and extreme speech, we at IT for Change have been working to understand and tackle the scale of the problem of pervasive misogynistic speech online. In April 2022, along with InternetLab, we organized a roundtable conversation among lawyers, academics, scholar-practitioners, and activists centered on the following three questions:

1. What do empirical studies of platform regulation (and self-regulation) tell us about addressing sexism and misogyny online?

2. What national legal-institutional frameworks may be appropriate to check gendered censorship and promote gender-equal participation in the online public space?

3. What kind of global responses may be relevant towards nurturing gender-inclusive online spaces?

The two days of engaging discussion and the cross-pollination of ideas from diverse disciplinary standpoints provided us with innovative ways to think about online misogyny, its social embeddings, and the trajectories of legal responses to it. In this article we present, albeit in much abridged form, some key thematic currents that emerged in the course of the discussion.

Re-constituting Our Ideas of the ‘Public’

One of the overarching themes was evolving a conception of the digital public sphere and the ways in which women occupy space within it. The interventions at the session shared a recognition that networked technologies alter the seemingly self-contained bubbles of private and public life. To apprehend more accurately the contemporary moment, in which pervasive gender-based violence on social media is a brute reality, it is crucial to appreciate that the normative codes of public life implicate the personal in insidious ways. The increasingly platformized conditions of social exchange and interaction fuel and shape emergent forms of public action in the social media context. The world over, governments and dominant social media companies are locked in a power tussle: companies often complain of extra-legal pressure to remove content deemed offensive, while governments are eager to quell content virality fueled by opaque social media algorithms. Social media companies are also often hand-in-glove with political elites, selectively enforcing rules against hate speech and keeping decisions regarding content moderation inscrutable to ordinary users. In this regard, participants considered it important to view the communicative infrastructure of platforms as part of capitalist relations: the communication networks, the conditions for algorithmic virality, and the terms of (in)visibility are themselves shaped and governed by the logic of capital. Special attention was paid to the epistemic, yet often overlooked, gender bias underscoring the tensions between and within regional and international platform governance frameworks and the content moderation policies espoused by Big Tech platforms.

Modes of Resistance

Social media is a site of both pleasure and danger for women. Drawing connections to feminist assertions of the right to ‘loiter’ as a means of occupying and asserting ownership over public spaces, interventions focused on how women choose myriad paths of resistance to claim space in the online public sphere. Participants in the roundtable presented subversive accounts of how women turn the security paradigm on its head by using their male kin’s digital devices, designed to surveil them, to access the internet.

Helani Galpaya’s research, for instance, showed that a statistically significant proportion of women respondents use multiple social media accounts, often taking up an alternative identity to access political debates. Reflecting on this finding, Galpaya spoke about the positive dimension of how “there is a lot of ‘agency’ being explored by women in creating these alternate identities”, while also wrestling with the contradiction that the conditions enframing such exercise of agency fundamentally rest on a compromise. The cultural politics of presence, participation, and publicness are therefore very complex and deeply imbricated in the contexts in which they arise. Mardiya Siba Yahaya’s study of Muslim women content creators further emphasized how the “design and organization of online gendered violence is centered on moral surveillance and public shaming of those transgressing normative frames of belonging, which disproportionately displaces the onus of safety on those victimized by gendered harassment”. In such a context, women are impelled by the system to make compromises, striking a “digital patriarchal bargain” to remain online, even as they pay a heavy price to visibly occupy public identities while navigating the patriarchal constructs of “good” and “bad” Muslim women.

Tracing the Structures of Online Hatred

The roundtable brought together rich empirical studies of online gendered political violence from different sites across the globe.

Anita Gurumurthy and Amshuman Dasarathy presented the findings of their work on hateful, abusive, and problematic speech on Twitter directed at Indian women in public-political life. They found that the trolling of women in political life runs rampant and remains completely unchecked by platforms despite its pervasiveness. They also found that women from the Muslim community and political dissenters received a disproportionate amount of abuse on the platform, and that a large share of the abusive or violent speech directed at women in public-political life took the form of supposedly light-hearted jibes, misogynistic memes, wordplay, and regressive and stereotypical jokes about the place of women in society. Humor was thus perversely used as a means to shroud violence against women beneath a thin veneer of political incorrectness.

Presenting ongoing research based on interviews with women targeted by online gender-based violence in Brazil, specifically activists, politicians, and journalists, Yasmin Curzi de Mendonça described a context in which the degree of violence is contingent on the race, gender identity, and sexuality of the person under attack, with the severity of abuse determined by the degree of marginalization. “Of the seven interviewees who faced various forms of online attacks, five stated that they decided not to take any legal action against their attackers because they did not know how to do so, or whether they had any legal rights,” she observed, noting that the law itself is in most instances inaccessible or unable to translate the injury into legal norms. To challenge an individuated response to systemic violence, her recommendations argued for confronting the problem collectively while creating support systems for victims.

Fernanda K. Martins, in her intervention, presented another case study from Brazil, echoing the consensus that attacks on women are both gendered and disproportionately directed towards women from minority groups. In addition, Martins explored the connection between disinformation and misogyny, contending that hate speech and disinformation, treated as separate and isolated concepts, are inadequate for understanding the distinct nature of this violence. She proposed, instead, thinking in terms of “gendered disinformation”. Elaborating on this point, she highlighted some salient narratives peddled to advance false claims about issues such as abortion, homosexuality, and feminism.

To address the gaps in law, policy, and regulatory response, Kim Barker drew on case studies, first-hand accounts, bystander revelations, judicial commentary, and political commentary to create a typology of the harms caused by online misogyny and online violence against women (OVAW). She emphasized that gender violence is a universal problem, yet one fragmented by the inconsistencies between domestic laws, criminal legal regimes, and communication laws. The gaps between these contending legal frameworks often inhibit the translation of legal responsibilities into political action, which suggests the need for a “joined up” approach that places equal responsibility on platforms, collective social action, and enforcement bodies. She also cautioned against the law becoming the primary site of struggle for feminist reform: “While there is a symbolic benefit in having the law ‘recognize’ a particular societal challenge or behavior by enacting legislation to, for example, criminalize the behavior and/or harm, that sets expectations that the law alone will ‘address’ the acts,” she added.

Contextual Content Governance

What follows from this discussion is the question of what steps platforms can take to stop or mitigate violent and hateful speech online. Yet the role of ‘contextual’ factors, such as the internal dynamics of a society, adds a level of complexity that, as participants highlighted, platforms have scarcely come to grips with.

Under this umbrella, a set of presentations at the roundtable explored the role and efficacy of platform terms of service, or community guidelines, in tackling online gender-based violence. For instance, Damni Kain and Shivangi Narayan’s research addressed how most large platforms’ content guidelines fail to take cognizance of ‘caste’ altogether, anchored as they are in norms derived from the West. Drawing attention to the distinctive nuance of caste-based hate speech, which often relies on more covert forms of humiliation and intimidation, they stressed the importance of context awareness to properly situate and recognize iterations of online caste-based abuse. Arjita Mital’s work analyzing secondary literature on platform community guidelines and governing policies made a similar observation. She discussed how notions of obscenity and propriety are culturally variant norms with vastly different meanings across contexts. When platforms use these concepts in content governance policies, they are largely informed by an American sensibility, and thus work to privilege a particular way of seeing gendered bodies and to deem forms of expression that do not fit this narrow mold ‘obscene.’

Anne Njathi and Rebeccah Wambui’s study highlighted how, in the Kenyan context, a queer-led sex-ed platform and a black racial justice advocacy and education platform on Instagram were shadow banned or had their content deleted as a result of partial content moderation styles. The temporary, and sometimes permanent, removal of content at the discretion of largely automated moderation raises concerns about insensitivity to Global South cultural contexts, while inadvertently upholding and promoting harmful sexist and racist content. Moreover, as they note, “platform regulation of content conflicts with platforms’ commercial interests…and misogyny is reinforced, and social justice and fairness are obscured through digital harms, power imbalances, and misconceptions, complicating the safe use of the internet”.

Esther Lee’s submission was based on her findings on the Nth Room case in South Korea, in which chat rooms housing an archive of extorted content rendered women’s digital presence perilous. The legislative interventions brought in as a corrective further exposed the contradictions between platforms’ conduct and their avowed commitment to ensuring safety for women. Platforms, trying hard to sidestep intermediary liability and accountability to oversight boards, made a pernicious invocation of the “right to free speech”, meant to conflate regulatory intervention with online censorship. This “conflation in equating protection-cum-regulation as censorship…seeks to normalize platforms as a safe harbor for misogyny… by relegating acts of gendered and sexual violence as understood within the bounds of the private – and therefore, beyond the scope of institutional or structural obligations of the state,” Lee noted.

Building on these presentations, a related set of inputs shed light on the unevenness of platform governance efforts in different parts of the globe. Quito Tsui critiqued the way in which social media companies are able to offload their governance responsibilities, distancing the human costs of content moderation from the geographies of their origin. The burden of keeping platforms “secure” is unfairly placed on severely exploited cloud workers in the Global South, who are tasked with cleaning and flagging violent content. These dynamics of content moderation being offshored to the Global South recall the ‘race to the bottom’ narratives that have long characterized the rapacious profiteering of large corporations. Tsui’s intervention explored the ways in which feminist care practices and understandings of communities of care can offer a holistic lens for redesigning social media governance.

AI Mediation and Humans-in-the-loop

In exploring the problems around content governance, an issue that emerged was the way in which the artificial intelligence (AI) systems meant to efficiently flag problematic content are themselves plagued by bias and faulty design.

Shehla Rashid Shora highlighted how, in contrast with the sophisticated models used for moderating English content, even the most common abusive terms used to attack Muslim women in local languages are not flagged or taken down by platforms. These companies are thus not doing even the bare minimum to prevent abuse against women in the Global South, and the bias is built into the AI systems themselves. Graciela Natansohn’s paper also highlighted how misogynist-racist violence is not an aberration but a symptom of internet coloniality, which only instantiates the colonial logic underpinning techno-capitalist ‘innovation’. Following the work of Rita Flenski, she argues that “forms of violence must not be interpreted as deviations, misuses or flaws in the process of digital communication. On the contrary, they are outcomes or effects of the intersection of capitalism-patriarchy and racial digital coloniality”.

Arnav Arora, Cheshta Arora, and Mahalakshmi J. shared their experience of designing a machine learning tool to detect online gender-based violence and the challenges and limits of the system when applied to regional languages. They noted, “Machine learning (ML) models, while being incredibly good at narrowly defined tasks, tend to behave erroneously when used on data dissimilar to the ones used in training. Additionally, these models are black-box, and it is an active area of research to establish a clear reasoning path within the model, which makes it incredibly hard to interpret the predictions of a model.”
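
To make this limitation concrete, the sketch below is purely illustrative and is not the researchers’ actual tool: it fits a toy classifier on a handful of invented English examples (the texts and labels are placeholders). A model like this will still emit a confident-looking probability for a code-mixed or transliterated post, even though such input lies outside its training distribution and the score is therefore unreliable.

```python
# Illustrative sketch only: a toy classifier in the spirit of the models discussed,
# not the system built by the researchers. All texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical (tiny) training set of English-language posts:
# 1 = abusive, 0 = not abusive. Real systems train on thousands of examples.
train_texts = [
    "you are worthless and should be silenced",
    "women like you do not belong in politics",
    "great interview, thank you for sharing",
    "looking forward to the panel discussion tomorrow",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# The model only 'knows' patterns present in its training data. A code-mixed or
# transliterated regional-language post (a placeholder here) falls outside that
# distribution, so the probability it returns is not a reliable signal of abuse.
out_of_distribution_post = "<code-mixed / transliterated post in a regional language>"
print(model.predict_proba([out_of_distribution_post]))
```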

Participants also discussed the tactics used by aggressors to evade automated detection. Damni Kain spoke about how trolls mix caste names and words from different languages together, which makes it difficult for automated filtering systems to detect such content. Amshuman Dasarathy spoke about how trolls use special characters, alternate spellings, rhyme, and deeply-embedded cultural references to evade automated content filters. Yasmin Curzi de Mendonça found that trolls tended to screenshot posts by women and post them from their own profiles, rather than using the reshare/retweet function, so as to reduce the women’s ability to participate in the discourse.
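
These evasion tactics can be illustrated with a deliberately naive, hypothetical keyword filter; “badword” below is a neutral placeholder for an actual abusive term, and the example posts are invented. Exact-match filtering of this kind is easily defeated by special characters, alternate spellings, inserted spaces, and code-mixing.

```python
import re

# A naive keyword filter, used here only to illustrate why obfuscation works.
# "badword" is a neutral placeholder for an actual slur or abusive term.
BLOCKLIST = {"badword"}

def naive_filter(post: str) -> bool:
    """Return True if the post contains a blocklisted term as an exact word."""
    tokens = re.findall(r"\w+", post.lower())
    return any(token in BLOCKLIST for token in tokens)

# Exact matches are caught...
print(naive_filter("what a badword"))      # True

# ...but the tactics described above slip through unchanged:
print(naive_filter("what a b@dword"))      # False: special character substitution
print(naive_filter("what a baadword"))     # False: alternate spelling
print(naive_filter("what a bad word"))     # False: inserted space
print(naive_filter("kya badw0rd hai"))     # False: code-mixing plus character swap
```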

Given these many impediments to the sound functioning of AI oversight, there was consensus on the crucial importance of human moderators in detecting and tackling gendered violence on social media platforms. However, interventions from Mariana Valente and Quito Tsui served as an important reminder that simply parroting the human-in-the-loop line, without any material change to the conditions of employment for human content moderators, not only perpetuates their exploitative working conditions, but also does little to counter the lack of attention to context in moderation decisions. It was also highlighted that “despite the rapidly expanding list of issues – from misinformation to targeted content and the political manipulation of social media – governments have also been loath to wade into the quagmire of platform governance, preferring, instead, to rely on social media companies to find a way out of a mess they created. In practice, this has meant that the governance of the online world has largely fallen to platforms who still insist on the neutrality of their services, despite all evidence to the contrary”.

The Law and its Discontents

A central concern and a site for important debates within the roundtable was the role of law in mitigating the problem, and creating conditions for a transformation of the online landscape that could facilitate equitable, just, and substantive change for women’s participation in the digital public sphere.

The discussions recognized law-based solutions as an integral part of these efforts. Anita Gurumurthy argued that pervasive misogynistic speech legitimizes the discriminatory treatment of women in the online public sphere, and should therefore be construed as a harm that is relevant to law. Divyansha Sehgal similarly pointed to the perverse algorithmic amplification of gendered hate and how it ties into platforms’ logic of virality, which sustains their profitability. Given the nexus binding platform interests and user-generated harm against women, she argued, “Unless a community actively invests in developing safety tools and protections against existing patriarchal systems of power, minority members will always be at risk of harassment, exclusion, and self-censorship.”

That said, there was also a recognition of the extent to which the law itself seems to privilege certain forms of claims over others. Shehla Rashid Shora, for instance, spoke about the discrepancy between platform take-down mechanisms for content that is dangerous, violent, or abusive, and those for content that potentially infringes copyright. While copyright-infringing material is taken down immediately, dangerous or hateful content is allowed to remain on the platform, often despite being repeatedly reported by aggrieved users. What this pointed to, participants noted, was the lack of any clear political will to disturb the existing business model of social media platforms and tackle the problem of pervasive online gender-based violence head-on, as doing so would entail disrupting their highly profitable ad-tech revenue model.

In keeping with this theme, a key insight reiterated was the importance of legal interventions to recognize platforms’ complicity in fostering hostile communicative environments. Kim Barker underscored the need for a move away from well-rehearsed statements that platforms are “mere conduits” and simply “host” the content, and others highlighted how platform immunities from liability continued to function as a way of maintaining a safe harbor for misogyny.

While unequivocally emphasizing the importance of the law in combating online gender-based violence, a number of participants also reiterated that the law should be seen as one among many tools, arguing for a more integrated, joined-up approach to the problem. For example, Suzie Dunn raised the important point that women are often reluctant to approach the police or the criminal justice system, likely because of the misogyny internalized within these systems. She noted that legislative change is not the only thing to aim for; changes in public service support, education, and funding for organizations that help people get content taken down are needed as well.

Finally, questions of norm-setting and the importance of building coalitions of advocacy at a global scale also featured prominently as an important strategic imperative. Nandini Chami highlighted the need to undertake a normative benchmarking exercise at the multilateral level to evolve common regulatory standards for content governance across social media platforms.

Conclusion

The varied and even divergent opinions on what constitutes a feminist mode of social media governance signal the amount of work required to take meaningful steps toward a more gender-just online public sphere. Yet there was unanimity on the urgent need for these conversations to be initiated and taken forward in forums where substantive action can be catalyzed.

While the scale of the challenge is evident, the debates about increased platform accountability for online harms; unsettling the foundations of the online advertising industry; new and emerging approaches to imbue content moderation with an appreciation of context; and the role of law in combating online gender-based violence, all provided important signposts for future interventions.

We invite you to engage in depth with these discussions through our compendium on Feminist Perspectives on Social Media Governance, which features all the contributions to the roundtable.

*(Based on previous work by Anita Gurumurthy, Avantika Tewari and Amshuman Dasarathy)