Hate speech or vilification against women and girls is the discriminatory treatment that constitutes and causes the systemic subordination and silencing of women on the basis of their actual or perceived female sex. Sex-based vilification is prolific on digital platforms, and the harms caused by the subordination and silencing of women are accommodated and magnified online.

Despite its prevalence, vilifying speech directed at and about women on the basis of their sex is unregulated by law in most jurisdictions. Furthermore, the issue of sex-based vilification has not received much scholarly or policy attention. This means that women often rely on platforms to offer appropriate responses to sex-based vilification online.

Using Facebook’s Hate Speech Community Standard as an example, this article argues that digital platforms’ existing content moderation policies and practices may be ill-equipped to respond to sex-based vilification in ways that meaningfully mitigate its harms. It argues that those policies and practices may at times even reinforce the ‘sex-based gap’ seen in existing legal responses to vilifying speech. Nonetheless, this article proposes that ‘platformed’ regulatory responses to sex-based vilification are a crucial aspect of a multifaceted approach to addressing sex-based vilification online, and suggests some ways forward.

The term ‘vilification’ is used throughout this article, rather than the term ‘hate speech’, because the latter, more frequently used term shifts focus from the functions of discriminatory speech to its expressive qualities. This is misleading and unhelpful. In fact, it is not the ‘hate’ in hate speech that is of concern, but what such speech does.

The article also uses the terms ‘sex’ and ‘sex-based vilification’ instead of ‘gender’ and ‘gender(ed) vilification’. It is unclear whether gender, as distinct from actual or perceived female sex or from gender identity, is an axis of women’s systemic oppression in patriarchal societies in ways relevant to discussions of vilifying speech. For example, the vilification of women for their gender expression, including gender non-conformity, is an aspect of their vilification in patriarchal societies on the basis of their actual or perceived sex. Sex-based vilification is also distinct from vilification on the basis of gender identity. Gender identity as a category of vilification typically addresses vilifying speech directed at and about transgender and intersex persons, for being transgender or intersex. It excludes vilification directed at and about women, including trans women, on the basis of their actual or perceived female sex.

Online Hate Speech Against Women

Social and news media, as well as an emerging body of scholarship, contain numerous accounts of sex-based vilification, that is, speech directed at and about women that prima facie expresses contempt for women. Such speech is directed at women for being women, or on the basis of their female sex. This means that it is about all women, even when it is directed at particular women.

The proliferation of digital and online media means that the prevalence and severity of sex-based vilification are more easily observed and documented than before. Recent accounts of speech that may be characterized as sex-based vilification cover everything from women’s experiences of offhand sexist remarks and ‘revenge’ pornography to invective directed at female journalists and bloggers and speech characteristic of the ‘Manosphere’ (a collection of websites, blogs, and other online fora promoting toxic masculinity). The vitriol experienced by women with public profiles on Facebook and Twitter, on digital news media platforms, and in other online contexts, as well as the less well-known experiences of women without public profiles, speaks to the prevalence of the problem.

The problem of sex-based vilification is especially apparent in the context of the cyber harassment of women. Professor of law Danielle Citron defines cyber harassment as “involv[ing] the intentional infliction of substantial emotional distress accomplished by online speech that is persistent enough to amount to a ‘course of conduct’ rather than an isolated incident”. According to Citron, cyberstalking is a type of cyber harassment that “causes a person to fear for his or her own safety or would cause a reasonable person to fear for his or her safety”. The ‘cyber’ label in relation to both practices captures the ways in which the internet facilitates harassment and stalking, as well as the ways in which it “exacerbates the injuries suffered”.

The cyber harassment of women typically involves sustained and tactical campaigns engaging multiple forms of communicative conduct that may be described as sex-based vilification. ‘Cyber mobs’ of more than one assailant often engage in campaigns of cyber harassment against women. The anonymity and invisibility of assailants online, as well as the cross- and multi-jurisdictional nature of cyber harassment, make it difficult to measure the extent of any given mob. Relevantly, if individuals take part in mob-based campaigns but do not individually engage in a requisite course of conduct, it is unclear whether their behavior would constitute offences under existing criminal laws. That is particularly the case given that the networking capabilities of online technologies allow cyber mobs to form and work together in ways that may fall short of the thresholds for joint criminal liability or accessory liability under existing criminal laws.

Cyber harassment is often directed at women with public profiles, who may be particularly targeted when they speak openly about issues affecting women. A relatively well-known example is that of programmer and game developer Kathy Sierra, who maintained a popular blog on software development until a sustained campaign of threats drove her offline. Anita Sarkeesian, a feminist blogger and gamer, was targeted after starting a crowd-funding campaign to create a series of short films examining sexist stereotypes in video games. Caroline Criado-Perez was targeted after heading a successful campaign to have Jane Austen’s image replace Charles Darwin’s on the Bank of England £10 note. When Criado-Perez spoke out about the abuse, including during media interviews, the campaign of invective against her escalated. However, cyber harassment is not directed exclusively at high-profile women. Women may be targeted merely for being present in online spaces. Jill Filipovic, now an author, was targeted by anonymous posters on the social networking site AutoAdmit while she was a law student at New York University.

The Subordination and Silencing Harms

The subordination and silencing harms of sex-based vilification are systemic for two reasons. First, they accrue to women on the basis of female sex, which is an axis of structural discrimination and disadvantage in patriarchal societies. Second, sex-based vilification is authoritative in patriarchal societies at least partly because it abides by the rules of patriarchal oppression in such societies. Speakers who engage in speech acts of sex-based vilification play by the rules of patriarchal oppression and are thereby able to reinforce that oppression.

To what extent does sex-based vilification, including online sex-based vilification, subordinate and silence women, and to what extent will it do so over time? This is an empirical question that cannot be precisely assessed. However, what is important is that women are, in fact, systemically disadvantaged and oppressed in patriarchal societies; that those harms flow from the systemic subordination and silencing of women in those societies; and that speech acts of sex-based vilification contribute to that systemic subordination and silencing precisely by being speech acts of it. That is, sex-based vilification constitutes discriminatory harm in and of itself and contributes to causing discrimination and violence against women in patriarchal societies.

The prolific and acute occurrence of sex-based vilification online has significance beyond its harms to individual women or women as a group. Much, if not most, public discourse now occurs online. For many women, online spaces are key loci of public discourse and engagement with public life. In liberal democracies, online spaces also enable women’s participation in the democratic processes which, according to liberal arguments, legitimize exercises of public power over them. Women’s presence in and engagement within those spaces, or lack thereof, thus pertains to democracy itself.

In practice, women typically feel threatened and humiliated by occurrences of online sex-based vilification, and they adapt their own behaviors accordingly, by policing their identities, speech, and movements or by leaving online spaces and disengaging from public online life. Significantly, sex-based vilification, including but not limited to online sex-based vilification, is often directed at and is about women in positions of political leadership. In Australia, female politicians across the political spectrum have spoken openly about their experiences of sex-based vilification. For example, Mehreen Faruqi, an Australian Greens Party Senator, has written candidly of the intersectional and especially vitriolic sex-based vilification she is subjected to online as a Muslim woman of color. Similarly, a recent Swedish study found that women Members of Parliament in Sweden feel significantly constrained in their speech due to the sex-based vilification they are subjected to online. In particular, they “avoid certain topics that are perceived as generating a great deal of online abuse”. One participant from the study noted that discussions around gender equality and migration “trigger the trolls rather quickly”. Speech that occurs in online spaces and constitutes sex-based vilification is, thus, a substantial part of the problem of sex-based vilification as a whole, and such online speech warrants careful and urgent consideration.

The ‘Sex-Based’ Gap in Law and Policy

Despite the prevalence of sex-based vilification, especially as a characteristic of cyber harassment, there is a ‘sex-based gap’ in anti-vilification laws. Apart from some notable exceptions at the domestic level in some jurisdictions, laws prohibiting vilification on the basis of sex (or gender) do not exist. Furthermore, the issue of sex-based vilification has not received much scholarly or policy attention.

In contrast, vilifying speech on the basis of other ascriptive characteristics, including, for example, race, religion, sexuality, gender identity, intersex status, disability, or HIV/AIDS status, is unlawful under international law and in many domestic jurisdictions. The socio-legal implications of the harms and regulation of those categories of vilification, in particular, racial and religious vilification, have also been more extensively considered at the scholarly and policy levels in many jurisdictions.

Platformed Regulation as Part of a Multifaceted Approach

Considering the pervasiveness of and difficulties in regulating sex-based vilification, particularly online, a multifaceted approach is required to meaningfully address the harms of such speech. As part of such an approach, states would employ a range of legal strategies to respond to different manifestations of online (and offline) sex-based vilification, and law would be one aspect of a holistic response that also incorporates other regulatory and non-regulatory counterspeech measures.

Speech acts constituting online sex-based vilification may be regulated through a combination of content moderation laws and guidelines that together form a ‘platformed’ response. Content moderation laws and guidelines may be administered by state bodies through content moderation schemes, codes of conduct for social media firms and other platform hosts, or otherwise. Australia’s eSafety Commissioner, for example, is legislatively empowered to negotiate directly with platforms for the removal of some material, including some material constituting sex-based vilification. Corporations and organizations (for example, media and technology firms, including social media firms, internet service providers, and other platform hosts) may also be encouraged by states and other actors to commit to voluntary codes of conduct or to put in place internal guidelines for classifying, identifying, and removing content constituting sex-based vilification. Academics, lawyers, policy makers, and others may also work with platforms in various capacities to improve the design of policies, procedures, and governance infrastructures relating to the moderation of speech that is sex-based vilification.

Additionally, counterspeech in all forms, not merely through regulatory means, is an important aspect of holistic responses to vilifying speech, including online (and offline) sex-based vilification. In particular, platforms’ non-regulatory contributions to educational and capabilities-building resources enabling women to themselves speak back against sex-based vilification, as well as platforms’ non-regulatory counterspeech on women’s behalf, constitute crucial components of any multifaceted approach to addressing sex-based vilification. For example, platform hosts may employ capabilities-building resources in the form of instructional materials to encourage women and other actors to speak back against sex-based vilification when it does occur, and to empower them to do so effectively. That may be done in conjunction with states or other actors, or independently.

Any platformed measures taken as part of such an approach would need to be interpreted and applied by content moderators and, in some cases, automated decision-making systems. Accordingly, such measures ought to be accompanied by holistic and effective enculturation processes directed at their proper interpretation and application.

Consider, in this regard, Facebook’s Hate Speech Community Standard, which primarily defines ‘hate speech’ expressively, rather than functionally. This may be too prescriptive and may cast the net too narrowly with respect to speech constituting sex-based vilification. Sex-based vilification often serves to objectify women in the absence of explicit references or comparisons to women as ‘objects’, ‘household objects’, or ‘property’, all terms that are expressly prohibited under ‘Tier 1’ of the Community Standard. Similarly, terms such as ‘whore’ and ‘slut’ mean different things for, and have different impacts on, men and women; however, ‘Tier 2’ of the Community Standard covers such speech directed at or about either men or women. In doing so, it overlooks that male sexuality is not constructed as a source of shame for men as female sexuality is for women, and that male sexuality is rarely, if ever, commented on in comparable terms in patriarchal societies. In other words, it broadly does not reflect that contemptuous speech directed at and about men (and boys) on the basis of their male sex does not, and cannot, systemically harm them in the ways that sex-based vilification harms women in patriarchal societies. It is important that platforms’ content moderation policies and processes engage with this level of complexity and nuance, in consultation with experts, unlike the Community Standard in its present form.

It is also especially important with respect to sex-based vilification that platforms do not, through policy oversight or overly narrow administration, reinforce the ‘sex-based gap’ in law and policy relating to vilification regulation described above. This danger was highlighted as part of a Facebook Oversight Board case recently made available for public comment. The case involved a decision by Facebook to remove a post containing a video in which a term meaning ‘fag’ was used. A term meaning ‘bitch’ was also used in the video, as part of the phrase ‘son of a bitch’; however, this latter term and the associated phrase were not emphasized by the Board as a subject of their decision or as requiring comment. Words meaning ‘fag’ may, in the absence of further context, reasonably be characterized pursuant to the Community Standard (‘Tier 3’) as “words that are inherently offensive and used as insulting labels” for a person or group of people on the basis of their sexual orientation. It is significant, though, that Facebook emphasized the use of such a word as the primary basis for the removal of the post, because that emphasis is prima facie inconsistent with the Community Standard (‘Tier 3’) itself. Words meaning ‘bitch’ may, in the absence of further context, equally reasonably be characterized as “words that are inherently offensive and used as insulting labels” for a woman or women on the basis of their actual or perceived female sex. It is also difficult to see what jurisdiction- or language-specific context might militate against such a finding in the context in which the term was used in the particular post. Importantly, “content targeting … [women] on the basis of their … [sex] with … cursing, defined as … profane terms or phrases with the intent to insult, including … ‘bitch’” is expressly proscribed by the Community Standard (‘Tier 2’). It is immaterial here that the term was used in the post as part of an attack on a man; the attack was nevertheless a ‘direct attack’ (Community Standard, ‘Policy Rationale’) on a woman (his mother). In any case, men are often criticized or degraded on the basis of their relation to (a particular kind of) woman. Utterances in which terms meaning ‘bitch’ or similar appear are almost always about women, even where they are directed at men and even if they are said in jest, and such utterances should accordingly be treated as directed at women. If the post in question was removed on the basis that it violates the Community Standard, it should thus have been removed with reference to its constituting both hate speech on the basis of sexual orientation and hate speech on the basis of sex.

This highlights the broader issue that vilification against women on the basis of sex is, generally speaking, not only ubiquitous but also normalized. That is, such speech is often simultaneously overwhelming and invisible. The treatment of women as inferior, for example, may be so central to social organization in some patriarchal societies that, unlike racist, homophobic, transphobic, or other categories of hate speech, it is imperceptible as harm, or imperceptible as harm worth doing anything about. This phenomenon may partly explain the failure of platforms, like Facebook, to appropriately and adequately identify and respond to sex-based vilification in policy or practice. It is important that this does not continue to happen: platforms must work with experts to train their moderators and algorithms to be sensitive to sex-based vilification in the range of ways in which it and its harms manifest. Women often bear the brunt of online vilification, in terms of both volume and virulence, and it is imperative that such speech is addressed. Not doing so harms all women who exist online, as well as free expression and democratic legitimacy.

Conclusion

Using Facebook’s Hate Speech Community Standard as an example, this article has argued that digital platforms must engage with the complexity and nuance of sex-based vilification and its functions in order to respond to such speech in ways that meaningfully mitigate its harms. Platforms need to be especially careful that their content moderation policies and procedures do not serve to reinforce the ‘sex-based gap’ in vilification regulation in ways that further normalize sex-based vilification and its harms to women. Appropriate and adequate ‘platformed’ regulatory responses to online sex-based vilification are a key part of a multifaceted approach to addressing such speech, and it is high time such responses were advanced at both the policy and corporate levels.

This is the fifth piece from our special issue on Feminist Re-imagining of Platform Planet.