This isn’t the post I intended to write.
Every year, from November 25 through December 10, feminist movements unite for 16 Days of Activism Against Gender-Based Violence. The global campaign that calls for solidarity to end violence against women and girls everywhere was launched by activists with the Center for Women’s Global Leadership (CWGL) in 1991. Since then, each year has focused on a particular topic; the theme for 2025 was ending digital violence. The campaign was wildly insufficient for the task.
The failure of our frames on TFGBV
My current organization, the International Center for Research on Women (ICRW), has engaged in efforts to end gender-based violence for decades and was one of the organizations that first pushed for a definition and framework on technology-facilitated gender-based violence (TFGBV) a few years ago. We fought to define and raise awareness of TFGBV, but now must ask: how is this frame applicable given today’s digital realities?
In the wake of the 2011 Arab Spring protests, much ink was spilled over the potential of Facebook and similar platforms to bring democracy to repressive contexts. Yet these same platforms were inflaming violence, including mass atrocities against the Rohingya in Burma. Some researchers began to examine the ways in which platforms were aggregating and amplifying hate speech. In 2017, ICRW took on the problem of misogyny and violence against women online and released a brief defining the term TFGBV in 2018. Our goal was to enable a better understanding of the depth and breadth of the issue. At the time, we justifiably believed that evidence to expose the problem would galvanize governments, private sector actors, and other stakeholders to address it. This had, indeed, been the playbook for broader activism and had achieved notable gains, including internationally recognized definitions, instruments, and investments to address gender-based violence (GBV).
There was progress, but the pace of destruction was far quicker. In 2020, UN Women convened a set of “Action Coalitions” intended to galvanize new global commitments to gender equality, including an Action Coalition on Technology and Innovation for Gender Equality—a too-timid effort to find common ground with a wide range of stakeholders, including the global corporations that were, by then, systematically purging women of color from their scientific research and technical leadership teams. By 2022, UN Women had reached consensus on a definition and framework for TFGBV. Following that, governments, including the UK, Australia, Chile, Canada, the United States, and the Republic of Korea, launched a Global Partnership for Action on Gender-Based Online Harassment and Abuse. This was too little, too late.
The definitions of TFGBV provided typologies for both individualized and systemic patterns of harassment and abuse. This included manifestations of intimate partner violence (IPV) where abusers publicize non-consensual sexual imagery and engage in online stalking and harassment of their targets. Advocates also raised attention to systemic violence against women politicians and journalists intended to drive them out of public life.
The distinction between private and public forms of violence mirrored broader research and advocacy on GBV. Over decades of careful work, advocates have had some success in achieving policies, protections, and services for survivors of domestic violence and IPV. This is not to say advocates failed to call attention to issues of structural violence. Indeed, the initial 16 Days Campaign was inspired by the advocacy of Latin American feminist activists galvanized by an act of political violence: the assassination of the politically active Mirabal sisters in the Dominican Republic in 1960. Latin American feminists have long called attention to the root causes of femicide and the need for structural approaches to what manifests as individual acts of violence.
Yet TFGBV-related recommendations tend to gloss over the political economy of online spaces and assume that those with influence will act in good faith: governments will address the private sphere by strengthening regulatory frameworks and supporting survivors, and tech companies will develop stronger internal ethical guidance and controls. Yet tech firms have consistently opposed any form of regulation, and the governments they control have waged war on sexual and reproductive health services for women. So why, in 2025, do we continue to make these recommendations?
Misogyny by design
By 2019, we had sufficient evidence that, even with knowledge of the problem, platforms were far from adopting corrective measures: their business models and the digital tools they developed were being designed to exacerbate discrimination and TFGBV. Emerging so-called “artificial intelligence” (AI), more accurately algorithmic inference, was creating a distorted, fun-house mirror reflection of authentic human experience.
Safiya U. Noble’s 2018 Algorithms of Oppression called attention to the intentional, monetized elevation of racism and sexism via commercial search engines. Noble’s research was galvanized by her own experience with search engines: the very simple act of googling “woman” and “Black.” The overwhelmingly negative imagery that surfaced, and the coded assumptions that selected sexualized, racialized, and violent depictions of Black women as ‘typical,’ provoked her critical inquiry into the discriminatory coding behind search engine algorithms. Ruha Benjamin’s 2019 Race After Technology expanded the analysis to demonstrate its consequences for real-world public policy decisions.
Researcher Timnit Gebru was famously fired by Google in 2020 for pointing out that the data sets used to train large language models (LLMs) were scraped from the skewed, racist, sexist metaverse of images Noble had exposed. Gebru argued that training AI on such data sets would result in AI generating even more racist and sexist content in response to a growing range of unrelated queries – the now infamous “stochastic parrots” hypothesis.
Not incidentally, the researchers who saw most clearly the distortions in digital spaces were those who were most capable of recognizing the bias in real life – women of color. Instead of correcting these trends, Silicon Valley leaders doubled down on the embrace of a white, sexist corporate culture. This alone should have been a warning call to feminist activists that the emerging frameworks on TFGBV were wildly insufficient in the face of the real problem.
The recommendations on TFGBV fail to address what Noble, Gebru, and others have exposed: misogyny by design. Digital systems are intentionally trained on violent and sexualized images of Black and Brown women and set up to further amplify, and ultimately generate, content that reifies their investors’ white supremacist world views.
Stochastic parrots at work
I was overwhelmed by the naivete behind this year’s 16 Days campaign theme. Does anyone believe we need to do more to ‘raise awareness’ of the extent of online violence, when every major platform is now cramming violence into our eyeballs? And in a landscape dominated by a sea of trolls, is a social media hashtag (#NoExcuse) even remotely likely to break the surface?
I wanted to write something for 16 Days about structural violence. I’d recently interviewed a well-known LGBTQI+ activist, Jessica Stern, and was thinking about the AIDS crisis and the ACT UP movement of the late 1980s in the United States. Their symbol, a pink triangle on a black background above the words Silence = Death, was ubiquitous and extremely effective. It was a meme that predated the attention economy and a reminder of the effectiveness of offline symbols that provide a shorthand way of understanding systems and structural violence.
This is not the post I intended to write. Here’s why.
Wanting to write about later uses of the “Silence is Violence” slogan, reader, I did not Google it. I abandoned Google search precisely because, as Noble (among others) had pointed out, its search engine amplifies content promoting gender-based violence. My search engine of choice is DuckDuckGo, with its commitment to minimize filter bias and therefore, I have hoped, to reduce or at least neutralize violent and misogynistic content in my results. Yet a simple search for “Silence is Violence” surfaced no fewer than three pages of reactionary content before a single neutral or positive article appeared. Repeating the search on both Microsoft Edge and Google yielded more mixed results, with some neutral or positive articles mixed in among the many critiques, possibly precisely because those platforms were applying content filters for me. Yet all platforms elevated a small handful of reactionary commentators on known right-wing blog sites and publications.
Further investigation revealed that, apparently, the entire fragile White social media influencer cadre was triggered by the use of the slogan during the 2020 Black Lives Matter protests responding to the murder of George Floyd, and never let go. Citations led to right-wing blogs or published opinion pieces in mainstream media sources that can only be described as gaslighting. Indeed, some pieces were so logically contorted that it’s likely they were intended to generate information dysphoria. Some pieces, which I will not cite, purported to be about individuals who asked us to respect their silence as evidence that they were deeply analyzing the issue – yet said individuals found time to opine about the slogan without much analysis. Featured content providers claimed that speaking out about structural racism “silenced” those who actually disagreed that structural racism and sexism exist. Those using the slogan as intended, to explain structural racism and sexism, were harder to find.
The rise of the groomer elite
In 2024, Telegram’s founder, Pavel Durov, was indicted in France on charges of aiding and abetting child sexual abuse. Investigators determined that the platform’s failure to moderate and remove child sexual abuse material was intentional; this was exploitation by design. Durov, like much of the tech billionaire elite, has expressed a strong view that protections for civil liberties and free speech must encompass the right of individuals to amplify hate speech, calls for violence, sexual exploitation, and abuse. With billions of dollars at their disposal to further this view, a minuscule handful of elites have ushered in groomer capitalism.
In Empire of AI, journalist Karen Hao details the massive investments that have been poured into the development of algorithmic inference (AI) tools. She notes that the groomer elite have claimed that by scraping every known bit of data, they would be able to solve humanity’s problems, from climate change to preventing the next pandemic. However, in reality, their imaginations seem to fail when confronted with actual problems that real policy experts and advocates have been working to solve. Early AI investor Elon Musk famously claimed he would donate $6 billion to anyone who could solve world hunger, only to renege when the UN World Food Programme stepped forward with viable solutions that concerned real people had developed. As Hao notes, the evidence points heavily to this cadre’s overwhelming interest not in solving humanity’s challenges but rather in monetizing and using their tools to generate an ever-greater quantity of pornography.
Hao details the heartbreaking situation of content moderators in Kenya as an example of where investors are putting their money, and who is abused along the way. OpenAI contracted moderators to review AI-generated violent, graphic imagery so that their responses could inform the development of ChatGPT. This went beyond standard content moderation, as the company was intentionally seeking to generate violent imagery so that ChatGPT could be coded to infer limits and parameters to what it could generate. The immense psychological harm to real people along the way was incidental to this project. The designers understood clearly that a major use case for ChatGPT would be the generation of such content.
And then last week, right in the middle of the 16 Days Campaign, the war between the groomer class and regulators surfaced as the EU levied a $140 million fine on Musk’s platform, X, for user deception. Musk quickly retaliated, with the Trump-led US administration behind him, in a major new offensive to destroy even the tame regulatory controls on platforms that we have now. The case has received far less attention than it merited, given the numerous events this past week calling for stronger regulation to end TFGBV.
The Trump/Musk nexus is emblematic of the use of public policy to enable rather than curtail exploitation. Prior to Musk’s acquisition, Twitter had famously banned Trump for using the platform to incite political violence. Musk’s X embraces violence and welcomed him back. And the Trump regime has been clear that regulation of the groomer class is not a policy option. Earlier this year, the Trump administration intervened on behalf of Andrew Tate, an accused sex trafficker who has systematically used his social media feeds not only to amplify violent, misogynistic content but also to groom his followers. A relatively tame example of content intended to isolate the target from loved ones and divert loyalty and affection to the groomer himself was Tate’s recent post stating that “if you’re a straight man with a girlfriend in 2025, you’re gay.”
Meanwhile, in Kenya, Facebook has refused to acknowledge the legitimacy of claims by content moderators and others harmed by its dissemination of violence. But the petitioners and the courts persist, and justice may yet be served.
Resistance in an age of digital colonialism
In sum, at the epicenter of all our efforts to regulate and moderate an end to TFGBV sits a very small handful of sociopathic men who champion violent hate speech but are quite happy to suppress speech when it calls attention to their own crimes. These men are attempting to groom the digitally dependent next generation to be either perpetrators or victims. But their capture of our spaces is not complete.
The researchers who first launched ICRW’s TFGBV research took stock this past week and concluded: violence is on the rise, and it’s not the technology that’s at fault. The problem is the very small handful of elite men who control the means of digital production. They wrote, “This is not a neutral distribution of power. These powerful actors are determining whose safety matters, whose harm is ignored, and who gets to shape the rules of the game. As those with the greatest power evade accountability, responsibility is pushed onto those with the least, such as civil society, parents, teachers, activists, and victim-survivors themselves.”
So what must we do? In September 2025, an important day-long event focused on ending GBV, Free Futures 2025, took place on the sidelines of the UN General Assembly. Acclaimed journalist and tech justice advocate Maria Ressa was one of the featured speakers. Ressa emphasized, as she has elsewhere, that it’s time to stop trying to reform existing spaces and pivot our movements to demand a digital public square that is collectively created and governed, and insist on the investment to make it a reality. Bot Populi’s Feminist Digitality project surfaced important contributions toward this vision. And a few voices broke through among the usual suspects in 16 Days events this past week. Argentine feminist activist Ana Correa, who is hacking ChatGPT with the OlivIA tool, spoke at one event about her work, directly challenging algorithmic bias from within.
Hacking is part of what we need to do. Developing alternative feminist digital resources is another part of the answer. And we need to continue creating feminist narratives online and offline: a femisphere of content to counter their manosphere. All of this will need investment, and at least some will need government support. Not all governments need to submit to the groomer class and its digital colonization; large economies like Kenya can both regulate and support alternatives, and smaller governments can band together to mobilize the investment needed for a genuinely cooperative digital ecosystem. As some writers on this platform have already highlighted, entire regions can create their own digital infrastructure and, over the long run, reject re-colonization and create meaningful investments, decent work, and generations of children safe from colonial predators.
We won’t end TFGBV until we deal with the underlying disregard for humanity that is at its roots. This year and every year, our 16 Days campaigns need to remind us anew that the forces we are up against and their tactics haven’t really changed; they lack the imagination to do so. We are the ones who can imagine a different future.