Digital platforms have become a breeding ground for misogyny, toxic masculinity, and gender-based violence. Across the world, women from all walks of life regularly face vicious harassment campaigns, verbal abuse, and relentless cyberbullying. While policy-makers, international organizations, and activists continue to debate how to address these harms, the issue is often framed as a balancing act between protecting vulnerable groups and preserving freedom of speech.

However, such framing obscures a crucial dimension of the problem. With the dramatic expansion of digital economies, online spaces do not simply constitute a public forum; they are also a workplace. In particular, the proliferation of platform-based business models that largely exclude workers from employment protections exposes the persistence of gender disparities in these spaces. It is therefore increasingly crucial to evaluate and address the dangers that online misogyny poses to the labor rights, safety, and livelihoods of women who depend on digital platforms for their work.

Online misogyny and digital creators’ labor rights

South Korea is a striking example of how online misogyny functions as a form of structural sexism, affecting women’s status and rights across various life domains. Over the past decade, the country has witnessed a sweeping anti-feminist backlash, particularly among young men who perceive gender equality initiatives as a threat to their social mobility and economic prospects. While these sentiments initially emerged within alt-right and incel communities in the digital sphere, their influence over daily life grew substantially as they were mobilized by right-wing political actors capitalizing on gender wars for electoral gains.

Digital creative content industries such as gaming, webtoons, and webnovels have long been a key battleground in these wars. Women creators, who are largely contracted as freelancers, have been subjected to systemic censorship and human rights violations for expressing feminist views or on the mere suspicion of being feminists. Fueled by conspiracy theories, misogynist consumer groups have relied on accusations of hidden ‘feminist messaging’ to justify coordinated harassment, doxxing, threats of violence, and demands for content revisions and creators’ dismissal. These accusations can be triggered by something as trivial as a hand gesture in visual content or narrative elements construed as signs of misandry.

Most concerning is how tech companies have often caved in to pressure from angry audiences, or even legitimized it by issuing public apologies. In several high-profile cases, artists and writers lost their contracts and suffered unfair penalties as a result. While such incidents have become less visible, discriminatory practices have continued covertly. According to industry informants, companies routinely demand that creators alter their work, such as character designs or storylines, to avoid provoking backlash.

Beyond direct discrimination, women also suffer from the sector’s permissiveness toward misogynistic content that drives high levels of engagement. Sexist tropes and scenes of violence against women in male-oriented webtoons have long drawn criticism, yet company responses remain superficial. The criticism escalated in 2024 over a controversial webtoon titled ‘Otherworldly Pongpong’, published on Naver, Korea’s largest platform and IT giant. Initially featured as a first-round winner in an amateur competition, the webtoon sparked outrage with its explicit use of anti-feminist narratives for sensational effect.

Following widespread protests and a long-term boycott, the webtoon was disqualified and taken down, but anger toward Naver’s double standards and lack of accountability persisted. In practice, the consequences fell on freelance women creators, who experienced declining revenues as a result of the boycott and pressures from female readers reproaching them for continuing to work with the company.

Caught between their dependence on dominant platforms and misdirected consumer discontent, women creators have faced heightened precarity, along with the emotional and physical burdens of added workloads and expectations.

Countering the free speech vs. hate speech fallacy

Women’s organizations have repeatedly called for stronger measures to address online misogyny, gender-based violence, and discrimination. However, regulatory responses have remained hesitant and fragmentary, as policy-makers and industry stakeholders favor self-regulation by tech companies and independent industry committees. Such a stance partly reflects the lingering specter of anti-Communist state censorship in South Korea, which prevailed under military dictatorships during the Cold War. Many cultural producers and industry representatives remain wary of state intervention in media and artistic expression, fearing that it would once again be used to restrict political views or creative freedom.

As a result, the sector has largely relied on internal moderation policies and self-regulation to address abusive behavior. Some companies, such as Naver, have introduced guidelines intended to reduce harmful content and promote a healthier ecosystem in response to controversies around misogyny. Critics, however, are skeptical about the effectiveness of such voluntary measures. Labor unions continue to demand more concrete reforms, highlighting that companies have repeatedly failed to protect the rights of women creators while taking advantage of weak regulatory oversight to victimize them.

This is all the more important as the problem extends beyond the creative ecosystem itself. Much of the harassment directed at creators takes place on social media platforms where toxic discourse is algorithmically amplified and increasingly rewarded. With companies such as X embracing “free speech absolutism” as a justification for tolerating hateful and extremist content, self-regulation appears ill-suited to address the structural dynamics that enable misogyny and anti-feminist backlash. Corporate actors can use freedom of speech as a shield against accountability under the guise of neutrality or democratic commitment.

“You need to be alive to enjoy creative freedom”

Women creators are themselves acutely aware of the tensions involved in regulating online speech and content. Many worry that externally imposed or ill-defined content moderation rules could ultimately limit artistic freedom or be used to suppress controversial opinions. Yet the daily realities of online harassment make it clear that formal accountability structures are desperately needed. Confronted with threats of violence, doxxing campaigns, and stalking, creators deal with chronic anxiety and depression, as well as fear for their personal safety.

Risks to personal safety beyond the digital sphere are not purely hypothetical. South Korea grapples with high levels of gender-based violence, including femicide, domestic abuse, and dating violence, which increasingly correlate with soaring rates of digital sex crimes. Online misogyny clearly has the potential to spill over into the offline realm, feeding mutually reinforcing dynamics of normalized extremism and institutionally enabled impunity.

In this context, the need for protective measures cannot be sidelined over concerns about creative freedom. As a labor union representative put it: “This is not just a problem of hate speech at this point…it’s gradually leading to actual crimes, so it has to be regulated. I don’t think this would undermine creative freedom. And in any case, you need to be alive to enjoy creative freedom…”

Against this background, ensuring a safe and equitable working environment in digital industries is paramount. It requires not only voluntary moderation guidelines, but clear legal standards and enforceable mechanisms that hold tech companies accountable both as platform operators and as employers.

Towards comprehensive protection against discrimination

For many stakeholders and activists, creating such an environment involves moving beyond digital governance to the broader legal frameworks governing discrimination and human rights. South Korea remains one of only two members of the Organisation for Economic Co-operation and Development (OECD) without a comprehensive anti-discrimination law. Although provisions against discrimination exist in constitutional and employment law, they fall short of guaranteeing adequate protection against harassment and discrimination for precariously employed workers and marginalized groups, both online and offline.

For almost two decades, labor unions and civil society organizations in South Korea have called for the enactment of an Anti-Discrimination Act. Such legislation, which has recently been reintroduced for deliberation at the National Assembly, would establish legally binding protections against discrimination on the basis of gender and other characteristics. This would provide enforceable mechanisms for victims to seek remedy, while strengthening the responsibility of corporate actors, including platforms, for promoting gender equality and harm prevention.

However, the bill continues to face significant political resistance. Conservative religious organizations and far-right groups have mounted a strong mobilization against its adoption, arguing that it would restrict freedom of expression and prevent them from publicly expressing their beliefs about gender and sexuality. Echoing the rhetoric used against content regulation, these arguments not only construct a false opposition between the rights to freedom and protection, but also implicitly condone hate and violence as acceptable features of democracy.

At the same time, the ability of these actors to gain such traction reveals how difficult it is to change entrenched political discourses and social norms. Acknowledging this difficulty, some feminist activists and organizations in South Korea and elsewhere have begun to explore civil society initiatives that tackle the problem at its roots by building capacity at the grassroots level. These include, for instance, educational programs aimed at countering the influence of extremist views on teenage boys and challenging perceptions of hate speech as democratic participation. Long-term structural change will require both regulatory intervention and cultural transformation.

Framing the problem of online misogyny as a trade-off between free speech and protection obscures the reality that unregulated spaces systematically undermine the rights and safety of certain groups, while privileging others. In this respect, it is crucial that public and regulatory debates move toward labor and human rights as central concerns in digital governance and beyond. Ultimately, the task is not to choose between freedom of expression and protection from harm, but to build institutions that can safeguard them simultaneously.