The birth of the internet promised to bring freedom, connect people globally, break down barriers, and help achieve liberty, democracy, and equality. This ambitious spirit still animates digital ventures that remain faithful to that original promise.

However, the digital ecosystem that we now inhabit is far from its original ideals. Although we do connect, shop, and work remotely, the internet has also become a space for harassment, manipulation, surveillance, and polarization.

Large technology companies have become de facto regulators and monopolize national infrastructures. They deliver goods and services, and they determine the content we consume. Take Google, for example, which almost all of us use. It is far from the giant, virtuous public library it claims to be. The outcome of a search is the product of human choices embedded in opaque algorithms, and of commercial interests.

To address the indiscriminate extraction of personal data and its harms, this article argues for reconceptualizing privacy through a feminist approach. It is not possible to reconceptualize privacy without challenging the data extractivism model and the traditional liberal approach to individual privacy. Feminism can contribute to the shift away from the consent paradigm, which centers individual responsibility for securing individual privacy. Ultimately, a new conceptualization of privacy can build on the contextual and relational nature of privacy, and on the infrastructures that our society can develop to handle and use data.

The Great Awakening

Legislators around the world are waking up to the dangers of the existing approach to the digital ecosystem in the face of rapid technological innovation. Regulating the digital is certainly not an easy task. As technology develops quickly and globalizes, the law as it stands can be very slow to react, and it is still too restricted by national borders. As such, national initiatives struggle to account for broader dynamics, including dispersed supply chains.

Nevertheless, countries are equipping themselves with new legislative tools to rein in the power of the big companies, through the intersection of privacy and anti-trust laws across the globe.

From the US to China and India, privacy laws are proliferating. The most recent, China’s Personal Information Protection Law (PIPL), is probably the most important event in privacy law since the EU’s General Data Protection Regulation, as it further cements China as a global commerce player everyone must reckon with.

In the US, where anti-trust action has historically been much preferred over regulation, there is a surge in initiatives against anti-competitive behavior. “It is the policy of my administration to enforce the anti-trust laws to meet the challenges posed by new industries and technologies, including the rise of the dominant Internet platforms, especially as they stem from serial mergers, the acquisition of nascent competitors, the aggregation of data, unfair competition in attention markets, the surveillance of users, and the presence of network effects”, President Biden’s Executive Order on Promoting Competition in the American Economy stated.

In the EU, efforts have been made to introduce new measures that regulate the commercial advantages of Big Tech, which are perceived as stifling innovation. For example, the Digital Markets Act (DMA) places data use restrictions on platforms designated as gatekeepers. Specifically, the DMA prohibits gatekeepers from combining personal data from several sources without seeking the user’s consent, and bans commercial data ring-fencing, obliging gatekeepers to “refrain from using, in competition with business users, any data not publicly available, which is generated through activities by those business users”.

Countries are also tackling the issue of online harms, demanding that companies take greater action and provide transparency and accountability around the measures they use to prevent harms and to take down posts with illicit content.

The Rise of Big Data and AI, and the Challenge to the Rule of Law

Recent decades have seen our societies become increasingly digitized, with Artificial Intelligence (AI) embedded in more and more of the technology we use. As the field of AI grows rapidly, it is steadily making its way into digital decision-making systems. These AI systems are, in turn, replacing human decision-makers in institutions (both public and private) seeking greater effectiveness.

Machines increasingly make decisions, predict behavior, and supplement (or even substitute) policy decision making, including allocation of resources. The problem is that these systems operate in an opaque way, often as black boxes. Their functioning goes beyond the cognitive capacity of humans, thereby inhibiting the transparency which underpins the rule of law as we know it.

The rule of law rests on a two-pillar transparency principle: the rule-making process is open to people through political representation, and its enforcement allows procedural safeguards in the form of the ability to contest decisions. However, with the rapid digitalization of all spheres, it has become increasingly difficult for citizens to understand why they are seeing an advert, why a loan is being denied to them, or why a medicine has been prescribed to them. This is a profound challenge for the rule of law as we know it, and it makes the legislative response difficult to conceive and implement.

Regulators and legislators are grappling with this challenge. The EU has published the EU AI Act (AIA), which sets out controls on what is deemed high-risk AI. Despite its limitations, this is a welcome first attempt at legislating something as complex as AI.

While AI is certainly not new, its rapid adoption can be attributed to both the increase in computing capabilities and the indiscriminate collection of data, including personal data. The pervasiveness of the data collection practices needed to train algorithms fundamentally stretches the principles of data privacy law, including data minimization, necessity, and proportionality.

Beyond Legislation: Challenging the Sanctity of Data

Peter Hustinx, a former European Data Protection Supervisor (EDPS), once described data as a “valuable intangible asset”. This is a useful description, as it not only moves away from the distorted view of data as the new ‘oil’, which is something tangible, but also alludes to the difficulty of regulating such a complex asset.

Legislation is indeed complex, and it does require collaboration between different disciplines. However, it is also fundamental that we challenge some of the most important constructs that underpin the public discourse around data.

We have all been sold the idea that there is nothing more objective, neutral, informative, or efficient than data. This is wholly misleading. When an algorithm is fed data, a decision has already been made. Someone has decided that some data can be chosen, and other data cannot. In other words, when some data is elevated to train algorithms, other data is silenced. Consequently, if, in reality, data is people, then some of us are being selected while others are being silenced.

This is political, and this is what is missing from the discussion.

Let us look at how this plays out in the medical field. Medical research into cardiovascular conditions such as strokes has progressed rapidly over the past few decades, but it emerged only recently that the symptoms of heart disease in women differ from men’s, with very serious consequences for the prevention and detection of the condition in half the population. Why did this happen so late? Simply because, until then, most of the research focused on men and used men’s bodies and data. The choice about which data sets are studied is made by people – a subjective decision, and a political one. Each person, once entered into a data set, becomes part of a new transaction between them and the invisible force that has put them into that data set, used it to train an algorithm, and ultimately let that algorithm make a decision about them.
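A stylized sketch can make this selection effect concrete. The example below is entirely synthetic – the groups, scores, and thresholds are invented for illustration, not drawn from any real study – but it shows how a model trained only on one group’s data can systematically fail the group that was left out:

```python
# Synthetic illustration: a model trained on one group's data
# systematically misreads a group it was never trained on.
# All numbers here are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# A hypothetical diagnostic score; suppose the condition shows up as a
# HIGH score in group A but as a LOW score in group B.
score_a = rng.normal(5.0, 1.0, n)
label_a = (score_a > 5.5).astype(int)
score_b = rng.normal(5.0, 1.0, n)
label_b = (score_b < 4.5).astype(int)

# "Someone has decided" to study only group A...
model = LogisticRegression().fit(score_a.reshape(-1, 1), label_a)

# ...so the model performs well on group A and fails on group B.
print("accuracy on group A:", model.score(score_a.reshape(-1, 1), label_a))
print("accuracy on group B:", model.score(score_b.reshape(-1, 1), label_b))
```

The point is not the model but the selection: the failure for group B is created at the moment someone decides whose data counts.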

This asymmetry of power – an outcome of choice – underpins the politics of data and, ultimately, the data economy. The data economy is political at every level because some organizations hold a huge amount of power over others by deciding who gets onto a data set, and who doesn’t, with all that this entails. It is far from the simple choice it might first appear to be.

Rethinking Data and Privacy through a Gender Lens

Within this wider landscape, the role of AI-driven advertising should not be underestimated. In the US, 63% of the top links related to abortion news on Facebook come from politically conservative websites. Many of the most widely-shared links referenced debunked or misleading narratives about abortion, such as the idea that Democratic politicians are endorsing ‘infanticide’ by supporting abortion access later in pregnancy.

It often feels like women are at the losing end of online manipulation. This is worrying since online manipulation thrives on the model of data extractivism that we are all a part of.

According to Consumer Reports, nearly a third of women in the US have used period tracker apps, which record intimate details regarding women’s health. Some of these apps sell this data to advertisers, which translates into more targeted ads for menstrual or baby products. In 2012, the marketing department of the US retailing giant Target even created an algorithm to predict when shoppers were pregnant. Consumer Reports states that allowing an app to share this information can affect a woman’s ability to obtain life insurance and could determine the cost of coverage. Additionally, women might be subjected to workplace discrimination based on the information collected by the apps, including pregnancy status or health conditions.

These are just some examples of why it is so important to rethink the concept of privacy.

Privacy is not a gender-neutral topic. Privacy, as defined by men, can doubtless clash with the feminist movement’s slogan that the personal is political… but does it really?

‘The personal is political’ slogan was born out of the need to end the structural inequality underpinning our societies and to put women’s issues – from reproductive health to income inequality – front and center in the public domain. Women advocated for the state to intervene in what was, traditionally, the private sphere – the home – to enable them to fully participate in the public sphere. This required interventions in health, reproduction, labor, and family law to name a few.

Take menopause, for example, which is still a taboo topic for many, often at the cost of women’s well-being. Making these issues public has advanced the women’s cause. Women’s rights history has ultimately been about bringing into the public domain and the legislative space what was considered part of the private realm.

This explains why privacy has often been a fraught topic for women, one that can cause an immediate negative reaction.

Data Extractivism is Harmful to Women

I want to examine two areas in which the data extractivism model can be extremely harmful to women – when data is fed into algorithms and used in ways that replicate, amplify, and fuel gender inequalities.

First, consider the use of AI and Big Data analytics in digital advertising, such as online recommendations, especially when they are achieved through clustering: the categorization of individuals into clusters of people with perceived similar interests and behavior. This means that all the individuals in a cluster are attributed a feature, and everyone who shares that feature is targeted for being part of the cluster. This sort of datafied double life (our feature as our avatar) has consequences for our offline lives. Clustering individuals into groups of ‘similars’ leads to social stereotypes being reproduced and reinforced through echo chambers and filter bubbles.
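To see the mechanics, here is a minimal sketch of clustering-based targeting, assuming made-up behavioral features and scikit-learn’s KMeans – the data, feature names, and cluster interpretations are all hypothetical, not a description of any real platform’s system:

```python
# Minimal illustration of clustering-based ad targeting.
# All data and feature names are invented for illustration only.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one user; columns are hypothetical behavioral signals,
# e.g. [hours online per day, engagement with parenting content,
#       engagement with job-search content].
users = np.array([
    [1.0, 0.9, 0.1],
    [1.2, 0.8, 0.2],
    [3.5, 0.1, 0.9],
    [3.8, 0.2, 0.8],
])

# Group users into clusters of "similars".
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)

# Every member of a cluster inherits the cluster's inferred feature,
# and the whole cluster is targeted as one unit.
for user_id, cluster in enumerate(kmeans.labels_):
    print(f"user {user_id} -> cluster {cluster}: shown the ads bought for cluster {cluster}")
```

The crucial step is the final loop: once a person is assigned to a cluster, decisions are made about the cluster, not the individual – the inferred feature becomes the avatar that stands in for the person.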

For example, Facebook has been accused of breaking equality law in the way it handles job advertisements, by failing to prevent discriminatory targeting through them. Its algorithm has also shown bias in which job advertisements it delivers to whom: in one experiment, almost all the job advertisements for mechanics on Facebook were targeted at men, while ads for nursery nurses were almost exclusively targeted at women.

The echo chamber gives us an over-personalized diet of “information” that feeds our existing assumptions and prejudices, prevents us from reconsidering our positions, and polarizes us into camps with increasingly extreme views. It is one of the biggest risks to preserving and nurturing both the autonomy of choice and democratic values in our societies. I have often argued that echo chambers and filter bubbles are a driving force of populism. In my book, An Artificial Revolution, I argue that online manipulation has fueled feminist-hating populism across the globe.

The contemporary vigor of populism cannot be analyzed without considering how new technologies and AI-driven advertising enable movements to thrive by reshaping the political agenda in powerful ways. All of this is possible because of how the digital infrastructure works, the business model of social media companies, and the inherent inequalities of data sets.

All of us, and women in particular, should fear the AI-driven digital advertising ecosystem, which is bringing back arguments, language, and visions of the world that we fought hard and long to abolish. This is because of the power of microtargeting, which is based on sophisticated machines gathering data and putting us into clusters based on our most intimate traits and desires. Poignantly, Wendy Hui Kyong Chun suggests that network analytics can be read as ‘the bastard child of psychoanalysis’ – and this is, sadly, true. Applied to the political sphere, the consequences are dramatic.

The second way that the indiscriminate extraction of data is harmful is through the use of AI and data to legitimize techno-solutionism. For example, in the aftermath of a mass shooting in the USA, the Washington Post reported that the White House had been briefed on a plan to monitor ‘neuro-behavioral’ predictors of violence. By collecting data from smart devices, including Apple Watches, Fitbits, Amazon Echoes, and Google Homes, an AI ‘sensor suite’ would then flag mental changes that make violence more likely. Why are we opting for such a dystopian intervention when we know the one common element in mass shootings around the world is that the perpetrator tends to be a man, and often a man with a history of overlooked domestic abuse towards women? Why isn’t this examined, rather than trying to identify some (non-existent) correlation between how much someone walks and their likelihood of committing a violent crime? This is dangerously short-sighted and an insult to women. Furthermore, by emphasizing the alleged relation between mental health and violence, it further marginalizes those with mental health issues, who may think twice before asking for help for fear of being put on a federal watch program.

Privacy as a Gender Response to Harms

My view is that the debate around privacy needs to engage with gender analysis to further its scope and potential. In this vein, I want to highlight two areas that require cross-fertilization and could perhaps form the basis of a global discussion and approach:

1. The limits of the consent model must be acknowledged. It is important to draw from the feminist experience around the complexities of ‘free consent’. Currently, consent, often obtained through unreadable privacy notices, is used as a free pass for data extraction. The complexity of data collection and processing means that meaningful consent is often not possible in real life.

Women’s lives feature a great deal of interdependence and little experience of life lived without servicing or accommodating the needs of others. To this day, women have an incomplete picture of the opportunities and resources they can lay claim to. This lived experience around autonomy is relevant in the context of consent. Most tools that require consent today are completely embedded in our daily lives. Consequently, consumers often have little real choice when consenting to privacy policies, making consent anything but a meaningful choice. So, how do we move away from a consent-centric approach towards increased ex-ante scrutiny and ex-post enforcement? To start, we need to acknowledge the lack of meaningfulness in current consent practices. We must also ensure that good data handling is a requirement for a product in the first place, and that companies are heavily sanctioned if their products do not meet data-handling requirements.

2. The limits of individual privacy and the current infrastructures. The privacy discourse could draw from the feminist discourse to emphasize the collective agency of individuals through the rationale of interdependence. Academics like Helen Nissenbaum have done tremendous work on the social dimension of privacy, focusing on the importance of context, spaces, bodies, and relationships to explain the social value of privacy. This work has shifted the discussion around privacy from a traditional liberal agenda to a more collective space. However, we now need much more, both in the conception of privacy in its relational and contextual nature, and in the infrastructures that we rely on to handle and govern data. For example, the notion of group privacy is becoming more relevant as it questions the wider impact on people as a group rather than as individuals. Similarly, cooperative infrastructures for the administration of data could draw from collective experiences that increase social accountability and resist control from the bottom up.

Conclusion

The discourse around privacy could be much better managed, and this essay argues that the discourse and thinking around gender could be fertile ground for a renewed approach to privacy.

In a nutshell, first, we cannot rethink privacy without looking at the hidden structures of domination underpinning the indiscriminate extraction of personal data. Second, we must stop thinking that individual responsibility (through the consent approach) is the only way to protect personal data. Finally, new collective forms of data control and management should be established to decide how best to protect personal data.

This agenda would require women to own this debate and bring these arguments to public fora, for instance through a new declaration of digital rights, which could form the basis of a global conversation hosted by the global institutions that we so desperately need in a time of fragmentation and polarization.

This is the third piece from our special issue on Feminist Re-imagining of Platform Planet.