In her book Silicon Values, Jillian York describes a meeting with Facebook executives in 2011, a few months after the start of the Arab uprisings. When the conversation turned to Facebook’s role in these events, York recalls, its Chief Operating Officer Sheryl Sandberg “placed her hand over her heart and proclaimed her pride at what Facebook had started in Egypt” (p. 72).

Other tech company representatives are similarly upfront about how they see their role: as key players in political affairs. And even those who don’t like to see themselves that way are in fact making political choices (and policies). Whether it’s collecting and analyzing data to run social media platforms or building algorithms that decide the outcome of people’s mortgage applications, they are not merely providing technological infrastructure.

Creeping Takeover

Because of how incremental the takeover of public roles by the tech industry has been, many of us have not felt the change. As we go about our daily lives, we have become used to relying on tools created by companies that take our data in exchange for ‘free’ services. We don’t give it a second thought when we cannot make a doctor’s appointment without an email address, or if we have to use a commercial messaging service to communicate with our kids’ teachers.

The result is an ironic inversion of values: while the trading of data for services promises ease, transparency, and freedom from overly bureaucratic procedures, navigating our lives has become harder and more opaque – and discrimination more difficult to contest. The tools and rationales of Big Tech have crept their way into government and public administration as well. The effects can be horrendous: erroneous ‘self-learning’ algorithms to detect alleged welfare fraud, for example, have cost lives. For many who sense that they have been treated unfairly, it is impossible to find out what exactly caused the harm. People have to file freedom of information requests just to learn that an algorithm set the length of their prison sentence. The more public institutions rely on the tools, services, and rationales of tech companies, the more we are losing democratic control over their processes and decisions.

Some argue that focusing too much on privacy can distract from the larger societal harms of Big Tech’s creep into public policy, or even enable the continuation of that takeover. Tamar Sharon argues that privacy concerns regarding the tools provided by tech giants have sidelined the larger, political consequences of this development – such as the concentration of data, power, and agency in private corporations that have no public accountability. We nevertheless believe that privacy is important and useful. But if we want to avoid over-relying on it as a silver bullet, we need to steer clear of two fallacies: first, that privacy is primarily an individual good, and second, that ‘fixing’ privacy is the key to making digital practices more ethical. Both assumptions are wrong.

Why Individual Control is Not Enough

In most regulatory and ethical frameworks around the world, privacy is understood as an individual right. The key role played by informed consent demonstrates this point: informed consent symbolizes the right of individuals to have a say in how their data will be used. Important as it is, this focus on individual-level control is insufficient to tackle the problems of the digital age. First, the way in which our data is used can benefit or harm a much wider range of people than those who – in theory – get to give or deny consent. Mark Taylor drew attention to this in the context of genetic information more than 10 years ago. Since then, various scholars have challenged the assumption that privacy is necessarily tied to individuals (see, for example, here, here, or here).

Which brings us to our second point: privacy is just as much a collective good as it is an individual one. Privacy is the right of people – both as individuals and as parts of a collective – to determine which parts of their lives others can see, and how they can use this information. Research shows that most people are happy to share their data when it is used to benefit themselves or others – and without harming anyone. Collectively, people want a say in how data is used, for what purposes, and who pays for it in one way or another. This ultimately relates to the question of what kind of society we want to live in. To give an example, our own research found that, in connection with Covid-19 contact tracing apps, many people see ‘privacy’ as being as much about excessive surveillance as about the security of their own data.

In short, the strong focus on individual control over data has isolated the relationship between data subjects and data users from the political economy within which it is taking place. Addressing inequalities and harms in digital societies requires strengthening mechanisms of collective control over data in addition to individual-level control.

Why We Need Data Solidarity

To solve these problems, we need to do two things: first, we need to change key parts of our data governance frameworks. Second, we need to tackle those aspects of our economy and society that have enabled the harmful data practices that we see today. One approach that helps us to get there is data solidarity.

To start with, regulators and policymakers must pay much more attention to the public value that specific instances of data use create. This includes looking at the risks posed by different kinds of data use (not only by different types of data).

This is not the same as deciding whether a data use is in the public interest or not. The latter question is typically answered with a clear yes or no. For example, using people’s health data to develop a tool that detects skin conditions in light-skinned people will often be considered in the public interest – as will a tool that does the same for all skin tones. The latter, however, creates more public value, because the number of beneficiaries is larger and those beneficiaries are more likely to include members of marginalized communities. As a society, we should pay more attention to public value when deciding which types of data use we support, which ones we tax, and which ones we prohibit. Data uses where the imbalance between risk and public value is particularly extreme – where risks are high and public value is low or absent – need to be prohibited altogether, with severe fines and effective enforcement. Where both public value and the risk of harm are high, data solidarity requires effective harm mitigation instruments and low-threshold access to recourse.
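
To make this logic concrete, the sketch below shows one hypothetical way of expressing the mapping from a data use’s public value and risk to the governance responses just described. The function, the binary categories, and the choice of which combinations are taxed or merely tolerated are illustrative assumptions on our part, not an existing policy rule and not the assessment tool we mention below.

```python
from enum import Enum

class Level(Enum):
    LOW = "low"
    HIGH = "high"

def governance_response(public_value: Level, risk: Level) -> str:
    """Hypothetical mapping from (public value, risk) to a governance
    response; categories and outcomes are placeholders for illustration."""
    if public_value is Level.LOW and risk is Level.HIGH:
        # Extreme imbalance: high risk, little or no public value.
        return "prohibit, with severe fines and effective enforcement"
    if public_value is Level.HIGH and risk is Level.HIGH:
        # Valuable but risky: allow only with safeguards in place.
        return "permit, with harm mitigation and low-threshold recourse"
    if public_value is Level.HIGH and risk is Level.LOW:
        # Clearly beneficial and comparatively safe: actively support.
        return "support"
    # Low value, low risk: tolerate, and possibly tax (our assumption).
    return "tolerate, and consider taxing"

# Example: a data use with little public value but high risk.
print(governance_response(Level.LOW, Level.HIGH))
```

In practice, of course, neither public value nor risk is binary, and both are distributed unevenly across different parts of the public, which is precisely the point of the next paragraph.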

A stronger orientation towards public value would also acknowledge that different parts of the public are not affected in the same way by the same data use. While the use of facial recognition software by the police may be in the ‘public interest’ in some cases, the same practice can put others at such high risk that the public value of this data use is negative. Focusing on public value moves us away from treating ‘the public’ as a homogeneous entity. (Also for this reason, we are currently developing a tool for the structured assessment of the public value of different data uses, one that prioritizes redistributive and environmentally sustainable data uses.)

In addition, deeper and more drastic changes are needed. Societies that chase profit and economic growth will always incentivize irresponsible data use that sacrifices people’s wellbeing for the sake of economic gain. People should not have to surrender their freedoms to companies in order to reap the fruits of digital societies. Scholars and activists – including Nobel Peace Prize laureates Dmitry Muratov and Maria Ressa – are proposing concrete steps in this direction. And people are already looking for practical alternatives: the purchase of Twitter by Elon Musk spurred millions to join the non-profit social network Mastodon. It seems that people won’t put up with just any behavior from large tech companies – if alternatives are available. Policymakers need to explore how they can facilitate the design, launch, and scaling up of such alternatives. Now really is the time to end the misery in the metaverse.

The authors are grateful to the Lancet & Financial Times Governing Health Futures Commission for supporting and inspiring their work on this topic. They would also like to thank Guli Dolev-Hashiloni, Tamar Sharon, Jill Toh, and Hendrik Wagenaar for helpful comments on this article.