When Disha Ravi, an environmental activist, was arrested in India in 2021, it caused shockwaves because it revealed the threat that the combined forces of government and corporate surveillance pose to activist groups. Ravi was part of a larger group of activists who were communicating online to produce a social media toolkit supporting the protests against a series of farm bills passed by the Indian government. This toolkit contained strategies to amplify the farmers’ protests in globally mediated environments. According to the Delhi Police who arrested her, Ravi and her fellow activists Nikita Jacob and Shantanu Muluk were accused of defaming India and creating unrest in the country. Though the charges against Ravi were later dropped for lack of evidence, the fact of her arrest raised weighty questions about how democratic activists can act safely online. That Ravi’s IP address was traced through a Google Doc only heightened concerns about safety, privacy, and security.

From a feminist perspective, safety and protection are not simple subjects. Activists and scholars have long argued for thinking of digital spaces as places where gender norms are both subverted and reinforced. Digital spaces threaten non-normative, minoritized populations with harassment, arrest, surveillance, and further ‘offline’ organized acts of violence against communities. Concomitantly, the ideals of protection and security can inadvertently bolster the power of states, corporations, and self-appointed protectors of the weak in the name of creating safety online. While most discussions of safety in online spaces focus on what tech corporations should do to stop online harassment, hate speech, doxxing, and other harms, we can learn grounded practices of cybersecurity from activists who have had to develop them in the interstices of communications systems that are valuable but not trustworthy. Focusing on social media companies is an important step in holding these powerful institutions to account. Yet incremental policy interventions or technical solutions that, for instance, use AI to detect hate speech will never address the larger conditions that have evolved alongside and through these technical systems. These include the well-funded ability of right-wing groups to harass progressive voices online, the value proposition of social media companies that depends on circulating inflammatory content as a proxy for the efficacy of advertising, and the increasing density of algorithmically-mediated watching in the name of protecting national space and majority sensibilities.

By contrast, activist communities make holding social media companies accountable into only one aspect of producing safety online. In what I’ve taken to describing as ‘a cryptographic hustle’, I’ve witnessed democratic activists in the Indian diaspora taking numerous measures to protect themselves as they work to build anti-caste, anti-Islamophobic, pro-farmer coalitions online. A hustle is a way of eking out a living through a system designed for someone else, based on practices of economic ingenuity. In digital spaces, hustles describe how people, especially those who are included in these spaces mostly as objects of predation, put together different gigs, opportunities, and platforms to try to take advantage of the terms of engagement that platform companies offer. The hustle is a form of activity that can also be applied more broadly to how those for whom digital systems were not designed learn to work within these systems and bend them to their purposes.

In truth, I continue to wrestle with how well a ‘hustle’ describes these cryptographic practices. While hustling in its economic sense can be associated with bravado and ultimately mastery, in terms of adapting to regimes of state surveillance and corporate incentive structures, attaining mastery may be forever out of reach. Expensive surveillance products like the Pegasus software system and tech companies’ default settings that prioritize visibility over safety make hustling an increasingly tricky endeavor. Still, I hold on to the idea of the hustle, along with other ideas like folk cryptography and layered cybersecurity, to indicate how activists approach this fraught communications field: with bravura and with multiple, tested practices of evading being watched at crucial times and of watching back all the time. I try to understand these folk knowledges in terms of their possibilities, internal contradictions, and even continuities with the regimes they are opposing. In other words, I hope the idea of the hustle recognizes without romanticizing.

Cryptographic Practices Beyond Cybersecurity

A cryptographic hustle exists in the interstices of formal practices of cybersecurity, a field of study that often describes users as the weakest link in creating safe computer systems. Cybersecurity can be an alienating term due to the field’s characterization of online safety as consisting of “elite consultants” on the one hand and “lazy and uninterested” individual consumers on the other, to cite descriptions provided by the cybersecurity scholars Gioe, Goodman, and Wanless and by the Reconfigure Network, a feminist security collective. In my current research project, I interview participants from democratic social movements located, broadly, in the Indian diaspora; that is, they live in North America, Europe, Australia, and Asia, and they identify as members of the Indian polity-at-large. This project traces the relationship between maintaining democratic practice and highly mediated environments. Because this is a ‘Covid’ ethnography, I conduct interviews from a corner of my home, over the phone or through encrypted video calls. I ask my interlocutors how they communicate, what is important to them in their communications practices, and how they feel about being safe online. I have learned that the activists I interview do not often use the term cybersecurity. But they do often talk about making their communications more secure. The tactics they use to secure their communications do not insist that humans are at fault. Instead, they recognize that the systems themselves are risky because they are made to serve the interests of corporate, state, and para-state actors.

I conceive of the tactics of the cryptographic hustle as forms of secret writing, and of decoding others’ messages, that create plausible deniability of participation in any act of dissent. The hustle involves protecting messages from compromise. It also involves decrypting messages from various actors on the internet, from unofficial right-wing actors to state officials, to understand how activists are being watched. As postcolonial and feminist science and technology scholars like Anita Say Chan, Gabrielle Hecht, and Ruha Benjamin contend, these efforts demonstrate that technologies have politics. These techno-politics make some data available for appropriation and reappropriation even while other kinds of data are blackboxed. In assembling folk knowledge about how to write in secret, activists create their own infrastructures for practicing dissent, infrastructures that co-construct social and technical practices.

Through practices of secret writing and habits of decoding adversarial messages, activist communities produce what Fred Moten and Stefano Harney have called “fugitive publics”, spaces of refuge “from the people who took us in”. In their work The Undercommons, Moten and Harney argue that fugitive publics are spaces of refuge that “need to be conserved, which is to say moved, hidden, restarted with the same joke, the same story”.

Readers might reasonably expect this essay to describe the specific cryptographic hustles that activists use. But beyond describing the conditions that make a cryptographic hustle necessary for activists, I would rather invoke an ethnographic refusal, protecting these tactics from open discussion. It is not that these tactics are unusual, surprising, unknown, or in themselves secret, but rather that in telling about specific cases, it is very hard to know what else I might be revealing about the people who are doing this work. That caution nicely illustrates the anti-democratic surround in which most activists operate. On the one hand, open communications channels, like Twitter, allow activists to build support for local causes; on the other, sharing means being vulnerable, because of the default open settings of the technologies activists use and the surveillance practices of state actors.

The well-known case of Disha Ravi, Nikita Jacob, and Shantanu Muluk that opened this essay is one example of this type of vulnerability. It was Twitter, or more precisely, a tweet by the climate activist Greta Thunberg, that led to their arrests. Thunberg tweeted a link to a social media toolkit, originating in Vancouver, that activists across the globe edited as a guide to supporting the Indian farmers’ protests online through hashtags, links, and concise statements. The protest, originating in Punjab, India, involved tens of thousands of people opposing a series of new laws designed to privatize agricultural markets. Support for the farmers’ protests was a global phenomenon, with supporters sharing hashtags and organizing support marches, gatherings, and car and tractor rallies in the U.S., Canada, and Europe. Despite this significant, if scattered, international media attention, it was Thunberg’s tweet – coupled with an earlier message from the international pop star Rihanna – that really caused consternation among India’s central government officials and their unofficial civilian supporters, who had been claiming that the protests were an internal matter and that international actors should not get involved. Members of the government IT cell and enthusiastic anti-protest volunteers went into digital forensic mode, quickly surfacing the fact that the toolkit was created by a group from Vancouver and subsequently edited by activists across the world, including Ravi. The activists made their written contributions to a public Google Doc, and that’s how they were caught. The public document was easily accessed by digital sleuths, and that information was turned over to the Delhi police. Google, for its part, provided the IP addresses of the editors to the police. The editors located in India were arrested immediately.

Ravi was granted bail on February 23, 2021, and the laws were repealed by the Lok Sabha on November 29, 2021. The charges against her included conspiracy and sedition, the latter originating in the 1870 amendment to the Indian Penal Code passed under British rule. In light of these arrests, and the arrest of the Dalit activist Nodeep Kaur, Twitter acceded to government demands to ban accounts associated with hashtags supporting the farmers’ protests. Although the case against Ravi was dropped, her arrest illustrates the overlapping fields of corporate platforms privileging the value of sharing over privacy; the negotiations these corporations make with national governments to maintain their market share; nation-states’ use of digital surveillance to govern public protest; and the geographically uneven relationship to risk that these factors engender.

The particular case of these activists illustrates the climate of suspicion in which activists across the US, Europe, South Asia, and the Pacific work to sustain democratic dissent. Black Lives Matter protestors in the US experienced police violence, social media surveillance, and continued harassment in the summer of 2020, as they had during the uprisings in Ferguson. This even led some to suffer from depression and suicidal thoughts, as detailed in the pages of Allissa V. Richardson’s magisterial Bearing Witness While Black.

Navigating the Cryptographic Hustle

In groups that I am a part of, activists, both from India and abroad, have concluded that the pressure to turn the tide in India can come from three sources: protest movements on the ground, the realm of international human rights, and the court of public opinion. So, while it might be better to entirely avoid tools of collaboration that are leaky, or to condemn people who neglect their security practices by forgetting to make a document private, they recognize that people are fallible and that these systems are a necessary evil.

In one example of how they have adjusted to this reality, activists decided who would be the author of a group document by weighing members’ real-world vulnerabilities: first, exposure to violence, including having family members who might be exposed to violence; second, economic security; and third, identity categories like race and caste, which could make a member a particularly attractive target for doxxing, harassment, and abuse. This is an instance of layered cybersecurity. These decisions were made in real time, as a particular set of events was unfolding. Rather than trying to fix technology or technology companies, this practice worked within and around the incentives of technology companies to enact its own politics. Since technology companies design products in terms of individualized identity and authorship, collective practices of authorship and organizing can be ‘hidden’ behind a particular identity that takes on the responsibility of posting, authoring, or interacting with various communities or social media platforms. Sometimes this may involve creating a group moniker or account. At other times, especially on platforms that demand individualized identities, this practice requires a real-time analysis – a form of reading the social situation, which is a form of decryption that tries to track how specific platforms might afford protection to, or leave exposed, particular individuals. These collective practices of cybersecurity are rarely recognized in the design or understanding of online safety.

These layered practices do not mean that there is always, or even mostly, agreement about what risk is, how it should be assessed, and how much safety is attainable. In another instance, activists located across three continents debated whether a social media hashtag directly coupling the term ‘rape’ with the Prime Minister’s name was a good or bad idea for publicizing a case of violence against an oppressed-caste woman. In an exchange conducted over an encrypted channel, the activists focused on the degree to which connecting the name of the Prime Minister with the word ‘rape’ would be ineffective – because the hashtag would almost certainly be banned immediately – and would expose those who used it, or were linked to those who used it, in India to arrest and violence. In the end, some, but not most, of the participants used a modified version of the proposed hashtag that did not make this linguistic connection between the word ‘rape’ and the name of the Prime Minister. For a day, one participant located in Europe, who was a strong proponent of the first version of the hashtag because of its potential virality, left the group. She returned after a side conversation with another member.

Layered cybersecurity necessitates these debates; it requires working out in conversation what the hustle allows and what its limits should be. In other words, when activists find refuge in online spaces, they must also make room for dissent about how to use the infrastructures of networked communications. Since the way these infrastructures operate is constantly changing, activists do their own sort of forensics and testing to understand what they are dealing with: which tactics will be taken up by the algorithm and which will be banned, which campaigns right-wing adversaries are running, how those adversaries use the systems of moderation put in place by tech companies to cut off minority groups’ access to mediated publics, what vulnerabilities different platforms present to groups as they communicate, and what the latest scandals around backdoors, government surveillance, and unintentional leaks mean for current organizing.

Infrastructures of Dissent: Democracy and the Cryptographic Hustle

All of these questions outline the infrastructural conditions for producing dissent. Dissensus, as continental philosopher Jacques Rancière describes it, is a precondition for democracy because it entails changing the distribution of what is sensible, by widening what subjects are part of common decision-making. Yet, to the degree that Rancière turns his analysis to historical examples, his model of dissensus can tell us little about how dissent is maintained in arenas where mediated spheres keep communities separate, joining them together mostly to capture attention. In this arena, the structures that maintain dissent rely on the embodied experience of standing outside the status quo along with careful attention to the modes of communicating dissensus that social media affords. In this relay between the felt, sensory experience of being disparate (set apart) and the tactics used to articulate a different mode of making decisions in common, a new practice of dissent can emerge. This practice requires sharing tactics, creating an aesthetics of expansion that values alternate ways of celebrating and reminds participants of why and how they dissent, while working to protect channels of communication that enable dissent in the first place. What I have been calling the cryptographic hustle joins a long, historical list of secret writing and strategic decoding that surround resistance movements. From the tape recorders and radio transmitters used by the South African liberation movement to the evasive decoding of protesters in Hong Kong, to the even earlier use of chapatis to circulate knowledge in the countryside, the cryptographic hustle develops to meet a surveillant terrain. Yet, it is important not to overstate its possibility for success. 
For one, the capacity of governments to infiltrate communications networks is perhaps unprecedented, especially given the predilection of social media companies to mostly work within government demands for the sake of access to populations. For another, anti-democratic tactics that take on the language of dissent emerge almost simultaneously with the publication of dissent strategies in social media outlets. This language counters one experience of exclusion with a mirror image of that same exclusion. For instance, a conference in the name of combatting Hindutva is immediately countered by a conference in the name of combatting something the organizers call Hinduphobia. These moves and countermoves show how dissensus can be captured: it can be turned into its opposite, a consensus around how to do politics (through move and countermove) that closes off the space for alternatives. As Meredith Clark demonstrates, social media platforms become “an arena of contention” where actors “develop collective identities to symbolically resist and devalue subjugation”, but they are also spaces where those identities can be mimicked, betrayed, and subverted.

The cryptographic hustle protects not only communications that are tactical. It also protects the spaces where new modes of doing democracy can evolve. In other words, rather than focusing on cybersecurity as a problem that rests solely on the shoulders of communities to solve for themselves, the cryptographic hustle shows how communities work within and around a given system and its constraints. These practices are, at times, countered and defeated by forms of surveillance and modes of co-option and mimicry that subvert their potential to reach toward another mode of democratic action. Yet, the degree to which the hustle moves beyond individualized and gender-normative ideas of protection can also provide a refuge for forming unlooked-for modes of politics that may undo, for a time, the normal way of resolving democracy into a battle of interests. Instead, it can reach for a politics that widens the scope of who and what are participating in decisions in common.

This is the fourth piece from our special issue on Feminist Re-imagining of Platform Planet.