“You cannot depatriarchalize without decolonizing.
You cannot decolonize without depatriarchalizing.”
- María Galindo
Artificial intelligence (AI) is the holy grail of capital accumulation and socio-political control in contemporary societies. Consequently, companies and governments are rapidly adopting AI as a fundamental tool to increase productivity and efficiency. However, the deployment of AI has also shown great potential for harm, especially for racialized women, non-binary people, and other groups in vulnerable situations, such as platform workers, immigrants, and refugees. The AI industry is concentrated in a few countries, which widens the wealth gap between the high-income economies of the Global North that produce these technologies and the low-income countries of the Global South that consume them and provide the data on which the industry grows. This hegemonic industry is therefore deepening systemic inequalities globally.
To respond to the growing concerns of civil society about the negative impacts of AI, large tech corporations have developed ethical frameworks to reduce the potential harms associated with the deployment of AI technologies. In general, these frameworks offer narrow technical fixes to socio-technical problems and, in many cases, are simply part of a broader strategy of ethics washing. However, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted by 193 states in November 2021, provides a more comprehensive approach to AI risks. The framework considers the complete AI lifecycle (design, development, deployment, use, and disposal) and pays special attention to gender equality, cultural diversity, and the environment. To see this or any other ethical framework materialize, however, we need to remember that AI is a matter of power. It is used as a geopolitical tool that allows the concentration of wealth, knowledge, and control. This concentration of power generates global asymmetries that contribute to the reproduction of oppressive systems such as capitalism, colonialism, and patriarchy.
In this essay, I suggest advancing the feminist and decolonial imagination to develop AI technologies under ethical frameworks that consider intersecting systems of oppression at every step of the AI lifecycle. I also provide examples of AI projects that help us imagine alternative feminist and decolonial technological futures. I conclude by offering possible interventions to build technologies aligned with the right to a dignified life for humans and other beings, and to ensure that the rights of nature are respected.
Decolonizing and Depatriarchalizing AI
Decolonial studies comprise a heterogeneous body of thought that critiques the epistemic domination of rational, modern Western thought, a domination that marks the beginning of modernity and the consolidation of the capitalist system. According to the Peruvian sociologist Aníbal Quijano, the coloniality of power is the “colonial reason” that controls material resources, bodies, territories, subjectivities, intersubjectivities, and knowledge to accomplish the colonial, modern, eurocentric project. Coloniality is the sediment of the colonial experience in contemporary times; it explains the logic that sustains the supremacy of the West. This form of dominion is asserted through the imposition of universal forms of being (social hierarchies of gender, race, class, ability, language, and other intersectionalities), knowing (positivism, rationalism, binarisms), feeling (separation of affect and cognition, subordination of emotions and affects, secularism), doing (efficiency, utility, productivity), and living (consumerism, commodification, individualism). Colonization birthed modern capitalist societies through the dispossession of territories, labor exploitation, and violence against women. Dismantling these interconnected oppressions — materialized in the coloniality of being, knowing, feeling, doing, and living that they sustain — is necessary to decolonize the current hegemonic socio-technical systems. AI reproduces dispossession through data extractivism, epistemic violence, social orders of classification, and automated impoverishment. Therefore, to achieve gender equality and social and environmental justice, we need to decolonize and depatriarchalize AI, moving it away from the circles of capitalist accumulation and from colonial and patriarchal relations.
A feminist decolonial imagination seeks to eradicate multidimensional socio-technical violence at the interconnections of oppressive systems. In this endeavor, it is crucial to unveil how the AI lifecycle and its associated social processes operate cognitively, emotionally, and pragmatically to construct worldviews and ways of relating to and conceiving social existence. For example, AI technologies, particularly machine learning (ML) models, are used to decide who is more likely to commit a crime, who is best suited for financial aid or a job, which languages and whose knowledge are elevated, and so on. Due to the biases incorporated in data and algorithmic models, and the punitive use of these technologies, the deployment of intelligent systems in society often produces discriminatory results for women, non-binary people, immigrants, low-income groups, non-white populations, and non-English speakers. For example, the US Department of Homeland Security relies heavily on cloud services, surveillance technology, and predictive analytics to keep immigrants in check. In this way, these technologies deepen social asymmetries and knowledge gaps. Therefore, the questions that need to be asked are: Who collects and uses whose data? How is this done? Why is data collected? Who designs the algorithms, and for whom? Who benefits from the deployment of ML models? What purposes does automation serve? What notions of gender, race, class, and justice are reinforced? What narratives, visions of the world, and imaginaries of the future are promoted, and at what cost?
The effects of AI are visible in the configuration of multiple relationships at the micro, meso, and macro levels. At the micro level, our personal, social, cultural, and political relationships are algorithmically mediated. For example, there is evidence of digital interactions on social media negatively impacting the mental health of teenage girls. Similarly, the commercialization of data implies a privacy risk, because our digital traces can be linked to our identities and used to influence our behaviors. At the meso level, algorithmically enabled misinformation has a differentiated impact on minorities. At the macro level, algorithmic manipulation has endangered Global South democracies by fueling extremism and polarization. Human-machine interactions also redefine the conception of what it means to be human, the limits and possibilities of human agency, and the power of algorithms in our lives. Additionally, new questions emerge with the advancement of Internet of Things (IoT) and 5G technologies, which foster ubiquitous communication between machines and lead to the permanent surveillance of human life. As a result of capitalist, colonialist, and patriarchal systems of dispossession and exploitation, the material production of AI is dissociated from its effects on racialized women’s bodies and the territories they inhabit. Marginalized women and girls in the Global South are fundamental agents in the global supply chains that enable technological development. The territories they inhabit provide the natural resources required to produce technology, and women’s labor is fundamental to the digital economy. However, women and girls, especially those from Southern countries, are excluded from the circles of privilege that determine the direction of global AI governance, design, and audits. Yet, when deployed, AI systems disproportionately affect Global South communities and territories, deepening the economic, socio-cultural, and knowledge inequalities between regions.
The Feminist and Decolonial AI Imagination
The “Not my AI” project, developed by Coding Rights and Paz Peña, underscores that AI is a feminist issue. It critiques ethical frameworks and principles that do not intend to change current power asymmetries:
“We don’t believe in a fair, ethical and/or inclusive AI if automated decision systems don’t acknowledge structural inequalities and injustices that affect people whose lives are targeted to be managed by these systems. Transparency is not enough if power imbalances are not taken into account.”
Relatedly, we can ask the following questions: Do we need a feminist and decolonial AI? Is it possible in the first place? Why do we need a feminist, decolonial, and intersectional approach when framing the current ethical debates? How can we question AI as power, at the intersection of oppressive systems? Multiple responses to these questions, arising from different situated contexts and embodied experiences, can help achieve gender and social justice and address the socio-technical reproduction of inequalities. For instance, the “Incubating Feminist AI” project by the A+ Alliance is testing hypotheses for developing AI from a Southern feminist perspective. The outputs are yet to be shared, but the alliance offers its vision of how this goal can be achieved.
It is essential that we listen to and learn from the embodied experiences of datafication, algorithmic mediation, and automation in the lives of women and girls, indigenous communities, immigrants, refugees, platform workers, non-binary people, and rural communities across the globe. Their situated knowledge(s) can help us understand algorithmic harms as a complex, layered phenomenon rather than a uniform experience.
In response to the datafication, algorithmic mediation, and automation of social life, communities worldwide are trying to pursue justice on their own terms: developing the technology they need, committing to the community’s best interests, and building pathways to autonomy and a dignified life. Some such initiatives, and the ideas underpinning them, are explored below. They provide insights into different dimensions of AI technologies: feminist values applied to AI design and development, communitarian principles of AI governance, indigenous data stewardship principles, and the recognition of original languages and cultures.
Feminist values
An intersectional analysis of the socio-technical reproduction of gender binaries is important to improve the experiences of trans persons, non-binary people, and women with AI. Intersectional and decolonial feminisms provide the theoretical and methodological tools to develop AI initiatives that account for intersectional identities. Many projects embody feminist and intersectional values in terms of design, data collection, and algorithmic accountability, like the Algorithmic Justice League, which is developing tools for equitable and accountable AI. Other feminists use art and speculative fiction to imagine alternative feminist futures for AI; their manifestos also reflect on the environmental consequences of AI development and the centrality of affect in our relationship with technologies. Techno-feminist radical imagination through play, narratives, and speculative fiction, such as The Oracle for Transfeminist Technologies, offers provocative and farsighted visions of AI, showing how epistemic diversity can help us imagine and create techno-diverse, feminist futures.
Decolonial values
Technological development rooted in colonial rationality is an everyday experience for the majority of the world. Some projects seek to reverse this digital universalism by developing AI technologies that reflect their communities’ worldviews and needs. Many indigenous communities around the world are creating alternative AI technologies that highlight their cultures, languages, and communitarian forms of governance.
For example, the Papa Reo project, developed by a Māori community in New Zealand, is a “Multilingual language platform grounded in indigenous knowledge and ways of thinking, and powered by cutting edge data science.” The project explores how decolonial values can be applied to the development and governance of technologies under non-Western epistemic traditions. To that effect, Papa Reo foregrounds decolonial values through cultural and linguistic diversity, communitarian governance, and data stewardship principles.
- Cultural and linguistic diversity: Many communities around the world, including New Zealand’s Māori community, speak languages other than English. However, speech recognition and natural language processing (NLP) technologies remain unavailable or insufficiently developed for most languages other than English. To overcome this deficit, the Papa Reo project developed an automatic transcription tool for te reo Māori, offered through a speech-to-text application programming interface (API), along with the first-ever synthesized Māori voice.
- Communitarian AI governance: Under the Papa Reo project, the Māori-language NLP tools were developed by the community and for the community. Such anchoring ensures that project proposals and development respond to the community’s needs. This is an example of an AI technology rooted in community values, such as communitarian benefit, distributed decision-making, autonomy, and sovereignty.
- Data stewardship principles: Papa Reo developed its own software license, the Kaitiakitanga License, based on collective guardianship. The license prioritizes the community over the individual, upholding principles such as communitarian benefit rather than extractivism. Unlike ownership, which implies appropriation from a private or commercial perspective, this vision of data protection and stewardship holds that data is a common good of the community and must therefore be protected.
The Papa Reo project website states that data is not owned but cared for under the principle of kaitiakitanga, and that any benefit derived from data flows to the source of the data. “Kaitiakitanga is a principle that expresses guardianship rather than ownership of data,” the website states. It adds that Te Hiku Media, which developed the license, “is merely a caretaker of the data and seeks to ensure that all decisions made about the use of data respect its mana (spiritual power) and that of the people from whom it descends.”
Papa Reo is part of an indigenous innovation movement that has pioneered AI development from alternative ways of knowing and being in the world. The project shows how technologies can be developed to serve the needs of a specific community. It may not be of interest to Big Tech, but it is an opportunity for the community to express its identity and sense of belonging. Similar initiatives, like the CARE Principles for Indigenous Data Governance, promote standards and protocols based on indigenous principles and methodologies that do not involve extractive processes or relationships.
For this purpose, the Mixe linguist and activist Yásnaya Aguilar urges us to think of “tequiologies.” Tequio, in the indigenous communities of Mexico (similar to minga in Andean cultures or mutirão in Brazil), is the practice of working collectively to achieve a common goal, such as building a school or a house, or cleaning the land. She argues for the development of technologies based on collaborative labor that emphasizes living with dignity.
Conclusion
A feminist, intersectional, and decolonial approach can reveal and help us understand the harms associated with the AI lifecycle and how they become embedded in social relationships of power and wealth. In imagining a decolonial and feminist AI, we should center power in the discussion of AI ethics, in a comprehensive and nuanced manner that acknowledges the interconnectedness of historical systems of oppression at the macro, meso, and micro levels. Ethical decisions must be made at every level to guarantee gender equity and social, cultural, and environmental justice at a global scale. In this regard, we need to question the governance of AI and the role it plays in global geopolitics, the extractive processes contributing to the production of AI, the negative impacts of AI in use, the harmful environmental and social effects of AI maintenance and disposal, and the production of universal narratives and imaginaries of the future.
Decoloniality and feminism as praxis require de-linking from every form of oppression, new and old. It is important to stop and dismantle the systemic processes of dispossession and multidimensional violence, and to repair their harms at scale. Through a feminist, decolonial imagination, we can explore the possible forms of AI that can lead us toward constructing dignified lives, convivial tequiologies, and techno-diverse futures.
This is the seventh piece from our special issue on Feminist Re-imagining of Platform Planet.