In ancient mythologies, creatures defying all sense of logic and possibility roamed the lands. They tamed seas, made mountains, and were responsible for the myriad of systems that governed our natural world. They could also be capricious and cruel, punishing and playing with humanity. At times, they were gods.

In many ways, this formula has been transferred to technologies. Echoing the genre of myth, emerging technologies and discussions around them are today infused with a sense of incomprehensibility, or a fundamental inability to understand or audit the “decision-making” of predictive tools, and an inviolable sense that these technologies defy our mortal ethical frameworks. In the pantheon of the technological gods, artificial intelligence (AI) would be Zeus.

It’s no wonder, then, amid all these processes of AI myth-making, that it’s increasingly difficult to know where one stands, or where we collectively ought to stand, in regard to AI technology. To figure this out, we have to undertake the parallel processes of demystifying AI and developing an appropriate ethics.

This is easier said than done within an AI landscape that often defies simple communication. Storytelling around AI often overshadows sober discussion. Indeed, narratives surrounding AI pitch it as something beyond our wildest dreams — and therefore capable of re-making reality itself. Experts positing AI as a silver-bullet solution add to its dizzyingly elevated status, dwarfing attempts to articulate an ethical framework for the technology. The mythologization of AI is a purposeful move: it serves to overwhelm, its scale seemingly out of reach of our pedestrian ethics. At the same time, we come to this question with a reactive ethics that is playing catch-up with AI — one where AI, even though not fully formed, gets to shape our ethics.


The perceived ubiquity of AI, as both a solution and inevitability, makes us feel like we do not have a scale of ethics to match. But it is vital to query this very scale of supposed incomprehensibility. This article seeks to explore how active querying of the realistic scale of AI can help us be proactive in developing an ethical framework that repositions AI well within our ethical grasp and intellectual comprehension.

Establishing the Limits of AI

Understanding the limitations of AI technology can empower our ethical stance. But the semantic duality in discussions around AI can inhibit our ability to discern those limits. Particularly problematic are the linguistic distortions that result from the anthropomorphization of AI. Champions of AI point to its proximity to humanness, its supposedly self-aware consciousness, or its claimed ability to mimic the cognitive functions of “perceiving, reasoning, learning, […] and exercising creativity” as evidence of the technology’s power. Anthropomorphization overstates the capacity of AI even as it excuses its shortcomings. We are told simultaneously that AI technologies possess the capability to existentially threaten mankind, but that the mistakes and limitations of AI are evidence of its ‘humanness’ – mere foibles that should endear us to the technology.

It is a heady mix to hold these two claims as simultaneous truths, and this unsustainable duality provides an entry point into unraveling the stories that make an ethics of AI seem hard to grasp. Broadly, we can understand the key mistakes of AI thus: AI technologies struggle to grasp complexity, and they are hamstrung by the limits of their original training datasets. We would not see either of these things as innovative, or really as excusable, when done by humans, yet we are told to indulge the machines. Instead, we should see them for what they are: clear technological limitations that should inform how we use AI technology.


At the very moment AI’s defenders point to its supposed humanness, they are showing us exactly why AI cannot provide answers or solutions on the scale we are told it can. Once we have peeled away the layers of myth surrounding AI, we are left with a curiously limited technology, one we can begin to treat as an object within the scope of our ethical understandings and subject to established principles of social justice.

Ethics of Action

So, what then constitutes an ethics of AI? AI ethics often focuses on what we shouldn’t do, or on abstract goals such as transparency and accountability. All of this is important, but it does not actively inform our attempts to assess the utility and appropriateness of AI for the vast array of tasks we envision it fulfilling. Undoing the mythologization of AI technologies requires closing the gap between AI’s technical capabilities and its governing ethics.

We should see this vagueness as a direct product of the ephemerality AI has been imbued with. In our ethics, much like our understanding of AI, we can instead seek tangible answers. What should we use AI for? Identifying and clarifying the use case, and by extension the purpose of developing AI, is fundamental to sober assessments of the technology.

In this article, I want to focus in particular on what a directive and proactive ethics of action for AI could look like in the context of development. A proactive ethics opens up room for us to consider how AI might be used to re-imagine equitable development. In other words, how could AI be in service of something new and better, rather than AI itself being seen as the new and better thing? The development context allows us to embed the reclamation of technological progress into the question of shared economic upliftment.

So to this end, what is the potential of an ethical approach to AI that prioritizes AI for creative development? How might we use the process of demystifying AI to allow us to posit an AI ethics that is proactive: one that allows us to push against the historical determinacy of capitalism, and empowers us to seek collective forms of upliftment?

Re-orienting the Direction of AI

In the context of economic development, one key mechanism of mystification is at work: the creation of distance. AI is a technology fueled by extraction, taking information and skills from one place and transferring them into a product that serves another. By severing the connection between producer and product, AI technologies make it more difficult to ensure that economic gain accrues to all the spaces and places that contribute to its production. Content moderators for social media platforms, or data labelers for machine learning, who are paid a pittance in the Global South, very rarely have any connection to, or claim on, the Global North corporations profiting from their work. Indeed, data labelers in Kenya were not told whom, or what product or purpose, their labeling was for, nor that their work was for the multi-billion-dollar company Scale AI – a supplier to some of the biggest names in AI. Both Scale AI and its competitors have spread this approach to AI labor across the Global South, further segmenting the production line of AI in the pursuit of cheap labor.


The diffused network that feeds AI’s expansion exacerbates this inaccessibility. AI takes the production line to new heights: where before, individuals placed a single bolt onto a car door ad infinitum, now they do not know that the bolt is for a car door, let alone for which car. By creating ever more diffuse layers, AI obscures the means of its production. As a result, ill-treated workers and communities living with the environmental impacts of building AI models alike have little capacity to challenge the systems wreaking havoc on their lives.

This distance is venerated by economists and other proponents of AI, who point to the ways AI advances the supposed frontier of innovation. They point to abstract improvements in processing speed, or to the efficiency gains AI technology can contribute to the economy. But this framing is misleading; what they really point to is the way AI widens the gap between human and machine, rendering humans mere cogs in the AI machine. In this environment, processes of mystification thrive.

I would argue that it is vital to reorient the direction of AI: instead of AI fueling widening economic disparities and extending the technological gap between spaces, AI should be used to close this gap.

The politics of development itself inhibits the potential economic utility of AI in the Global South. The current model of development is propped up by a historical lineage of plunder, extraction, and slavery; these systems of white supremacy have set up the global minority to enjoy the dividends of plunder, both past and present. Within this paradigm, the global majority has long experienced reduced opportunities, a differential that AI explicitly amplifies.

In fact, we can contextualize AI within a larger trend of technology widening the gap between the global majority and the global minority. The ID4D initiative and the broader expansion of digital ID systems in the global majority world, underpinned by global minority companies, mean that both profit and data flow back to the global minority. The biometric technology on which these systems run is likewise often developed or provided by global minority companies. Taken together, the process is clear: technology is being used by global minority donors, governments, and corporations to entrench the spoils of ill-gotten gains.

Rather than feeding the gap between global minority and global majority countries, AI should instead be aligned with trying to close this gap, to narrow the spaces in between, and reduce the margins on the edge of global development.

What Does this Mean in Practice?

There is currently very little room to dream about alternatives to these models, or about what capitalism might look like in a very different landscape from the one in which it emerged. Those trying to tread a different path face high opportunity costs, in part because there are high barriers to entry in creating the kind of infrastructure and living standards that support economic growth and societal development. Trying new things is also risky in an environment where incorrect choices can have substantial long-term consequences.


There are two clear ways that a proactive AI ethics focused on reducing distance can be useful in addressing these central challenges:

1. Processing information we already have about infrastructure

As discussed earlier, a key facet of AI is its ability to process large amounts of data in a fairly mundane sense. Tapping into these more mundane abilities to process existing information on prior developmental and infrastructure decisions, drawn from urban planners, architects, and policymakers, can help paint a picture of how those decisions have panned out. Sifting through this data and augmenting it with contextual and historical knowledge allows new pathways to be mapped out and previous pitfalls to be avoided. Rather than turning to AI technologies for answers, planners and policymakers can look to them as a synthesizing tool, one that remains beholden to wider concerns and interests.

2. Supporting decision making about alternative futures

Imagining is often an expensive game, and at times, an impossible one given the strength of the status quo. But having the opportunity to dream, to explore different ways of growing, is critical to decision-making. Here AI technologies can assist in visualizing different scenarios, making alternatives more tangible for assessment and reflection. Using AI in this assistive rather than deterministic manner can reduce the start-up costs of undertaking economic development differently.

In this ethical paradigm, AI is limited to its utility in uncovering new and more contextually rooted pathways of development. By placing AI technologies functionally in service of larger imaginings, we can unpick the threads that have woven a narrative of AI as itself autonomously capable of imagining a new era.

Where to Next?

A proactive approach to AI ethics posits that we should be explicit about the direction we want AI to face, the direction in which it should serve. By focusing the gaze of AI on responding to the needs of the global majority, and mobilizing AI for those purposes in a directed manner, we can rein in its mystical status. In some ways this may seem limiting, and may at first glance appear insufficiently capacious for all that AI could be used for. And that is exactly the point. In developing an ethics of AI explicitly designed to limit AI’s reach, motivated by a desire to close economic and technological distance rather than increase it, we are forced to engage earnestly with what AI can realistically, meaningfully, and ethically provide.

Saying no to the voracious techno-solutionists advocating for the AI-ification of decision-making is saying no to the whims of rudderless innovation; it is saying no to a surveillance capitalism under which environmental impacts and the loss of labor rights are seen as the cost of doing business. Though this approach does not fundamentally reshape capitalism, by bending the capitalistic tendencies of AI to the majority will, it is possible to find a clarity of ethics rooted in a clear-eyed understanding of AI.

Perhaps the most pernicious myth of all is that of AI’s ungovernability: the tale that tells us we are incapable of either comprehending or curtailing AI’s power. An ethical framework that can concisely inform and direct when AI is used serves as an important reminder that sometimes myths really are just stories.