At first glance, AI’s everything-everywhere moment seems to have no end in sight. AI commentary has ceaselessly swallowed whatever policy, business, and cultural attention opens up, and the phenomenon shows no signs of letting up.

Yet, for as long as the AI hype has existed, so has fear of the dreaded ‘bubble.’ Despite fervent mythmaking, high finance has watched nervously as, over the years, the yawning gap between AI investments and their returns has continued to grow. On a spectrum that ranges from staunch belief in a fully automated era to utter skepticism about the house of cards it seems to be propped up on, the truth about AI can seem unknowable and uncertain, much like the fate of the cat in Schrödinger’s famed thought experiment.

What is not uncertain, however, is that as with all paradigms that promise to unleash great disruption, developmental justice is being reshaped and remade in ways that only take us farther from the promise of egalitarian and sustainable economies and societies.

The well-established political economy of the digital has already entrenched deep-seated North-South inequities, effected a backsliding of rights, and expanded corporate power to new heights. In doing so, it has concentrated wealth and resources, legitimized reckless speculation as a business practice, and continues to test the limits of the planet without care or consequence. The AI phenomenon, controlled as it is by the most powerful players of this constellation, is forging its path on this dangerous precedent. The window to course-correct and re-steer its direction is rapidly shrinking. The primary question that should concern us, then, is not whether AI will survive the hype bubble but whether humanity’s future will survive its current trajectory.

Equally importantly, in reducing AI’s future to two bleak scenarios, ‘the takeover of the machines’ or ‘bubble and bust,’ we deny ourselves the opportunity to truly appreciate the possibility of multiple AI-led futures: ones oriented towards life-affirming goals other than profit-making or human obsolescence, ones cast in new southern-led imaginaries beyond trite Silicon Valley speak, and ones that center sustainability rather than merely mitigate unsustainability.

In the convergence of these aspirations, the possibility of a regenerative AI emerges. In the interviews that follow, scholars, activists, and practitioners who came together for a two-day workshop, ‘Towards Regenerative AI: Frames for Inclusive, Indigenous, and Intentional Innovation’, engage with these questions. From laying out problem areas to teasing out possible solutions and alternatives, these conversations capture a rich slice of a larger, multi-faceted dialogue on the future of the AI economy.

Speaking about addressing data and AI harms, James Farrar, Founder and Director of the Worker Info Exchange, sheds light on emerging dynamic work and pay systems within the gig economy and the ways in which they can trigger a race to the bottom. Contesting the idea that there is no alternative, he urges regulators to avoid the trap of inevitability and not to slide back on well-established human rights and labor protections, while fortifying the same against the new complexities wrought by AI.

In deliberating about the possibility of building South-led AI economies, Shyam Krishna, Research Leader at RAND Europe, comments on how AI has made value generation a fragmented process that does not allow communities to come together. Challenging the centrality of sovereignty, which has been a frame for southern economies to chart their place in the AI ecosystem, Krishna points to the limits of sovereignty as an inherently state-centric concept. He urges us to broaden our thinking about South-led AI economies by highlighting cooperative and community-based forms of ownership and value-making.

On designing AI models for local context and control, Ramya Chandrasekhar, a researcher at the French National Centre for Scientific Research (CNRS), stresses the importance of operationalizing what we mean by the ‘local’ when we talk about localized AI. Commenting on the data-intensive foundation model building that preoccupies most large AI companies, she shares the challenges and opportunities of building alternative datasets and models rooted in communitarian contexts and values, ones that are more responsible and purpose-specific.

Considering what it takes to tackle Big Tech’s market dominance, Rafael Zanatta, Director of Data Privacy Brazil, lays out the complexity of the global AI supply chain and highlights the inadequacy of antitrust laws in preventing vertical integration and data-driven mergers and acquisitions. Moving beyond price, he points to the accumulation of data by monopolies and its power to shape and modify people’s behavior, arguing for rebuilt theories of harm and a renewed case for the generation of public goods.

When asked about making the right choices for sustainable AI economies, Mandvi Kulshreshta, Senior Program Advisor at Friedrich-Ebert-Stiftung, cautions against the ongoing dilution of ESG measures, noting that these may prove incapable of mitigating the ecological harms AI may bring. She asks us to look to the practices and successes of smaller social enterprise actors who have married ecological preservation with capital gains.