When it comes to creating and deploying responsible artificial intelligence (AI), there is a lot to think about, including the many trade-offs involved. It is not enough to make the technology ‘safe’ and then market it as ‘responsible’ to users. We also need to delve into the harder questions of how AI shapes and shifts resources, rights, and power.
Balancing Resources: Data, Computing, and AI Talent
Imagine AI as a game board with three crucial pieces: data, computing power, and AI talent. These pieces are essential for any AI system to function well. But here’s the problem: only a few big players hold them. This creates an imbalance, making it hard, if not impossible, for other players to join the AI game.
For instance, the data used to train AI models mostly comes from a small group of leading institutions and companies, mainly in the US. This means the models learn from a limited perspective, missing out on diverse experiences, languages, and viewpoints from elsewhere. Additionally, the vast computing power needed to develop AI models is controlled by a select few, creating another barrier that keeps new players from competing fairly. While these actors benefit from this arrangement, they often externalize the high costs of human feedback, the water footprint, and the energy consumption required to train ever-larger AI models. The AI talent pool is also limited, with top-tier researchers often working for a small number of elite institutions and companies. Overall, AI development and research are driven by a narrow group, which leads to structural biases and tiered outcomes.
Solving the Resource Imbalance: Towards Planetary Cooperation
Talking about responsible AI without adequate accountability is like talking about damages and losses without assigning liability and compensation to redress the harms caused by one party to another.
In an age of planetary competition over AI resources, ‘planetary cooperation’ is the need of the hour. In planetary cooperation, countries, companies, and other actors team up to share resources and reduce risks by investing in, creating, and using more digital public goods, such as open data and knowledge commons like the Digital Bargaining Hub and open educational resources (OER). By pooling our strengths, we can make AI fairer, more accessible, and better, if we need it at all.
One idea that moves us toward planetary cooperation in AI is the principle of ‘common but differentiated responsibilities and respective capabilities’ (CBDR–RC), which has been used in negotiating climate action. Rooted in equity, it is akin to sharing chores and responsibilities at home: everyone helps (ideally), but some do more based on their abilities or on how they stand to benefit or be harmed. This principle could help level the AI playing field, ensuring that more people and places benefit from and have a say in AI’s development and use, while its costs, risks, and harms to individuals, groups, and the planet are taken into account. This would make AI more sustainable today and lead to a more equitable future.
Recentering Rights and Politics in Making AI Truly Responsible
Another challenge is making sure that AI respects our rights. Companies and institutions usually focus more on optimizing AI to be safer than on making it responsible through actual accountability. Yes, AI should follow ethical rules that reflect and align with our values. But there’s a catch: these rules and their technical responses often avoid, or are devoid of, rights as politics. This is a bit like having a driver in a safe, well-functioning car who does not – or only selectively – follow the traffic rules!
Imagine if AI made decisions that went against what’s fair or just. There is a high chance the individuals and groups harmed would never know about it if there is no critical, independent oversight that is adequately resourced and empowered. That’s why we need to bring politics into the picture to make AI truly responsible. AI is imbued with politics. Thus, responsible AI must be political and have politics to address and properly mitigate its socio-technical harms.
Furthermore, rights, too, are political, as they can be denied, challenged, weakened, or completely circumvented, such as in the misclassification of ride-hailing drivers as independent contractors rather than employees. Our rights – like the right to privacy or dissent and the freedom of expression or association – are bound up with politics: politics enacts rights and allows us to claim them. This means that we not only have a say in how AI is used, but can also demand greater transparency, reporting, and accountability in how it is developed, without placing undue burden on those who are already at risk or being harmed.
Responsibility is Rights as Politics
To make sure AI is responsible and respects our rights, we need a mix of technical rules, standards, assessments, audits, and tests, together with public conversations, political decisions, and rights as politics in equal measure. This means involving people from different backgrounds, cultures, and perspectives. It could be done by fostering a well-resourced and healthy civil society, an engaged citizenry with data literacy, and diverse media coverage of AI.
Furthermore, we need greater independence and separation of interests and funding among academia, companies, standards bodies, and governments for a more vibrant and resilient AI ecosystem. These are conditional on having institutions that are empowered to protect and guarantee the interests and voices of those with fewer resources, rights, and power. By having a diverse group of voices that recognize AI is political and has politics, we can make sure that AI is not just efficient and safe, but also inclusive, affirming our rights as politics and politics as rights.
Shifting Power: Who Controls and Benefits from AI?
Lastly, let’s talk about power. AI can change the way we live, work, and interact with each other. But who gets to decide how AI is developed and used?
Right now, a small group of big companies and institutions has a lot of say in AI’s direction. Often, both the problem definition (such as AI’s risks and dooms) and the solutions to these problems (such as a more responsible and safer AI) are set and captured by the same select few. This can lead to decisions that benefit them by design, but not necessarily the rest of us. Essentially, they hold the solutions to the problems they defined and created. Meanwhile, they can make money from both ends – what a steal.
This is where the concept of power comes in. AI can shape new possibilities and opportunities, so decisions about it must be made inclusively, with care and foresight. We need real counterbalances and voices beyond the usual places and people. We need to make sure that power is distributed so that more people can benefit from this technology, while we also internalize and take into account its costs to people and the planet, today and tomorrow.
Balancing Power: Inclusive Decision-Making
To rebalance power, we need to involve more people and perspectives in deciding how AI is developed and used. This means including different voices, from ordinary citizens to experts, having difficult dialogues, and shaping our common future with AI in a way that empowers us and enhances our quality of life. By doing this, we can better confront a situation where a few powerful players capture and control AI’s direction.
Creating more responsible AI is both aspirational and imperative. It is also a complex but doable task. The real question, however, is: do we really want responsible AI that is accountable only to those with resources, rights, and power?
To realize a more responsible AI, we need to go beyond technical responses and tackle the underlying challenges of imbalanced resources, rights, and power, so that AI benefits the rest of us. Let’s embrace these challenges, and the trade-offs inherent in responsible AI, as opportunities. By working together, sharing resources, and involving diverse voices, we can make it possible for AI to be a tool for good.