New developments in artificial intelligence (AI) over the last year have created new opportunities as well as risks. Governments worldwide have identified policy priorities pertaining to AI. These have included:
- Algorithmic accountability, transparency, and standards: As AI models start making more decisions that affect the lives of people, including workers, consumers, and public figures, questions of algorithmic accountability become critical. New legal provisions in different countries seek to institute transparency requirements for models and/or their development and deployment. Some sectoral standards also apply to AI models.
- Liability: AI models, especially those employed in sensitive sectors like healthcare and education, may be subject to the same liability conditions as other software or human practitioners in those sectors. Since AI models can often ‘hallucinate’ (generate incorrect information), their inclusion even in less sensitive applications, such as customer service, must be weighed against the cost of inaccuracy.
- Governance of computational infrastructure: A complex supply chain of computation makes today’s AI models possible. This includes chip design, manufacturing, assembly, testing, and packaging. Governments have been paying an increasing amount of attention to the economic governance of computational markets.
- Antitrust/competition policy: AI markets are extraordinarily concentrated. Vertical integration between cloud service providers, chip designers, AI model developers, and AI service providers is of particular concern. As AI is expected to affect a large number of industries, AI market concentration can have severe effects on the general concentration of wealth and power.
- Data governance: AI models require a large amount of data for training and fine-tuning. Questions of personal data protection appear with renewed urgency, especially as Big Tech firms use data from their products and services to train AI models. The value of non-personal data also becomes evident, as does the question of who captures this value.
- Geopolitics and unequal development: Geopolitical rivalries dominate some parts of the AI policy sphere, especially on control over computational infrastructure. Countries in the Global South have particular concerns about unequal access to computational infrastructure and unfair global digital taxation regimes.
- Misinformation and disinformation: AI models make it possible to generate mis- and disinformation at scale. Deepfakes have already been used during elections and have affected various political and other public personalities. Vulnerable populations are at increased risk of being targeted, and even of facing violence incited through disinformation.
- AI-mediated unemployment: The widespread automation caused by AI in various sectors is likely to threaten many livelihoods. Unlike previous iterations of technology, there is little indication that AI will create more jobs than it replaces.
- Environmental harms: Data centers that make AI possible require a tremendous amount of energy and water. Governments are interested in a cost-benefit analysis of AI products and the effect of their production on the environment.
AI Regulation and Trade
At the World Trade Organization (WTO), trade rules being negotiated right now will have a lasting impact on governments’ ability to regulate AI. There are many issues with the negotiations on the Joint Statement Initiative (JSI) on E-commerce.
- Article 17, open government data: Government data, like any other data, is a resource. It may be legitimate to use this data for public welfare. However, it is difficult to justify its use by Big Tech firms which do not share their own data as a resource. Data is a primary asset to improve the capabilities of AI models. Making government data available openly, especially at no or reasonable cost as the text stipulates, means disadvantaging the public sector and smaller firms in favor of larger firms. The provision in Article 17.7(c) disallowing governments from restricting users from “using the data for commercial and non-commercial purposes, including in the process of production of a new product or service” is particularly concerning. While the language of the provision is only that governments must “endeavor” to carry these activities out, such language can still represent an obligation. Governments must instead:
- Explore more controlled ways of sharing public data, especially to create competitive markets and avoid capture by Big Tech; and
- Work on methods to develop mandatory sharing of non-personal data by firms that currently hold the largest amounts of such non-personal data.
- Provisions that state, for instance, “for greater certainty, this Article is without prejudice to a Party’s laws pertaining to intellectual property and personal data protection”, or “this Article applies to measures by a Party with respect to data held by the central government, disclosure of which is not restricted under domestic law” are still insufficient to ensure that governments can make their own laws with regard to data. This is because it is unclear by what date such laws must have been in place in order not to contravene the agreement, and whether laws made in the future are exempt.
- Article 10.3, electronic authentication and electronic signatures: This provision deregulates the security of electronic transactions, disallowing governments from making rules for commercial transactions. With the increasing use of AI in authentication, specific cybersecurity and fraud risks arise from model hallucinations and other shortcomings. Governments must retain their existing powers to regulate authentication and electronic transactions as they see fit. The agreement text does not allow governments enough leeway to do so: laws and rules must already be in place to be legitimate. Provisions in Article 10.4 are not sufficient because governments may require more than standards or certification for authentication.
- Article 6, prudential exceptions: While the provision allows governments to derogate from the agreement for prudential regulation, this subsection makes that power too contingent. If measures that fail to meet the agreement’s obligations cannot be used as a means of avoiding those obligations, they are not really exceptions. As discussed above, prudential regulation of the use of AI in finance may be particularly important, and a more explicit carveout is necessary.
- Article 5, security exception: The agreement adopts the same security exceptions as Article XXI of the GATT 1994 and Article XIV bis of the GATS. This exception is extremely limited. The exception applies only to fissionable and fusionable materials, supplying military establishments with arms, ammunition and services, and wartime policymaking. The use of AI in military applications is now rampant and not just in wartime. Additionally, security considerations are also important when AI is used to run or compromise critical public infrastructure. Instead of being limited by this stringent constraint, governments worldwide must have the ability to regulate the use of AI in defense in the public interest, as well as to enter agreements to prevent cascading and catastrophic consequences from the use of AI in such applications. In general, the sovereignty of a country in determining the use of technology in defense applications must not be limited by trade agreements.
Other Issues in Digital Trade for AI Regulation
- Free cross-border data flows: Provisions banning governments from attaching conditions to cross-border data transfers have been part of earlier drafts of the JSI E-commerce Agreement. Such provisions would be disastrous for governments’ ability to adequately protect their citizens’ personal data, to tax data, and to meet national security obligations. The effect of these restrictions on antitrust options would also be significant, since the separation of data resources from computational resources and model development could form a core of antitrust policy in some countries. They would also threaten national competitiveness in AI development, as countries with greater AI preparedness, in terms of computational infrastructure, would be able to freely exploit other countries’ relative advantage in data, and thus develop superior AI models.
- Restrictions on source code disclosures: Any provision that would ban governments from requiring disclosure of ‘source codes’ would threaten algorithmic accountability, transparency, and standards. The widespread use of AI in all sectors means that such a provision cannot pass muster in practical terms.
- E-commerce moratorium: Renewing or making permanent the moratorium on customs duties on electronic transmissions would erode a significant revenue base of many countries in an age where not just services, but also products, have a digital component. Any product or service that contained an AI component, for instance, could evade customs duties for that component. Given that much of the value of such products now accrues to the digital component, this evasion represents a significant loss for countries that import these products. An analysis by South Centre’s Rashmi Banga shows that between 2017 and 2020, developing countries and least developed countries (LDCs) lost tariff revenue worth USD 56 billion due to the e-commerce moratorium being in place. It is therefore not advisable to renew the e-commerce moratorium under current conditions.
AI today is dominated by Big Tech; regulating this sector is a priority for several countries. No country has yet developed an adequate regulatory package to tackle the impact of this sector on the global economy and society in general. A premature and restrictive agreement that goes beyond its mandate of trade can hamper countries’ abilities to find new and creative ways to appropriately regulate this sector in the public interest. An agreement on the digital aspects of trade is important, but if it is not arrived at through true multilateral consensus, significant issues of concern to many countries, such as those outlined above, will be missed.