Me: My flight was canceled, and I want to check if the refund claim I made yesterday is being correctly processed.

Chatbot: Hmmm. Sorry, I didn’t understand that. I’m a new Chatbot and still learning. I’m better with simple, short questions. If you prefer to choose a topic from our menu, type ‘Help’ (31 March 2023).

Chatbots are not new. Neither is artificial intelligence (AI), for that matter. However, until recently, the questions chatbots could answer were extremely simple, and interacting with them was usually frustrating. And while we have long interacted regularly with advanced AI, Google's search engine and Amazon's marketplace pricing system being two prominent examples, something changed when ChatGPT was released in November 2022. Not only was its rate of adoption faster than that of any previous platform, but the effects of integrating ChatGPT, and Large Language Models (LLMs) more generally, into our everyday lives have already gone far beyond anything we had witnessed before.

Still, what remains unchanged is Big Tech's dominance: currently, AI development hugely benefits US Big Tech.

Dominance of US Big Tech over Frontier AI

I asked Bing's integrated ChatGPT whether Microsoft currently wields intellectual monopoly. Here is how it responded: "I’m sorry, but I couldn’t find any recent information about Microsoft being an intellectual monopoly. The most recent information I found was about the United States v. Microsoft Corporation case in the late 1990s and early 2000s, which I mentioned in my previous message. Is there anything else you would like to know?"

Judging from the above answer, ChatGPT does not seem to be aware of its own positionality in the Big Tech ecosystem. But US Big Tech companies, in particular Microsoft and Google, today control the AI field, appropriating public research and tapping into scientific and intellectual talent from various universities.

My research has looked at US Big Tech's dominance of the AI field in detail. As a first step, I proxied the network of actors in frontier AI research by plotting the organizations with the highest frequency of presentations at the top 14 AI scientific conferences. US and Chinese Big Tech are all part of this network, with Google and Microsoft occupying the most central positions. What's more, Microsoft's node is the bridge connecting Western and Chinese organizations, a clear sign of its strategic geopolitical role. Microsoft is the only US giant well-positioned in China, where it opened its first major R&D campus outside the US in 2010. This presence has resulted in regular collaborations on AI development with major Chinese players, from Alibaba and Tencent to leading universities. Because it regularly co-authors AI papers with all of them, as well as with many major Western organizations involved in AI R&D (which rarely co-author with Chinese organizations), Microsoft connects the whole AI field, profiting from the research of the most talented scientists and engineers from around the world.

US Big Tech companies also loom large over these conferences' committees. For instance, every Big Tech organization has at least one member on the organizing committee of NeurIPS, the most prestigious annual machine learning conference. Google, which had the largest number of accepted papers at the 2022 edition,* had nine representatives on the 39-member committee. The company is also the leading acquirer of AI start-ups, as seen with DeepMind, acquired in 2014, one of the most significant players in the AI race and a heavy investor in patents. All these signs point to Google profiting from AI, even if it is under stress now that Microsoft and OpenAI have taken the lead in the artificial general intelligence race.

Interestingly, when it comes to AI patenting, the behavior of Big Tech companies varies. While patents seem to be central to Google’s strategy, secrecy is Amazon’s preferred appropriation mechanism, which also explains why it presents less research than the other tech giants in AI conferences.

Unlike Google, Microsoft's recent strategy has privileged investments in AI start-ups over patents and acquisitions. Companies receiving such funding often remain formally separate but are ultimately, at least partially, controlled by Microsoft. OpenAI, which developed ChatGPT, is a testament to this strategy. Microsoft's first investment in OpenAI, of USD 1 billion, dates back to 2019. In exchange for funding, Microsoft negotiated an exclusive license to GPT-3. Shortly after Microsoft stepped in, a group of AI researchers left OpenAI over internal tensions about its research direction and priorities, demonstrating Microsoft's growing hold over OpenAI's trajectory. Crucially, OpenAI depends on Microsoft's computing power, without which training its LLMs would have been impossible. After ChatGPT's success and its almost immediate integration into Microsoft Bing, Microsoft committed an additional USD 10 billion investment in OpenAI. According to interviewees, by early 2023 Microsoft owned 49% of OpenAI.

Investing in OpenAI without acquiring it has proved a more favorable strategy for Microsoft than Google's acquisition of DeepMind. Microsoft has been able to expand the sales base of its AI services because even its rivals deploy OpenAI's ChatGPT Plus. Additionally, operating through OpenAI diverts regulators' attention and keeps public concern at bay. Microsoft CEO Satya Nadella even claimed that the new wave of AI was not privileging incumbents like Microsoft but entrants like OpenAI, without mentioning that the latter operates almost as a Microsoft satellite.

Why We Shouldn’t Let ChatGPT Learn at the Expense of Human Learning

LLMs are not just an extra step in the development (and monopolization) of AI. Unlike earlier AI models, which were ultimately developed for a specific purpose, such as setting prices on Amazon, generative AI is remarkably general-purpose. Its functions range from writing essays to producing code from detailed prompts, and every day new applications emerge. However, this fast, ubiquitous adoption has not been coupled with an expansion of real access to the underlying technology. We currently use LLMs as black boxes, almost without a clue about how they work. This raises the question: what space is left for human learning from digital technologies when only usage of the application is permitted, and under conditions set by a few companies?

LLMs are a special type of deep learning algorithm. Deep learning refers to models that improve the more data they are trained on, and the more they are used, the more data they generate for retraining. Every time we ask ChatGPT a question, more data suitable for retraining it is produced, creating a constant stream of inputs that makes ChatGPT better and better. An Amazon corporate lawyer even told employees that they must not provide ChatGPT with “any Amazon confidential information (including Amazon codes you are working on)” and added that this recommendation was “important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material)”.
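The feedback loop described above can be sketched in a few lines of code. This is a deliberately toy illustration of the mechanism, not any vendor's actual pipeline: every name and structure here is hypothetical, and the "model" is a placeholder.

```python
# Toy sketch of the data feedback loop: every user query becomes a
# candidate training example for the next model iteration.
# All names are illustrative; no real service works exactly like this.

class ChatService:
    def __init__(self):
        # Grows with every interaction, including anything
        # confidential a user happens to paste in.
        self.training_buffer = []

    def answer(self, prompt: str) -> str:
        reply = f"(model reply to: {prompt})"  # placeholder inference
        # The interaction itself is logged as future training data.
        self.training_buffer.append({"prompt": prompt, "reply": reply})
        return reply

    def retrain(self) -> int:
        # In a real system this would fine-tune the model on the buffer;
        # here we just report how many new examples accumulated.
        examples = len(self.training_buffer)
        self.training_buffer.clear()
        return examples

service = ChatService()
service.answer("Summarize our confidential Q3 report")
service.answer("Write a haiku about clouds")
print(service.retrain())  # → 2
```

The point of the sketch is the asymmetry it makes visible: users see only `answer()`, while the operator alone holds the buffer and decides when and how to retrain.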

There are far too many signs pointing to the consolidation of a deepening divide in learning capabilities between a few corporate giants and the rest of the world. Big Tech companies have attracted the most talented AI scientists and engineers, either as full-time employees or through part-time positions that let them retain their academic affiliations while working for Big Tech on secret cutting-edge projects. Meta, in fact, continued hiring for AI-related positions even while conducting massive layoffs. It is not only universities or small firms that suffer from the consequent brain drain: my interviews confirm that even large multinational corporations struggle to recruit computer scientists and engineers to develop AI. Other actors will thus be forced to purchase off-the-shelf cloud AI services from Amazon, Microsoft, and Google, three players that together hold 65% of the market.

What a paradox for the ‘knowledge economy’! A small group of people and machines learn, while the overwhelming majority of the world risks losing learning skills as intelligent chatbots spoon-feed us with (not necessarily reliable) answers. Beyond the economic effects and the associated rise in inequalities, this process is widening the gap between knowledge and ignorance, potentially even affecting our chances to envision and develop alternatives.

Could Open Source be the Solution?

Since ChatGPT was introduced, other private alternatives have entered the scene, and several LLMs have been made available as open source. However, the latter are not as capable as the private models and do not represent, at least so far, a serious threat to the Microsoft-OpenAI tandem. One of these open-source LLMs was developed by Hugging Face, a start-up backed by Amazon. It has around 176 billion parameters, a sign of how advanced the model is, but current frontier models have more than a trillion parameters.

Another limitation of open-source LLMs is that they cannot be used directly. They are a general-purpose technology that requires further adaptation for user adoption, for instance through interfaces like ChatGPT. US Big Tech has gained an advantage here, having already integrated different frontier LLMs into their clouds and other services, making adoption easier for companies and other organizations. These models also require a lot of computing power, which further favors adoption through the cloud. Given the above-mentioned shortage of AI scientists and engineers, it is also highly unlikely that other companies will have the capabilities to tailor open-source LLMs to their needs; applying open-source AI may even end up being more expensive. Meanwhile, the pressure to apply AI is mounting globally.

Applying AI: Damned if We Do, Damned if We Don’t?

Shall we accept that Big Tech companies have the best AI technology and use it widely, or reject AI in an attempt to mitigate the exacerbated intellectual and economic concentration it is generating? Certainly neither. This is not a call against using AI but against letting a handful of companies capture the cutting-edge technologies of our time and decide how they are developed and regulated.

Much of the policy discussion since the release of ChatGPT revolves around the agency of generative AI models. Even Big Tech and OpenAI's top management have advocated for regulating AI uses, thereby diverting attention from regulating what type of AI is coded, by whom, and who profits from it. In other words, an extreme focus on the agency of generative AI risks overlooking the role of AI's agents, i.e., the Big Tech companies that are its main controllers. AI is co-produced by many, as the participants of top AI conferences attest. Furthermore, everyone who uses AI contributes to its self-improvement. But Big Tech disproportionately captures the profits.

Society must discuss, in democratic spaces, whether and what type of AI should be developed, by whom, and for what. This discussion is inseparable from agreeing on what type of data will be harvested and how it will be governed (who will decide access, how data will be accessed, etc.). There is still time to shape these technologies and redistribute their gains. However, the clock is ticking: the more people and organizations adopt ChatGPT and the like, the harder it will be to reshape routines so that the production and use of AI help us solve major global challenges, instead of replacing labor, fostering inequalities, and ultimately worsening the critical times we live in.

* Affiliations appear as Google, Google Research, Google Brain, and DeepMind.