In 2023, we witnessed a five-month-long tussle between the Writers Guild of America (WGA) and Hollywood studio executives. Amongst other things, artificial intelligence (AI) proved to be a key point of contention in the negotiations. While much has been written about the inevitability of AI replacing humans across a variety of industries, Hollywood was perhaps not an obvious site of struggle. Nevertheless, in the ostensible battle between human beings and AI, it was the humans who drew first blood.
The WGA’s concerns with AI centered on the fact that studios could use the technology to underpay writers by, for instance, asking ChatGPT to come up with an entire script for a movie and then hiring writers merely to fine-tune it. By doing this, studios could claim that the writers’ contributions were minimal. Specifically, the WGA demanded that AI not be used to write or rewrite scripts, and that AI-generated writing not be considered source material, which would cause writers to lose out on credits. Similarly, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which represents actors, objected to studios using the likeness of human actors to generate AI actors that would recreate their performances. Both the WGA and SAG-AFTRA made it abundantly clear that they did not consent to having their work used to train AI models owned and operated by the studios.
A representative of the WGA stated at the start of the strike in 2023, “I’m not worried about the technology. I’m worried about companies using technology that is not, in fact, very good, to undermine our working conditions.” It should be evident that studios are looking at tools like AI primarily to cut costs. However, AI-generated content has not yet reached the levels of creativity, amongst other things, that human-generated content has displayed for centuries. This may be due to how AI models operate. In the case of ChatGPT, the model uses complex statistical methods to generate sentences by guessing, at each step, which word is most likely to come next given the words that precede it. The jury is out on whether humans operate the same way, but one school of thought argues that lived experience shapes how we process information and create stories, music, movies, and so on. Additionally, large language models are trained on far more data than any human will ever have access to, which leaves the content they produce lacking a certain individual character.
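To make the next-word mechanism concrete, here is a minimal sketch in Python of a toy bigram model. It is purely illustrative, not how ChatGPT actually works: real large language models condition on the entire preceding context using neural networks, and the corpus and function names below are invented for the example.

```python
from collections import Counter, defaultdict

# Toy "next-word" model: count which word follows which in a tiny corpus.
# Real LLMs use neural networks over entire contexts; this only shows the
# basic statistical idea of predicting a likely next word.
corpus = "the writers went on strike and the studios met the writers".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily append the most frequent successor of the last word."""
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
# -> "the writers went on strike and the writers went"
```

Even this trivial model “writes” plausible-looking fragments by regurgitating the statistics of its training text, which is precisely why the question of whose text goes into the training data matters.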
However, this victory should not be viewed in a vacuum. While Hollywood tends to be the center of attention, it is by no means the most important of the battlegrounds where man and machine are in conflict. News articles breathlessly declaring that AI is coming to take our jobs come with lists populated by the usual suspects: data entry and administration, customer service, manufacturing and assembly line work, basic analytic roles, etc. These are not categories of work specific to one or two fields, but rather rudimentary components of nearly all current work. Public services, i.e., services provided by the state for the benefit of its citizens, are by no means an exception. Like the decision-makers who sit at the top of the hierarchies of Hollywood, Big Tech, automobile manufacturing, etc., those running public services around the world are in the process of determining how, and indeed how much, AI can be integrated into their fields.
The basic consideration has to do with cutting costs. Public services around the world already tend to be underfunded. For neoliberal schools of thought that advocate a diminished role for the state in favor of privatization, tools like AI become lucrative and will inevitably be used in ways that undermine the interests of workers. This is what makes the Hollywood workers’ victory over the studio executives so important: it sets a pivotal precedent for the role of AI in work relations. However, the framing of the conflict as one between humans and AI is in and of itself a distraction from the real issue, which lies in the ownership and control of the tools.
Now, there are obvious concerns about the efficacy of AI systems. There is much research on how these systems process data, the biases inherent in their design, and the accuracy of their results. Numerous instances have been recorded, in countries including the US, Austria, and Australia, of deserving recipients of welfare services wrongly having their benefits cut off due to AI-based decision-making. This is a deeply concerning issue on its own, but another feature of AI that has not yet received the scholarly attention it deserves is how these systems, with the help of machine learning and access to vast quantities of data, can develop a level of expertise that at its best is on par with that of their human counterparts. A trade union leader representing welfare workers in Norway first brought this to my attention. They were uncomfortable with the fact that these systems were essentially developing a high level of intellect from information fed to them by practitioners who had themselves spent years building their knowledge through skill and effort. Is it fair that workers are expected to help train the very systems that could potentially replace them, or at least diminish their autonomy, in the years to come? Of course, the systems are not singular entities in and of themselves. They are built by technology developers, whose expertise lies in coding, testing, debugging, etc., and owned by capitalists, whose expertise lies in amassing wealth. Yet the same question remains: why should the makers of technology have free access to knowledge bases across any number of fields?
Throughout 2022, I interviewed public sector employees worldwide in healthcare, education, and social welfare about their experiences of working with AI. Western Europe and North America were notably the most advanced regions in terms of technology integration. The impact was most significant in healthcare and social welfare, where decision support systems (DSSs) were commonly used in hospitals and welfare centers. In social welfare, DSSs were used in unemployment services: an individual’s data, such as work history, education, disability status, and criminal record, was entered into the system, which would then match the individual with available jobs based on the profiles of those who typically secure similar positions. Likewise, in child welfare, information like parents’ employment status, health details, and criminal records, along with metrics of a child’s well-being, such as school attendance and health records, was used to assess a child’s risk status. This helped determine whether a child should remain with their family or be placed in foster care. DSSs provided case-by-case recommendations, with policies dictating the practitioner’s flexibility in overriding system decisions. The outcomes, needless to say, significantly impacted the lives of those served by these systems.
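As a rough illustration of what such a DSS does at its core, consider the following Python sketch. Every field name, weight, and threshold here is hypothetical, invented purely for the example; real systems involve far more variables and statistically trained models, and, as discussed above, their recommendations are contested.

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified sketch of a risk-scoring DSS of the kind
# described above. All fields, weights, and thresholds are invented for
# illustration only.

@dataclass
class ChildWelfareCase:
    parent_employed: bool
    parent_health_flags: int       # e.g., number of recorded health concerns
    parent_criminal_record: bool
    school_attendance_rate: float  # 0.0 to 1.0

def risk_score(case: ChildWelfareCase) -> float:
    """Combine case fields into one score (higher = higher assessed risk)."""
    score = 0.0
    if not case.parent_employed:
        score += 1.0
    score += 0.5 * case.parent_health_flags
    if case.parent_criminal_record:
        score += 1.5
    score += 2.0 * (1.0 - case.school_attendance_rate)
    return score

def recommend(case: ChildWelfareCase, threshold: float = 3.0) -> str:
    """The DSS outputs a recommendation; policy dictates whether the
    practitioner may override it."""
    return ("review for intervention" if risk_score(case) >= threshold
            else "remain with family")

case = ChildWelfareCase(parent_employed=False, parent_health_flags=2,
                        parent_criminal_record=False, school_attendance_rate=0.6)
print(recommend(case))  # -> "remain with family" (score 2.8 < 3.0)
```

The point of the sketch is that the weights and the threshold, however they are derived, encode judgments that practitioners may have had no say in, even though the resulting recommendation shapes their casework.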
Furthermore, the rise of ChatGPT and other generative AI tools has sparked significant legal debates, particularly concerning data scraping. Data scraping involves using bots to extract information from websites, a practice contested by many site owners. A notable case involved the now-defunct data analytics firm hiQ, which scraped data from public LinkedIn profiles and took LinkedIn to court after the platform moved to block it. LinkedIn argued that hiQ’s actions were illegal due to the lack of authorization. However, the courts ruled in favor of hiQ, stating that LinkedIn users assume some risk when posting public information and that LinkedIn does not own this data and therefore cannot grant or deny authorization. The court also emphasized that allowing platforms to control access to public information could lead to information monopolies, which would be against the public interest. Another way to look at it, though, would be to question why generative AI models should be allowed to evolve by freely accessing data that does not belong to them. This is more or less the question asked by Hollywood workers, and it is the central point in the discussion about public sector workers and their interaction with AI.
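For readers unfamiliar with the mechanics, data scraping in its simplest form looks something like the following Python sketch, using the widely available requests and BeautifulSoup libraries. The URL and CSS selector are placeholders, not the endpoints or markup any real firm used.

```python
import requests
from bs4 import BeautifulSoup

# Minimal illustration of data scraping: fetch a public page and pull out
# structured fields. The URL and selector below are placeholders.
url = "https://example.com/public-profiles"
response = requests.get(url, headers={"User-Agent": "research-bot/0.1"}, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
names = [tag.get_text(strip=True) for tag in soup.select(".profile-name")]
print(names)
```

Run at scale by automated bots across millions of pages, this is how firms assemble the datasets at issue in cases like hiQ’s.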
If we follow the court’s reasoning, we may conclude that since the information is public, there is no question of the tools needing permission; even so, the individuals to whom the information pertains should at least be able to consent to having their data used to train these models. The EU’s General Data Protection Regulation (GDPR) provides clear guidelines on accessing and using personal data. Article 6 of the GDPR outlines six lawful bases for data collection and processing: consent, contract, legal obligation, vital interests, public task, and legitimate interests. Additionally, the French data protection authority (CNIL) mandates that individuals be notified when their public data is accessed. Consent, a critical concept in the GDPR, was not a factor in the LinkedIn ruling. In the US, a bill introduced in Congress in 2022 aims to clarify these issues, but details are still being discussed, and a vote is not expected for several months.
The deals struck by the trade unions representing Hollywood workers present other unions with an outline of how to protect themselves from AI encroaching on their work. In the case of the welfare workers, it may be argued that the data they are entering is not about themselves but rather about individuals who have consented to having their data processed. However, the workers play a crucial role in data generation. Without them, it would not be possible for the models to access the data, because the workers are not merely performing the clerical task of data entry. They spend time talking to and understanding the welfare recipients, often building relationships with them, which gives them unique insight into the situation and/or condition of the individual in question. What’s more, even if the workers do not have the authority to withhold information from DSSs, they should have the right to determine how these systems are used in the workplace.
This was the crux of the research I was carrying out in 2022—to come up with a framework of rights for public sector workers that would prevent them from being harmed economically, professionally, and emotionally by the integration of AI into their work processes. Many of the workers I spoke to believed that such tools could be and are of benefit to their work processes, but almost all of them, including the ones represented by strong unions, felt they had little say in how these tools were designed and how they would be used.
The problem, therefore, is not the technology itself, but how the owners and makers of the technology want to use it. If they could prove that using it in the way they currently intend benefits everybody, they would be worth listening to. But they can’t, because it doesn’t. Since the start of the Industrial Revolution, technological developments have consistently been leveraged by capitalists as a means to exert more control over the workforce. Middle- and lower-level work tends to be the target of such centralization, which often gives the capitalist a stronger impetus when deciding to downsize and/or underpay their labor. Against such odds, the strengthening of workers’ rights is imperative across all types of work, and Hollywood workers have shown that it can be done. The owners of technology will keep peddling false narratives that promote fear of AI, as though it were some incomprehensible force, but the real trouble is not with the tech; it is with the people behind the tech. The very notion that work done by humans, the kind that requires years of learning, practice, mistakes, and successes that shape an individual’s relationship with and understanding of their work, can instead be replicated by a machine is itself revelatory of what the people running studios, hospitals, schools, etc., think constitutes art, healthcare, education, and all the other fields that fundamentally need people at the center of the practice. AI does have a role in the workplace, but the workers must be allowed to decide the contours of that role.
There currently exists a gap in the literature on how to integrate the voices of workers into the design, deployment, and usage of AI tools in the workplace. While digital data protection laws exist and continue to evolve around the world, there does not appear to be a rights framework for unions to refer to when negotiating these conditions with their employers. Such an absence risks unchecked exploitation of workers, which only serves to exacerbate existing economic realities that favor capital.