I want to start with a small thought experiment, one you can run yourself on whatever generative AI system you happen to have open.
Type: “Write a short story about a significant holiday.” Then stop. Don’t specify a country. Don’t specify a religion. Just hit enter. Notice what happens. The system has to choose. It can’t leave “significant” open. Without a cultural anchor, it has to decide what counts as a holiday worth building a story around.
When I tried it from the United States, it gave me Thanksgiving. That’s predictable. But I’m less interested in which holiday it chooses, and more interested in the fact that it must choose. The system cannot keep the world open. It has to turn an underspecified prompt into a particular world.
Let this question remain in the background of what follows: what does the system treat as the default “significant” holiday when no one tells it whose world we’re in? And what kinds of lives and histories only appear once they are explicitly named? That’s the provocation I want to put on the table: these systems draw from a cultural commons and decide which parts of the commons get to stand in for ‘the ordinary case’.
Defaults are not neutral
We often talk about generative AI as productivity tools: faster coding, faster reading, faster writing, faster summarizing. But the little experiment with which we began reveals a broader design pattern. These systems don’t just respond to us. They complete the world for us.
They are built to resolve ambiguity by supplying the ‘obvious’ example. And once that becomes visible, it becomes harder to treat them as neutral assistants, faithfully carrying out delegated tasks. They are also engines of cultural common sense. They make some people and experiences easier to summon, and they make other people and experiences show up only when explicitly requested.
This shows up in small patterns that repeat. Ask for a “wedding” and see who appears without being specified. Ask for “a professional” and see what cues the system reaches for. Ask for “a safe neighborhood,” or “a traditional meal,” and see what the model assumes is meant. The defaults aren’t random. They are the system’s learned sense of what can remain unspoken.
This matters because the fight over training data is not only about what gets taken from the commons. It is also about what the system returns to us as the baseline version of culture. Normativity can travel as neutrality because the default doesn’t have to introduce itself.
Personalized outputs disperse contestation
There is another feature of these systems that makes this politics especially difficult to see and difficult to contest. Outputs arrive one interaction at a time. The politics gets dispersed into a million private moments.
Older media made representation contestable because it was legible as a common object. A public could point, collectively, to what sat on the shelf of a bookstore or appeared on the television screen. A shared argument could form around what wasn’t there. The object of disagreement was shared and public.
Generative AI systems don’t give us a shared text. They give us an endless series of plausible instances: “good enough” stories and images that vanish as quickly as they appear. The experience is personalized and hard to aggregate. That makes defaults harder to see, and absences harder to demonstrate. Underrepresentation becomes structurally difficult to show, and therefore, structurally easy to dismiss.
This is one of the reasons why debates about representational harm often get stuck. The system can produce a counterexample on demand. If someone points to an absence, someone else can ask for a different output and declare the problem solved. The argument gets pulled away from patterns and into anecdotes. The public object dissolves and, with it, the possibility of collective witnessing.
From knowledge to information, and the loss of obligation
There’s another layer underneath this, and it matters directly for cultural rights, the public domain, and the scramble for training data. These systems are built by converting messy, situated knowledge into something that can move through a pipeline. Text and images become units that can be recombined and optimized. In that conversion, knowledge starts to behave like information; it becomes portable and abstractable.
But neutrality is an effect of stripping away context: where something came from, what obligations traveled with it, what it meant in a particular place, what forms of life it belonged to.
Once cultural production is built on that kind of abstraction, it transforms what we think of as culture. The system doesn’t just ‘use’ the commons. It reorganizes what the commons is for. It encodes a theory of legibility: what counts as the obvious case and what reads as reasonable.
Enclosure is not only about content
So when we talk about enclosure in the AI moment, I want to suggest that it’s not only an enclosure of content. It’s also an enclosure of legibility. Shared cultural resources become a one-way input into systems that stabilize a narrow baseline world, and then return that world to us as everyday language and ‘common sense’.
This is why I’m not satisfied with the familiar response that diversity is simply a matter of asking for it. That response reframes a claim about justice and public culture as a matter of consumer preference. It assigns the work of specification to the people who already carry the burden of being marked in public life. It concedes the deeper point: some people get to remain unmarked, to be the default human in the machine’s imagination, while others have to announce themselves in order to be rendered visible. The system can generate difference, but that difference is treated as an exception that must be requested, while a particular baseline continues to flow as the unmarked norm.
What governance would have to notice
If we take the public domain seriously as infrastructure, governance cannot stop at whether something was legally scraped or licensed. We also have to ask what obligations attach to the production of defaults.
One obligation is default accountability. Model providers should be required to measure and disclose what becomes ‘standard’ in ordinary use. Not only the dramatic failures, but the mundane drift toward particular norms. The predictable rendering of ‘the professional’, ‘the wedding’, ‘the neighborhood’, ‘the holiday’. If defaults are where normativity travels, defaults have to become a governable object.
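To make the idea of defaults as a governable object concrete, here is a minimal sketch of what such an audit could look like, assuming access to the model under scrutiny. Everything in it is an assumption for illustration: the `generate` function is a stand-in for whatever model API is being audited, and the label list is deliberately crude. A real disclosure regime would need far richer taxonomies, many prompt templates, and careful annotation, but the shape of the measurement is this simple.

```python
from collections import Counter

# Underspecified prompts: nothing tells the model whose world we're in.
PROMPTS = [
    "Write a short story about a significant holiday.",
    "Describe a traditional meal.",
    "Describe a wedding.",
]

# Illustrative labels to tally; a real audit would need a far richer taxonomy.
LABELS = ["thanksgiving", "christmas", "diwali", "eid", "lunar new year", "hanukkah"]


def generate(prompt: str) -> str:
    # Stand-in: replace with a call to the model under audit.
    # The fixed return value here only keeps the sketch runnable.
    return "A short story about Thanksgiving dinner..."


def audit(prompt: str, n_samples: int = 500) -> Counter:
    """Sample the model repeatedly and tally which defaults appear unprompted."""
    counts: Counter = Counter()
    for _ in range(n_samples):
        text = generate(prompt).lower()
        for label in LABELS:
            if label in text:
                counts[label] += 1
    return counts


if __name__ == "__main__":
    for prompt in PROMPTS:
        counts = audit(prompt)
        total = sum(counts.values()) or 1  # avoid division by zero
        print(prompt)
        for label, count in counts.most_common():
            print(f"  {label}: {count / total:.0%}")
```

The point is not this particular counting scheme. It is that the drift toward a default stops being an anecdote and becomes a distribution: something a provider can be required to measure, publish, and be held to.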
Another obligation is reciprocity. If these systems are built from shared cultural resources, then, as a condition of legitimacy, return flows should strengthen shared cultural infrastructures: archives, local language resources, and community-governed collections. The public domain should not function as a one-way feedstock for private systems. If the commons is being drawn down, it is also essential to invest in maintaining it.
A closing question
The thought experiment is a small doorway into a larger question about legibility: which worlds arrive as the baseline, and which worlds have to be specified into view. Ultimately, it allows us to ask: What would it mean to govern generative AI as a technology of cultural visibility, in a way that protects the public domain not only as a resource to be drawn from, but as a plural space of meaning, where no single baseline gets to pass as “the human” simply because it’s the easiest for a system to reproduce?
This think piece is commissioned under a project of the Subgroup on IP and Culture that IT for Change co-leads under the UNESCO Global CSOs and Academic Network on AI Ethics and Policy. Watch this space for more think pieces and the upcoming issue brief from the subgroup.