Dear Reader,

In recent weeks, long-simmering tensions have erupted into open conflict, as the unprovoked assault on Iran by the U.S. and Israel has triggered a devastating regional war alongside a rapidly escalating economic and energy crisis with no clear resolution in sight. From the outset, the conflict has exposed the deeper and more troubling dimensions of Silicon Valley’s ties to America’s infamous military-industrial complex. Allegations that artificial intelligence may have played a role in the catastrophic strike on an Iranian elementary school have already cast a shadow over the technology’s expanding use in warfare.

Emerging reports suggest that the system involved was Project Maven—a name familiar to those who have followed labor activism within the tech sector. Originally developed by Google, the project became the center of intense internal backlash in 2018, as thousands of employees protested the company’s involvement in military AI initiatives. Workers organized walkouts, circulated petitions, and, in some cases, resigned in protest, ultimately compelling Google to withdraw from the contract. The project did not disappear, however. It was subsequently taken over by Palantir, which has spent the past five years integrating and deploying the AI system within U.S. military operations.

While this episode stands as a testament to the moral concerns of white-collar tech workers and underscores the importance of building stronger alliances with them, it also reveals the limits of isolated resistance when confronted with entrenched structural forces. A similar pattern appears to be unfolding today in the case of Anthropic. Recoiling from the use of its AI model in the latest military campaign in Venezuela, the company has found itself in a public dispute with the U.S. government. It has filed a lawsuit seeking to place restrictions on the ways in which its technology can be deployed by the Pentagon. In retaliation, the Trump administration has moved to sanction the company as a national supply-chain risk, threatening to debilitate its business operations. Meanwhile, rivals like Elon Musk’s xAI have already volunteered to take over the military contract.

All that said, the deep irony in Silicon Valley’s rush to reinforce American geopolitical power is that, in doing so, it may be undermining the very foundations of its own dominance. While it is now widely acknowledged that the war has spiraled beyond U.S. control, commentators have also begun to warn of its potentially severe consequences for the AI boom. Chief among these is the surging cost of energy, an increase that many analysts suggest could take years to reverse. This poses a serious challenge for an industry whose energy demands are expanding at an exponential rate. At the same time, it is important to remember that the Gulf states have long functioned as a critical node in the financial architecture underpinning Silicon Valley. Their investment commitments are deeply embedded across the ecosystem: from hedge funds and venture capital to start-ups and large-scale data center projects. The instability now affecting these countries threatens to disrupt these financial flows and relationships, raising the prospect of significant setbacks for the industry as a whole.

Stepping back to the broader digital policy landscape, it is crucial to track the political dynamics unfolding at this level. Analysts have pointed to a recent directive from the Trump administration instructing U.S. diplomats and affiliated actors to challenge and obstruct data sovereignty initiatives around the world. When viewed alongside the prominence of these issues in last year’s tariff negotiations—and the administration’s approach to key multilateral forums such as the G20 and the WTO—it becomes clear that efforts to assert digital sovereignty are being treated as a significant threat to American hegemony. In this context, it is likely that the full weight of U.S. sanctions and diplomatic pressure will be deployed to deter countries from pursuing such policies. Given the stark asymmetries of power involved, countering this pressure will require the formation of new alliances and cooperative blocs. Strengthening solidarities among countries in the Global South may prove essential for advancing a more autonomous and equitable technological future.

Also in digital policy news, this month brought a decisive verdict in a trial before the Los Angeles Superior Court that may prove to be a turning point for social media regulation. The lawsuit, selected as a representative case from thousands of similar filings, named Meta and Google as defendants and argued that the structure of their platforms reinforces addiction and other forms of psychological harm. The jury found these claims persuasive and ruled in favor of the plaintiff, awarding damages of $6 million.

The broader significance of the case lies in the legal question it foregrounds. As commentators have noted, the central issue is whether companies can be held liable for harms arising from the design features of their platforms. This has far-reaching implications, as it could begin to erode the intermediary liability protections, most notably Section 230 in the United States, that these firms have long relied on to shield themselves from accountability. In effect, the case helps to politicize platform interfaces and the design choices embedded within them. The scope of this shift is potentially wide: it includes algorithmic feedback loops, gamified engagement systems, and features such as infinite scroll and autoplay. Now that these elements have been recognized as possible grounds for legal action, they may open the door to greater scrutiny and accountability in how platforms operate.

The trial also appears to mark a new threshold in the growing wave of concern over the harms associated with social media. It followed closely on the heels of another major court decision in New Mexico, which held Meta liable for child sex trafficking crimes conducted through its platform. Together, these developments suggest that momentum is building toward a more expansive legal reckoning for the social media industry.

Finally, further down the value chain, this month has seen signs of a growing wave of resistance. A recent report brings together grassroots accounts from across the Global South: expanding movements opposing the spread of energy-intensive data centers in Chile, new gains secured by workers in the data labeling and content moderation sectors in Kenya, and emerging forms of collective organizing challenging algorithmic management practices in the Philippines. Taken together, these developments point to a clear pattern of contestation over how AI systems are being introduced and embedded in local contexts. Notably, many of these initiatives appear to have arisen organically, which makes them a significant complement to the more coordinated struggles that platform workers in these regions have been building over a longer period.

Indeed, these more established worker movements have made meaningful breakthroughs this month as well. Following sustained pressure from workers, South Africa’s Ministry of Employment and Labour has announced a series of proposed amendments to national labor law that could extend key social protections to gig workers. This development is part of a broader wave of regulatory action across the continent, with countries such as Egypt, Kenya, and Nigeria also being pushed to introduce new measures to support platform workers, particularly in the ride-hailing sector.

Notably, South Africa’s proposal does not target platform work explicitly. Instead, it introduces a broader legal presumption that anyone providing a service to another party should be classified as an employee unless proven otherwise. This approach sidesteps familiar debates over worker classification and shifts the burden onto platforms to demonstrate that workers are genuinely self-employed. As analysts observe, this is a difficult case to make given the high degree of control platforms typically exert over workers’ conditions and activities. Such strategic legal approaches offer important lessons for future organizing and advocacy around platform labor, especially in the lead-up to the International Labour Conference scheduled for June.

The Sins & Synergies Lounge

This month, tune into Lisa Nakamura in conversation on her new book, which interrogates the attention economy by asking who must be ignored for it to function and foregrounding the often-erased contributions of women of color in building the internet.

For a dose of early internet nostalgia, explore the Web Design Museum, a playful archive of Flash-era games and interfaces that captures a very different moment in the web’s cultural history.

Also read Digital Futures Lab’s policy brief on advancing open-source AI in India, which offers timely recommendations for building public alternatives and reducing dependence on dominant global tech infrastructures.

Don’t miss the Ada Lovelace Institute’s investigation into the growing use of AI transcription tools in social care, which weighs the promise of reduced administrative burden against the risks of bias, inaccuracy, and weak oversight in high-stakes public services.

Also check out MIT Technology Review’s analysis of how AI is turning the Iran conflict into spectacle, showing how synthetic media, betting markets, and dashboards are reshaping the theater of war itself.

Lastly, read FIAN International’s assessment of the FAO’s role in the digital transformation of agri-food systems and how its data protection measures and digital initiatives for smallholders fall short of upholding human rights principles and protecting marginalized groups.