When the European Union (EU) looks back on the decade of 2014-2024, it will see political cooperation among independent states that frantically legislated on digital topics while the region struggled to work out how to define its own strategic interests.

However, where have the European communities landed in practice?

The concept of digital sovereignty emerged in France as early as 2012 in response to the ever-increasing dominance of US web giants. Senator Catherine Morin-Desailly proposed a European information superhighway with parts and services built in France, to help accelerate an autonomous and self-determining European economy.

However, it was the election of Donald Trump in 2016 that pushed digital sovereignty front and center in other, larger member states, such as Germany. Trump made clear to EU leaders, without sugarcoating, that the strategic interests of the US had shifted and that taking responsibility, and ultimately paying, for European welfare and defense was no longer a core concern. Especially for Germany, which for generations has hosted American troops and military bases on its territory, and which struggles to find young people willing to serve in the country’s still under-invested army, this was a rude awakening.

Meanwhile, in the wake of the refugee crisis of 2015, a Commissioner was also appointed to take care of the European way of life, including European values on the internet. All this led to the 2019 appointment of the next EU Commission, in which the concept of European digital sovereignty was elevated to a top priority. As part of this, a EUR 750 billion fund was allocated to assist European countries in directing strategic digital investments, and money was invested in think tanks, such as Stiftung Neue Verantwortung and the Munich Security Conference, to identify German and European interests in the world. Additional cyber-crisis aversion funds were proposed within the EU Chips Act, which aimed at establishing hardware sovereignty through a crisis toolbox.

However, the European recovery funds program, in stark contrast to its American counterpart, is nowhere described as geopolitically threatening. While the US has the competence and the desire to spend its government money on promoting, defending, and advancing American companies, innovation, and interests, the EU’s investment program has the maintenance of the status quo as its explicit goal. The lack of both domestically and geopolitically relevant strategies within these efforts leaves the impression of the EU hammering square pegs into round holes. More than ever, especially in strategically important areas such as sovereignty and security, Europe struggles to work past its policy of the least common denominator, aptly described as the “balance of impotence” by Wolfgang Streeck in 2018.

Consider this: no fewer than five cybersecurity acts have been established in the last five years alone to help leverage the size of the European market for improved security. Unfortunately, the torrent of legislation makes member states feel either threatened or overwhelmed, and they often try to avoid implementation. With weak enforcement, adherence to commonly established rules and norms becomes selective; larger member states more often invoke the rules to advance their own interests at the expense of smaller ones than to genuinely pursue the goals those rules were established for. The response of European institutions is then to make yet another law, the implementation of which can again be ignored by member states. The problem is now so obvious that there are calls for making the entire next legislature one of implementation rather than legislation.

Globally, we’ve become used to the EU being busy with its own internal machinations rather than skillfully and swiftly executing impactful policies. Even in times of dire crisis and war, it is the US that acts as the primary coordination point for member states in choosing their responses; consider, for instance, both the immediate and longer-term responses of European governments to the 2022 war in Ukraine. The only time the highest functionary of the European Commission, Ursula von der Leyen, expressed a forceful opinion on a foreign policy issue, the Gaza war, she prioritized her national, German policy over the commonly decided foreign policy of European member states, landing Europe in a position of complacency in the face of potential human rights violations rather than one of a staunch defender abroad of the principles the region claims to champion at home.

With this in mind, it’s difficult to see how the EU’s call for global tech leadership in data protection and artificial intelligence (AI) fairness could be effective. In fact, the European AI Act is a most pertinent example of how Europe’s lack of internal coordination makes it lag behind in global leadership. Meanwhile, China and the US both have national standardization strategies for new technologies that they are not just putting down on paper: these strategies include methodologies for developing national positions and advancing them in international standardization fora with the help of domestic corporate expertise and strategic foreign investments.

Before the proverbial ink has dried in the EU’s online legal database, EUR-Lex, the International Organization for Standardization (ISO) has already released an internationally recognized standard for designing fair AI algorithms, championed, of course, by the American and Chinese actors who are the leading developers of AI algorithms. Meanwhile, Europe is dedicating itself to carving out law-enforcement exceptions to the principles it just adopted, missing the race not only to the technology itself, but also to the values it tried to imbue that technology with.

Europe’s internal consensus-making machinery suffers from the same inefficiencies and politics as multilateral global fora for standardization, leading to constant delays and weak, inflexible compromise positions even before geopolitically strategic discussions are reached.

Where Chinese and American companies are highly motivated to participate in multilateral as well as industry-driven standardization processes, going so far as to launch their own platforms for standards development, European companies prefer to latch on to processes initiated by someone else.

Thus, the introspective and ineffective EU of the digital sphere stands in stark contrast to the EU of the ‘Brussels Effect,’ a term coined in 2012 by Professor Anu Bradford. The term refers to the “global power that the EU is exercising through its legal institutions and standards, and how it successfully exports that influence to the rest of the world.” The export happens through the use of market mechanisms and “without the need to use international institutions or seek other nations’ cooperation,” and it reflects a decisive EU to which many of its Brussels-based functionaries only wish they belonged. The ‘Brussels Effect’ is the attribution to the EU of leadership that it may well have been able to carry, and that it has, in fact, historically carried in areas as diverse as social welfare and automation.

Consider, for instance, that within the EU, Germany has some of the longest-standing privacy protections in the world. These can be traced back to the constitutional reforms adopted in Germany in the years immediately following the Second World War, which prevented any future centralization of identification data of the kind that had been abused by the Nazi government. Other countries, like Sweden, were early adopters of data protection-specific legislation, to spur and aid the then already ongoing computerization of society. At the Council of Europe, automated data processing has been a recurring topic in the context of personal data and fundamental rights since the 1970s.

Today, data protection in the EU and the European Economic Area (EEA) is governed by the European Convention on Human Rights, to which all EU member states are signatory parties; the EU Charter of Fundamental Rights, since the entry into force of the Lisbon Treaty in 2009; and the General Data Protection Regulation (GDPR). The GDPR is arguably the toughest data protection law in the world.

By contrast, India’s Digital Personal Data Protection Act (DPDPA) was passed in 2023 after undergoing a long journey of about 17 years. The first Private Member’s Bill on data protection was tabled in Parliament in 2006, long before the European Commission put the GDPR on the table in 2012.

At the time of writing, the DPDPA 2023 has not yet come into force. The Rules under the Act, which will provide further detail about its implementation, are also awaited. While the Supreme Court of India held in the Puttaswamy judgment (2017) that privacy is an inalienable, fundamental right protected under Article 21 and Part III of the Indian Constitution, the then Minister of State for Electronics and Information Technology said in a 2022 interview that “the GDPR is a little more ‘absolutist’ in how they approach data protection and such a law is not possible for India because of the country’s ‘thriving ecosystem of innovators.’”

It is pertinent to note the view that personal data protection may be detrimental to innovation in India, and to remark on the contrast between this view and the legislated call for data protection by design and by default enshrined in European law. While the seven data protection principles of the GDPR were adopted by the DPDPA 2023, the differences between the two pieces of legislation stand out considerably:

  1. The DPDPA 2023 does not differentiate between ‘personal data’ and ‘sensitive personal data’, while the GDPR distinguishes several categories of personal data.
    • India’s population is diverse along many dimensions, including race, ethnicity, and religion. Caste, sexual orientation, health-related information, political, religious, or philosophical beliefs, and genetic and biometric data therefore receive no special protection under the DPDPA 2023.
  2. The GDPR covers both digital and non-digital personal data, whereas the DPDPA only considers data in digital form, whether collected digitally or collected via non-digital means and digitized later.
  3. The GDPR contains exceptions for freedom of expression and information, such as for journalistic, academic, artistic, or literary purposes. The DPDPA 2023 does not make these exceptions, which has been a cause for concern.
  4. The GDPR has much stricter requirements for the transfer of personal data outside of the EU: it generally prohibits such transfers, with the possibility of allowing them to a jurisdiction once it has been scrutinized and approved, whereas the DPDPA follows a blocklist under which a third country may be designated as unsuitable for data transfers.

The Information Technology (Amendment) Act 2008 (ITAA) and its Rules are by now outdated and do not address newer data collection practices. While the legislators of 16 years ago focused on data collectors rather than data controllers, today Indians are still waiting for an effectively enforceable legal framework for privacy and security.

The ITAA comes into the picture only when data breaches occur, and contains few, if any, mechanisms to prevent breaches beforehand. Even the data security practices that are contained in the Rules are poorly enforced, allowing companies to collect more data than necessary. This has led to businesses in India collecting, processing, and transferring vast amounts of data on the basis of broad consent requests that do not clearly explain what the data will be used for. Inconsistent data collection, bundled consent, and excessive transfer of data, often beyond what is truly needed, lead to a chaotic situation and unempowered citizens.

It does not then look, on the face of it, as if the Brussels Effect is particularly strong, or indeed even real, at least insofar as data protection and security in India are concerned. To underline this point even further, the websites of many services in India, such as large travel aggregators and multinational hospitality chains, are not available in the EU because of their non-compliance with the GDPR.

Indian companies, heralded as especially innovative by the previous Minister of State for Electronics and Information Technology, would rather exit the European market than adopt sound data protection and security principles. However, this is not to say that India itself manages to be a leader or an inspiration for others.

Like the EU’s, India’s IT policy is focused inwards on domestic problems, which end up inspiring only domestic solutions. Until now, the vastness of the Indian home market has been sufficient for would-be entrepreneurs, but like the EU, India must raise its gaze if it wants to be a leader in the multipolar world of tomorrow. And maybe here is where we find a chance for the Brussels Effect to redeem itself.

Under the EU AI Act, Indian AI companies that collect data as described previously will need to make significant changes to their systems to ensure compliance. They may have to make sure they do not slurp up EU data by mistake, but with such safeguards they can be competitive in EU markets for automated customer support as well as in security applications.

While India does not currently foresee creating legislation specifically for AI, it cannot ignore global conditions either. The EU’s ethical initiatives will remain nebulous and poorly enforced, but maybe that is an opportunity for India to grab territory in an as-yet emerging market, continuing to reap successes similar to the success stories of Infosys and Wipro in early digitalization support industries.

If one thing is certain, it is that the EU is unlikely to organize itself behind any powerful push to impose its values on India any time soon. It is occupied with conflicts in its immediate vicinity, still struggling to realize that other entities further away do not share its sense of urgency. Unfortunately, the European Union also often tries to be a pioneer in balancing individuals’ rights with corporations’. Let’s hope the EU finds its footing well enough to enforce its values in the upcoming legislature.