In the face of rapid advancements in artificial intelligence, the European Union has a unique opportunity to take the lead in shaping a global regulatory framework for future technologies. As AI continues to push the boundaries of what is possible, it brings with it a multitude of potential benefits and risks. The EU, with its relative neutrality and strong regulatory tradition, should spearhead the effort to balance these competing forces. By seizing this moment, the EU can establish itself as a crucial player in AI diplomacy, guiding the world toward responsible AI development and use. The time is ripe for Europe to take the initiative and shape the future of AI governance for the betterment of humanity.
The COVID-19 pandemic highlighted several paradoxical phenomena exacerbated by recent technological developments. In a global information age fueled by social media, humanity’s need for communal belonging often seemed to take precedence over the guidance of healthcare professionals, a tendency compounded by the proliferation of online misinformation and charlatans spreading dangerous falsehoods about the virus. Despite these challenges, the pandemic also demonstrated the modern era’s capacity for scientific collaboration. Researchers developed effective vaccines on previously unthinkable timelines, thanks both to the power of technology to facilitate collaboration and data sharing, and to the support of policymakers.
The pandemic also revealed our collective inability to comprehend the implications of exponential growth: when cases double every few days, a seemingly negligible outbreak can overwhelm hospitals within weeks. Governments and citizens alike struggled to implement, on the timeline required, the measures needed to curb the spread of the deadly virus. Now we once again find ourselves at the very early stages of an exponential growth curve that both regulators and broader society seem wholly unprepared for.
ChatGPT, the chatbot launched by OpenAI that became one of the fastest-growing consumer applications in history, captured the public imagination in a matter of weeks and demonstrated the state of artificial intelligence capabilities to the broader public. More importantly, while companies like OpenAI have been working on large language models (LLMs) for years, recent iterations of these models have shown just how rapidly AI is developing. For several years, some of the richest companies in the world have poured vast resources into AI research and development.
Moreover, a recent explosion of interest and investment in AI is likely to accelerate the pace of development even further. Perhaps most importantly, it is also conceivable that researchers will develop recursively self-improving AIs in the near future—systems that build or improve other AIs with limited or no human intervention. This means the post-pandemic decade could bring greater social disruption than anything humanity has ever experienced, presenting a significant challenge for policymakers, regulators, and society as a whole.
While the rapid development of AI presents potential risks, it is important to acknowledge the numerous benefits it has brought to society. AI has facilitated groundbreaking advances in fields such as medicine, where it contributes to drug discovery and personalized treatment plans. In the fight against climate change, AI assists in optimizing energy consumption and predicting natural disasters, allowing for better preparedness and resource allocation. In the near future, we can expect AI technology to reshape critical aspects of our lives, such as optimizing agriculture for sustainable food production, alleviating congestion in healthcare systems by helping doctors in their diagnostic work, and transforming education by offering personalized learning experiences tailored to individual needs. As AI technologies and debates about regulating them continue to advance, we must recognize the technology’s potential to solve complex global challenges, boost economic growth, and improve the overall quality of life for billions around the world. However, striking a balance between harnessing these benefits and mitigating potential risks will be vital.
In June 2022, Paul Christiano, founder of the Alignment Research Center and a former OpenAI researcher, warned that "catastrophically risky" AI systems may emerge in the near future, and that a clear warning sign of their emergence might never appear. Figures ranging from the late physicist Stephen Hawking to OpenAI CEO Sam Altman have raised similar concerns. In a 2022 survey of AI experts, participants were asked, “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median answer was 10 percent, highlighting how widespread concerns about the potential risks of AI development are. At the core of these worries is what is referred to as the alignment problem—the challenge of ensuring that advanced AI systems operate in ways aligned with human values.
The most immediate threat is not necessarily a Terminator-style AI takeover of the world, but rather the proliferation of externalities that already have immense negative implications for humanity. AI is used in ways that touch almost every aspect of human life, including social media, healthcare, research, e-commerce, and insurance, to name a few. With the exponential development of AI models, usage is likely to become even more pervasive. The most pertinent example of how AI could pose significant risks to society is in the information space: technologies that use AI to generate extraordinarily believable fake news or ‘deepfake’ videos could easily be used to manipulate public opinion, undermine democracies, and damage the institutions that form the backbone of society. The technology to do this already exists today, and it is not just the political realm that could be affected. The falling cost of producing believable content in all its forms could also move markets, cause bank runs (Silicon Valley Bank, anyone?), and radically erode the public’s trust in institutions.
The proliferation of AI systems has also eliminated jobs, lowered the barrier for bad actors to use autonomous weapons, and introduced malicious predictive policing practices, to name only a few trends. As these systems develop further, it is not difficult to imagine major corporations completely rethinking their need for lower-skill knowledge workers in assistant or entry-level roles. Unfortunately, it is also not hard to envisage a criminal organization or terrorist group developing a new chemical weapon, or an authoritarian government using the latest AI developments to further repress its citizens. Just by playing around with GPT-4—only one AI model, developed by one company, working in a limited number of modalities—one gets the sense that these examples could become reality within months or, in a best-case scenario, years.
If the above were true in almost any other domain, one less exclusive to a relatively small group of experts, there would be widespread calls for policymakers to step in. Yet when it comes to AI, relatively little has been seriously debated and even less achieved. The White House Office of Science and Technology Policy published a largely toothless “Blueprint for an AI Bill of Rights,” and some American legislators have called for regulating AI. The Cyberspace Administration of China released draft measures on generative AI services directly targeted at ChatGPT-type applications. Among other things, they would require that content generated by generative AIs reflect the “Core Socialist Values” and not undermine the socialist system, in practice rendering current applications illegal. UK Prime Minister Rishi Sunak has announced a summit on regulating AI to be held in the fall of 2023. However, so far the European Union is the only major democratic entity that has proposed relevant and realistic legislation. The AI Act, in the final stages of the EU legislative process at the time of writing in the early summer of 2023, is a welcome first attempt at regulating AI.
However, there are obvious limitations to the EU’s policy intervention. The Act only delineates rules for the use of AI-driven products, services, and systems inside the European Union. While many Europhile fans of the Brussels Effect presume that EU regulation almost automatically creates global standards, this cannot be taken for granted. Put simply, even if a destructive AI is never developed within the EU, its consequences could still affect all Europeans. Nevertheless, EU policymakers should seize the opportunity created by the newfound public interest in AI, and by the fact that the EU AI Act is about to become law, to stake out a novel European foreign policy based on AI diplomacy.
In the realm of security and foreign policy, the European Union is currently focused on containing the Russian threat, and rightly so. To remain relevant on the global stage over the longer term, however, the EU must expand the scope of its foreign policy. The challenge lies in maintaining relevance amid the escalating rivalry between the US and China, which happen to be the world’s two main AI development hubs. This gives the EU—as a relatively neutral actor—an opportunity to establish itself as the originator, mediator, and coordinator of a global effort to mitigate and reduce the risks associated with AI.
The most obvious counterargument pertains to the limited effectiveness of global regulations and the lack of trust in global institutions. However, there are several examples of successful global regulatory regimes that do not always get the credit they deserve. The Treaty on the Non-Proliferation of Nuclear Weapons and the Chemical Weapons Convention are not perfect, but they have helped avert nuclear war and prevent the widespread use of chemical weapons.
Another critique concerns what such international frameworks would actually regulate. Prescriptive, detailed solutions are best left to lower jurisdictions, but there are several strategic areas where some level of global consensus could be found. Firstly, governments around the world will need to invest significantly in AI safety and alignment research in the years and decades to come, and it makes sense to coordinate these efforts globally to encourage knowledge sharing among researchers and agencies. Secondly, inherent in the EU AI Act is a transparency requirement—chiefly, the obligation to clearly identify AI systems whenever humans interact with them. This elementary transparency provision could be adopted on a global scale. Additionally, it is within the realm of possibility that a significant number of states could agree on strict information-sharing requirements for organizations working on AI, on proactive risk mitigation before new models may be published, or perhaps even on limitations on how models are trained and when they are released. A regime could go so far as to set out monitoring schemes for the computing hardware used to train large-scale models, or to mandate registries of the entities that own AI chips. Moreover, regulators should focus intently on understanding the existing and possible future business models of companies working on AI so as not to repeat the mistakes of the social media era. Perhaps most importantly, initiating such an international regulatory process would bring all key stakeholders to one table, where the exact scope of any regulatory regime could be discussed and determined.
The opportunity for the EU to kickstart such a process lies in the Union’s relative neutrality on the global stage and the credibility that entails, but also in its better-than-advertised regulatory creativity and speed. Over the past few years, the EU has quickly and effectively planned and implemented impactful policies to distribute vaccines, reduce emissions, and even collectively provide military equipment to Ukraine. As with climate change and the proliferation of nuclear weapons, action is required even in the absence of complete knowledge. Impact assessments will never fully elucidate the consequences of exponentially evolving AI systems operating across an indeterminate range of modalities. Policymakers must grapple with this uneasy uncertainty without succumbing to cynicism or inaction. In a rapidly advancing domain fraught with ambiguous yet tangible risks, policy approaches must be adaptable and grounded in risk-management principles. The primary objective should be to bolster resilience and establish a regulatory framework that can address the full range of risks associated with AI—from day-to-day misinformation to a truly existential global threat.
Patrik Gayer is a security commentator and public policy professional with experience working at the European Parliament, the Finnish Ministry of Defense, and for an international technology company. He graduated with a Master's in Public Administration from the Harvard Kennedy School in 2023 where he was a Fulbright scholar studying the intersection of technology and security policy.