Envisioning an AI Paris Agreement

The regulation of novel technologies like artificial intelligence (AI) has become a hot topic in global affairs, even previously addressed in this publication. Although widespread discourse about AI outside of computer science is a relatively new phenomenon, several countries have already developed rules for the technology’s use, ranging from general non-binding guidelines to legal doctrine. With some arguing that the potential harms of unregulated AI transcend national boundaries, there have also been efforts to expand binding regulation beyond the individual state level—some notable examples include the European Union’s AI Act and the Council of Europe’s recent Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. However, we have yet to see a truly comprehensive, legally binding international agreement regulating AI.

Given the increased “fragmentation” of the global technology and regulatory landscape, driven in part by “great power competition,” some may doubt that such an agreement is possible. But legally binding international agreements already exist on another contentious topic: carbon emissions. The challenge of addressing the global climate crisis resembles the challenge of regulating AI internationally, and this parallel has not gone unnoticed by global experts, including in the United Nations. By examining the current AI regulatory landscape through lessons learned from prominent global climate treaties, we can better understand how to cultivate international agreement on governing this nascent technology.

AI Risks and Current Regulation

We can divide much existing discourse on the risks posed by AI misuse into two categories: intentional and accidental. Intentional misuse, the more commonly feared category, is the result of nefarious actors who use AI with the intention of causing harm, perhaps through creating deepfakes or spreading disinformation. But experts also emphasize the importance of addressing accidental misuse, the result of humans’ limited ability to fully comprehend or control the decision-making processes within AI. Even arguably well-meaning users have implemented AI systems with unintended negative consequences, including unjust outcomes in healthcare, banking, insurance, and other industries. For example, uses of AI for diagnosing illnesses and supporting law enforcement have been criticized for perpetuating racial bias. Some suggest that such AI threats could even place humanity at “risk of extinction,” but others claim that these fears have been exaggerated, arguing that regulation may be more harmful than helpful by disadvantaging emerging firms. However, many believe that without some protections in place, improper AI usage could easily impact states’ ability to protect human rights.

Today’s global AI regulatory landscape reflects this variation in perspectives, with individual countries adopting several types of regimes. Many scholars highlight the contrasting approaches of the United States, European Union, and China in particular, describing how they are “‘market-driven,’ ‘rights-driven,’ and ‘state-driven,’” respectively. However, there have also been calls for coordination among international actors, including notably from Pope Francis, resulting in several actions by international organizations. These include declarations by the United Nations, a “code of conduct” released by the Group of 7 (G7), a guide on AI Governance and Ethics by the Association of Southeast Asian Nations (ASEAN), an AI strategy by the African Union, and the wide-reaching Global Partnership on AI launched by the Organization for Economic Co-operation and Development (OECD). However, perhaps the most notable international guideline to date is the Council of Europe’s recent AI convention, described as “the first-ever internationally legally binding treaty” regarding artificial intelligence. This treaty has not only been proposed for recognition by the European Court of Human Rights, but also has been “open[ed] for signature” by members outside of the Council of Europe, giving it the potential to expand binding regulation on AI.

Despite this development, international agreements on AI still face several challenges. The Council of Europe treaty, in particular, has been criticized for making exceptions for private and military activities. However, scholars also note several broader concerns, including the difficulty of promoting international “interoperability”—a term broadly used to refer to the “coordination” of disparate regulatory regimes—given the variation among national regulatory systems. Additionally, the “slow pace” of policymaking, particularly at an international scale, has led technological developments to rapidly outpace the institutions intended to keep them in check.

Yet addressing the challenge of international cooperation is exactly what numerous climate treaties have sought to accomplish, which may explain why several organizations and experts have expressed a desire to regulate AI like climate change. Echoing sentiments that climate change is difficult to address because of the lack of a centralized authority or the challenges of “burden-sharing,” the United Nations has suggested that “analogies can be made to efforts to combat climate change, where the costs of transition, mitigation or adaptation do not fall evenly, and international assistance is essential to help resource-constrained countries.” However, adopting this structure also requires considering the effectiveness of international infrastructure around climate regulation. This article considers the role of one piece of this infrastructure: legally binding treaties.

Global Climate Treaties: A Regulatory Success or Bandage?

Scaffolded by information-gathering bodies like the Intergovernmental Panel on Climate Change (IPCC) and annual Conference of the Parties (COP) gatherings, the international community has shown a significant commitment to climate regulation through passing several international treaties. Two of these treaties, the Kyoto Protocol and Paris Agreement, are legally binding.

The Paris Agreement, still in effect today, enjoys global support, with over 190 Parties. Many scholars favor the Paris Agreement, praising how it places climate target decisions in the hands of individual countries. This approach differs from the previous Kyoto Protocol. Though still monumental as “the first legally binding climate treaty,” the Kyoto Protocol required its participants to “collectively [negotiate]” binding emissions targets. These strict, communally generated guidelines likely discouraged some countries, including prominent polluters like the United States, from participating. By contrast, the Paris Agreement’s greater national autonomy likely encouraged broader participation and less “gridlock” in the regulation process.

However, the primary requirement for the Paris Agreement’s signatories is to submit increasingly ambitious statements of “nationally determined contribution,” or goals for their reduction in emissions. In other words, states are not legally bound to actually achieve that reduction. Instead, the Paris Agreement operates through political “pressure,” with members facing a “political expectation” to increase their contribution or risk reputational backlash. Though some argue that this approach is preferable to legal punishment because it encourages “healthy competition” in place of antagonism, this laissez-faire approach backfires when states either set “infeasible” goals or under-commit because they believe they will not face consequences. Further, states may not want to punish others for failing to meet their commitments for fear of highlighting their own shortcomings. In short, scholars suggest that this structure not only allows states to adopt relatively weak emissions reduction commitments and free-ride, but also prevents the agreement from achieving the standard of a truly enforceable legal treaty. Ultimately, despite the efforts of the Paris Agreement, global temperatures continue to rise.

The Geopolitics of AI

An additional point of consideration is that AI and its regulation have become a focal point in global strategic competition, especially between China and the United States. Exemplifying the limits of analogizing AI regulation to climate regulation, large players such as China and the United States may have less political will to regulate themselves in AI than in the case of climate change. Notably, both countries have openly cooperated in the United Nations to pass resolutions on AI and collaborated in other meetings like the UK AI Safety Summit and the Woodside summit. However, many US scholars remain concerned about the broader effects of AI regulation on this competition, which is largely over tangible elements of AI development. Many Americans, for instance, believe that increasing AI governance could give Chinese firms a “leg up.” Although some argue that these fears are unfounded because of China’s own strict regulatory regime and US export controls, they continue to drive discussions. Similarly, others believe that regulating the use of AI in warfare could unfairly advantage adversaries.

However, scholars have also identified a second-order source of competition, as China attempts to establish itself as a leader in global regulation. Some have argued that despite its strong existing regulatory infrastructure, China has avoided signing agreements like the Council of Europe’s AI convention in an attempt to “distance itself from Global West-centric efforts to set global norms” and instead “[join] the ranks of rule-makers.” Thus, while the United States and its Western allies have engaged in their own regulation initiatives, China has created alternatives like the “Global AI Governance Initiative” and the “World AI Conference.” This divergence, paralleling the differences in the two countries’ regulatory structures, has ultimately led some to claim that legally binding agreements, in particular, might be “out of reach” or “impossible.”

China’s emphasis on global engagement—even being incorporated into the nation’s broader strategy of engaging with the Global South through the Belt and Road Initiative—also highlights concerns about AI regulation being driven by states that already have broadly developed AI infrastructure. Some fear that introducing limits on AI development in less-developed countries could hinder their ability to catch up to larger economies. Addressing this challenge will be vital in ensuring AI’s broad accessibility.

Although the United States signed on to the Council of Europe’s Framework Convention, it has yet to be approved by the US Senate. Donald Trump’s recent return to the presidency may pose a challenge to the United States’ participation in this Convention, as well as other AI treaties. Having already overturned the nation’s AI Executive Order, Trump appears less likely to engage in multilateral regulatory efforts, especially given that he pulled the United States out of the Paris Agreement during his first term and again in his second. However, the fact that the United States signed onto the Global Partnership on AI during his first term, even if motivated by the belief that it could deter Chinese influence, indicates some possibility of cooperation.

What Next?

Given that the Council of Europe’s AI Treaty was only opened for signature in mid-2024, it is still unclear how much international support the agreement will receive. Even though the treaty has been praised for promoting interoperability—particularly by encouraging the United States and European Union to “formally” align their policies—it may be the case that, just as with the Kyoto Protocol, the biggest contributors to AI risk will not sign onto the treaty. This dilemma highlights the familiar tension between regulators’ desire for binding commitment and their fear of alienating potential signatories or spurring international animosity. Some have even decried the use of “broad,” multilateral treaties, suggesting that regionally based treaties may be more effective. However, even regional blocs could foment competition. Others go so far as to claim that agreements extending beyond the country level are unnecessary.

Given these challenges in passing and enforcing legally binding agreements, it makes sense that most experts who compare climate and AI regulation suggest creating regulatory organizations similar to the IPCC to monitor the cutting edge of AI development, rather than pursuing international treaties. However, in a time of increasing geopolitical competition, a wide-reaching multilateral agreement could symbolize the possibility of diplomacy, much like the creation of the Paris Agreement did in 2015. To this end, the moments of collaboration between China and the United States could indicate the potential for global synergy on regulation, even if both countries currently appear driven by geopolitical gain. Ultimately, though, the example of global alignment to address the climate crisis—though not without its flaws—indicates that even competitive international systems can allow for collaboration on issues, like AI regulation, that matter most to society.

This article was updated on February 11, 2025 to include an additional reference.