The Asilomar Conference and Contemporary AI Controversies: Lessons in Regulation


Will humans remain relevant in the age of artificial intelligence (AI)?

The inevitable, escalating force of AI is moving toward a head-on collision with the long-standing institutions of higher education, labor, and litigation. Academic and political discourse has exploded, with fears about AI models ranging from the grossly exaggerated to the rationally substantiated.

Within this tumult, a growing chorus of tech leaders proposed a moratorium on AI development, hoping to give ethical frameworks room to catch up with the technology. The implication was that no further progress would occur until an acceptable set of standards for safe AI use could be instituted.

However, it was not long before big tech’s dominant ethos of “move fast and break things” prevailed, as the development of AI models appeared instead to accelerate. Even Elon Musk, who had initially expressed solidarity and signed the open letter calling for the moratorium, launched an artificial intelligence startup just last year.

We now face a crossroads: do we allow artificial intelligence to overtake traditionally human-led endeavors, or do we frantically install speed bumps?

Our imminent decision is hardly without precedent among the cascade of historical events that have defined the past century, and one hot-button controversy comes to mind: the regulation of recombinant DNA technology. The pivotal dispute that once plagued geneticists and politicians alike provides a useful historical precedent for regulating contemporary AI research.

Recombinant DNA Technology

In the early 1970s, recombinant DNA (rDNA) technology emerged as a trailblazing innovation, bolstered by grand claims that gene editing could create plants resistant to crop disease or artificially produce insulin with few side effects. However, concerns rapidly spread over protecting the general public and laboratory personnel from the potential biohazards created through experimentation. Fears intensified over individual scientists creating novel fatal diseases or a lab experiment producing some variant of Frankenstein’s monster. These fears were amplified by vocal objections from prominent academics and scientists such as Dr. George Wald, winner of the 1967 Nobel Prize in Physiology or Medicine.

This rising tide of concern culminated in a voluntary moratorium on rDNA research in July 1974, championed by the leading experts of the rDNA regulatory movement. Chief among them was Paul Berg, a biochemist and pioneer of genetic engineering, who led the discussions between government and academia. In perhaps the most famous example of scientists regulating their own research, the moratorium represented the intense battle between innovation and ethics that dominated the latter half of the 20th century. In several ways, the existential and moral struggles faced by Berg and his colleagues echo those that plague AI proponents and opponents today.

The Asilomar Conference of 1975

From February 24 to 27, 1975, the Asilomar Conference gathered over one hundred scientists, lawyers, and select journalists in Pacific Grove, California, on the Monterey Peninsula, to discuss whether the moratorium on recombinant DNA research should be lifted. Their question: what guidelines would facilitate safe, “protected” experimentation to mitigate the risks of rDNA technology?

Scientists recognized the need to approach rDNA technology in a way that satisfied public concerns while preserving autonomy in research. Self-governance was an attractive principle, for it spared scientific decision-making from a chaotic patchwork of federal legislation. Anxious uncertainty weighed on conference members, many of whom feared that strict regulation would fill the policy lacuna. As the discussions at Asilomar transpired, Congress prepared to impose stringent regulations if no standardized norms were adopted before the conference’s conclusion.

Following multiple spirited sessions rife with disagreement, several recommendations were combined into an uneasy compromise: using enfeebled strains of bacteria incapable of surviving outside the lab, classifying experiments by the containment levels they required, and deferring experiments involving known carcinogens, toxin-producing genes, and antibiotic resistance genes. Paul Berg spearheaded the synthesis and proposal of these recommendations, which ultimately resulted in the US National Institutes of Health (NIH) formally adopting its Guidelines for Research Involving Recombinant DNA Molecules.

The heart of the Asilomar Conference’s debate centered on the appropriate level of oversight a governing body should exercise over uncharted territory. Participants lauded their fulfillment of social responsibilities and hailed the avoidance of government-imposed oversight as a boon for future research. As one molecular biologist expressed, it was “really an amazing time for scientists actually putting restraints on themselves in a working situation.” The Asilomar Conference was initially perceived as wildly successful, even hailed for a time as the gold standard for self-regulation in science. It also prompted general improvements in lab safety practices.

However, the efficacy and motives of the Asilomar Conference have since been called into question. Many of the imagined genetic horrors the convention sought to prevent turned out to be exaggerated or unfeasible, and the intentions behind the scientists’ “self-restraint” have met sharper skepticism as the passage of time has allowed reflection on Asilomar. Scientists were partially motivated by a desire to avoid regulation and maintain favorable public relations, as evidenced by first-hand accounts from participating researchers that allude to widespread relief. Many scientists feared that indecisiveness and disagreement over stringent guidelines would invite “heavy legislation” by Congress, and believed that “[their agreed-upon guidelines were] probably the fastest route towards the science we know.” The research restrictions approved at Asilomar were therefore partially rooted in an ulterior motive: reducing the perceived need for government oversight.

The Asilomar Conference as a Lens to Evaluate Contemporary AI Debates

Echoes of these hidden intentions and calls to action manifest in the contemporary AI landscape, where tech leaders are once again calling for a moratorium. An open letter signed by over 30,000 individuals, including Elon Musk and political economist Daron Acemoglu, demonstrates how concerns have spiked over “AI systems with human-competitive intelligence [that] can pose profound risks to society and humanity.”

The Asilomar AI Principles, developed at the 2017 Beneficial AI conference, reiterate the concerns expressed by the participants of the 1975 convention. The decision to host an AI conference at Asilomar, California, where regulation had fallen into the hands of genetic researchers only decades before, is no coincidence. The collection of scientists led by Paul Berg provides a helpful historical comparison for evaluating measured responses to AI tools today.

A particularly critical lesson gleaned from the Asilomar Conference on recombinant DNA is that oversight and innovation are not necessarily incompatible. The 1975 guidelines spread in a way that welcomed oversight: universities were required to build new containment facilities, and investigatory bodies such as the US National Institutes of Health Recombinant DNA Advisory Committee were founded. Public scrutiny also quickly followed these proceedings, since roughly 15 percent of the Asilomar participants were members of the press, giving the public a glimpse into the decision-making process behind an intensely contentious issue. The conference therefore boosted the transparency of rDNA research to both regulatory bodies and the general public.

However, a key difference between genetic modification research in 1975 and AI technology today lies in the institutions involved. In the 1970s, many of the scientists engaged in recombinant DNA research worked within and collaborated across academic institutions. In contrast, the majority of AI developers and software engineers are employed by private companies, blurring the divide between public responsibility and private sector work. This dilemma is not unique to AI, as many issues in science and technology are beset by economic self-interest.

A handful of powerful tech giants like OpenAI now drive the development of generative AI tools. Big Tech rivals can obtain AI-related intellectual property (IP) that grants them ownership over certain generative AI elements and tools. This IP can either be held in reserve or wielded as “competitive weapons in lawsuits against rivals.” The accumulation of such patents by Big Tech competitors drives a deeper wedge between them, foreclosing future possibilities for open collaboration and agreed-upon rules of conduct.

OpenAI, for instance, does not disclose details about how its model GPT-4 is trained, citing the “competitive landscape and safety implications” of large-scale models. Chatbot creators have admitted that their AI tools possess deep flaws, yet products are rushed to release in order to outpace rivals.

As several reports point out, government policy runs the risk of “entrenching” the dominance of a few big tech companies rather than mitigating it. Aggressive oversight that draws on both the public and private sectors is increasingly necessary to ensure that sound legislation passes.

In the past year, several experts warned Congress not to place the future of AI solely in the hands of the few most powerful tech companies. Rigorous public oversight and scrutiny, combined with strong government regulation, are essential to ensure that the development of transformative AI systems proceeds responsibly and with the best interests of society in mind.

The decision-making process at the 1975 Asilomar Conference was influenced by interests beyond public welfare alone, including intentions to avoid regulation and preserve the freedom to continue experimentation, which hampered responses to more dire assessments of potential dangers. Such self-interested motives should not take priority in the age of AI.

However, this dilemma does not mean that a complete pause on AI development is the only viable solution. Rather, it calls for installing speed bumps to slow the dangerous race towards ever-larger, unpredictable AI models. The focus should reorient towards making powerful AI systems more accurate and transparent. AI developers must modify their rules of conduct to encourage open collaboration with competitors and policymakers. Robust AI governance frameworks are necessary to guard against self-serving standards of regulation.

Just as with recombinant DNA technology, the choice before us is clear: do we proactively shape the future of transformative AI, or do we allow it to shape us? The stakes could not be higher, and the lessons of the 1975 Asilomar Conference loom large. We have an opportunity to enjoy a long "AI summer," reaping the rewards of our innovations while engineering them for the clear benefit of all and giving society time to adapt. Ethics must catch up with innovation. Let us not rush unprepared into a perilous fall.