Dan Sarewitz, professor of science and society at Arizona State University, argues that we should fully expect politicians to politicize scientific information because “that is their job...and this—like the second law of thermodynamics—is not something to be regretted, but something to be lived with.” Sarewitz’s assertion flies in the face of many recent discussions of science and politics, which focus predominantly on the actions of President George W. Bush and are characterized in ample portions by both blame and regret.


The Bush administration has courted controversy in many areas of policy making, and science is no exception. While complaints about the heavy-handed tactics and questionable decisions of the Bush administration are both justified and easy to offer, they do little to address the challenges of science in policy and politics, especially as President Bush enters the final months of his presidency.


The most simplistic prescription offered for the politicization of science is simply to elect another president, a solution that plays well in large segments of the scientific community, where many never shared President Bush’s politics anyway. For instance, in 2004 a group called Scientists and Engineers for Change sought to use the issue of science politicization to help elect John Kerry to the presidency. At times a rallying cry to end the Republican “war on science” can be heard in the current presidential campaign.


More sophisticated efforts to address the challenges of science and politics look beyond the efforts to gain partisan advantage and instead focus on practical strategies for living with the reality that science and politics will always be intermixed in the practice of governance. If Sarewitz is correct—and many decades of study on the role of science in decision-making suggest that he is—then efforts to keep science and politics separate are not only doomed to fail, but they are likely to create conditions enhancing the pathological politicization of science.


Politics and Science Have Always Mixed

Accepting that science and politics are inextricably intertwined begins with a clear-eyed view of history. Consider just a few examples of political issues that involved science during the past six presidential administrations. President Richard Nixon had the National Aeronautics and Space Administration (NASA) move the timing of the launch of Apollo 17 in order to better serve his 1972 reelection campaign, against the wishes of NASA scientists and engineers. During President Ford’s administration, the Los Angeles Times alleged that the Environmental Protection Agency (EPA) had falsified data in support of its regulatory position on sulfur oxides. A subsequent investigation by the US Congress found serious problems with EPA’s peer review and concluded that some of its epidemiological research provided an unsuitable basis for regulation.


President Jimmy Carter went against the wishes of his scientific advisors when he committed the United States to drawing 20 percent of its energy from renewable sources by 2000. President Carter explained that he accepted his advisors’ technical conclusions that the goal would be impossible, but that he had put forward the proposal for political reasons. President Ronald Reagan, prior to being elected, questioned the science of evolution, calling it a theory that was being increasingly challenged by scientists. He suggested that if evolution was to be taught in schools, “then I think that also the biblical theory of creation, which is not a theory but the biblical story of creation, should also be taught.” The administration of President George H. W. Bush proposed redefining “wetlands” in such a way as to exclude millions of acres of land from federal protection and open them up for development. The proposal was eventually withdrawn for lack of a scientific basis. President Bill Clinton ordered a strike on the Al Shifa pharmaceutical factory in Sudan in 1998 in retaliation for bombings of the US embassies in Kenya and Tanzania. The target of the attack was justified in part by scientific evidence gathered at the factory site. It was later revealed that the scientific evidence had in fact been inconclusive.


If science and politics have always been intermixed, then what, if anything, is different about today? I can point to seven reasons why the politicization of science has gained so much more salience in recent years. First, an increasing number of important issues are related to science and technology in some way. Some issues are the result of advances in science and technology (e.g., the ethics of cloning or stem cell research); in others, science and technology are central to their resolution. Second, policy makers increasingly invoke expertise to justify a course of action that they advocate. Third, advocacy groups increasingly rely on experts to justify their favored course of action. Fourth, Congress, at least for the past decade and perhaps longer, has been derelict in its oversight duties, particularly on issues of science and technology. Fifth, many scientists are increasingly engaging in political advocacy. Sixth, some issues of science have become increasingly partisan as some politicians sense that there is political gain to be found by exploiting differences in public opinion on the politics of issues like stem cells, teaching evolution, climate change, and so on. And lastly, but most visibly, the Bush administration has engaged in hyper-controlling strategies for the management of information.



Of these varied reasons for the increasing number of issues raised at the interface of science and politics, only one will be addressed by the election of a new United States president. The others will remain, and dealing with them will require a more sophisticated and nuanced understanding of the messy interconnections between science and politics.


The Language of Politics

The very language of science in public discussions lends itself to politicization. For instance, in February 2006, scientists at NASA’s Jet Propulsion Laboratory complained because they had been instructed to use the phrase “climate change” rather than the phrase “global warming” in their public communications. The complaint arose because the language of climate science has become politicized. A Republican strategy memo recommended use of the phrase “climate change” over “global warming,” though environmental groups have long had the opposite preference. At a panel discussion at the 2008 annual meeting of the American Association for the Advancement of Science, Harvard’s John Holdren suggested that political action on climate change might be better motivated by using the term “global climate disruption.” Any language used to characterize the human role in the global environment will necessarily be loaded with emotional and symbolic meaning. There can be no getting around this reality—there is no bloodless, neutral language.


Similarly, several years ago, the Union of Concerned Scientists, as part of its advocacy campaign for reducing greenhouse gas emissions, recommended the use of the word “harbinger” to describe current climate events that may become more frequent with future global warming—as in “Hurricane Katrina is a harbinger of global warming disasters.” Subsequently, scientists at the National Oceanic and Atmospheric Administration (NOAA), Harvard Medical School’s Center for Health and the Global Environment, Stanford University, and the Fish and Wildlife Service’s Polar Bear Project began to use the term in their public communications in concert with advocacy groups like Greenpeace. The term has also appeared in official government press releases from science agencies.


The use of language to convey political meaning is, of course, well understood in politics. If the choice of language used in discussing matters of science is inherently political, then so too is the selection of topics for press releases and statements made in government reports describing science programs, and so is the process of assembling government advisory committees. We will now consider each in turn.


What Knowledge to Share? Politics and the Press

In 2005, two federal agencies that rely a great deal on scientists—NASA and NOAA—found themselves at the center of controversy over the media’s access to their scientists, both in person and via press releases, on the subject of climate change. One particularly visible flashpoint occurred when a young Bush administration political appointee tried to prevent James Hansen, a prominent NASA scientist, from participating in a media interview. The appointee disagreed with Hansen’s stance on climate change, and thus sought to limit his access to the media. The effort backfired when Hansen went public with his complaint, and it was later revealed that the staffer in question had lied about his credentials. The Bush administration ultimately failed in its efforts to carefully manage which scientists could talk with the media and to control the content of language in press releases on climate change. Arguably, its strategy led to even more coverage of the issue and a further loss of credibility for the administration.


It would be easy to call for a release of all scientific information to the public, but the reality is that choices always have to be made regarding what information is presented formally via press releases and which scientists receive public attention.


Choices must be made because scientists in federal agencies author tens of thousands of research papers every year. For only a very small fraction of these can federal agencies issue press releases or media advisories, so the decision to issue a press release necessarily involves extra-scientific considerations such as the likelihood of making news, which itself can be a function of political conflict—a typical criterion of newsworthiness. The politics involved need not necessarily be partisan; they may simply involve casting the agency in a positive public light in preparation for future political battles over agency budgets.


Consequently, each agency must have some procedure for deciding which subjects and which scientists are promoted to the public. Because of the recent controversies involving press access to scientists, NOAA and NASA have developed very different approaches to their media policies.


NOAA’s policy on public statements by its employees states that the employee speaks for the agency at all times: “Whether in person, on camera, or over the phone, when speaking to a reporter you represent and speak for the entire agency.” This means that the individual is always a representative of the federal government. NASA, in contrast, distinguishes between speaking for the agency and personal views: “NASA employees who present personal views outside their official area of expertise or responsibility must make clear that they are presenting their individual views—not the views of the Agency—and ask that they be sourced as such.” Under both approaches it is expected that the officials know and understand relevant official US government policy, with the difference being that the NASA policy allows room for employees to express their personal views, whereas the NOAA policy does not.



Every government agency needs a media policy. Evaluating and improving agency media policies would seem to be an ideal subject for congressional or executive oversight, in order to develop procedures that get information out in the context of effective governing. Unfortunately, in recent years the Bush administration appears to regard media policies as irrelevant, while Congress views them as a topic on which to score partisan points. Even as the United States approaches the end of the Bush administration, we certainly have not heard the last word on scientists and agency media policies, because the current administration is but one of a variety of factors leading to the increasing politicization of science.


Which Experts? Picking Advisory Panelists

A November 2004 report of the nation’s leading nongovernmental science advisory body—the National Research Council (NRC)—recommended that presidential nominees to science and technology advisory panels not be asked about their political and policy perspectives. The NRC describes the political and policy views of prospective panelists as “immaterial information” because such perspectives “do not necessarily predict their position on particular policies.” This “don’t ask, don’t tell” approach was subsequently passed into law under the so-called Durbin Amendment to the FY 2006 Health and Human Services Appropriations Bill.


However, the “don’t ask, don’t tell” approach to politics in advisory committee empanelment is meaningless in practice: Considerations of politics are unavoidable in the empanelling process. Consider the irony in the fact that the NRC committee that recommended against considering political factors in advisory panels was itself composed of a perfect balance of members who had served Republican administrations and members who had served Democratic administrations. The real question is whether we want to openly confront the reality that extra-scientific factors of course play a role in committee empanelment, or turn a blind eye and allow committee empanelment decisions to play out in the proverbial backrooms of political decision-making.


In nearly every other area of politics, advice is put forward with awareness of the political and policy perspectives of the relevant experts: the Supreme Court, congressional hearing witness lists, and the 9-11 Commission, to name just a few. And while science is the practice of developing systematic knowledge, scientists are both human beings and citizens, with values and views, which they often express in public forums.


Sheila Jasanoff has written that when experts make scientific judgments, they do so usually “in full knowledge that different choices may lead to substantially different policy recommendations...it is almost inevitable that a scientist’s personal and political values will influence his reading of particular facts.” Whether they are asked explicitly or not during the appointment process, many scientists’ views on politics and policy are well known, especially as more and more scientists publicly attest to their political agendas. For instance, thanks to an open letter of endorsement in 2004, we know of 48 Nobel Prize winners who supported John Kerry for president. It would be easy to convene an advisory panel of very distinguished scientists who happen to have signed this letter, and we could do this without asking them about their political views.


Moreover, to evaluate whether a policy focused on keeping political considerations out of the scientific advisory process is working, it would be necessary to have information showing that the composition of particular panels is not biased with respect to panelists’ political and policy views, which in turn would require knowing what those views are in the first place. In short, this situation creates a catch-22.


Finally, science advisory panels never deal purely with science. They are convened to provide guidance either on policy or on scientific information that is directly relevant to policy. Turning again to Sarewitz, “When an issue is both politically and scientifically contentious, then one’s point of view can usually be supported with an array of legitimate facts that seem no less compelling than the facts assembled by those with a different perspective.”


On climate change, for instance, even as scientists have come to a robust consensus that human activities have significant effects on the climate, legitimate debate continues on the costs and benefits of proposed alternative policy actions. And evaluating those costs and benefits involves considerations of values and politics.


Rather than eliminating considerations of politics in the composition of science advisory panels, a policy of “don’t ask, don’t tell” only makes it more difficult to see the role played by ever-present politics. More important than the composition of scientific advisory panels are the charge that they are given and the processes they employ to provide useful information to decision-makers.



The current debate over these panels reinforces the old myth that we can somehow cleanly separate science from politics in order to ensure that the science is untainted by the “impurities” of the rest of society. Yet paradoxically, we also want science to be relevant to policy. A better approach would be to focus our attention on developing transparent, accountable and effective processes to manage politics in science—not to pretend that it does not exist. There are no easy solutions to the challenge of managing expertise in government. Efforts to implement solutions that might sound good in theory often founder upon confronting the realities of governing. The sooner we accept that we have to learn to live with this reality, the better.


Scientific Advice in Practice

When former Vice President Al Gore testified before the United States Congress in 2007, he used an analogy to describe the challenge of climate change: “If your baby has a fever, you go to the doctor. If the doctor says you need to intervene here, you don’t say, ‘Well, I read a science fiction novel that told me it’s not a problem.’ If the crib’s on fire, you don’t speculate that the baby is flame retardant. You take action.”


With this example, Al Gore was not only advocating a particular course of action on climate change; he was also describing the relationship between science and political decision-making. In Gore’s analogy, the baby’s parents (in his words, “you”) are largely irrelevant to the process of decision-making because the doctor’s recommendation should be accepted without question.


But anyone who has taken a child to a doctor for a serious health problem knows that the interaction between patient, parent, and doctor can take a number of different forms. Experts therefore have choices in how they relate to decision-makers, and these choices have important effects not only on decisions but also on the role of experts in society.


Gore’s metaphor provides a useful point of departure to illustrate four different roles for experts in decision-making that I describe in my book The Honest Broker. The four categories are very much ideal types; the real world is more complicated. Nonetheless, I argue that they help to clarify the roles and responsibilities that might be taken on by experts seeking to inform decision-making.


The Pure Scientist seeks to focus only on facts and has no interaction with the decision-maker. The doctor might publish a study that shows that aspirin is an effective medicine to reduce fevers. That study would be available to readers in the scientific literature.


The Science Arbiter answers specific factual questions posed by the decision-maker. One might ask the doctor about the benefits and risks associated with ibuprofen versus acetaminophen as treatments for fever in children.


The Issue Advocate seeks to reduce the scope of choice available to the decision-maker. The doctor might hand a parent a packet of medicine and say, “Give this to your child.” The doctor could do this for many reasons.


The Honest Broker of Policy Options seeks to expand, or at least clarify, the scope of choice available to the decision-maker. In this instance the doctor might explain that a number of different treatments are available, from wait-and-see to taking different medicines, each with a range of possible consequences.


Each mode of interaction deals with the challenge of integrating science and politics in a different way. Consider the Pure Scientist or Science Arbiter as described above. How would a person view a doctor’s advice to take ibuprofen after learning that the doctor had received US$50,000 last year from a large company that sells ibuprofen? Or upon hearing advice to perhaps forgo medicine for this particular ailment, what if one learned that the doctor happened to be an active member of a religious organization that promoted treating sick children without medicines? Or if one learned that her compensation was a function of the amount of drugs that she prescribed? Or perhaps the doctor was receiving small presents from an attractive drug industry representative who stopped by the doctor’s office once a week? There are countless ways in which extra-scientific factors can play a role in influencing expert advice. When such factors are present they can lead to “stealth issue advocacy,” which I define as efforts to reduce the scope of choice under the guise of focusing only on purely scientific or technical advice. Stealth issue advocacy has great potential for eating away at, or even corrupting, the legitimacy and authority of expert advice.


Then how does one decide what forms of advice make sense in what contexts? I argue that a healthy democratic system will benefit from the presence of all four types of advice, but depending on the particular context, some forms of advice may be more effective and legitimate than others. Specifically, I suggest that the roles of Pure Scientist and Science Arbiter make the most sense when values are broadly shared and scientific uncertainty is manageable (if not reducible). An expert would act as a Science Arbiter when seeking to provide guidance for a specific decision and as a Pure Scientist if no such guidance is given. (In reality, the Pure Scientist may exist more as historical legend than anywhere else.) In situations of values conflict or when scientific certainty is contested, that is to say in every political issue involving scientific or technical considerations, the roles of Issue Advocate and Honest Broker of Policy Options are most appropriate. The choice between the two would depend on whether the expert wants to reduce or expand the available scope of choice. Stealth issue advocacy occurs when one seeks to reduce the scope of choice available to decision-makers but couches those actions in terms of serving as a Pure Scientist or Science Arbiter; for example, by suggesting that “The science tells us that we must act in such-and-such a manner.”



So a child is sick and the parent takes him or her to the doctor. How might the doctor best serve the parent’s decisions about the child? The answer depends on the context and involves far more nuance than that suggested by Gore’s metaphor. If one feels able to gain the necessary expertise to make an informed decision, he or she might consult peer-reviewed medical journals (or a medical website) to understand treatment options for the child instead of directly interacting with a doctor. If one is well informed about the child’s condition and there is time to act, one might engage in a back-and-forth exchange with the doctor, asking her questions about the condition and the effects of different treatments. If a child is deathly ill and immediate action is needed, a parent might ask the doctor to unilaterally make whatever decisions are deemed necessary to save the child’s life. If there is a range of treatments available, a parent might ask the doctor to spell out the entire range of treatment options and likely consequences to inform the decision.


The interaction between expert and decision-maker can be complicated, even in a relatively simple situation like a doctor-patient discussion; it is even more so in highly politicized settings. Understanding the different forms of this relationship is the first step towards the effective governance of expertise, and for learning to effectively live with the intermixing of science and politics.


We have choices in how experts relate to decision-makers. Whether we are taking our children to the doctor or using science to inform policies, better decisions will be made more often if we pay attention to the role of expertise in decision-making and the different forms that it can take. Striving for better decisions, rather than trying to separate science and politics, is the best method for dealing with the challenges of the politicization of science.