Against ‘Science-Based Policy’

Categories: hypothesis, policy

Author: Dan Hicks

Published: August 21, 2015

Today, as part of the preparation for my AAAS science policy fellowship, I filled out a brief survey on science and public policy. Several of the questions dealt with “science-based policy,” and overall the wording of the questions assumed that policy should be “science-based.” I was uncomfortable with this language, and in this post I want to explain why.

(I should note first that, since I already filled out the survey, I no longer have access to it, and so I can’t confirm the wording that was actually used. But I’m pretty confident that “science-based” accurately represents the way the survey talked about the relationship between science and policy.)

“Science-based policy” suggests an asymmetrical relationship. Science is a foundation; it is firm and unshakeable, and it provides structural support to the policy that is “based on” it. It is also chronologically (and, by metaphor, logically) prior to policy. First the solid, secure foundation of science is laid; and then policy is built on top of it.

In his book Sustainability, Bryan Norton calls this the “serial” view of the relationship between science and policy, and he criticizes it as one of the major problems with contemporary environmental policymaking institutions (including, by name, the EPA):

Serial approaches fail because they are based on a false image and an associated myth that is perhaps the greatest barrier to an improved understanding of ecosystem management. The image is that of an ideal environmental decision maker, one who has gathered all the descriptive information regarding the functioning of an ecological system; determined the likely outcomes of further impact from human activities; polled the population to determine the values, goals and preferences in good democratic fashion; and, armed with all the facts, decides what policy to pursue to maximize total welfare. (140-2)

The problem with the serial view, and thus with the language of “science-based policy,” is that it assumes policy-relevant science can be done prior to, and hence independently of, considerations of aims, goals, and values. There are at least two reasons to think this assumption is flawed.

First, policy-relevant science needs to provide knowledge and understanding concerning the variables and processes that are relevant to the aims, goals, and values of a piece of policymaking. In some policy contexts, the aims, goals, and values are widely shared, and there are institutionalized, standardized variables and processes for policy-relevant science to focus on. For example, when we’re talking about health, there’s general agreement that we’re concerned about mortality and cancer, and mortality and cancer rates are institutionalized as the variables of interest for toxicology research.

In other contexts, however, the aims, goals, and values are more controversial, and consequently there’s much less agreement on the appropriate variables and processes. Consider agricultural policy. There’s general agreement on productivity, as measured by yields. But there’s much less agreement on things like nutritional quality, sustainability, the economic viability of small family farms, and preserving the traditional cultures of agricultural communities. In these kinds of more contested contexts, there’s a very live risk that “policy-relevant” science will turn out to be irrelevant, incomplete, or contestable. For example, researchers might focus on certain variables — say, productivity and farmer profit margins — and neglect other variables — such as biodiversity and qualitative measures of culture.

Second, as I’ve noted, among philosophers who specialize in the role of values in science there’s general agreement that ethical and political values have a legitimate role to play in every stage of scientific inquiry, including the use of evidence to evaluate hypotheses. Consider the argument from inductive risk, which in recent years has been associated with the work of Heather Douglas. This argument points out that whether we have sufficient evidence to accept an hypothesis H depends, among other things, on the non-epistemic consequences of wrongly accepting H. If H is the hypothesis that I have enough soymilk for my tea tomorrow morning, the risks of wrongly accepting H are small: at worst, I’ll have to go around the corner for some more soymilk or drink some tea that’s a little more bitter than I like. By contrast, if H is the hypothesis that bisphenol A is not an endocrine disruptor, then the risks are quite large: many people could suffer cancer or developmental disorders if we accept H and it turns out to be false.
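To make the structure of the argument explicit, here is a minimal decision-theoretic sketch (my own gloss, not Douglas’s formalism). Write $p$ for the probability of $H$ on the available evidence, $L_a$ for the loss if we accept $H$ and it turns out to be false, and $L_r$ for the loss if we reject $H$ and it turns out to be true. Then accepting $H$ minimizes expected loss exactly when

$$
(1 - p)\,L_a < p\,L_r \quad\Longleftrightarrow\quad p > \frac{L_a}{L_a + L_r}.
$$

The evidential threshold for acceptance rises with $L_a$: the soymilk hypothesis carries a tiny $L_a$ and so a low bar, while the bisphenol A hypothesis carries an enormous $L_a$ and so demands far more evidence.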

To extend this argument further, I think we need to recognize that different ethical and political values and social locations can lead to different assessments of the non-epistemic consequences of accepting a hypothesis. Consider the hypothesis that humans are responsible for climate change, or anthropogenic global warming, “AGW.” For many city-dwellers, the main cost of accepting this hypothesis will be paying a bit more for fuel and electricity. This might mean less money to pay for other things, but as harms go it’s not so very serious. But for people in the fossil fuel industry — and for communities that are economically dependent on the fossil fuel industry, such as in northern Alberta, West Virginia, and western Pennsylvania — accepting AGW may well lead to a major economic crisis. Consequently, while many city-dwellers might be led to accept AGW based on relatively little evidence, people in the fossil fuel industry and fossil fuel-dependent communities might require much more evidence to be convinced.
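On the same sketch, value-based disagreement shows up as disagreement about the losses. Purely for illustration: a city-dweller who puts the loss of wrongly accepting AGW at $L_a = 1$ (somewhat higher fuel bills) against $L_r = 9$ for wrongly rejecting it should accept once $p > 1/10$; someone in a fossil fuel-dependent town who puts $L_a = 5$ (economic crisis) against the same $L_r = 9$ should hold out until $p > 5/14 \approx 0.36$. The evidence and the decision rule are identical; the thresholds differ because the assessed losses differ.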

In short, policy-relevant science needs to be informed by the aims, goals, and values that are relevant to the policy process, both to produce knowledge and understanding that’s relevant to policy and to appropriately incorporate values — and potential value-based disagreements — into the evaluation of hypotheses. But this means that it can’t be carried out in a way that’s logically prior to any consideration of these aims, goals, and values. The problem with the “science-based policy” language, then, is that it seems to assume exactly the opposite.

What language should we use instead? I’d suggest “science-informed policy.” This suggests a more egalitarian relationship between science and policy, and recognizes that science is only one thing that should inform good policy. (That points to another problem with the “science-based policy” language.) Most importantly for this post, “science-informed policy” recognizes that policy can also inform science, in the ways that I described above.