Regulatory Science Reform?

Categories: data, policy

Author: Dan Hicks

Published: November 21, 2014

Over the last few days, several people have brought to my attention this story about two regulatory science bills passed earlier this week in the US House. Since the linked story doesn’t do a very good job of explaining why the two bills are controversial — indeed, it doesn’t even clearly indicate that there are actually two bills involved — I thought I’d take some time to unpack things.

HR 1422

The controversial bits of “the EPA Science Advisory Board Reform Act of 2014” seem to be the following:

  • paragraph 2(c): “persons with substantial and relevant expertise are not excluded from the Board due to affiliation with or representation of entities that may have a potential interest in the Board’s advisory activities, so long as that interest is fully disclosed to the Administrator and the public and appointment to the Board complies with section 208 of title 18, United States Code,” and
  • paragraph 2(E): “Board members may not participate in advisory activities that directly or indirectly involve review or evaluation of their own work.”

2(c) seems to allow scientists affiliated with — or even employed by — the fossil fuels industry, say, to sit on EPA advisory boards. While I agree that this is troubling, the widely accepted principle of state neutrality or “neutralism” actually endorses this approach. While there’s disagreement about the details of this principle, its various formulations all seem to support something like paragraph 2(c). According to the principle, since the EPA is a state body, it should remain neutral with respect to disagreements between environmentalists and industry over what’s good and important. One simple, practical way to try to do this is to solicit scientific advice from the whole range of perspectives, including from industry.

You might argue that EPA advisory boards are not actually state bodies, since they don’t themselves make regulatory decisions. Rather, they’re just responsible for offering scientific advice to the regulators, and so the principle of state neutrality doesn’t apply to them.

However, at this point the defender of 2(c) can argue that the science advice process, as an instance of the broader scientific process, should be objective and unbiased, at least in the sense of surviving critical scrutiny from any (relevant) perspective — including that of industry.[1] Altogether, it seems hard to criticize 2(c) without abandoning the kind of neutrality that requires us to treat environmentalism and industry as just a pair of opposed “interest groups,” with an equal right to influence the science advisory process.[2]

2(E) seems to be what commentators have in mind when they complain that “experts would be forbidden from sharing their expertise in their own research.” It seems to me that this depends on how “review or evaluation” is read in 2(E). Suppose we can distinguish vetting — making a decision about whether or not the advisory board should take some body of research into account — from incorporating — actually taking the vetted research into account. Does “review or evaluation” apply to just vetting, or both vetting and incorporating? If it applies to just vetting, then 2(E) doesn’t seem so bad: it would just seem to require that expert X recuse herself temporarily while the rest of the board makes a decision about whether to take expert X’s research into account. But, if “review or evaluation” applies to incorporating as well, then it seems that expert X’s research could be taken into account only if expert X weren’t on the advisory board. That’s a little more troubling. 2(E), by itself, doesn’t seem to settle this point, and I don’t know whether other documents clarify the meaning of “review or evaluation.” (Note that 2(E) seems to apply equally to both academic and industry researchers.)

HR 4012

The “Secret Science Reform Act of 2014” is quite pernicious, but in a much more subtle way than either of the controversial points in HR 1422. This bill requires that “scientific and technical information relied on to support” regulation must be “publicly available online in a manner that is sufficient for independent analysis and substantial reproduction of research results.”

On its face, this seems perfectly acceptable, and indeed in line with the scientific ideal of replicability and the democratic ideal of transparency: regulatory decisions should be based on “good science” and “good reasons,” which requires, among other things, that anyone be able to double-check the data analysis and interpretation. That, in turn, requires that the underlying data be made publicly available.

However, Andrew Rosenberg argues that privacy considerations mean that the underlying data for some regulatory decisions cannot be made publicly available. It would be unethical to release medical data that could be used to identify individual patients, for example. Rosenberg doesn’t note this, but in addition, research data are often private property. They might be the intellectual property of the company that gathered or aggregated the data (as with public survey data or climate data). In other cases, in order to get the materials needed for the research, researchers might have had to sign materials transfer agreements (or other contracts) with clauses that restricted their ability to publish the resulting data. You might (like me) think this kind of private property in data is ethically illegitimate, but this is still the situation that the EPA has to work with right now.

You might also try to argue that the general public should defer to scientific experts, and consequently that there’s no reason to make the data publicly available. Rosenberg seems to say something like this: “As many politicians have taken pains to point out, they are not scientists, so they should listen to scientific advice instead of making spurious demands for unanalyzed data.”

I have to question this argument. There are plenty of cases in the history of science in which non-scientists and marginalized scientists have correctly criticized the scientific majority. In democratic terms, it’s important that scientists be held accountable to the general public.

That said, HR 4012 does very little to make scientists accountable to the general public. The overwhelming majority of members of the public lack the technical skills (and, in some cases, the computational resources) necessary to double-check scientists’ data analyses, much less distinguish reasonable and unreasonable interpretations of the results. It doesn’t matter whether the data are published or unpublished, if you don’t have the training necessary to work with them.

Instead, HR 4012 will be much more effective at obstructing the regulatory process and generating confusion and ignorance. This has arguably been the case with its predecessor, the Shelby Amendment (also known as the Data Access Act).


  1. Philosophers of science should recognize the reference to the views of Helen Longino and, in his most recent work, Philip Kitcher. While there are significant disagreements between Longino and Kitcher, as of about 2011 both reject “value-free” conceptions of objectivity, and instead require that scientific findings pass actual or hypothetical (respectively) critical scrutiny in order to be acceptable.

  2. One of the main concerns in my research — as both a philosopher of science and political philosopher — is to articulate views of science and politics that don’t require this kind of neutralism. I think that there are good reasons to think that protecting the environment and human health are more important than protecting the profits of industry and promoting economic growth. By these lights, paragraph 2(c) is much more troubling.
