The Funding Effect


Dan Hicks


January 30, 2013

In this series of posts, I’m applying Debra Satz’ account of noxious markets to a specific aspect of commercialized science, the funding effect. In the first post, I summarized Satz’ account. In this post, I explain the funding effect.

(Please note that this series is an experiment in publicly posting sections of a paper as I write them. I look them over briefly for typos and the like, but my ideas here are still in development.)

The funding effect refers to a correlation between the findings of scientific research and the financial interests of the funders of the research. Industry-funded science is often significantly more likely to have reached conclusions favorable to industry than non-industry-funded science. In a recent paper (2012), Sheldon Krimsky has collected several of the most prominent and well-established examples of the funding effect, including pharmaceuticals, tobacco, and bisphenol A (BPA). Another example, I think well established by meta-analysis, is nutrition research:

Funding source was significantly related to conclusions when considering all article types (p = 0.037). For interventional studies, the proportion with unfavorable conclusions was 0% for all industry funding versus 37% for no industry funding (p = 0.009). The odds ratio of a favorable versus unfavorable conclusion was 7.61 (95% confidence interval 1.27 to 45.73), comparing articles with all industry funding to no industry funding. (Lesser et al. 2007, 41; see also Brownell and Warner 2009)
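For concreteness, the kind of odds ratio Lesser et al. report comes from a 2×2 table of study conclusions by funding source. Here is a minimal sketch of that calculation with a Wald confidence interval; the counts are made up for illustration and are not the paper's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval for a 2x2 table:

                  favorable  unfavorable
    industry          a           b
    no industry       c           d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts, for illustration only (not the paper's data):
or_, lower, upper = odds_ratio_ci(20, 2, 30, 22)
```

Note how wide such intervals get when one cell is small: a handful of unfavorable industry-funded conclusions is enough to make the interval span an order of magnitude, much as in the 1.27-to-45.73 interval quoted above.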

And several authors have speculated in print about funding effects in environmental justice research (Steel and Whyte 2012, 177) and safety testing of GMO crops (Byravan 2010). Kevin Elliott discussed the possibility of a funding effect in hormesis research in his book Is a Little Pollution Good for You? (Elliott 2011; when I convert this series into a paper, I plan to discuss this book in more detail. For the sake of space, I won’t be doing that here.) Using the more general concept of selective ignorance, Elliott has also recently discussed something like the funding effect in agricultural research. (Elliott 2012)

It is important to recognize that, even when the funding effect is well documented, it is not necessarily caused by fabrication, deliberate manipulation of data, or individual scientists otherwise behaving badly. It is not even necessarily epistemologically problematic. For example, Krimsky notes that pharmaceuticals go through a long development process before a clinical trial; this process might screen out compounds that are less likely to be effective and more likely to have dangerous side effects. (Krimsky 2012, 7) Or “me-too” pharmaceuticals – yet another statin, for example – might be tested on (and marketed to) “slightly different outcomes in slightly different kinds of patients” (6, quoting Angell). This may be an ordinary instance of Simpson’s paradox due to the complexity of human physiology: for unknown reasons, the pharmaceuticals really are more effective at promoting these specific outcomes in this specific small group.
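The Simpson's paradox point can be made concrete with the textbook kidney-stone numbers (these are the standard illustration, not data from the pharmaceutical cases above): one treatment has the better success rate within each patient subgroup, yet the worse rate in the aggregate.

```python
# The textbook kidney-stone numbers: (successes, n) per treatment arm.
strata = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, n):
    return successes / n

# Treatment A has the better success rate within EACH stratum...
for arms in strata.values():
    assert rate(*arms["A"]) > rate(*arms["B"])

# ...yet pooling the strata reverses the comparison.
totals = {arm: tuple(map(sum, zip(*(arms[arm] for arms in strata.values()))))
          for arm in ("A", "B")}
assert rate(*totals["B"]) > rate(*totals["A"])  # B looks better in aggregate
```

The reversal happens because the two treatments were applied to differently sized and differently difficult subgroups, which is exactly the "slightly different outcomes in slightly different kinds of patients" worry.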

These mechanisms underlying the funding effect are not so worrisome. However, Elliott identifies six other, more worrisome mechanisms.

  1. Choice of Questions: Posing a practical question – that is, a question about what we shall do – relies on various presuppositions: that we are in what John Dewey called a problematic situation; the nature of the problem; the kinds of actions we might take to solve it; and what would count as a successful (or at least better or worse) solution. For example, psychopharmacological research presupposes that we are looking for a pharmaceutical rather than psychotherapeutic solution.

  2. Metrics and Standards: Elliott gives the example of GDP, which measures aggregate wealth but is completely insensitive to (a) the distribution of wealth across the population, (b) non-economic costs, such as ecological damage, and (c) the relationship between wealth and the concrete activities that people can actually engage in. Since food crops genetically modified to be herbicide-resistant are often profitable for both the intellectual property rights holder and commercial farmers, they tend to rate rather highly using a metric tied to GDP. But because these crops do not have significantly higher yields than non-genetically modified versions, they would not fare very well using a metric of food production.

  3. Research Strategies: Any trial or data analysis involves numerous methodological decisions: How statistically powerful will the study be? Which animal models will we use? How long will we run the study? Will we compare with a placebo or an on-the-market compound? What criteria of inclusion will we use for our meta-analysis? What databases will we search to identify studies? Within the space of acceptable answers to these questions, there is ample room for subtle nudges in one direction or the other.

    One especially important and simple example is the threshold for statistical significance. When investigating harmful effects of their funders’ products, researchers might adopt a high threshold, determine the findings to be statistically insignificant, and conclude that the products are not harmful. Then, when investigating beneficial effects, these same researchers might adopt a low threshold and determine that findings with exactly the same p value are significant and thus indicate that the products are beneficial. As Krimsky puts it, investigators may “set a high bar for establishing evidence of causality” (Krimsky 2012, 15).

  4. Information Dissemination: As Elliott puts it, “if a wide body of information is available to a small number of scientists and corporate executives, it may still be socially problematic if only a small selection of that available information is widely known and discussed in the political sphere.” (Elliott 2012, 12) Confidentiality agreements, intellectual property law, and national security classifications can all be used to actively hide undesirable information. But passive measures can also be taken. For example, simply not publishing unfavorable or “null result” data leads to unrepresentative sampling in meta-analyses, and publishing in pay-for-access venues effectively blocks access for most members of the public.

  5. Choice of Language: To use an example from social science, consider the variety of decisions that must be made when formulating a definition of income mobility. Will we use nominal income or control for inflation? If the latter, we need to use some kind of market basket index – a fixed bundle of goods, the nominal price of which we track over time – which leads to issues about changing technology, consumer demand, and what counts as a decent standard of living. In just a few decades in the US, a household computer has gone from dream to hobby to luxury to necessity (to maybe-not necessity, if one has a relatively inexpensive smartphone, netbook, or tablet and access to a “regular” computer at work, for example). Will we look at single lifetimes or intergenerational mobility? That is, will we compare someone’s income at age 20 vs. age 40 vs. age 65? Or their income at age 40 vs. their parents’ at age 40? And how do we deal with the fact that most households pool their incomes and many women have gone from unpaid to paid work? Both individual income and household income have their difficulties. Thus right-libertarian and egalitarian liberal economists can easily come to radically different conclusions about income mobility in the contemporary US.

  6. Translational Research: As Elliott puts it,

    Another crucial judgment is how to push new technological applications forward, given that they can yield very different sorts of practical knowledge. For example, the authors of the IAASTD lament that, because of intellectual property rights, local farmers are unlikely to be able to engage in the sorts of participatory research with GM seeds that would be needed in order to develop truly effective, locally appropriate technological innovations. (13)

    For example, genetic modification technology has already been used to develop more submergence-tolerant rice. (For a narrative account, see Ronald and Adamchak 2008, ch. 1.) Such rice would be quite valuable for rice farmers looking for non-herbicide ways to manage weeds: they can simply flood their paddies, drowning everything except the young rice shoots. And rice, of course, is one of the global staple crops, especially in Asian peasant agriculture. Despite this, of the five genetically modified rice events expected to be commercially available by 2015, none exhibits submergence-tolerance; one exhibits herbicide tolerance, one exhibits disease resistance, and three exhibit insect resistance.

    In short, funders direct funds towards translational research that fits their interests. The resulting technologies, and the practical knowledge involved in their use, can thus be more carefully refined before undergoing formal testing: we have a better understanding of the enabling conditions and limitations of the well-funded technology. Thus well-funded technology – such as transgenic crops provided with synthetic fertilizer, herbicides, and pesticides – is likely to perform better than poorly-funded competitor technology – such as organic or low-capital agriculture, which until recently was the exclusive province of subsistence farmers and marginalized radicals. (In this light, the fact that organic agriculture does about as well as conventional “industrial” agriculture in terms of yields gives some reason to believe organic has the potential to outperform conventional, if only it received commensurate funding.)
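The asymmetric-threshold move in mechanism 3 is easy to state in code. In this sketch (the numbers are invented for illustration), two analyses of the same product happen to yield the same p value, but post hoc threshold choices produce opposite headlines:

```python
# Invented numbers for illustration: two analyses of the same product
# happen to yield identical p values.
p_harm = p_benefit = 0.03

# Thresholds chosen asymmetrically, after the fact:
alpha_harm = 0.01     # strict bar when testing for harm
alpha_benefit = 0.05  # lax bar when testing for benefit

harm_significant = p_harm < alpha_harm           # False: "no evidence of harm"
benefit_significant = p_benefit < alpha_benefit  # True: "evidence of benefit"
```

Both thresholds are individually defensible; the mischief lies entirely in choosing them to suit the desired conclusion, which is why no single step need involve fabrication.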
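Selective publication (mechanism 4) can likewise be simulated. In this sketch, a "favorable results only" filter stands in for the decision not to publish null results; the filter and all numbers are assumptions for illustration, but they show how the published record drifts away from the truth even when every individual study is honest:

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

def study(n=30):
    """One small study of an intervention with NO true effect:
    the effect estimate is the mean of n zero-mean noise draws."""
    return statistics.mean(random.gauss(0, 1) for _ in range(n))

estimates = [study() for _ in range(500)]

# Crude stand-in for selective publication: only "favorable-looking"
# estimates above a cutoff ever see print.
published = [e for e in estimates if e > 0.3]

full_mean = statistics.mean(estimates)       # close to the true value, 0
published_mean = statistics.mean(published)  # biased upward by the filter
```

A meta-analysis that samples only `published` inherits the upward bias, which is the "unrepresentative sampling" problem described above.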

Again, none of these six mechanisms is best characterized or explained in terms of fabrication or manipulation of data. The influence of values that underlies the funding effect can be much more subtle than this, while still being worrisome.

Does that mean that the funding effect, and its various recognized and possible underlying mechanisms, give reason to think that the market in scientific research is noxious? That’ll be the subject of part three.


  • Brownell, Kelly, and Kenneth Warner. “The Perils of Ignoring History: Big Tobacco Played Dirty and Millions Died. How Similar Is Big Food?” The Milbank Quarterly 87, no. 1 (2009): 259–294.

  • Byravan, Sujatha. “The Inter-Academy Report on Genetically Engineered Crops: Is It Making a Farce of Science?” Economic and Political Weekly 45, no. 43 (2010): 14–16.

  • Elliott, Kevin Christopher. Is a Little Pollution Good for You? Oxford and New York: Oxford University Press, 2011.

  • Elliott, Kevin Christopher. “Selective Ignorance and Agricultural Research.” Science, Technology & Human Values (2012). doi:10.1177/0162243912442399.

  • Krimsky, Sheldon. “Do Financial Conflicts of Interest Bias Research? An Inquiry Into the ‘Funding Effect’ Hypothesis.” Science, Technology & Human Values (2012). doi:10.1177/0162243912456271.

  • Lesser, Lenard I., Cara B. Ebbeling, Merrill Goozner, David Wypij, and David S. Ludwig. “Relationship Between Funding Source and Conclusion Among Nutrition-Related Scientific Articles.” PLoS Medicine 4, no. 1 (2007): e5. doi:10.1371/journal.pmed.0040005.

  • Ronald, Pamela, and Raoul Adamchak. Tomorrow’s Table. Oxford and New York: Oxford University Press, 2008.

  • Steel, Daniel, and Kyle Powys Whyte. “Environmental Justice, Values, and Scientific Expertise.” Kennedy Institute of Ethics Journal 22, no. 2 (2012): 163–182.