How to Use Citations to Create Ignorance


Dan Hicks


September 4, 2013

I spent most of Monday working on some “deep literature analysis” for my research on genetically modified organisms for food, or GMOf. This meant, in practice, that I spent about two hours looking up the citations for a single article. It was quite dull work, but the results were very interesting from the perspective of agnotology, an emerging area of science studies that deals with the production of ignorance. In this post, I’m going to give you some background on the “feed the world argument” and agnotology, then present my findings.

First, a few quick paragraphs of background. The “feed the world” argument is probably the most prominent argument used by supporters of GMOf. It’s shown up in venues from The New York Times to Nature, and off the top of my head I can think of versions of it that date back to 2000. The argument, in brief, is that (1) world population will increase significantly over the next few decades, so (2) food production should be significantly increased; and (3) GMOf will increase food production; hence (4) we should develop GMOf.

Hugh Lacey has pointed out that this argument is invalid: the fact that GMOf will increase food production doesn’t imply that they’re the best way to do so. Unless we think that GMOf will work much better than methods based on organic agriculture, for example, we don’t have enough reason to develop GMOf (and not develop organics). (2) and (3) don’t provide enough support for (4).

Lacey’s critique grants premise (3), at least for the sake of argument. My work is more critical of premise (3) itself. It’s not that I think (3) is false. Rather, I think the ability of GMOf to increase food production depends a great deal on how the technology is developed, and that the way this research is funded tends to produce GMOf that don’t significantly increase food production. The “feed the world” argument holds up the promise of GMOf, but I suspect that this promise won’t be realized if we continue on our present course.

Next, agnotology. The term was coined by Stanford historian of science Robert Proctor a few years ago, to mean “the study of ignorance.” Proctor thinks that we science scholars shouldn’t just look at the way knowledge is produced, and shouldn’t just assume that ignorance is simply not (yet) having knowledge. Based on his work on tobacco, Proctor argues that ignorance is often actively produced, and in some cases is even produced as a “strategic ploy”. Think of the way tobacco industry executives and scientists publicly denied that nicotine was addictive and that smoking caused cancer, for decades after they had strong evidence to the contrary.

Work in agnotology would be especially valuable, I think, because it can be quite difficult to tell whether someone is producing knowledge or ignorance. Something that appears to be knowledge (like an article in a peer-reviewed journal, complete with numbers and citations) can actually be agnogenetic, creating ignorance. The agnogenetic strategy — the way of creating ignorance — that I’m looking at here works as follows: scholarly citations may be used to create a false appearance that a paper’s claims are trustworthy and well-supported.

I need to emphasize precisely what I’m arguing. My findings undercut the trustworthiness of the paper I examined, and so should make us question how well-supported its claims are. But I’m not going to provide evidence against any of its claims. Furthermore, I’m not claiming that the authors deliberately used this strategy to create ignorance. Indeed, I’m not even claiming that this paper does in fact create ignorance. My point is that the citation strategies used by the authors of this paper would be a very effective way to create ignorance, if someone were trying to do so.

The paper — Brookes and Barfoot, “The Global Income and Production Effects of Genetically Modified (GM) Crops 1996-2011,” GM Crops and Food 4:1 (Jan/Feb/Mar 2013), 74-83 — gives evidence that GMOf has already increased the production of maize (corn), i.e., in support of premise (3), as part of an argument that GMOf has produced economic benefits for farmers. Brookes and Barfoot are the founders and directors of PG Economics Ltd., which describes itself as “a specialist provider of advisory and consultancy services to agriculture and other natural resource-based industries.” PG Economics Ltd. has been hired in the past by such agricultural biotech companies as Monsanto and Dow AgroSciences, but the paper I was looking at is a report that they publish annually on the economic benefits of GM crops (both food and cotton) for farmers. Earlier versions of this report have been cited hundreds of times by scholarly sources, as well as by Monsanto and other proponents of GMOf in non-scholarly contexts. (Note that, in this paper, Brookes and Barfoot claim no conflict of interest; this seems to be consistent with the letter of the journal’s conflict of interest policy.)

The key evidence supporting premise (3) is given in Table 5, “Average (%) yield gains GM IR [insect-resistant] cotton and maize 1996-2011,” page 80. Brookes and Barfoot report modest improvements in core countries — 7.0% and 5.0% for maize in the US, for example — but really spectacular improvements in semiperiphery countries — 18.6% for maize in the Philippines, 21.0% in Colombia, for example. In addition, Brookes and Barfoot provide a list of several citations for each country in the far-right column of the table.

The numbers are impressive, and the list of citations creates the impression that they are the product of good scholarship. But they’re significantly larger than the numbers I’ve seen elsewhere, which is why I decided to actually look up their citations. Specifically, I checked the 12 citations dated 2003 or later for the four countries with increases greater than 10% for maize. Here’s what I found:

Altogether, Brookes and Barfoot provide some modest to weak evidence that GM maize was significantly more productive in South Africa, and some weak evidence that it was significantly more productive in Colombia. But many of their citations are inaccessible or have industry affiliations, making them untrustworthy. Furthermore, Brookes and Barfoot provide no real explanation of how they decided to use these particular studies. Their methodology section claims that “The report is based on extensive analysis of existing farm level impact data for GM crops, much of which can be found in peer reviewed literature.” (78) But only 4 of the 12 citations that I tracked down above were peer-reviewed, and it seems odd to call the analysis “extensive” when it looks at only a couple dozen of the thousands of GM productivity studies.
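As a back-of-the-envelope illustration of this kind of citation audit, here is a minimal Python sketch. The records are invented placeholders, not Brookes and Barfoot’s actual sources; only the headline counts (12 citations dated 2003 or later, 4 of them peer-reviewed) come from the audit above:

```python
from collections import Counter

# Invented placeholder records: (publication year, source type).
# Only the totals mirror the audit: 12 citations from 2003 on,
# of which 4 were peer-reviewed.
citations = (
    [(2004, "peer-reviewed")] * 4
    + [(2006, "not peer-reviewed")] * 8  # reports, conference papers, etc.
)

# Keep only citations dated 2003 or later, as in the audit.
recent = [c for c in citations if c[0] >= 2003]

# Tally the sources by type.
tally = Counter(kind for _, kind in recent)
print(len(recent), "citations checked;", tally["peer-reviewed"], "peer-reviewed")
```

The tally itself is trivial, of course; in the real audit, the two hours of work went into tracking down each source and classifying it by hand.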

Again, none of this means that Brookes and Barfoot’s numbers are false or aren’t actually well-supported. My point is that this paper isn’t very trustworthy: we can’t check many of their numbers, and many of the ones that we can check come from sources we might not trust. And yet the paper appears trustworthy — the list of citations is right there next to the numbers in the table. Recognizing that the paper isn’t very trustworthy — recognizing the kinds of sources the authors cited, whether they’re available, and so on — requires a very tedious exercise in tracking down citations.

These two features together — the appearance of trustworthiness and the difficulty of dismantling that appearance — make this use of citations a powerful agnogenetic strategy. A typical non-scientist reader of the paper probably wouldn’t go to the trouble of looking carefully at the citations. A scientist might look more carefully; but, especially if they just need a quick citation for the literature review section of their own paper, they might instead assume that untrustworthy sources would have been caught in peer review before publication. Yet this assumption may not be warranted: peer reviewers are busy faculty who aren’t paid for their work as reviewers, and checking citations is boring and time-consuming. So untrustworthy citations could easily slip past everyone.