Mathematics in everyday life

What does the case of Diederik Stapel tell us about high-profile journals?


The former Dr. Diederik Stapel – he has since relinquished his Ph.D. – published a series of papers in high-profile journals with simple, counterintuitive results. The articles made delightful news stories: the results were easy to explain, and offered interesting insights into human behavior.

A good example is the article “Coping with Chaos: How Disordered Contexts Promote Stereotyping and Discrimination.” The title pretty much tells the story: people are more prone to stereotype others if they are on a street with cracked sidewalks or litter, for instance. This paper has exactly what high-impact journals are looking for: it provides a counterintuitive, easy-to-understand result and offers new, solidly documented insights about a fundamental aspect of human nature. Stapel published a number of similar studies that told interesting yet simple stories and were supported by seemingly irrefutable data. The only problem was that most of the data was fabricated by Stapel himself.

Most of us reacted with shock when we first heard about this case. We can try to examine where the peer review process went wrong, and why Stapel’s collaborators didn’t notice that something was amiss. However, I think that perhaps we should not be surprised. Journals with a high impact factor look precisely for stories that can be told in a few pages, are simple to digest, and can be summarized in a one-minute news story. Is it surprising that eventually somebody decided to manufacture such stories?

Unfortunately (or fortunately), the world around us is complex. As John Muir said, “When we try to pick out anything by itself, we find that it is bound fast by a thousand invisible cords that cannot be broken, to everything in the universe.” And yet as scientists we frequently try to “pick out things by themselves”: isolate the one thing that we study from the rest, and pull at most one or two of those many cords at a time to see how the thing is affected. Sometimes we can capture the effect of the rest as “noise,” something that has a precise mathematical meaning but frequently only a vague analog in nature. Or we try to show that pulling the other cords may not have a large effect. But almost always the story that we tell is complex and full of caveats (controlled clinical studies are somewhat different, and I will address them at some later point).
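To make the precise mathematical meaning of “noise” concrete, here is the simplest textbook idealization – a minimal sketch, not a model from Stapel’s field: we pull one cord, call it $x$, and lump the thousand others into a random error term,

$$ y_i = f(x_i) + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2) \ \text{i.i.d.} $$

The assumptions on $\varepsilon_i$ (independent, identically distributed, mean zero) are perfectly precise on paper; whether the cords we did not pull actually behave that way in nature is exactly the vague part.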

The format and demands of a high-impact journal article make it difficult to communicate these complexities and caveats. I think this has changed the way we do science in certain fields. A friend who is a theorist and collaborates with high-profile experimental scientists told me that often, when he tries to give a complex explanation, his experimental colleagues lose interest – not because they do not think it is right or interesting, but because it is unlikely to make it into a high-impact paper.

Data suggests that results published in higher-impact journals are less reliable, as measured by the number of retractions and the sample sizes used. (My caveat here: studies in higher-impact journals are also more closely scrutinized, which probably also leads to a higher retraction rate, so the true story is probably much more complex.) I do not wish to argue against Occam’s Razor – there is good reason to go with the simpler explanation that fits the data. However, we ignore the complex web of interrelations around us at our own peril.
