Thinking Fast and Slow
I have just finished Daniel Kahneman’s intriguing book “Thinking, Fast and Slow”. Kahneman describes in detail the quirks that distinguish actual humans from “Econs” – the idealized, infinitely rational residents of the worlds in many economists’ theories. Not surprisingly, humans differ from Econs in many ways. We are not fully rational, we unwittingly use inappropriate heuristics, and when faced with a difficult question, we frequently and unconsciously answer a simpler one instead.
The fact that humans do not reason perfectly does not come as a surprise. However, Kahneman and his collaborators have spent decades trying to describe exactly what rules guide human reasoning and decision making – particularly in the cases where it departs from the fully rational. The book describes hundreds of experiments designed to uncover the rules that guide our choices. A number of these experiments are quite famous (“Gorillas in Our Midst”), but many were new to me. For instance, I found it fascinating that economists actually analyzed 1.8 million putts to see whether professional golfers are more accurate when going for a birdie or facing a bogey.
However, while I was reading the book, the results of one of the experiments it features were called into question. In a paper from 1996, John Bargh’s group at Yale presented subjects with groups of words and instructed them to pick out the odd words in each group. Surprisingly, when the words were related to being old, the subjects would walk away more slowly than when the words had no such connotations – even though the subjects may not have made any conscious connection. This is an example of priming.
A group has recently tried to replicate the experiment, and failed. John Bargh’s reaction to this result, and to its coverage, is a good example of how not to respond when somebody challenges your findings.
However, I wanted to address a different point: how many of these experiments can we actually trust? I am certain that they were carried out with the utmost care. The statistical analysis of the data is a weak point of quite a few studies, but let us assume that the scientists were competent. The problem is that we are still dealing with an enormously complicated subject – describing human behavior – which is governed by many factors that the experimenters cannot control. And we are trying to tease out what happens as a single parameter, or at most a few, is varied.
This is a daunting challenge. I am not surprised that Bargh’s results were not replicated. That does not even mean they do not hold – it may be that something in the new study negated the effect. The only way I see around this is to replicate such experiments in as many cases as possible.
One of the fundamental problems we are facing here is the question of what we accept to be true. This differs between disciplines, and I have the feeling that the social sciences are entering a time when their results will demand more rigorous demonstration. I suspect that some well-respected results will not survive this.
I still think Kahneman’s book is quite good. He does not tell an easy, straightforward story. There are many ways in which our minds trick us, and Kahneman tries to tease out just some of the main ones. These effects probably interact in numerous complex ways. In the end, I walked away from the book thinking that the mind uses a relatively complex collection of heuristics and other shortcuts that work most of the time, but can also fail spectacularly. The story is not simple, but perhaps this complexity suggests that it is closer to the truth.