Thinking, Fast and Slow, by Daniel Kahneman, is one of those books to read slowly and reread multiple times. It is full of fascinating and counterintuitive facts about how we think. That makes it sound dense. It isn’t. I found it very easy to read, devouring it in great gulps and having to stop myself from reading so fast that I missed the nuances.

The book is the distillation of years of collaboration between Daniel Kahneman and Amos Tversky. They met at the Hebrew University of Jerusalem, where they shared an interest in how well people think statistically. Are people generally instinctive statisticians, as they are instinctive grammarians? Or can you fool them by the way you ask statistical questions?

I don’t think I’m giving too much of the plot away to tell you that there are many ways in which people think wrongly about statistics.

Foundation of behavioural economics

This work, from two psychologists, eventually led to a Nobel prize in Economics for Daniel Kahneman (sadly, Tversky had died before he could share in the award). The work was the foundation of the massive field of behavioural economics, which, unlike much of regular economics, allows for the possibility that people are not always rational, and do not always choose sensibly between alternatives.

Kahneman says in his introduction that his aim in writing the book is to

Enrich the vocabulary that people use when they talk about the judgements and choices of others, the company’s new policies, or a colleague’s investment decisions… Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others.

And he does that superbly. The book has a structured way of thinking about thinking, and is chock full of examples of common ways in which everyone, no matter how statistically sophisticated, will make mistakes about statistical questions if they think too fast.

Framework

So to the framework. Kahneman talks about two different types of thinking.

  • System 1 operates automatically and quickly, with little or no effort, and no sense of voluntary control.
  • System 2 allocates attention to the effortful mental activities that demand it, including complex computations.

System 1 is your gut reaction: the answer that comes effortlessly to mind when you’re asked what 2+2 is. System 2 is the hard thinking you have to do when you’re asked to multiply 17 by 22.

Your mind is filled with shortcuts that will often give you the wrong answer if you leave System 1 to do all the work. How do you know when to engage System 2? How do you stop yourself from just using the System 1 answer? Most of the time, your brain is highly efficient in how it chooses between intuitive (System 1) and effortful (System 2) thinking. But sometimes you need to override the System 1 answer, even when it seems to make sense. The trouble is that System 2 is hard work, and we are all inherently lazy, so if an answer pops up from System 1, we’re inclined to go with it.

A classic example is this question:

A bat and ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?

The immediate answer that came to your mind was 10 cents. But in fact, the right answer is 5 cents. To get the right answer, you have to engage System 2 thinking and check your intuitive answer. Most people, when presented with this problem, don’t do that, because their intuitive answer seems right.
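You can see why the intuitive answer fails by checking both candidates. Here is a minimal sketch in Python (working in cents to avoid floating-point noise; the numbers come straight from the puzzle):

```python
# Intuitive answer: ball = 10 cents, and the bat costs one dollar more.
ball = 10
bat = ball + 100
print(ball + bat)        # 120 -- that's $1.20, ten cents too much

# Correct answer: solve ball + (ball + 100) = 110, so ball = 5.
ball = (110 - 100) // 2
print(ball, ball + 100)  # 5 105 -- a 5c ball and a $1.05 bat total $1.10
```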

Common statistical misconceptions

Now for me, the most interesting parts of the book were those dealing with common statistical mistakes – the areas where Kahneman and Tversky began their research.

Sample size and patterns in randomness

The first misconception is about sample size. The tendency to see patterns in randomness is overwhelming. Kahneman gives a real-life example.

A study of the incidence of kidney cancer in the 3,141 counties of the US reveals a remarkable pattern. The counties in which the incidence of kidney cancer is lowest are mostly rural, sparsely populated, and located in the Midwest, the South and the West.

So what does this imply? Your mind automatically starts wondering what it is about rural, isolated areas that might protect against kidney cancer – is it the fresh air? The lack of pollution? But then we move on to the second interesting result from the study:

Now consider the counties in which the incidence of kidney cancer is highest. These ailing counties tend to be mostly rural, sparsely populated and located in the Midwest, the South and the West.

The key factor in both the high and low incidence of kidney cancer is that these counties have small populations, which lead to much more variability in outcomes. But our brains don’t think that way. We want to find a cause, and we ignore sample sizes. If you had thought hard and invoked your System 2 mental processes, you might have overcome this tendency. But when an explanation comes easily to mind, it is hard to ignore it and keep thinking.
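You can reproduce the effect with nothing but random numbers. In the sketch below (the incidence rate and county sizes are made up, not the study’s data), every county has exactly the same true rate, yet the small counties still produce both the highest and the lowest observed rates:

```python
import random

random.seed(1)
RATE = 0.0001  # hypothetical true incidence, identical everywhere

def observed_rate(population):
    """One simulated county: the fraction of residents who get the disease."""
    cases = sum(random.random() < RATE for _ in range(population))
    return cases / population

rural = [observed_rate(1_000) for _ in range(200)]    # small counties
urban = [observed_rate(100_000) for _ in range(200)]  # large counties

print(min(rural), max(rural))  # 0.0 at one end, many times RATE at the other
print(min(urban), max(urban))  # both extremes stay within a few-fold of RATE
```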

Bayesian estimation using proportions and stereotypes

On to a trickier example. Imagine a cab was involved in an accident. Two companies, Green and Blue, operate in the city; 85% of the cabs are Green, and 15% are Blue. A witness identified the cab as Blue. Based on past experience, the witness has an 80% chance of being right. What are the chances that the cab in the accident was Blue? You probably instinctively said 80%.

Now imagine the facts of the story change. The two companies, Green and Blue, are equally distributed. But the Green cab drivers are terrible drivers, and are involved in 85% of the accidents in the city. The witness still says the cab was Blue, with an 80% chance of being correct. What are the chances that the cab in the accident was Blue? Now you’re not sure, and you think the chances are about even that the cab was Blue or Green, because you know those Green drivers are maniacs!

The actual answer in both cases is that the chance of the cab being Blue is 41% (using Bayesian estimation: 15% times 80%, divided by (85% times 20% plus 15% times 80%)). But notice how you ignored the base rate when it was presented as a bare proportion, yet once it was changed into a stereotype (those Green drivers are maniacs), it changed how you thought about the problem! Almost everyone gets this type of problem wrong, even when they know how easy it is to get wrong. The trick, if you are trying to get people to think about a problem correctly, is to create a stereotype (Green drivers are maniacs) rather than a proportion (85% of the cabs are Green).
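Here is that Bayes’ rule arithmetic as a short Python sketch (the function name is mine; the numbers are from the example). In the second version of the story, the 15% accident share plays the role of the prior, which is why the answer doesn’t change:

```python
def prob_blue(prior_blue, witness_accuracy):
    """Chance the cab was Blue, given the witness says it was Blue."""
    prior_green = 1 - prior_blue
    # The witness says "Blue" either correctly (the cab was Blue)
    # or mistakenly (the cab was Green).
    blue_and_says_blue = prior_blue * witness_accuracy
    green_and_says_blue = prior_green * (1 - witness_accuracy)
    return blue_and_says_blue / (blue_and_says_blue + green_and_says_blue)

# 15% of cabs are Blue (or: Blue cabs have 15% of accidents); witness is 80% reliable.
print(prob_blue(prior_blue=0.15, witness_accuracy=0.80))  # ~0.414
```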

Regression to the mean

We all tend to overestimate causation. If we see a great performance from a fund manager, we assume that they are excellent at picking stocks. Often a highly rated fund manager gets a whole lot more money to manage after their good performance. Then their performance deteriorates back to the average. We make up a reason – fund managers can’t outperform the market if they have too much money to manage – when in fact the more likely explanation is that the initial outperformance and the subsequent decline were both just random events.
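A toy simulation shows how strong this illusion can be. In the sketch below (all numbers are made up), every manager’s return is pure noise, yet the year’s “stars” look brilliant – until the next year:

```python
import random

random.seed(42)
N = 1_000  # hypothetical fund managers

# Every manager's yearly excess return is pure luck: noise around zero.
year1 = [random.gauss(0, 0.10) for _ in range(N)]
year2 = [random.gauss(0, 0.10) for _ in range(N)]

# Rank managers by year-1 performance and take the top decile.
top_decile = sorted(range(N), key=lambda i: year1[i], reverse=True)[: N // 10]

avg1 = sum(year1[i] for i in top_decile) / len(top_decile)
avg2 = sum(year2[i] for i in top_decile) / len(top_decile)
print(f"star managers, year 1: {avg1:+.1%}")  # strongly positive, roughly +17%
print(f"same managers, year 2: {avg2:+.1%}")  # back near zero
```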

Fitting the statistics into the framework

In all of the examples above, your System 1 thought processes came up with a very quick, and wrong, answer. System 1 did not evolve to deal with complex probabilistic and statistical reasoning. So one of the big lessons for me was that whenever a question comes up that involves thinking statistically, even if the answer seems obvious, it is important to check it using System 2.

So should you read the book?

Kahneman hits a perfect balance of entertainment and education. He structures the book around questions put to you, the reader, because (as he explains in several examples) you are more likely to learn if you are surprised by a personal result than by a generic result about other people that you can ignore. And he describes his experiments, his thought processes for drawing conclusions, and various disagreements he’s had with other academics, all in an entertaining, chatty style that keeps you turning the pages. I’ve only scratched the surface in the examples quoted above. There are many more that make you stop and think about your own decision-making blind spots.

If you’ve ever read a book about decision making, choice theory, common fallacies in the way people think, or behavioural economics, you’ve read some of the examples in this book. But if you are at all interested in the subject, this is the book you should read. You might think that a book written by the academic who came up with much of the original research would be harder going. On the contrary: it is a wonderful, insightful, thought-provoking book.