Bunkobons


Judgment under Uncertainty: Heuristics and Biases

by Daniel Kahneman, Paul Slovic & Amos Tversky


Recommended by

"Daniel Kahneman, Paul Slovic, and Amos Tversky were three psychologists who studied judgement and decision-making – how we assess uncertainty in our lives and how we make decisions based on that. That’s a topic that has been studied by economists and psychologists for a long time – but for the longest time people would study it with normative models. They would have a model for how people should behave and they would see if people followed the model. Kahneman, Slovic and Tversky did a series of experiments which started with studies that were pretty complicated. They asked professional statistical researchers and research psychologists who were doing real data analysis the kind of questions that might be conceptual questions on a statistics final exam. The kind of questions that might be hard if you don’t know statistics, but shouldn’t be hard if you’re a pro. What they found is that the pros were getting it wrong. This is always interesting to me. When someone who doesn’t know anything makes a mistake, it’s sort of boring. But when someone whose job it is to get things right gets it wrong, that’s interesting. When someone who has every incentive to get things right gets it wrong, it makes you think there is something going on cognitively, that there’s a cognitive bias. They did a series of studies that started with fairly complicated questions about statistical significance that people were getting wrong and they boiled them down, over the years, to simpler and simpler questions that people couldn’t get, sometimes called cognitive illusions. This book was the first place that a lot of these things were published. It came out in the early 1980s, and it’s a collection of articles. It has about 25 different chapters by different people, the top people in the field describing all sorts of experiments. I like to say that this is the best-edited book that I’ve ever seen, at least since the New Testament.
It has become gradually more popular over the last few decades; now it’s sometimes called behavioural economics, but it’s basically psychology. I think it’s just incredible – studies of overconfidence, of how people estimate uncertain quantities, the importance of framing – I could give you a million examples of where if you describe a decision option in a different way, people make a different choice. The kind of questions they were getting wrong had to do with uncertainty. One question was – you have a large hospital, every month they have a number of boys and girls that are born, and there’s some variation in the percentage of boys that are born in each month. The basic statistical idea is that the larger the hospital, the less variation in the percentage of boys or girls that are born every month. A lot of people know that if you have a large sample, your standard deviation is smaller and more stable. But somehow they asked it in a very natural-seeming way so that everybody would get it wrong. People were expecting a level of stability that wasn’t occurring. The naive thing is that people believe in the law of averages, so they think that if the roulette wheel goes black three straight times it’s likely to go red. We all know that’s not going to happen, unless it’s a rigged roulette wheel (and roulette wheels generally aren’t rigged because people don’t need to rig them to make money, so that’s not usually an issue). This is a more sophisticated version of the same mistake, in an experimental context. It turned out that psychologists were expecting that their experimental results would automatically balance out, in a way that someone with statistical training should know will not really happen. This is a different example from the babies – the mathematics is similar but it’s a different framing – and it had to do with, if you’re doing a research study and you’re expecting a certain result, how likely is it that you get something similar to what you expect?
And people overestimated how similar it would be to their expectations. The researchers knew about the idea of uncertainty and statistical significance, but they tended to think of it more as an obstacle to be overcome rather than a true bit of uncertainty that they had to address in real life. Some of the more recent studies involved what they call almanac questions – for example, they’d ask people the date of an uncertain historic event, or the population of Saudi Arabia. They would ask, ‘In what year did the State of Tennessee join the United States?’ Well, you know it’s some time later than 1776, but you have to guess. Before giving people the question they prompt them with a number – but the number will be unrelated. They’ll mention 1822 in passing, but say either explicitly that it’s a random number or they just slip it in in a different way. Then it turns out people use that number. Not that they say Tennessee joined the Union in exactly 1822, but their answer will be closer to 1822 than if you give them a different prompt, say 1799. Our brains are just machines, so it makes sense that we just use whatever information is there. But it’s not really appropriate decision-making. This work started out as a bit of a curiosity in the field of psychology, but we get a lot of insight from it; it is absolutely essential to understanding how humans think. Just as visual illusions give you insight into how the brain sees things, cognitive illusions show us the shortcuts that our brain uses to make decisions. It’s fun to read. I get a little upset that a lot of this has gone into, and people talk about, behavioural economics and nudging people. That stuff is fine too, but it’s really much broader than that. Economics is on everyone’s mind right now, but it’s not just about economics. The phrase that Bill James has is that the alternative to doing good statistics is not no statistics, it’s bad statistics.
Bill James had an ongoing feud with various baseball writers who put down statistics. He would write about these people who would say, ‘Statistics are fine, but what you really need to do is see people play. Baseball is about athleticism and heart, and it’s not about numbers.’ What Bill James pointed out is that the people who said this, when they talked about their favourite players, would talk about their statistics. So-and-so batted .300. So they were relying on statistics, but just in an unsophisticated way. They’re still using the written record. To say that you don’t want to use statistics – that’s just not an option. I was at a panel for the National Institutes of Health evaluating grants. One of the proposals had to do with the study of the effect of water-pipe smoking, the hookah. There was a discussion around the table. The NIH is a United States government organisation; not many people in the US really smoke hookahs; so should we fund it? Someone said, ‘Well actually it’s becoming more popular among the young.’ And if younger people smoke it, they have a longer lifetime exposure, and apparently there is some evidence that the dose of carcinogens you get from hookah smoking might be 20 times the dose of smoking a cigarette. I don’t know the details of the math, but it was a lot. So even if not many people do it, if you multiply the risk, you get a lot of lung cancer. Then someone at the table – and I couldn’t believe this – said, ‘My uncle smoked a hookah pipe all his life, and he lived until he was 90 years old.’ And I had a sudden flash of insight, which was this. Suppose you have something that actually kills half the people. Even if you’re a heavy smoker, your chance of dying of lung cancer is not 50%, so even with something as extreme as smoking and lung cancer, you still have lots of cases where people don’t die of the disease.
The evidence all around you certainly points in the wrong direction – if you’re willing to accept anecdotal evidence, there’s always going to be an unlimited amount of it, and it won’t tell you anything. That’s why the psychology is so fascinating, because even well-trained people make mistakes. It makes you realise that we need institutions that protect us from ourselves… If you’re a research psychologist, you need the institution of formal statistics to protect you from your false intuitions, which, left unchecked, will lead you to make all sorts of mistaken claims. Similarly for medical research: it’s very easy to fool oneself, even if you’re well trained. This man didn’t realise that even if hookah smoking doesn’t kill every single person, it can, potentially, still be a problem."
Statistics · fivebooks.com
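The hospital example in the quotation above can be checked with a short simulation. This is a sketch, assuming the figures from Kahneman and Tversky's original version of the problem (roughly 15 births a day at the small hospital, 45 at the large one, counting days on which more than 60% of the babies are boys):

```python
import random

def share_of_days_over_60pct_boys(births_per_day, n_days=10_000, seed=0):
    """Simulate n_days of births (each baby a fair 50/50 boy or girl) and
    return the fraction of days on which more than 60% were boys."""
    rng = random.Random(seed)
    over = 0
    for _ in range(n_days):
        boys = sum(rng.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.60:
            over += 1
    return over / n_days

small = share_of_days_over_60pct_boys(15)   # small hospital
large = share_of_days_over_60pct_boys(45)   # large hospital
print(small, large)  # the smaller sample swings past 60% far more often
```

With these numbers the small hospital exceeds 60% boys on roughly 15% of days against roughly 7% for the large one: exactly the instability the original question was designed to show people underestimating.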
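The hookah anecdote makes a point that is easy to put in numbers: even an exposure that killed half of everyone exposed would leave an enormous supply of 90-year-old counterexamples. A minimal sketch, where the 50% fatality rate is the deliberately extreme figure from the quotation and the population of one million users is a hypothetical of ours:

```python
# Hypothetical population of heavy users; the 50% fatality rate is the
# quotation's deliberately extreme "kills half the people" scenario.
heavy_users = 1_000_000
fatality_rate = 0.50

survivors = int(heavy_users * (1 - fatality_rate))
print(f"{survivors:,} survivors available as anecdotal 'evidence'")
```

Half a million living counterexamples, even for a risk far worse than any real carcinogen, which is why 'my uncle lived to 90' carries no information either way.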
"This is one of the most influential books in modern economics. But first of all, it’s just this list of incredibly clever experiments. They don’t use any fancy tools – there are no microscopes or telescopes involved: Kahneman and Tversky just asked their undergraduates hypothetical questions. So how much would a student want in return if they were betting $1 on the flip of a coin? You can’t get a simpler question to ask in a science experiment, and yet that very simple question eventually led to a thing called ‘loss aversion’. And this is now viewed as a very important phenomenon – with implications for everything from how taxi-cab drivers think, to how people act when they evaluate their stock portfolio. So what I like about this book is how they took these very simple protocols – really just idle conversation with students – and transformed them into the first really hard proof that people consistently violate the expectations of rational agents. That we don’t think like homo economicus at all. That our behaviour, our responses to very simple questions, don’t look at all like what a rational person would do – there are these deep inconsistent flaws in the human mind. It makes no rational sense to have such a strong loss aversion, or to be so vulnerable to any one of the long list of biases that Kahneman and Tversky demonstrated. But across the board, in large samples, this is the way people responded. So it’s an incredibly powerful piece of work that really showed that people aren’t just occasionally irrational, they don’t just act stupidly when they’re in the midst of a bubble. Irrationality is embedded deep into our operating system. Yes, it’s a very academic book. But it happens to be about as accessible as a bunch of academic papers can be, simply because it’s just fun to go through and do the hypotheticals – these questions they’re giving to undergraduates at Hebrew University in the mid-1970s – and then testing yourself against them.
And the collection does a very nice job of mixing together the original papers with subsequent results in the field of economics which then take, for example, loss aversion and apply it to the real world. So you can see how this actively influences the decisions of mutual fund managers, with very important negative consequences. And this book not only pointed out this core irrationality, but really changed the economics field as well. Framing is actually an offshoot of loss aversion. So there are different ways of framing a question, and one way to demonstrate loss aversion is that the ultimate loss is, of course, death. So if you go to doctors and ask them to choose between options, and one, the riskier option, is framed in terms of saving people, and the other in terms of people dying, most doctors will risk everything on the all-or-nothing approach. Even when it’s the exact same numbers, if it’s framed in terms of death, people are twice as likely to avoid that option. Because framing the question in terms of losses, making us even think about death, is so ugly, it feels so bad to us, that the person thinks, ‘Oh I’ve got to go for the risky approach.’ And Kahneman and Tversky argue that it does indeed affect the way doctors discuss, for instance, cancer treatments. You can get doctors who work in cancer wards to think very differently about treatment if you frame it as a five per cent chance of surviving, or a 95 per cent chance of dying. We’re given all these statistics, but the human mind wasn’t designed very well to deal with statistics. What we’re left with is this feeling. A feeling of either fear – that’s a risk we’re taking – or that’s a potential gain I should pursue. A lot of it really is about these emotions which, in the end, drive our decisions. So simply by reframing the question one way or the other, you can dramatically influence these feelings.
Human beings really aren’t rational agents for the most part, because we’re actually being driven by these emotions triggered by dreams of losses or gains."
Decision-Making · fivebooks.com
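The loss aversion both recommenders describe is usually formalised with a value function that weighs losses more heavily than gains. A sketch using the functional form from Kahneman and Tversky's later prospect-theory work, with their commonly cited parameter estimates (α ≈ 0.88, λ ≈ 2.25) – illustrative defaults, not numbers taken from this book:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains,
    convex and steeper (by the loss-aversion factor lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A fair coin flip risking $1: the felt pain of the loss is about 2.25x
# the felt pleasure of an equal gain, so the bet is refused until the
# potential gain g satisfies prospect_value(g) == -prospect_value(-1).
needed = 2.25 ** (1 / 0.88)  # solve g**0.88 == 2.25  ->  g ≈ 2.51
print(prospect_value(1), prospect_value(-1), round(needed, 2))
```

With λ ≈ 2.25, the flip only becomes subjectively acceptable when the potential win reaches about $2.50 – the roughly two-to-one premium seen in the betting questions the second quotation describes.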