Bunkobons

The Illusion of Conscious Will

by Daniel M. Wegner

Recommended by

"There is a philosophical conception of free will, but here’s what most of us mean by it: I’m about to decide to pick up this pencil, I reach forward, I pick up the pencil, I freely chose to do that. The Illusion of Conscious Will is ostensibly a social psychology book, as opposed to an evolutionary psychology book. Wegner was at Harvard, and was one of the greatest ever psychologists. He argues that we are usually mistaken about this impression of conscious will; we usually infer an intention to do something from our actions, rather than from actually being able to consciously make something happen. He pulls together a lot of evidence. You’ll know about the old cognitive dissonance experiments—that when you get paid for doing something, it changes why you thought you did it. But there are loads of these experiments. There are split-brain experiments; in the past, epileptics sometimes had the corpus callosum cut. The corpus callosum is a bundle of nerve fibres that connects the left and right hemispheres of the brain. After the operation, the two halves of the brain can’t communicate as freely as yours and mine can. The right half of your brain handles your left visual field and controls your left arm, and vice versa. And most of the generation of language takes place in the left brain. So if you do something with your left brain, using your right side, it’s easy to explain why you did it. But with these people who’ve had the corpus callosum cut, if you show things to the left side of their field of vision, get them to respond with the same side of their body, and then ask them to explain why they did it, they’ll invent explanations which seem entirely plausible, but are obviously wrong. In the most famous example, scientists showed a picture of a snow scene to the left visual field—so, right brain—and a chicken’s claw to the right visual field. Then the patient had to choose a picture to go with the scenes, one on each side.
The right half of the brain—so, the left arm choosing from pictures in the left field—chose a shovel, which makes sense to go with the snow scene. But the patient now has to explain why they chose the shovel. This is done with the left brain, which saw only the chicken’s claw and now sees the shovel. Does it just admit it’s flummoxed? Not at all. The patients confidently explain that you need a shovel to clean out the chicken shed. We can see that’s really odd because of the experimental set-up. But that’s what we’re doing much of the time—coming up with outwardly plausible explanations for why we do what we do that don’t necessarily have much to do with why we really do what we do. There are dozens and dozens of experiments, and mostly they’ve been left isolated. There’s cognitive dissonance stuff, self-perception stuff, all this left-brain interpretive stuff. Then there are some slightly more fanciful findings from hypnotism and so on, and he pulls them together. I think this really hammers home the idea that the order of events isn’t ‘I consciously willed to do something, and then I did it.’ It’s much more often: I did something, I inferred why I did it, I created a coherent explanation for it. This is really important for evolutionary psychologists to know, for two reasons. Firstly, I think evolutionary psychologists sometimes cut a corner. For example, looking at mating strategies, they might interview 1,000 men and show them pairs of pictures and say: ‘Which of these images do you prefer?’ Or, they might interview 1,000 women and say: ‘Would you be willing to have an affair or not?’ Then they’ll infer differences. That’s all very sensible if what people say, and what they are aware of, directly influences what they would actually do. Because it’s the doing that’s important—actually having sex and producing children, not saying who you would be more attracted to.
If what we are now saying is that what we genuinely believe is inferred more from the outside in, you would expect people’s answers to tend towards what they think culture expects them to say. You might expect them often to be just fundamentally mistaken, and then you’re starting to completely blur the thing that leads to evolutionary consequences. I think that happens a lot. Reading The Illusion of Conscious Will would stamp out a lot of that. The second reason is that it sets up an enormous question for evolutionary psychologists. Most of what’s studied are the obvious questions: When will you kill somebody? When will you feud over something? What will your mating strategies be? They’re sort of obvious, because biologists have already studied them in animals. A lot of it is really asking: how do these theories apply to humans? That’s why Daly and Wilson’s book is so great. We can run the stats and say, oh, actually, it looks remarkably similar in many ways. But here’s the question: if we’ve only got the illusion of conscious will, why do we have that illusion? I’m not sure Wegner’s answers are really coherent. It’s still a big question. One of the proposed answers is that it helps us take responsibility for our actions and gives us more self-confidence, because we see we’re having an effect on things. But that doesn’t make sense. If having self-confidence were a good thing, why not just have a module that tells you to be self-confident? It’d be crazy to have all this complexity concocting a whole other story just to convince yourself of something that could be done very simply. The illusion clearly requires a lot of specialised brainpower, so it’s doing something for us. The fact that we don’t know what that is makes it the most fascinating of all questions, I think. That’s why I included it here. It has to have an evolutionary answer. It’s on the evolution of morality.
A bit like language, it’s one of those things where it seems really odd to say it evolved. In the same way that you’ve got Spanish speakers and Chinese speakers and English speakers, you have people who are Utilitarians, you’ve got duty ethics people, you’ve got people who say their morals come from religion. You’ve got people who think abortion is murder, and those who think it’s a right. But the mechanism for holding morals, and for acting upon them, and for judging people, can have evolved, and does seem to have evolved. Going back to the Bingham book: we punish people who step out of line. So it’s very important that you don’t do things which cause other people to punish you, and that you choose the right side to be on when you’re ganging up with your rocks to punish someone else. So we know that there will have been pressure to behave morally, in the way other people think of it. We’ve probably known for a while—philosophically at least—that the types of moral truths that most people believe in can’t, in a scientific sense, be true or false. They don’t seem to behave in that way. There are, I suppose, a couple of issues. The first is: does being mistaken about morality actually harm us? Our mechanisms for holding morals arose tens of thousands of years ago, long before Utilitarianism or any of the modern religions, as did language. Having language is still a good thing in modern society. With a mechanism for holding morality, we might think that, on balance, there are some good things about it. But there might be some quirks. I argue there are some quirks in morality that actually make life a bit harder. It’s something that caused you to bond into groups, when you lived in tribes. That’s not so helpful in modern cities, where we’re all bunched into political groups, or racial groups, and so on.
Secondly, if you’re seeking a connection between these different books, what excites me so much is that when we predict how other people are going to behave and whether to punish them, we infer minds in them. I can’t see inside your brain, so I’ve got to create a mind for you based on your expressions, the things you say to me… that changes how I judge you and how I treat you. And if you’re doing the same thing about me, it’s probably quite useful for me to have a bit of my brain that’s working out what you’re inferring about me from my actions. That’s one way of explaining the evidence that Wegner pulled together, that we have this illusion of conscious will and we infer the causes of our actions, not because we actually need to do this to work out why we did what we did—our brain probably doesn’t need that information and could collect it from its own modules—but because, in the same way I’m building a model of your mind, reading your mind, it’s helpful for me to have some sense of what you’re thinking about me. That observation fits very naturally with the evolution of morality. I think it gives an evolutionary explanation for the otherwise odd things that Wegner has collected. Just as Steven Pinker had to get rid of people’s idea that they already knew what language was about, I had to get rid of this idea that everybody knows what morality is about—that it’s about being nice to each other, nothing to do with evolution at all. This is an intriguing thing. If you look at rich people, and at more creative people, they both seem to have fewer morals. You can measure this. Who doesn’t stop at traffic lights? It’s the people in the nice cars. Who takes more money? It’s the people in the higher social class. This seems to cut across the idea of morality altogether, because morality would never have evolved unless you were more successful by being moral.
So something’s gone wrong today, which means it’s inverted. We’d only evolve to be less moral if the people whom society sees as successful, the more creative people and the rich people, were also having more children. And I don’t think there is evidence of that. In fact, generally, the evidence is the other way: poorer people tend to have more children. So in a perverse way, you’ll probably end up evolving to be more moral, because being more moral makes you poorer, and when you’re poorer, you have more kids."
Evolutionary Psychology · fivebooks.com