

The Technological Singularity

by Murray Shanahan



“This is a very different sort of book. The ‘technological singularity’ is explained in slightly different ways, but it is roughly the point in AI development where things run away and we can’t predict what happens next. Imagine that we’ve created an AI that is cleverer than us. With a person who is cleverer than you, you probably don’t know what they’re going to do next. So, with an AI that is cleverer than us – especially one with a positive feedback loop, which is developing and learning itself – we might be in a situation where we really can’t predict outcomes. That’s one way of looking at the singularity.

This book is quite short, and it’s very well explained. One of my irritations about much of the debate about AI is that some people come along and say: ‘the superintelligence is just around the corner – any minute now we’re going to be enslaved to robots. There are going to be killer robots and drones everywhere, and nanoparticles that you’re going to breathe in, and they’re going to go into your brain, and this is all going to happen by 2035.’ And other people are saying ‘relax, there’s nothing to worry about; there’s nothing to see here.’ Of course, ordinary people have absolutely no way of knowing who’s right.

What Shanahan does in this book is to look at exactly how we might map what intelligence is, at different approaches to doing that, and at how we might build up intelligence in different ways. He looks at this in quite a lot of detail, but very simply and very clearly. He looks a lot, for example, at what it would mean to model a mouse brain and what exactly you’d have to do to do that – mouse brains are much, much simpler than human brains. I think it’s useful and interesting material for understanding better what’s going on in the debate.
So, he’s taking us through the fundamental nuts and bolts of how we might reproduce intelligence. For instance, he talks about whole brain emulations…

There are different ways of trying to produce AI. An emulation is the attempt to make a precise copy of a brain in another form, in an attempt to create the brain’s intelligence artificially. You’d start by looking exactly at how the brain works – at its physiology – and try to replicate that, so that the emulation performs the brain’s functions as closely as possible to the way the brain itself does. And I suppose there might be a question as to whether you could take something which operates in a biological medium and reproduce it exactly in a non-biological material. But a different approach would be to ask what intelligence fundamentally is, in terms of what we’re able to do, and see whether you could construct something, maybe in a completely different way, that is capable of that. Machine learning, for instance, might be able to solve problems but we might not know how it’s doing it. It could do it in a way that is completely different to how our brains work.

By looking at those different ways, it helps us to see why there are disputes about whether we might be able to exactly reproduce human intelligence, or whether it might be different to us in some fundamental way.

Yes. As a background to looking at the ethical issues, he gives a thorough understanding of the differences between a biological intelligence and a possible non-biological intelligence, and of how biological limits might constrain us in certain ways. There are also questions about how our intelligence is linked to our actual bodies. With the brain, for instance, you can’t just take it out of your body; it wouldn’t function the same way.
There are hormones, and all the rest of it, that are influencing how you behave; our actual embodiment is important. I think that’s a good grounding for thinking about ethics in AI, because a lot of the central ethical questions revolve around what’s going to happen when we replace humans with AI or enhance humans with AI. I suppose you could rephrase it as asking ‘what happens when we replace biological human intelligence or agency with something which is non-human?’ So, I think raising the question of how our biology affects who we are and how we relate to the world is really important to approaching the ethical questions.

And there’s a lot more detail besides.

Yes, precisely. One of the things that I like about Shanahan’s book is that he gets to the deep questions about ethics: he ends by saying that AI raises the question that Socrates raised about how we should live. Some of the questions that Murray Shanahan looks at later in the book arise in more futuristic scenarios, which are linked to ethical questions and then to questions about the nature of personhood. A lot of philosophers have been really interested in this. Supposing, for example, we could somehow upload someone’s mind to a computer; then you immediately get all those questions about what happens if it splits. Once it’s on a computer, you can just copy it. But what then? Which copy is the real you? Should we treat the various copies equally, and so on? One of the points that he makes is that AI is turning the thought experiments that philosophers like Bernard Williams and Derek Parfit have proposed – about brain splitting, personal identity, and so on – into real possibilities.
AI can potentially realise some of those philosophical conundrums. It has that feature in common with John Havens’ book: its willingness to take what are currently science fiction tales seriously as tools for thought.

This is a book that you might not think at first is about AI per se, but it’s got very close links with important ethical issues in AI.

One of the reasons why I’ve put this on the list, apart from the fact that it’s really interesting and important in its own right, is because AI certainly makes use of algorithms and a huge amount of data. One of the things that we need to think about is not simply the big-picture Terminator situation where we’re going to get gunned down by killer robots, but also questions about the automated calculation of results – which AI can only speed up – and about how that’s raising really important ethical and social questions that are already here with us now.

Cathy O’Neil is a very interesting person. In a way, she’s a bit like somebody who used to belong to a terrorist cell, saw the light, and is now spilling the beans. She was working with hedge funds when the financial collapse happened in 2008, and came to a realisation about how the use of numbers to manipulate data was having significant effects on people’s lives. As a result, she became a data analyst and, later, worked directly on exposing the issues. Again, as with John Havens’ book, hers includes many really interesting and gripping examples – in her case, from real life – about how the use of algorithms has affected people’s lives and occasionally ruined them. For example, she starts off by talking about how schools in the US introduced mechanised mathematical ways of scoring based on students’ test results, in an attempt to check that teachers were doing a good job.
One of the things that was in the algorithm was how well the students had done compared to how they did the year before. So, some of the teachers whom everyone thought were really good educators could end up with a very low score. Some teachers were even sacked because of low scores. Some found that they’d get a six one year and ninety-six the next. What happens in this sort of situation is that people try to game the system. We’re not yet talking about sophisticated AI in this example, but these are the sorts of things that AI can easily magnify. Teachers would game the system and do their students’ homework for them, or make the tests easier, or make certain that they had kids who did really badly last year so that they could add a lot of value. O’Neil has numerous examples of how the system can be gamed.

Another example that people in AI have talked about a lot is algorithms that try to determine likely rates of recidivism, which are then used in sentencing. These are very real issues. A recent court case in Wisconsin concerned the COMPAS algorithm used to determine sentencing, because it appeared to be biased against black people. You could have lots of algorithms assessing insurance risk, for instance, which can end up being biased against people who live in certain areas. She also looked at an algorithm which measures the likelihood of someone staying in a job. One of the factors which makes you more likely to stay in a job is how close the job is to where you live.

So, one of the things I really like about the book is how she shows that what seem to be mathematical decisions – made with the lure of rationality, the lure of tech, and the lure of numbers, which seem to be objective – can end up incorporating poor decisions of the past, or incorporating values or leaving them out, without us consciously realising.
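The volatility in the teacher-scoring example can be illustrated with a minimal sketch. This is not the actual model O’Neil describes – the function and the data below are invented for illustration – but it shows how a naive “value-added” score, built on the year-on-year change in a small class’s average, can swing a teacher from near zero to near one hundred through cohort noise alone:

```python
def class_average(scores):
    """Mean test score for one class of students."""
    return sum(scores) / len(scores)

def value_added_score(prior_year, current_year, max_swing=25.0):
    """Rescale the change in class average to a 0-100 teacher score.

    A drop of max_swing points maps to 0, no change maps to 50,
    a gain of max_swing maps to 100; results are clamped to [0, 100].
    (Hypothetical scoring rule, invented for this sketch.)
    """
    delta = class_average(current_year) - class_average(prior_year)
    raw = 50.0 + 50.0 * (delta / max_swing)
    return max(0.0, min(100.0, raw))

# The same teacher, two years running. Small cohorts make averages noisy:
# one year the students arrive strong and regress, the next they arrive
# weak and improve, and the "value added" swings from floor to ceiling.
print(value_added_score([85, 90, 88], [60, 65, 62]))  # 0.0
print(value_added_score([40, 35, 42], [80, 85, 78]))  # 100.0
```

The sketch also makes the gaming incentive visible: the surest way to a high score is to start from students who did badly the year before.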
The author is very clear about how something which just looks like a computer programme designed to produce a good outcome is imbued with values, and may well reproduce and entrench those values. She clearly links the tech to the social issues, and also shows, in example after example, how it’s often the same group of people who don’t get insurance, don’t get a good education, don’t get the job. It’s the same people who often end up getting discriminated against. I think that’s a really important aspect of AI that we need to look at: how we’re using algorithms. Machine learning is only as good as the data set that you’ve got. So, if we’re starting from where we are now and having a big take-off in using this way of making decisions, what we might not get is some glorious future. What we might get instead is some dystopian version of the present, because we’re just reproducing the data that we’ve already got, and sometimes amplifying its biases.

I don’t think she’s pessimistic. She’s just warning that we need to do this properly and make very clear what’s going into it. Another thing I really liked about the book is that she explains the history of how things came to be as they are. I read it on the train on the way back from visiting a university open day with my daughter, and I read the chapter about how various university rankings began. As soon as it’s pointed out to you, it’s obvious that the whole thing is a scam. If you are near the top of the list then you stay near the top of the list, because people apply to your university because it’s at the top of the list. Universities will do things to try to game the system. O’Neil emphasises throughout how values are embedded in algorithms, like it or not. The university ranking system helped to create an appalling runaway problem because fees were not included, so there was no incentive to keep these low.
She explains the history of how this happened. It all began in America because a second-rate magazine had run out of things to write about and decided to run an article on which the best universities were. It all took off from there. It’s just insane. So, one of the things I think is really important in ethics in AI is that we develop a clear and informed view of social history, keep track of why we make particular decisions about tech, and notice how this can run away with us.”
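The earlier point that machine learning is only as good as its data set can also be made concrete with a toy sketch. The data and names below are invented, not taken from O’Neil’s book; the “model” here is nothing more than the historical approval rate per postcode, yet that is enough to automate a past pattern of discrimination:

```python
from collections import defaultdict

def train(history):
    """history: list of (postcode, approved) pairs from past human decisions.

    Returns a 'model': the historical approval rate for each postcode.
    """
    totals = defaultdict(lambda: [0, 0])  # postcode -> [approved, seen]
    for postcode, approved in history:
        totals[postcode][0] += int(approved)
        totals[postcode][1] += 1
    return {pc: a / n for pc, (a, n) in totals.items()}

def predict(model, postcode):
    """Approve whenever past decisions mostly approved applicants from there."""
    return model.get(postcode, 0.0) >= 0.5

# Suppose past decisions favoured postcode "A" and penalised postcode "B",
# regardless of the applicants' actual merits...
history = [("A", True)] * 9 + [("A", False)] * 1 + \
          [("B", True)] * 2 + [("B", False)] * 8

model = train(history)
print(predict(model, "A"))  # True  -- the old bias, now automated
print(predict(model, "B"))  # False -- a dystopian version of the present
```

Nothing in the code mentions race, class, or any protected attribute; the bias arrives entirely through the historical data, which is exactly the point.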
Ethics for Artificial Intelligence Books · fivebooks.com