
Moral Machines: Teaching Robots Right From Wrong

by Wendell Wallach and Colin Allen

Recommended by

"This book is slightly older than the other books and is a lot more detailed. I picked it because the authors look at the particular issue of how we might actually program ethics into machines, into what they call ‘artificial moral agents’. One of the things I liked about it was how they thought in really detailed ways about how you might go about constructing a machine that could somehow understand or incorporate ethics, and what that might mean whether you’re talking about AI that performs quite specific tasks or about something more general.

But they also talk about what sort of moral theory might be needed. They consider how you might have a ‘top down’ approach of starting off with, say, a set of principles, or a consequentialist approach looking at measuring harm and benefit; or how you might have a ‘bottom up’ approach where you’re trying to teach a robot ethics in the ways you might teach a child, and what that would involve.

And then they go into quite a lot of detail about the important issues in ethical theory that lie behind this. For instance, they discuss Jonathan Dancy’s work (something that caught my eye because he taught me when I was an undergraduate). Dancy has a theory of moral particularism, which is basically the idea that when we make moral decisions or moral judgements, we’re not relying on general principles or rules, but on the specific details of each situation, because these are so individual and so unique. On his view, when we make decisions or judgements they are particular to that situation, and we can’t draw general principles and rules from how we act in these circumstances. Well, I have no idea how you would do it. The book suggests certain models of building AI using neural networks might work better than others if a particularist approach is correct, but we have no answers so far.

But one of the things that the authors talk about in the book is the notion of moral relevance. That’s such a key issue.
If you’ve got a fairly simple moral theory like utilitarianism, the only things that are relevant are pain and pleasure – which is why the theory is both wrong and crude. It works quite well on some issues, but it’s really crude. What we need to do is have an appreciation of which elements in a situation are morally relevant.

That’s really interesting in terms of machine learning, because we can get machine learning to recognise that something is a cat, for instance, but this is done in a completely different way to how a child recognises a cat. If you show a child a cat once, then the next time they see a cat they will say ‘cat’. But machine learning does it very differently. It’s a really interesting question whether or not you could ever program machines to be able to pick out moral relevance in the same way that we do.

One of the things we need to think about further from this is that ethics is not just about getting to the right answer. When we’re talking about something being a moral problem or an ethical issue, one of the things that means is that we expect the people involved to be able to explain and justify their actions. If you just decide you’re going to have coffee rather than tea, I don’t expect you to justify that. But if you swear at me and push me down the stairs, then I expect you to have a very good reason for having done that, a reason like, ‘oh, I thought there was a bomb, so I was trying to get you out of harm’s way’. For a moral issue, then, we expect reasons or an explanation.

That’s one of the very important issues we need to think about: whether we really could program machines to be morally autonomous in that kind of way. This book is useful for exposing the level of detail at which we need to be asking questions, both about machines and about morality, to be able to answer them.
Personally, I don’t think we’re anywhere near developing something that is morally autonomous in that sense. But this book is an excellent account of what the issues are, both in terms of the technology and of the ethics."
Ethics for Artificial Intelligence Books · fivebooks.com