Bunkobons


Carissa Véliz's Reading List

Carissa Véliz is a philosophy professor at the University of Oxford's Institute for Ethics in AI. In 2021, she received the Herbert A. Simon Award for Outstanding Research in Computing and Philosophy from the International Association of Computing and Philosophy.


Digital Ethics (2022)

Scraped from fivebooks.com (2022-11-30).


Zed
Joanna Kavenna · Buy on Amazon
"Let’s start with the fiction because, for a general audience, that is a very intuitive way to get into the topic. The novel I’ve chosen is very philosophical. It’s called Zed and it’s by Joanna Kavenna. I’m not sure if the author is a philosopher but, if she’s not, she’s very philosophical. It’s a funny novel; that’s one of its virtues. But it’s very concerning at the same time. It’s about a big tech dystopia in which a company called Beetle has gained a lot of power and is led by a narcissistic CEO. It’s about how close this corporation becomes to the security services. What started as nudging—a notification that pushes you to stand up when you’ve been sitting down for a long time or to eat the right kinds of food—becomes seriously oppressive. It’s about how digital technologies can interact with society in a way that invites questions about what is fake and what is real. When you start experiencing reality through avatars, for instance, and through reports by companies, there are a lot of questions about whether this information can be trusted. The author plays with this duality of what is fake and what is real, and also with self-fulfilling prophecies. How do we know what technology is doing? How do we check when there’s a mistake? How much transparency do people have? In this case, the technology starts to develop glitches. Of course, the company always tries to explain them away. One of the ways in which they explain the glitches away is by creating very obfuscating language. There’s a case in which somebody gets killed by a drone and the company brands it as ‘suicide by drone.’ It’s a great novel. It’s powerful."
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
Cathy O'Neil · 2016 · Buy on Amazon
"Yes, it’s a good one to start with because it’s very well-written and super accessible. This is a great book with a memorable title. It’s very comprehensive and she has excellent examples—about getting insurance or jobs, and how we evaluate people at work and in civic life. The book also has a lot of authority, because the author is a mathematician arguing that maths is not neutral, and that values are always baked into different algorithms. One of the examples that most stuck with me is how we evaluate both students and teachers, and how metrics change the activity that is being measured. So, if you focus, say, on test scores, then universities try to game the system by getting students to sit the exams many times to get the grades up. There’s another example in which an algorithm was used to assess how good teachers were. The algorithm was quite complicated—it took into account different things, including the grades of the students, which of course can vary depending on how difficult the exam was that year, or other random things. In the end, people got sacked because of this algorithm, which was later shown to be tracking nothing at all. It was a self-referencing algorithm, but people had to go to court for that to come out."
Mindf*ck
Christopher Wylie · Buy on Amazon
"I really liked this book. I wanted to choose a book about privacy. I think privacy is possibly the most concerning challenge we have with digital technologies. Privacy can be tricky because it can feel very abstract. It doesn’t feel like anything to have your data collected. It seems innocent and painless. The consequences are not always tangible, or are sometimes very far off in the future. This book is great because it’s written by Christopher Wylie, the whistleblower who exposed Cambridge Analytica. He tells the story of exactly how Cambridge Analytica got the idea to use personal data to try to sway elections and how he became part of this. He was the data analyst who made it happen, and he writes about how they built the tool and what exactly that tool could do. The book makes something very abstract and difficult to understand very tangible. It’s a book that also has a lot of narrative interest. Carole Cadwalladr, the Guardian journalist, persuaded him to become a whistleblower. It wasn’t his idea. There is also an interesting ethical story to be told about how someone becomes a whistleblower, how someone switches from thinking, ‘This is my job, and it’s okay to do it’ to realizing ‘Maybe I’ve done something really, really wrong and I have to make amends and try to change this.’ He did leak documents, but what he revealed was mainly about the tool itself, and how it was developed. Daniel Ellsberg, the Pentagon Papers whistleblower, had to spend hours photocopying those papers as the only way he could take them out. For someone like Snowden, it was very different. In general, privacy protects us from possible abuses of power. If you share things like your heartbeat or what you drank last night, that could be used against you by insurance companies. Likewise your genetics. 
If you do a genetic test that reveals hereditary conditions, you could pay a higher premium, even though it’s through no fault of your own that you have those genes. Furthermore, there’s a Nature study that shows that about 40% of the results of these commercial DNA tests yield false positives. But many insurance companies still take them at face value. And even greater accuracy wouldn’t fix this, because you can argue that the fundamental principle is unfair. Even if they got it right about your genes, you’re still not to be blamed for your genes. There’s an argument to be made for why you shouldn’t be paying more than other people for genes that you didn’t choose and couldn’t change. Rewarding tracked healthy behaviour sounds good, but it’s a very sterilized and clean image of a reality that just doesn’t pan out that way. Typically, the people who do exercise are people who are wealthier—they have more time to do exercise than somebody who is working two jobs to survive. Furthermore, there are all kinds of assumptions there. For instance, it may be that you do a kind of exercise that doesn’t get tracked as easily with a watch. So a tracker might nudge you to run, which might actually be harder on the body than doing, say, yoga. Whenever we track things and categorize people, there is a risk of tracking the wrong thing, of nudging people into doing things that are actually not as good for them as it might seem at a superficial level, and of being unfair in different ways, either because we miscategorized them or because the categories are not respectful of social realities that should be taken into account. When personal data gets used to treat people in different ways, it often ends up in unfair discrimination, because it takes into account things that shouldn’t be taken into account, and because it doesn’t take into account things that should be. 
In the end, we are not being treated as equal citizens anymore, but on the basis of our data, and that’s an affront to democracy. Another reason why I like this book is that we haven’t changed anything to make sure this doesn’t happen again, so it serves as a warning. What this book reveals is still relevant. Cambridge Analytica no longer exists, but there are more than 300 firms that do pretty much the same thing. We haven’t fixed it. I’m worried that we are building a surveillance structure that could be co-opted by anyone. It could be an authoritarian regime, or it could even be a company. Something that I’ve been thinking about recently is how digitization is analogous to colonialism. It’s a kind of colonizing of reality, a colonizing of the world to make trackable what wasn’t trackable before, to turn the analog into digital. When we look back at colonialism in India, the default image that comes to mind—or at least it did for me—is that it was the British government who colonized India. But actually it was the East India Company, which at some point had more soldiers than the UK government. So the rogue player could be an authoritarian government, but companies could also become oppressive enough to jeopardize our freedom. When I see something like Amazon Ring cameras becoming more and more popular—and having this very close connection to the police—that’s definitely a worry. And when we have rivals like China, which is not very democratic and is keen on collecting data and becoming a leader in AI, that’s a geopolitical risk that we’re taking."
Algorithms of Oppression: How Search Engines Reinforce Racism
Safiya Umoja Noble · Buy on Amazon
"A lot of people might already appreciate that algorithms can be biased. When this book was published, it really changed the way many people saw search engines. It’s about how algorithms can be sexist and racist, and this is true of Google’s search engine in particular. People tend to think of Google’s search engine as something very neutral, very reliable. It’s public information, like a public service. Safiya Umoja Noble reminds us that Google is a commercial enterprise, it’s not the public sphere, it’s not a public square, it’s not an NGO and, actually, racism and sexism can be quite profitable. One story that the author tells is that she was looking for activities to entertain her nieces. She looked up ‘Black girls’ and found that most of the search results were incredibly sexualized and pornographic. By trying to find something to entertain the girls, she was instead confronted with this idea of Black girls as sexual objects. Another example was somebody who searched for ‘three black teenagers’ and the images that appeared were mugshots. But if you searched for ‘three white teenagers’ you got images of very wholesome kids smiling. Google creates this product that is very profitable and, when something goes wrong, sometimes it then fixes the problem, which shows that they could have fixed it before, had they thought it through. But sometimes they can’t even fix the problem. And then they just shirk responsibility and say, ‘Well, it’s the algorithm; we can’t really do anything about it.’ Another example was searches relating to Judaism, where the first page that came up was full of Holocaust deniers and anti-Semitic content. Google was confronted with this and tried to change it. But because these pages were so popular, they actually couldn’t change it. The best thing they could come up with was to buy an ad for themselves. 
So the first thing that you saw was a Google ad explaining that some of the first results were unacceptable, and that Google didn’t endorse them. Instead of fixing the algorithm, the best they could come up with was to buy their own ad and display it to issue a warning. It’s to do both with the way the algorithm is written and the associations it makes, and with how pages get ranked through popularity. Just because something is very popular doesn’t mean that it’s true or that it’s morally acceptable. And that’s one of Noble’s points: companies like Google try to pass off these mistakes as ‘glitches,’ when in fact they are part and parcel of how most AI works. There are some tricky ethical questions, because when that happens, Google tries to put the burden on people. They say, ‘It’s people who like that kind of thing, and it’s not our problem. We don’t decide the content, it’s just the popularity.’ But the decision to defer to popularity is an ethical, morally significant one. Furthermore, their algorithm makes something popular even more popular, because then, when you search ‘Why do women…’, it gets auto-completed with something completely unacceptable that reinforces whatever prejudice might have been there already. The United Nations highlighted this in a campaign. Google search suggestions included: Women cannot drive/be bishops/be trusted to speak in church. Women should not have rights/vote/work/box. Women should stay at home/be slaves/be in the kitchen/not speak in church. Women need to be put in their place/know their place/be controlled/be disciplined. Google fixed that right after the campaign happened. The book is mostly about Google. A good complement to this book is Race after Technology by Ruha Benjamin. That’s more about how technology is not race neutral, with many examples of different technologies and how they impact people differently, including in the areas of pre-emptive policing and hiring algorithms. 
She argues that there is a new ‘Jim Code’ that is designed and implemented by algorithms, with biases coded into tech. It looks very objective and scientific but just encodes biases, much like the old Jim Crow laws. I chose books for the general public. There was a temptation to include Shannon Vallor’s Technology and the Virtues, but that’s quite specialist and primarily written for professional philosophers. We hear a lot of positives, such as in books like The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World by Pedro Domingos. I’m just reading one called Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live by Jeff Jarvis. That’s about how having so much data is going to boost innovation, how much we can learn from each other, and how everything’s going to be fine. We also get a lot of positivity from tech companies, who are always telling us how tech is going to save the world and how amazing it is. So, with these book choices, I’m trying to balance the view. There’s a tendency to be very optimistic when it comes to technology, and I think that’s natural, but it’s partly wishful thinking. We’re in this pandemic and we really want it to go away: ‘oh, this app is surely going to solve it…’ Also, human beings are natural makers of things. Technology has been so useful for us in the past that we have a tendency to trust it more than we trust other human beings—even though it’s human beings who are building the technologies. The distinction between personal and non-personal data is not an easy one to make in practice, but I think a lot of data that is not personal data should be more public, accessible and open. Personal data, though, is so easy to misuse and so dangerous for individuals and societies that we should be much more careful with it than we are. It also turns out that it’s very hard to anonymize data. It’s in fact very easy to re-identify data. 
So you can remove the name from someone’s personal data but, actually, if you know where they live and where they work, you can quickly discover who they are. Given how much money, investment and development we’re putting into AI and surveillance, I think the general tendency is to be very optimistic about it. We do need to balance that tendency. Oftentimes, when I criticize tech, people assume that I’m arguing that we should ditch it, or that tech is bad. It’s not that. The important question is: how do we design tech to get some of the benefits without incurring many of the costs or the risks? And I don’t think we’re striking that balance well at the moment. So partly it’s about balancing the optimistic view with a realistic look at what is going on around us. Secondly, I think a big task of ethics is to figure out what could go wrong. Our job is partly about finding the problems and trying to prevent future problems. That can seem like a negative enterprise, but in fact it’s a worthwhile one—particularly when you propose solutions and different ways of designing things. But that road often goes through a negative process of focusing on what can go wrong and on the problems."
AI Ethics
Mark Coeckelbergh · Buy on Amazon
"This book is by a philosopher. It’s very clear, and he knows what he’s talking about. He sets out a map of the problems. It covers issues like the problem of superintelligence: the predicted moment when there is an intelligence explosion and AI becomes smarter than we are, when it works on itself and improves itself until we become superfluous. The worry is that if we become superfluous, this AI might not care about us. Or it might be totally indifferent to us, and maybe it will even obliterate us. How do we make sure that we design AI so that we have value alignment and we’re still in the picture? So that’s one classic problem in AI. Another one is privacy and surveillance. Another is the problem of unexplainable decisions and black-box algorithms, where we don’t exactly know how they work, what precisely they are inferring, and from what data. The book covers challenges for policymakers, including those posed by climate change. It’s a kind of taster: a very short, compact survey, academic but very accessible. Then there is another book I want to mention in passing, called Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford. That one fulfils a similar purpose in that it covers many of the biggest problems with AI and tech. But it does so from a very interesting perspective, and that is the material sustenance and composition of AI. It’s about what these machines are made of and who makes them. It’s about the mines that are used to extract the metals necessary to build phones, data servers and so on. The main thesis of the book is that artificial intelligence is neither artificial—because it actually depends on the natural environment—nor genuine intelligence. This book is very well-tuned to problems of power, and how AI gets used to enhance power asymmetries that are worrisome for labour rights and civil rights. It doesn’t have many proposals. 
In my view, that’s something that’s missing from most books in this area, and it’s something that I tried to redress in Privacy Is Power. The Oxford Handbook of Digital Ethics is a handbook with 36 chapters, written by philosophers, but not only for philosophers: it also aspires to be a source of information for people working in computer science, law, sociology, surveillance studies, media studies and anthropology. It covers a wide range of topics, including free speech and social media, predictive policing, sex robots, the future of democracy, cybersecurity, friendship online, the future of work, medical AI, the ethics of adblocking, and how robots have politics. It’s very, very broad. When I first had the idea to do this book, very few philosophers were working on AI ethics or digital ethics. I was very frustrated that philosophers weren’t producing more, given the importance of the topic. In a matter of a few years, that has dramatically changed. There are so many papers coming out now, so many people getting interested. Hopefully, this book will be a text that can help academics and students get a map of the most important philosophical problems in the digital ethics field."
