Algorithms of Oppression: How Search Engines Reinforce Racism
by Safiya Umoja Noble
"A lot of people might already appreciate that algorithms can be biased. When this book was published, it really changed the way many people saw search engines. It’s about how algorithms can be sexist and racist and this is true of Google search engine in particular. People tend to think of Google search engine as something very neutral, very reliable. It’s public information, like a public service. Safiya Umoja Noble reminds us that Google is a commercial enterprise, it’s not the public sphere, it’s not a public square, it’s not an NGO and, actually, racism and sexism can be quite profitable. One story that the author tells is that she was looking for certain activities to entertain her nieces. She looked up ‘Black girls’ and found that most of the search results were incredibly sexualized and pornographic. By trying to find something to entertain girls she pushed them into this idea of Black girls as sexual objects. Another example was somebody who searched for ‘three black teenagers’ and the images that appeared were mugshots. But if you searched for ‘three white teenagers’ you got images of very wholesome kids smiling. “Just because something is very popular doesn’t mean that it’s true or that it’s morally acceptable” Google creates this product that is very profitable and when something goes wrong, sometimes it then fixes the problem, which shows that they could have fixed it before, had they thought it through. But sometimes they can’t even fix the problem. And then they just shirk responsibility and say, ‘Well, it’s the algorithm; we can’t really do anything about it.” Another example was searches relating to Judaism, where the first page that came up was full of Holocaust deniers and anti-Semitic content. Google was confronted with this and tried to change it. But because these pages were so popular, they actually couldn’t change it. The best thing they could come up with was to buy an ad for themselves. So the first thing that you saw was a Google ad that explained to you that some of the first searches were unacceptable, and that Google didn’t endorse them. Instead of fixing the algorithm, the best they could come up with is to buy their own ad and display it to issue a warning. Both. It’s to do with the way the algorithm is written and the associations that are made, and how pages get ranked through popularity. Just because something is very popular doesn’t mean that it’s true or that it’s morally acceptable. And that’s one of Noble’s points, that companies like Google try to shirk off these mistakes as ‘glitches,’ when in fact they are part and parcel of how most AI works. There are some tricky ethical questions because when that happens, Google tries to put the burden on people. They say, ‘It’s people who like that kind of thing and it’s not our problem. We don’t decide the content, it’s just the popularity.’ But the decision to defer to popularity is an ethical, morally significant one. Furthermore, their algorithm makes something popular even more popular, because then, when you search, ‘Why do women…’ it then gets auto-completed with something completely unacceptable that reinforces whatever prejudice might have been there already. The United Nations highlighted this in a campaign. Google search suggestions included: Women cannot drive/be bishops/be trusted to speak in church. Women should not have rights/vote/work/box. Women should stay at home/be slaves/be in the kitchen/not speak in church. 
Women need to be put in their place/know their place/be controlled/be disciplined. Google fixed that right after the campaign happened. It’s mostly Google. A good complement to this book is Race after Technology by Ruha Benjamin . That’s more about how technology is not race neutral, with many examples of different technologies and how this impacts people differently, including in the areas of pre-emptive policing and hiring algorithms. She argues that there is a new ‘Jim Code’ that is designed and implemented by algorithms, with biases coded into tech. It looks very objective and scientific but is just encoding biases just like the old Jim Crow laws. That’s because I chose books for the general public. There was a temptation to include Shannon Vallor’s Technology and the Virtues , but that’s quite specialist and primarily written for professional philosophers. Yes, all the time. We hear a lot of positives, such as in books like The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World by Pedro Domingos . I’m just reading one that’s called Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live by Jeff Jarvis . That’s about how having so much data is going to boost innovation and how much we can learn from each other and everything’s going to be fine. We also get a lot of positivity from tech companies who are always telling us how tech is going to save the world and how amazing it is. So, with these book choices, I’m trying to balance the view. There’s a tendency to be very optimistic when it comes to technology, and I think that’s natural, but it’s partly wishful thinking. We’re in this pandemic, we really want it to go away, ‘oh, this app is surely going to solve it…’ Also, human beings are natural makers of things. Technology has been so useful for us in the past that we have a tendency to trust it more than we trust other human beings—even though it’s human beings who are building the technologies. Yes, absolutely. It’s not an easy distinction to make when it comes to practice, but I think a lot of data that is not personal data should be more public and accessible and open. But with personal data, it’s so easy to misuse and so dangerous for individuals and societies that we should be much more careful than we are. Yes, because it turns out that it’s very hard to anonymize data. It’s in fact very easy to re-identify data. So you can remove the name from someone’s personal data but, actually, if you know where they live and where they work, you can quickly discover who they are. The evidence suggests the contrary. Given how much money and investment and how much development we’re putting into AI and surveillance, I think that the general tendency is to be very optimistic about it. We do need to balance that tendency. Oftentimes, when I criticize tech, people assume that I’m arguing that we should ditch it, or that tech is bad. It’s not that. The important question is: How do we design tech to get some of the benefits without incurring many of the costs or the risks? And I don’t think we’re striking that balance well at the moment. “There is still not enough regulation of AI” So partly it’s about balancing the optimistic view with a realistic look at what is going on around us. Secondly, I think a big task of ethics is to figure out what could go wrong. Our job is partly about finding the problems and trying to prevent future problems. 
That can seem like a negative enterprise but, in fact, it's a worthwhile one, particularly when you propose solutions and different ways of designing things.

Exactly. But that road often goes through a negative process of focusing on what can go wrong and the problems."
Digital Ethics · fivebooks.com