Will MacAskill's Reading List
Will MacAskill is a Scottish philosopher, ethicist, and one of the originators of the effective altruism movement. He is an associate professor at the University of Oxford, where his research focuses on the fundamentals of effective altruism—the use of evidence and reason to help others as much as possible with our time and money—with a particular concentration on how to act given moral uncertainty. He is director of the Forethought Foundation for Global Priorities Research and a co-founder and president of the Centre for Effective Altruism. He also co-founded 80,000 Hours, a Y Combinator-backed nonprofit.
Effective Altruism (2019)
Scraped from fivebooks.com (2019-01-01).
Peter Singer
"I think of Peter as the ‘grandfather’ of effective altruism. He laid out the foundations, and he’s the reason many people got into the community. He didn’t start the movement itself, but he’s clearly its inspiration and has been a big proponent of its ideas. In terms of why I chose this book, I can think of three reasons. One is that some core philosophical ideas in it are key motivations for effective altruism itself. A second reason is that it was one of the books that really got me enthusiastic about philosophy. I had already decided to study philosophy as an undergraduate, but when I read it, I was really compelled by the thought that philosophical reasoning is of huge importance and can really change the world. I was so inspired by that. The third reason is that Peter Singer represents the career and life that I would also like to lead: someone whose ultimate purpose is the pursuit of truth, with the willingness to follow arguments wherever they lead, and who is actually willing to make changes to their life on the basis of those ideas.

The thought experiment is this: imagine you’re walking past a child drowning in a shallow pond, screaming for help. You can get in and save the child, but you’re wearing a really nice suit or dress, perhaps because you’re on your way to a wedding. It cost several thousand dollars, and it will get completely ruined if you save the child. Imagine yourself thinking that you don’t want to waste that money, so you just walk on by and let the child die. In moral philosophy, we have a technical term for someone who does that: they’re called an asshole. It’s very clear morally that if there’s a child drowning in front of you, you’re required to save them, even if it costs you a few thousand dollars. That sum of money is simply nothing to you in comparison with saving a life. But then, the killer twist is that we’re in that situation all the time.
For a $3,500 donation to the Against Malaria Foundation, you can, on average, save one child’s life. What’s the moral difference? Arguing that there is none, Peter Singer concludes that we’re actually obliged to give away a lot of our income to those living in poor countries. I find that argument very compelling, and it was one of the reasons I became an effective altruist. But it’s not an argument I use very much in public, partly because I think it’s not the most persuasive one, and it’s not always necessary. There is already a very large number of people who want to do good, and what’s stopping them is that doing good is confusing and scary, and they’re afraid of not having an impact. I certainly was in that category as an undergraduate. What we do is cut to the chase and tell those people: ‘look, these are the options available to you, and you can truly have a transformative impact on the world. It’s up to you to pursue that life or not.’ I doubt that those who decide not to pursue it, even in the face of such opportunities, would be convinced by additional moral arguments.

It [animal welfare] is extremely neglected, and probably the most neglected cause among the ones we look at. 60 billion animals are killed every year for food, and the vast majority of them are kept in factory farms, in horrific conditions. Almost everyone in society, if they really understood what those conditions are like, would vote against them. Yet only a few tens of millions of dollars are spent every year on improving the conditions of animals in factory farms. You can compare that to global health and development, which I also think is neglected, but which receives 250 billion dollars every year. That is ten thousand times as much money, and it doesn’t even count individual philanthropy, which also amounts to many billions of dollars. So it’s a huge issue, and an incredibly neglected one relative to the size of the problem.
I think that’s true in public discourse as well. Society is starting to do a little better—a few percent of people are vegetarians—but it’s not major news. For an issue that future generations might look back on as a moral atrocity, it barely gets mentioned. In a way that’s not astounding, since animals can’t unionize, they don’t get a vote, and they’re completely disenfranchised in society.

I think it’s possible that people in the future will come to regard animals as being of equal status to humans. That doesn’t mean I would save two chickens over one human. The ratio would probably still be a thousand to one, or something like that. But one unit of pain is the same whether you’re a chicken or a human being. I think it’s possible to achieve this, but I would expect that getting there requires removing a huge self-interested bias that people have not to care about animals. If we get very low-cost, tasty and healthy lab-grown meat, that might be a solution. It would give people a self-interested incentive to stop farming animals for meat, and they would gradually realize that the conditions in which factory farming was done were horrible."

Derek Parfit · 1984
"Derek Parfit is much less known in the public sphere than Peter Singer. He never wrote books intended for a general audience, but within academia he was significantly more influential. His book Reasons and Persons, especially, has over 10,000 citations. I would say that Derek Parfit was the most brilliant philosopher of the 20th century. Others might dispute that claim, but everyone would agree that he would be in the top five.

Reasons and Persons is so important because it introduced to the world the field of population ethics: reasoning ethically about the value of increasing the size of the population in the world, and the size of the population in the future. Is it better to have more people if those people have happy lives? Is it worse to have more people if those people have lives so bad they’re not worth living? If it is better to have more people, what exactly is the theory that governs how to think about these changes in the size of the population? Do you just add up the happiness of everyone? If so, you get counterintuitive conclusions. Do you just try to maximize average well-being? If so, you also get counterintuitive conclusions. Parfit raised those questions without claiming to have an answer. He made many fundamental breakthroughs in the book as well, and I think this field of inquiry is among the most important ones right now. The conclusions we ought to draw, given the many decades of research done since Parfit’s book, are that, all else equal, it’s good to have one more person if they’re sufficiently well-off. That’s a huge departure from common-sense ethics, and it’s extremely important from a moral standpoint.

I do agree that it’s a pressing matter, and possibly even the most important set of issues we face. We are remarkably early in the story of civilization.
Almost all the value that the human race could achieve lies in the future: all the greatest scientific advances, works of art, peaks of happiness and creativity. But we now seem to be entering a stage where there is at least some chance that all those things could be lost. The biggest shift occurred in 1945 with the use of nuclear weapons, which put on the table the idea that we could develop technology so powerful that we’d be capable of destroying ourselves. The study of existential risks receives very little attention, on the order of a few tens of millions of dollars per year; but it’s increasing.

The reason for this renewed attention is twofold. First, the success of the effective altruism movement means that more people are taking these issues seriously and trying to think about how to deal with them—and key researchers like Nick Bostrom are doing very good work on this. Second, these issues have become more salient. Worry about existential risks was very widespread in the 1960s and 1970s, with strong protests against nuclear weapons. Of course, it wasn’t referred to as existential risk then, but simply as nuclear proliferation. Nowadays we have identified many of those risks: war, nuclear weapons, climate change, man-made pandemics, artificial intelligence, and so on. And just as with the effective altruism movement, having something concrete rather than abstract to worry about is much more compelling.

I think I’ll disagree with Derek Parfit on this one! There are a couple of considerations that he maybe doesn’t mention, but which are very important in setting that threshold. Giving more has two effects: obviously it means more money is going to charity, but it also slightly worsens your own living conditions.
Perhaps it has zero effect to begin with, but as you start to donate more and more, you might have to take slower transport to get to the office, or buy a cheaper suit, which could affect your job prospects, and so on. At some point those considerations outweigh the benefits of your donations. For someone in a rich country, the point at which it’s no longer optimal to keep donating is much higher than a poverty threshold of two dollars per day—because if you gave away everything above two dollars a day, you simply wouldn’t be able to live in a rich country, and therefore to earn money to donate.

A second point is that we should be actualists. Actualism versus possibilism is a question in moral philosophy, which can be framed like this: when I decide what I ought to do today, should I take into account my own future weakness of will? Actualism says that we should. If you give away all of your savings at once today—which you could technically do—you’ll probably get so frustrated that you’ll simply stop giving in the future. Whereas if you decide to give 10% of your earnings, this commitment will be sustainable enough that you’ll continue it over many years, resulting in a higher overall amount, and thus a higher impact. Therefore, an actualist says that you should give only 10%.

In my opinion, the best philosophical answer to the question of how much you ought to give is that you should give as much as you can to maximize your long-term giving, taking into account both the fact that you can spend money to make more money, and this concept of future weakness of will. And then, of course, there is the question of how demanding this commitment is—but I think that for most people it isn’t very demanding at all. If you live in a rich country and pledge to give 10% of your earnings, you’re still living a better life than almost everyone who’s ever lived."
Benjamin Todd
"80,000 Hours is named after the number of hours we spend working over a lifetime. That’s a lot of hours. Spending 1% of this, 800 hours, on figuring out what to do with the remaining 99% makes a lot of sense. The aim of the organization is to provide advice and coaching for people who want to use their career to do as much good as possible. It has developed a body of research since its creation, and it has coached hundreds of people, giving them tailored advice on how they can use their careers to have the biggest impact. The book is a summary of those years of research. It addresses questions like: which are the most important areas you should focus on? Within those cause areas, such as global health and development or existential risks, what are the highest-priority careers? To what extent should you focus on what you’re personally passionate about, good at, or excited about? To what extent should you invest in yourself in order to have a larger impact later on, such as by pursuing further degrees, or by working at a non-impactful but prestigious organization that would train you really well? These are the core questions of the book.

It seems to have been very successful so far, although it’s been a long time since I’ve personally done any coaching. But about a third of the people I talked to reported having made significant changes in their lives on the basis of the advice they received, and the proportion is probably larger now. There are various calls to action on the 80,000 Hours website; after someone has read a bit, they can apply for coaching, especially if they’re interested in a particular area. They’ll be asked to read a lot of background on the research that we’ve done, then they’ll talk to an advisor to discuss their strengths and weaknesses, what they think they could excel in if they worked in a particular area, and what their two or three best potential options would be. These are often people in the early stages of their careers, in their twenties.
They’ll also get connected to various mentors who have specialist knowledge and have worked in these areas for a number of years. They go away from the coaching with a long list of things to look into, jobs to apply for, and so on.

The motivation for this is a kind of “practice what you teach” idea. The world would be a lot better if charities were very honest and open about what they do, including their mistakes. One thing we found when looking at charities is that it can be incredibly hard to know even the basics of what a charity actually does—not in the sense of what they focus on, such as malaria, but what their intervention actually is and what the evidence base for that intervention is. I think that transparent reporting is good for the organization as well: it keeps you on track, especially as a charity, when you don’t have the carrot and stick of profits and losses beating you into submission all the time. Instead, you have to rely much more on the judgements of advisors and the people you work with to gauge how well you’re doing. Being transparent on these aspects means that you get a lot more opportunities to be criticised. It’s also the case that 80,000 Hours received money from the effective altruism community, from very discerning donors who would not donate to an organization that didn’t report what it was really doing."
Graham Allison
"The book is about assessing the chance of the United States going to war with China in the future. The idea of Thucydides’ trap comes from the Greek historian Thucydides, writing about the Peloponnesian War. The argument is that when you’ve got a ‘hegemon’, a country that has power over a region or the world, and there is a rising power that grows, strengthens and threatens to overtake the hegemon, that generally leads to war. Twelve out of sixteen times in history, that is what has happened. Allison doesn’t go into much depth on the theoretical model behind this, but the idea is that the two powers are competing for status. As the ‘top dog’, if you see someone else competing for your spot, you’re in a position to kill them first—and if you don’t, you’re at risk of getting killed once they’ve caught up with you.

Any quantitative argument about the chances of war in the 21st century will be very subjective—I certainly wouldn’t want to say that it’s 75% just because of this twelve-out-of-sixteen idea. Allison doesn’t make that claim himself, but what he makes clear is that war is the normal state for humanity. The last 70 years have been a fairly unusual period, and we don’t really know why. It’s possible that it’s a result of contingent facts about technology, namely nuclear weapons. This could all change in the 21st century. That, combined with the incredible economic progress of China, means we should take the possibility of war very seriously, even though it seems so weird and unprecedented to people of my generation, who grew up in a period of almost complete peace."

Nick Bostrom · 2014
"I picked this book because the possibility of developing human-level artificial intelligence, and from there superintelligence—an artificial agent that is considerably more intelligent than we are—is at least a contender for the most important issue of the next two centuries. Bostrom’s book has been very influential in effective altruism, leading lots of people to work on artificial intelligence to ensure that it is developed safely. I don’t agree with the entire book, but there are many compelling arguments in it, and it would be extremely overconfident to dismiss it as too speculative. In fact, I think there should be a lot more work that tries to understand the biggest challenges of the next two hundred years, and what we could do to overcome them.

Since these long-term ideas have become more influential in effective altruism, we’ve also done less mass-media outreach, so I don’t actually have a perfect sense of how compelling the public finds them. Certainly, on the issue of climate change, people are very much on board with the idea that we should be safeguarding the planet now in order to leave a good planet for our children, their children, and so on. Within the effective altruism community, people have systematically found this set of arguments very compelling. The key issue is really whether you think that people in the future matter as much as people living today. If so, then do you think that there will be a lot more people in the future than today? It seems extremely plausible. Then you’ve at least acknowledged that most of the value lies in the future. The mass attention we’re trying to give to this, such as with my TED Talk, has been very well-received so far. We’re still in the early days, and some people do think that we should only focus on the more easily measurable and quantifiable bets.
That’s a perfectly reasonable position, but I’m optimistic that the arguments in favour of effective altruism are so compelling that the public will learn to see longtermism as a very important issue."
Longtermism (2022)
Scraped from fivebooks.com (2022-08-15).
Christopher Leslie Brown
"This is by a leading historian, Christopher Leslie Brown. It’s the story of the British abolitionist movement, which included North America at the time. In particular, he argues that the abolition of slavery was a quite contingent event: something that could easily never have happened. When I first heard this idea—from the historian who was consulting for my book—I just thought, ‘Wow, this is mad. This is just such a wild idea. Surely the abolition of slavery was more or less inevitable?’ But I really came around to having a lot of sympathy for Professor Brown’s view. The abolition of slavery was not the inevitable result of economic changes. Instead, it was a matter of changing cultural attitudes. Then there’s a further question: was it heavily contingent on a particular campaign that was run? That has more going for it than you might think as well. The Netherlands was also an extremely well-off, industrializing country, and there was no abolitionist movement there. There was one attempt to get a petition going, and it had very little impact. So I think it’s not at all a crazy view to say that if a particular cultural movement had not happened, we could be living with widespread, legally permitted slavery today.

Now, what does that have to do with effective altruism and longtermism? It’s relevant because changing the values and moral beliefs of a society is one of the most important, long-lasting and also, in a technical sense, contingent things you can do. It really could go either way; you’re not pushing on a door that’s going to open anyway. Improving the values of the day is one way people can have a positive long-term impact. That might mean extending the circle of concern and compassion towards people in other countries and taking their moral interests more seriously, or towards non-human animals and future generations.
It’s a striking question, but if the Industrial Revolution had happened in India, perhaps we would look at factory farming as this horrific, impossible, dystopian scenario. It’s obviously hard to tell because it’s a counterfactual, but it seems plausible enough to me."
Julia Galef
"I put The Scout Mindset on the list because the ability to reason very carefully and to have a curious, truth-seeking attitude is of enormous importance. We can be pretty good at it for issues that are not very high-stakes. If you’re learning about a topic in school and it doesn’t bear on anything that’s really facing you, it’s easy to be impartial. But when you’re talking about very morally sensitive, high-stakes, life-or-death issues, it can be much harder to maintain what Julia Galef calls a ‘scout mindset.’ This is about surveying all the different views, deciding how much weight to give to each, and understanding other points of view. It’s much easier to get into a mindset where you say, ‘Look, lives are on the line, this is my cause, I want to defend this view at all costs.’ When the stakes are high, it’s even more important to keep an open mind, because if you focus on the wrong priority, you do less good. Perhaps you’re helping 10 people when you could have helped 100. Perhaps you’re saving the lives of people who would otherwise die instead of preventing an enormous catastrophe that would in fact kill everybody. When it comes to the best ways to help, the stakes are so high that we have to have correct views, even if that can feel uncomfortable. That’s why I’ve chosen this book.

Yes, it can be easy to get into the mode of, ‘Look, here’s a person in front of me. I’m going to help them.’ This is the soldier mindset that Julia talks about. You want to defend them at all costs. That’s a very natural, very understandable reaction. However, if you want to do the most good, that requires the ability to reason, to take and channel those moral emotions, even in ways that can feel unintuitive."
Toby Ord
"The Precipice is a book I regard as a complement to What We Owe the Future. It makes the case that there is a serious chance of an existential catastrophe in our lifetime: an event that would permanently foreclose all of humanity’s future potential. Such risks include the extinction of humanity by engineered pathogens, or a takeover by rogue AI systems that have become more intelligent than humans but don’t share our goals, as well as more familiar events like asteroids, supervolcanoes and so on. He gives us an amazingly detailed and balanced account of those different existential risks and what we can do about them. I think his case is fairly compelling.

That’s right. We actually talked about that, the choice to put a number on it, before he published the book. He didn’t think it would be a big deal, but I thought that a lot of people would talk about it. He’s not saying it’s an objective chance. It’s not that he’s very confident it’s one in six; that’s just his best guess. And, honestly, my estimate doesn’t differ that much from his."
Dan Gardner & Philip E Tetlock
"Yes, this is very relevant to the one-in-six question. We talked about the importance of the scout mindset, of thinking clearly, of trying to have beliefs that reflect reality. Once we start thinking about issues that play out not just now but over coming years or even decades, that gets particularly challenging. There’s a long track record of people making predictions about the future that are hilariously wrong, both in too optimistic a direction—where they say, ‘In the year 2000, we’ll be walking in spacesuits on the moon’—and in too pessimistic a direction. J. B. S. Haldane, one of the early futurists at the beginning of the 20th century, made many great predictions. But he also said it would be 8 million years before there was a return trip to the moon. That was only a few decades before there was one.

How should we reason in the face of uncertainty? Forecasting is this discipline, this art and skill, of getting better and better at making predictions about the future. The key thing is starting to reason in terms of precise probabilities. You might ask, ‘What’s the chance of x happening in our lifetime?’ and people will say, ‘It’s unlikely.’ That’s very vague. I don’t really know what ‘unlikely’ means. Is there a 40% chance x might happen? 10%? 1%? These are really big differences. The approach of forecasting is to make very precise predictions and then, over time, see which are correct and which are incorrect. Tetlock and his colleagues have developed a whole set of skills for improving the way we reason about probabilistic matters.
The book is very helpful for thinking about things that are intrinsically uncertain, questions like, ‘When will we develop human-level artificial intelligence?’ or ‘Will there be a third world war in our lifetime?’

Yes, you could think of it as a nascent field of forecasting studies, within economics and psychology in particular.

Exactly, but that’s the wrong way to think about things, because it’s always a matter of doing better or worse. Here’s one example. Metaculus is a forecasting platform whose community overlaps a lot with effective altruism. In 2015, there was a forecast about the probability of a global pandemic between the years 2016 and 2026 that would kill at least 10 million people. Metaculus put the odds at one in three. Now, if the world’s decision-makers and political leaders had internalized a one-in-three chance of a global pandemic of that magnitude within the coming decade, we would have been far better prepared for the pandemic that did occur. We weren’t thinking probabilistically. We thought, ‘Nah, it won’t happen.’ One in three is not that high a probability, but it’s certainly enough to prepare for."

Peter Singer
"This was a book that was very influential on me all the way back in 2009, and it inspired me to work on the problem of extreme poverty. I’ve been arguing for a longtermist worldview, focusing on civilization-impacting events, but there are major problems affecting the near term too. Every year, hundreds of thousands of children still die unnecessarily of malaria, of diarrheal disease, of tuberculosis. You can do an enormous amount of good; it literally costs only a few thousand dollars to save a life. When we think about doing good in the world, we should appreciate how much our money can do. I included this book because whenever we’re engaging in prioritization or thinking about the big problems of our time, this is something that should really be borne in mind. Peter Singer also inspired me to give away most of my income. It can do a lot more good for other people than it can for me.

If they could do absolutely anything, I think the thing I would most want them to focus on is AI governance. AI is already the fastest-moving technology at the moment. There’s competition between the big powers, including the US and China. It’s clear that we need to navigate this technology carefully, because the upsides are enormous and the downsides are very great. I do think that very advanced artificial intelligence could be one of the most important technologies ever developed. How should you govern it? What are the right policies to put in place? Honestly, I don’t really know. That’s why, if I’ve got someone who is a polymath and super smart, I would love to have more attention on that, to ensure that we reap the benefits without paying the costs.

Everyone is going to have different incentives. That’s why I particularly want to see more altruistically-motivated people, perhaps philanthropically funded, working on it, because they can have a truly impartial perspective.
Predictably, the US government is going to have the United States’ interests at heart. The leading AI labs will have their own interests at heart. People working in the AI labs take the concern seriously—they want to have a governance framework such that we have the upsides of AI without the downsides—but, at the same time, I do think having people who are operating in think tanks or going into government is especially valuable. They’re more concerned about how things go for the world as a whole rather than any particular interest group."