The Centre for the Study of Existential Risk's Reading List
The Centre for the Study of Existential Risk is an interdisciplinary research centre within the University of Cambridge that studies existential risks, develops collaborative strategies to reduce them, and fosters a global community of academics, technologists and policy-makers working to safeguard humanity. Its research focuses on biological risks, environmental risks, risks from artificial intelligence, and how to manage extreme technological risk in general.
Existential Risks (2018)
Scraped from fivebooks.com (2018-09-21).
The Last Children · Gudrun Pausewang
"SB: I was going to recommend When the Wind Blows, a 1982 graphic novel by Raymond Briggs, but one of our colleagues recommended that we choose this one instead.

LS: We chose it because of its unrelenting grimness. It’s apparently often assigned to schoolchildren in Germany, which I find very surprising, because it must be very hard for a teacher to discuss it with pupils. It’s the kind of book in which nothing nice happens – which makes it very realistic. Quite often in (post-)apocalyptic stories, you intermittently see some rays of hope, but not here. It’s also quite didactic, a sort of cautionary tale about the consequences of nuclear weapons.

SA: It’s extremely moralising. But in the epilogue, the author actually writes: “I have depicted the disaster and its consequences as less catastrophic than they presumably would be in reality, since I had to allow for a survivor who would later be in a position to talk about what had happened.” It’s also very interesting in how it brings us back to the Cold War. We tend to forget just how many nuclear warheads the US and Russia used to have. The threat of nuclear weapons was an everyday reality for people: if a nuclear war had broken out, every major population hub would indeed have been targeted.

This is no longer the case, although a large-scale nuclear war would still wipe out everyone because of nuclear winter.

SA: Yes. There is now much broader awareness of nuclear winter being a plausible scenario, which wasn’t the case during the first three decades of the Cold War. Russia is broke, and China is not very interested in nuclear weapons. Nuclear terrorism is the thing that seems to scare people now, but we wouldn’t classify that as an existential threat, because a detonation would likely be circumscribed to one geographical area."
"SA: It’s a tricky situation, and it’s difficult to admit, but once we’ve invented nuclear weapons, we really just need to govern them responsibly and prevent mutual destruction. If we get one side to disarm without the other, we end up in a far less stable situation. When a government is a responsible custodian of nuclear weapons, it is generally making the world a safer place.

SB: When I came to CSER I was very much a unilateralist on nuclear disarmament. I still think there’s a strong case that Britain should disarm by itself, because the UK could set a useful precedent for developing nations and break the link between having nuclear weapons and being taken seriously. The smaller countries could do that, and it would be valuable. But having been through these arguments and thought about the problem a lot, I don’t see any great value in the United States or Russia disarming unilaterally. As an alternative, some scientific papers have argued that we could relax the legislation on biological weapons – whose long-term consequences may not be as bad as a nuclear winter – in order to keep the logic of deterrence while convincing all superpowers to renounce their nuclear weapons.

Something we do worry about within existential risks is the accidental triggering of nuclear weapons.

SB: Exactly. There are between 15 and 20 reasonably well-documented historical near-misses of that type. There were people who had the authority to press the button, and had orders to press it under certain circumstances, but decided not to. Of course we should all be very grateful to these people, and it’s interesting to realise that in all of these examples, they were able to use their critical faculties before pressing that button."
A Canticle for Leibowitz · Walter M. Miller Jr.
"HB: Before the beginning of this novel, a nuclear war destroys civilisation and is followed by what’s called the ‘Simplification’, a backlash against the Enlightenment, science, and culture, which leaves most people illiterate. The novel tells the story of society recovering from that disaster, and of how a small core of pre-deluge civilisation is preserved and protected through centuries of rebuilding. It touches on many questions about rebirth, and whether history follows an endless cycle of dark ages, middle ages, and renaissance. The reason I chose it is that there are many things that would kill humanity outright, but a lot of what we do in existential risks is study scenarios of civilisational collapse: humanity doesn’t completely disappear, but is reduced to small groups or bands wandering around. In those scenarios we can try to imagine whether humanity would recover, and how fast it would be able to do so. Some questions are difficult to answer: if humanity does end up recovering from its collapse and goes back to its previous level of cultural and technological development, should we be fine with that?

SB: Yes, and it’s also part of the reason we chose The Last Children: it takes a long view of nuclear scenarios, and looks at what happens months and years after such a disaster. All of these books are willing to take the long-term perspective; this is something we’re always encouraging people to do. One of the reasons you should read these novels if you’re interested in existential risks is that they really challenge you to think about what civilisation looks like in the very long term. It’s so easy to get hung up on the problems we face and hear about every day on the news, but on the scale of history, many of those problems will be nothing more than footnotes.
HB: A Canticle for Leibowitz invites the reader to take this long-term view, by making the point that a civilisational collapse, as long as it is followed by a renaissance, doesn’t really matter in the grand scheme of things. That little ‘blip’ won’t matter at all over millions of years. I don’t necessarily agree with that argument myself, but it’s certainly an interesting one to think about. Another question it touches on is the problem of the Great Filter. The Great Filter is one suggested solution to the Fermi Paradox, which says that given what we know about the probability of intelligent life in the Universe (estimated mainly through the Drake equation), we should have spotted intelligent extraterrestrial civilisations a long time ago. The Great Filter hypothesis proposes that all civilisations have to pass through a sort of filter, which leads either to their survival and expansion, or to their complete collapse; and of course the theory is that most civilisations don’t pass the filter, which explains why we seem to be alone in the Universe. For example, one could theorise that every sufficiently intelligent civilisation in the Universe evolves until it discovers nuclear power, and ends up killing itself with nuclear weapons. The main question for humanity then becomes: is the Great Filter behind us (and we’re ‘fine’ now, as humanity will most likely survive for a very long time), or does it lie ahead of us, in those existential threats?

SA: Scenarios like this one also make one realise the complexity and fragility of the systems that keep civilisation in place and thriving. We don’t notice those systems most of the time, but it takes a whole lot of effort to keep supermarket shelves stocked on a daily basis, around the world. How far are those systems from tipping over?
And can we make them more resilient? These are interesting questions because they’re disconnected from any particular type of existential threat; they’re about making our civilisation more resilient against all possible risks, and making sure we’re able to bounce back.

LS: We do have a lot of gene banks, in Svalbard for example, so that’s one aspect that is considered fairly robust in terms of preserving things like crops.

HB: In the 1950s and 60s, when people were very scared of nuclear war, a lot was published on shelters, contingency plans to preserve a minimal government, and so on. There’s less being written on the subject now. My view is that this kind of research is less useful – there are libraries everywhere, and our knowledge is very well stored and preserved. And even if we collapsed and recovered, physical artefacts like cars, fans or radios could be reverse-engineered to understand their inner workings.

SA: It wouldn’t work for everything, though. If all nuclear scientists were to die at the same time, it would probably take us a while to get back to our current level of understanding of nuclear energy.

SB: There is a very good book called The Knowledge: How to Rebuild our World from Scratch by Lewis Dartnell, in which he goes through everything that would be needed to rebuild civilisation, including chemistry, agriculture, electronics, and so on. But in his last chapter, which really stayed with me, he argues that the one thing you’d need to rebuild civilisation quickly is the scientific method. Until a few centuries ago, an enormous amount of time, effort, and energy was wasted improving things in the dark, with people eventually getting lucky and finding random improvements. But if you only take into account what was rigorously discovered through the scientific method, you could collapse a lot of the history of human thought and development into a relatively short period of time.
When people talk about surviving and building bunkers, what they often suggest putting into the bunker is people, seeds, and so on. But what we really need to put into the bunker, or ensure survives in some way, is our rationalism through the scientific method. If civilisation collapses and everyone resorts to superstition, our chances of recovery are far lower, regardless of the physical resources we’ve preserved.

You also mentioned the possibility of an optimistic bias, and I think there definitely is one, for several reasons. First, there is a psychological bias that makes us prefer thinking about things being fine rather than going badly. There is also a statistical reason, which is that things going very badly is quite unlikely, at least in the near future. At the moment there are so many risks around that an extinction might happen in the next century; but it almost certainly won’t happen next week. That works against existential risk research in general, because these are very rare events, and human beings are bad at judging priorities based on their total expected impact, as opposed to the ‘most likely scenario’.

SA: There is also a simplicity bias. It’s quite easy to think about resilient systems for catastrophes like volcanic eruptions or earthquakes, both because the risk can be calculated, and because they’re quite entertaining to talk about. The whole industry of disaster movies is founded on that premise. But on the other hand, we too rarely think about the dangers of very boring systems collapsing. If the entire sewage system of a major city were to break down, the consequences would be really bad; but that’s quite a boring thing to study, and an even worse movie to make.
HB: A Canticle for Leibowitz ends on a note that is both quite depressing and surprisingly hopeful, and I think that characterises the entire field of existential risks quite well."
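The distinction drawn above – judging a risk by its total expected impact rather than by the ‘most likely scenario’ – can be made concrete with a toy calculation. The numbers below are purely illustrative assumptions for the sake of the arithmetic, not CSER estimates:

```python
# Toy comparison of the "most likely scenario" heuristic vs. expected impact.
# All probabilities and harm figures are illustrative assumptions only.

def expected_impact(probability: float, harm: float) -> float:
    """Expected harm: the probability of an event times the harm if it occurs."""
    return probability * harm

# A frequent but small disaster: 10% chance per year, 1,000 units of harm.
frequent_small = expected_impact(0.10, 1_000)

# A very rare but catastrophic event: 0.01% chance per year,
# 10,000,000 units of harm.
rare_catastrophic = expected_impact(0.0001, 10_000_000)

# Judged by the most likely outcome, the rare risk looks ignorable: in any
# given year it almost certainly won't happen. Judged by expected impact,
# it is roughly ten times worse than the frequent one.
assert rare_catastrophic > frequent_small
print(frequent_small, rare_catastrophic)
```

The point of the sketch is that a risk whose single most likely outcome is ‘nothing happens’ can still dominate in expectation once the severity of the bad outcome is large enough.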

Cloud Atlas · David Mitchell · 2004
"LS: I read this novel before I even knew what existential risks were; I’ve loved it for a long time. I really like the narrative structure; it’s a technical and stylistic masterpiece. What made me pick it for this subject is that it starts off not overtly apocalyptic, but there are little signs announcing that something is coming. There are little Easter eggs that slowly lead to the central section of the novel, dedicated to a post-catastrophe world set in the far future.

SB: This is the perfect book if you want to read about human extinction but still need to be ‘seduced’ into it. If you start with The Last Children, your exploration of existential risks might be cut short by the very depressing aspect of the story. Cloud Atlas is cleverly written, because its unusual narrative structure means that if you want to know what happens to the half-finished stories set in past centuries, you have to go through the middle section dedicated to the catastrophe.

LS: The six stories contained within the book are very interesting in the way they’re nested together, almost like a literary Matryoshka doll. And they’re also completely distinct in style: you have a hard-boiled 1970s California cop drama, but also a 19th-century Herman Melville-type elegiac story. I really loved this book; don’t watch the film though!

SB: The book also has a nice coda at the end: the character from the 19th-century story actually writes about human extinction, and what he thinks is going to happen to humanity in the future. He predicts that if we keep on being too greedy, or try to outdo one another instead of cooperating, then there is no way that civilisation will carry on forever without causing a massive catastrophe.
Interestingly, New Scientist published an editorial in early 2018, basically saying that existential risk research was great, but that researchers kept focusing on ‘traditional left-wing’ issues about capitalism, overpopulation, and environmental concerns, and asking whether this wasn’t politicising the research. We had a discussion at CSER about it. But my view is that if you start from a set of premises about what society should strive for – premises traditionally, though not universally, seen as right-wing – such as short-term profit maximisation, ‘creative destruction’, individualism, and the externalisation of risks, then you can’t run that model forever and not end in some kind of disaster. This isn’t an attack on anyone for being right-wing; all it says is that the long-run equilibrium of that model will almost inevitably include some kind of global catastrophe, so any system built on it is likely to be very unstable. It doesn’t mean we should all be Marxists or socialists – those models are also flawed – but it does mean that there is indeed a problem with the current way we operate in our world."
Climate Shock · Gernot Wagner & Martin L. Weitzman
"SB: The reason we dedicated our only non-fiction choice to climate change is that, at the end of the day, if you’re interested in existential risks and wondering what you can do to prevent them, then probably the one direct threat you can do something about is climate change. People often ask us what the most dangerous existential risks are. If you want something that could definitely leave us all dead, that would be AI: if an AI becomes superintelligent and decides that humans are a waste of space, it’s over. If you’re interested in something that could happen tomorrow, that would be nuclear weapons: it only takes one person to press the wrong button. But if you want to work on the thing we’re currently running fastest towards, it is catastrophic climate change.

There is a lot of complacency about the likelihood of extreme climate change. The uncertainty in current climate models means that we could very easily aim for a global increase of 2°C or lower, but actually get 6°C instead. People are always going to push for the most optimistic predictions, but there are many tipping points and feedback loops that could drive climate change to levels that would be truly catastrophic for us. So we’re driving very fast, in the dark, and we need to do something about that. For most people out there, this is the risk they can do most about, in particular by changing how they vote and what they expect from policy-makers. If people demand that politicians talk about climate change, things will happen. If people don’t care, it will be taken off the agenda again and nothing will happen. Ordinary individuals can have a big say in that, if they want to. There are also issues around individual lifestyle and consumption, and our perception of what is ‘normal’. Unfortunately, not everyone has such a big role to play in something like AI safety.
SA: If you think of a map of the world, you can identify the geographical centres that contribute most to each risk. For nuclear weapons, it’s mainly the military bases where warheads are stored, and the chain of command that would lead to their use. If you look at AI, you can point to some key research labs and data centres in California, London, China, and so on. For bioengineering, again, the geographical sources of risk are fairly circumscribed. But if you look at climate change, then suddenly almost the entire world, both developed and developing, is part of a very distributed source of risk. This means that any individual anywhere on the planet can act on climate change, and take partial ownership of the problem.

LS: For a non-fiction book on quite a heavy topic, Climate Shock is very readable.

SB: Yes, the fact that it’s about climate change makes it much more accessible. And actually, one of its chapters is a very gentle introduction to existential risks.

SA: There are some fairly good and popular books about AI, such as Life 3.0 by Max Tegmark, or The Technological Singularity by Murray Shanahan. But then what? Unless you go on to do a PhD in AI safety, those books wouldn’t really make you behave any differently after you’ve read them.

SA: Maybe, but if you look at the situation with nuclear weapons, there was once mass literacy about their potential effects and dangers, and very large mobilisations by people worried about them. That certainly played a role, as did Hiroshima and Nagasaki. But ultimately, if the decision power is centralised in the hands of a few people, there isn’t that much the public can really do."
The Three-Body Problem · Cixin Liu
"SA: In the first book of the trilogy, science basically stops working on Earth, and there’s a big puzzle as to why. Particle accelerators start giving random results, and a number of scientists commit suicide. It is then revealed that an alien civilisation is at the origin of those events. These aliens are themselves going through a systemic collapse, and they create an AI that they send across space to take control of another civilisation.

LS: So interestingly, in this book, the alien civilisation is experiencing an existential risk, and by using its technology to prevent it, it creates an existential risk for humanity.

SA: Exactly. And all of this is the setup of The Dark Forest, in which humanity realises the scale of the potential risk, and learns that it has 400 years to prepare before the alien fleet arrives. Humanity knows that the adversary is more technologically advanced, but not by how much. And it can’t do fundamental scientific research anymore. So the story says a lot about the massive advantage conferred by higher technological advancement.

LS: It goes back to what Simon was saying earlier, about research and the scientific method being our most precious resource.

SB: And humanity doesn’t even know what the alien side’s technology is, or what it should be catching up on.

SA: Without revealing too much, the novel makes the broad point that, as a species, it’s hard science, cold game theory, and consequentialist reasoning that will keep you alive. The ‘fluffy stuff’ like human love, morality and ethics won’t save you at all.

LS: But you need the cold, hard reasoning to preserve the fluffy stuff.

SA: Interestingly, this book is among the most popular science fiction coming from China right now. It showcases the Cultural Revolution at the beginning of the first novel, most characters in the book are Chinese, and there is an uncharitable yet very accurate portrayal of Western democracy and the inefficiency of the United Nations.
But it doesn’t glorify the planned economy either. It simply makes the point that humanity is fairly weak, and that if we’re ever faced with a really big threat, we most likely won’t survive it.

SB: Eliezer Yudkowsky, who runs the non-profit Machine Intelligence Research Institute, says it’s really important to realise that there’s no natural law saying our civilisation can’t collapse and our species can’t go extinct. It is a real, live option on the table. It really could happen. Just as you, as an individual, could suddenly die this afternoon, humanity could suddenly disappear, and all the ingredients necessary for that to happen already exist. Being able to accept that fact without looking away from it, and then do something about it: that is the message one would hope people might take away from these books. That’s one of the reasons it’s worth exploring existential risks through science fiction and novels, rather than just through non-fiction books: all of the people in these stories have to engage with these problems, realise the mess they’re in, and decide how to respond. We need more people who are willing to do that; people who take these issues seriously but don’t just get depressed or angry, and instead actually do the cold, hard thinking about what can be done."