A Canticle for Leibowitz
by Walter M. Miller Jr.
"HB: Before the beginning of this novel, a nuclear war destroys civilisation and is followed by what’s called the ‘Simplification’, a backlash against the Enlightenment, science, and culture, leading to most people becoming illiterate. And the novel tells the story of society recovering from that disaster, and how a small core of pre-deluge civilisation is preserved and protected through centuries of rebuilding. It touches on many questions about rebirth, and whether history follows an endless cycle of dark ages, middle ages, and renaissance. The reason I chose it is that there are many things that would kill humanity just outright, but a lot what we do in existential risks is study scenarios of civilisational collapse: humanity doesn’t completely disappear, but is reduced to small groups or bands wandering around. In those scenarios we can try to imagine whether humanity would recover, and how fast it would be able to do so. Some questions are difficult to answer: if humanity does end up recovering from its collapse and goes back to its previous level of cultural and technological development, should we be fine with that? SB: Yes, and it’s also part of the reason we chose The Last Children : it takes a long view of nuclear scenarios, and looks at what happens months and years after such a disaster. All of these books are willing to take the long-term perspective; this is something we’re always encouraging people to do. One of the reasons you should read these novels if you’re interested in existential risks, is that they really challenge you to think about what civilisation looks like in the very long term. It’s so easy to get hung up on problems we face and hear about every day on the news, but on the scale of history, many of those problems will be nothing more than footnotes. 
HB: A Canticle for Leibowitz invites the reader to take this long-term view, by making the point that a civilisational collapse, as long as it is followed by a renaissance, doesn't really matter in the grand scheme of things. That little 'blip' won't matter at all over millions of years. I don't necessarily agree with that argument myself, but it's certainly an interesting one to think about. Another question it touches on is the problem of the Great Filter. The Great Filter is one suggested solution to the Fermi Paradox, which says that, given what we know about the probability of intelligent life in the Universe (estimated mainly through the Drake equation), we should have spotted intelligent extraterrestrial civilisations a long time ago. The Great Filter hypothesis proposes that all civilisations have to pass through a sort of filter, which leads either to their survival and expansion or to their complete collapse; the theory is that most civilisations don't pass the filter, which explains why we seem to be alone in the Universe. For example, one could theorise that every sufficiently intelligent civilisation in the Universe evolves until it discovers nuclear power, and ends up killing itself with nuclear weapons. The main question for humanity then becomes: is the Great Filter behind us (and we're 'fine' now, as humanity will most likely survive for a very long time), or does it lie ahead of us, in those existential threats?

SA: Scenarios like this one also make one realise the complexity and fragility of the systems that keep civilisation in place and thriving. We don't notice those systems most of the time, but it takes a whole lot of effort to keep supermarket shelves stocked on a daily basis, around the world. How far are those systems from tipping over?
And can we make them more resilient? These are interesting questions because they're not tied to a particular type of existential threat; they're about making our civilisation more resilient against all possible risks, and making sure we're able to bounce back.

LS: We do have a lot of gene banks, in Svalbard for example, so that's one aspect that is considered fairly robust in terms of preserving things like crops.

HB: In the 1950s and 60s, when people were very scared of nuclear war, lots of things were published on shelters, contingency plans to preserve a minimal government, and so on. There's less being written on the subject now. My view is that this kind of research is less useful: there are libraries everywhere, and our knowledge is very well stored and preserved. And even if we collapsed and recovered, physical artefacts like cars, fans or radios could be reverse-engineered to understand their inner workings.

SA: It wouldn't work for everything, though. If all nuclear scientists were to die at the same time, it would probably take us a while to get back to our current level of understanding of nuclear energy.

SB: There is a very good book called The Knowledge: How to Rebuild our World from Scratch by Lewis Dartnell, in which he goes through everything that would be needed to rebuild civilisation, including chemistry, agriculture, electronics, and so on. But in his last chapter, which really stayed with me, he argues that the one thing you'd need to rebuild civilisation quickly is the scientific method. Until a few centuries ago, so much time, effort, and energy was wasted improving things in the dark, with people eventually getting lucky and finding random improvements. But if you only take into account what was rigorously discovered through the scientific method, you could collapse a lot of the history of human thought and development into a relatively short period of time.
When people talk about surviving and building bunkers, what they often suggest putting into the bunker is people, seeds, and so on. But what we really need to put into the bunker, or ensure survives in some way, is our rationalism, through the scientific method. If civilisation collapses and everyone resorts to superstition, our chances of recovery are far lower, regardless of the physical resources we've preserved.

You also mentioned the possibility of an optimistic bias, and I think there definitely is one, for several reasons. First, there is a psychological bias which makes us prefer thinking about things being fine rather than going badly. There's also a statistical reason, which is that things going very badly is quite unlikely, at least in the near future. At the moment there are so many risks around that an extinction might happen in the next century; but it almost certainly won't happen next week. That works against existential risk research in general, because these are very rare events, and human beings are bad at judging priorities based on their total expected impact, as opposed to the 'most likely scenario'.

SA: There is also a simplicity bias. It's quite easy to think about resilient systems for catastrophes like volcanic eruptions or earthquakes, both because the risk can be calculated, and because they're quite entertaining to talk about. The whole industry of disaster movies is founded on that premise. On the other hand, we too rarely think about the dangers of very boring systems collapsing. If the entire sewage system in a major city were to break down, the consequences would be really bad; but that's quite a boring thing to study, and an even worse movie to make.
HB: A Canticle for Leibowitz ends on a note that is both quite depressing and surprisingly hopeful, and I think that characterises the entire field of existential risks quite well."
Existential Risks · fivebooks.com