Diaspora
by Greg Egan
Recommended by
"It is a masterwork of science fiction , and the deepest account I’ve seen of how much the human condition could change. The book opens almost 1,000 years into the future. But at some point, much closer to the present day, humanity had worked out how to create digital minds, living in virtual worlds. And the book is primarily an exploration of how profoundly such a development would change things. Many age-old challenges of social arrangement would become moot. For instance, the people of this future are effectively immortal and cannot come to any bodily harm unless they will it. So in these virtual worlds, violent crime is not against the law, but against the laws of physics. But many new challenges appear, such as finding meaning in these lives of ease, or keeping a coherent self-concept when living for millions of subjective years. Unlike the other books here, it is not a history, or even a prediction. But what is so interesting is that it is a fascinating sketch of how utterly strange and new humanity’s long-term future could be. Humanity has existed for about 200,000 years, a time that has seen a remarkable ascent in our power to transform the world. Very recently, in the 20th century, our escalating power finally reached a point where we could make weapons of such devastation that they could threaten our continued existence. We still live under the shadow of nuclear weapons, and have added new risks such as catastrophic climate change . And the coming hundred years will see additional risks, such as those of engineered pandemics and unaligned artificial intelligence. It is as if on humanity’s long journey, we have reached a high mountain pass where the only way forward is along a narrow ledge on the cliffside, above a sheer precipice. We don’t know exactly what the chances of falling are, but we can tell that this is the most dangerous moment of our journey so far. Our entire future will depend on whether we can make it through this time. What Parfit saw was that human extinction would destroy not only our present, but our entire future. We have survived for 10,000 generations and if we don’t destroy ourselves, we could survive for tens of thousands more. But we have got ourselves to a point where this entire future is at stake. When writing the book, I thought hard on whether to give numbers at all, as they can easily suggest a false precision. But I decided it would be a disservice to the reader to have them read so far and then simply not tell them how large I thought the risks were. The numbers are ultimately my best guesses — my own degrees of belief that there will be an existential catastrophe of each type in the next hundred years, based on a decade of research and conversation with experts. Support Five Books Five Books interviews are expensive to produce. If you're enjoying this interview, please support us by donating a small amount . I treat them as order-of-magnitude estimates, meaning that they are trying to be correct to within a factor of 10. In many cases such rough estimates would be useless, but note that my estimates of the risks range from 1 in 1,000,000 to 1 in 10, so even such rough estimates can help people understand which ballpark we are talking. And by putting a number on things, it also allows the reader to see that although I think the risk is very serious, I still think we are more likely than not to make it through the century. One way to understand the robustness of a probability estimate is to ask about how much disagreement there is among experts. 
Here, it is vast. Some think that the chance is extremely close to zero, while others think it is almost a foregone conclusion — something like a 90% chance. Thus the evidence around AI risk isn’t strong enough to force all reasonable people to a similar answer. You could think of my 1 in 10 as a hedge in this situation: something like the average among my peers.

I am indeed suggesting that existential risk is a key priority for our time. And I do think it is extremely important. I think Holt would agree that the loss of our entire future, of everything that we could ever accomplish, would be extremely bad. And when something is so bad, it can be important to work on it even when the chance of changing the outcome is small. But there is a question of how far this goes. How much more important is our future than our present? How small would the chances of saving the future need to be to warrant costly action?

His real fear, as I understand it, is that it seems difficult to get off this train — to avoid concluding that protecting our future overwhelms the importance of all other things. I’m sympathetic to this. But I don’t think the solution can be to conclude that future lives and achievements must matter much less than our own. I think we are just in a very unusually weighty situation: a unique period for humanity where sorting out how to safeguard our future really is of paramount importance. It is thus like living through a crisis or emergency, where resolving that emergency can temporarily become a chief priority.

But I’m not sure how much depends on our disagreement. At the moment, humanity spends less on protecting its future from existential risk than it does on ice cream. However we think of the value of the future, we can agree that we certainly need to start doing much more than this. The question of exactly where to stop can wait.

There are many reasons why we have neglected existential risk. One of them is actually quite hopeful: it is a very new challenge, and it takes a long time for us to update our understanding of our moral situation. At the moment, there is a strong rise in the number of researchers looking at these questions and in the number of young people who come to understand the threats and want to devote themselves to safeguarding humanity.

The careers site ‘80,000 Hours’ has a vast wealth of advice on how to use your career (and the vast number of hours you will spend at work) to help safeguard our future. Even if it is too late to choose (or change) your career, it is never too late to pick up a book about existential risk to learn about the threats we face and why they matter, or to donate to an organisation such as the Nuclear Threat Initiative, who are trying their hardest to prevent the worst from happening."
Big History · fivebooks.com
"When I was a teenager in the 1980s, I had loved science fiction . I’d read lots of Heinlein, Asimov, and Bradbury especially. For some reason, though, I mostly stopped reading science fiction in college. I suppose that I imagined myself too busy with other, more important things. “ Diaspora opened my eyes to the wealth of philosophical thought that has been playing out in science fiction over the past 30 years” Then in the mid-2000s, someone recommended Diaspora to me, and it set me afire with enthusiasm to devour all the science fiction that I’d been missing. I realized science fiction had potential to explore issues like artificial intelligence in a way that goes far beyond the classic sci-fi I’d read as a teenager. Asimov’s robots, and the android Data from Star Trek, they’re cool and interesting, of course, but if an artificial intelligence can be conscious, Asimov’s robots only scratch the surface of the possibilities. Diaspora opened my eyes to the wealth of philosophical thought that has been playing out in science fiction over the past 30 years, which we professional philosophers almost entirely ignore, to our great loss and discredit. The setup is that we’re living in a world in which, for a few centuries, people have been able to destroy their physical bodies to upload themselves into computers. You have to accept certain views of computation and artificial intelligence and consciousness to accept this, but this is the premise of the book. So let’s accept as a premise that if you were to destroy your body and your brain but record all that information in a computer and have the computer implement it in the right way, you could continue living as a person inside the computer in an artificial environment designed however you want to design the environment. Futurists like Ray Kurzweil now talk about uploading yourself, copying or transferring yourself into computers. David Chalmers and Susan Schneider have given the idea some sympathetic philosophical analysis. In Diaspora, Permutation City, and related works, Egan gives this idea the fullest imagining I’ve seen. What would it really be like if you were a computer program living inside an artificial computer environment? Well, for one thing you could duplicate yourself. You could back yourself up. Multiple times. Yes, and then there’d be the question, ‘do you want to merge back together with the person you diverged from?’ In Egan’s worlds, people can also control what he calls their exoselves. You can do things like tweak your abilities and personality parameters. One character tweaks herself so that she really, really loves math. She is just going at math theorems all the time. Then a friend of hers says something like, ‘you’re getting pretty deep down in this math. Don’t you think you should adjust your parameters a little, so you can kind of poke your eyes up into the world a bit more?’ She says, ‘Oh, yeah, I guess you’re right.’ She changes herself, and then she looks back on her mathematical self, and she’s like, ‘Wow, that person I used to be really got pretty deep in there.’ So, you could change your values and what you want to value. Get the weekly Five Books newsletter What would that be like? We ordinary biological humans, our values change somewhat over time, and with work we can sometimes intentionally shift them in certain directions. But what if you could just say, “I want to value X, intensely, passionately, more than I value anything else” and then make it so? What would you choose? What are the risks? 
If your choice is too narrow, could you get stuck in a hole and keep tweaking yourself deeper into that hole, until you’re just ecstatically counting blades of grass?

Some people choose to stay embodied, not uploading at all, either keeping their traditional human form or accepting various moderate to extreme biological enhancements. Among the people who have uploaded, a wide range of lives and values are possible, from spiritual meditation to space exploration to art colonies with multidimensional enhanced sensoria. If you’re living within one of these giant mega-computers, there’s really no threat of death, no serious scarcity. So the big question is: how do you find meaning in life in those conditions? You’ve got in front of you potentially billions of years of subjective experience and nothing that you’re required to do.

It’s mixed, but closer to utopia. Egan is exploring possibilities, and one of the wonderful things about this book is that he explores a broad range of them. Some end better than others. With still others it’s ambiguous how to interpret the ending. They’re all connected, but it’s not a plot-driven book. It’s not for everybody, because it includes long descriptions of, for example, hypothetical physics. Philosophers of mind might find the beginning fascinating. It’s inspired by Daniel Dennett: a detailed description of how you might seed artificial intelligence inside one of these computers.

Egan’s a fiction writer, and yet the fictions he imagines are, if you accept certain philosophical views about computation, within the realm of possibility. What might the future hold for us, or what are the different ways people—I don’t know if we can call them humans anymore—could or should be? We all have more or less a normal conception of what a person’s life could be. Egan imagines a huge range of alternatives. He knocked loose some of my implicit suppositions about what the future might look like.

One more example: dream apes. They don’t play a big role in the story, but they’re fascinating to me. These are humans who genetically engineer language out of themselves and become closer to apes, out of choice. Why is never really explained, but you can speculate. Maybe that’s what they anticipated. But once they’ve become dream apes, there’s no going back.

It is a work of philosophy. Borges is also philosophy. I don’t think philosophy has to be written in the form of expository essays."
Philosophical Wonder · fivebooks.com
"This book imagines a far future in which the world is populated with a diverse range of what I will call ‘persons’, rather than ‘humans’. So if we think of a human as a biological member of Homo sapiens —there are other ways of thinking about them—but if we think about humans that way, we can think about a person as someone who’s ethically relevantly like a human in deserving the highest level of moral concern, but who is possibly an artificial intelligence or a member of another species. In Diaspora, there are AI systems, there are robots, and there are genetically altered humans who populate Earth. Humanity has managed to create real persons who exist inside simulated environments, in high capacity protected computers, and real persons who exist in robot bodies, who are exploring the solar system, and real persons who are biological. Those biological persons come in various forms, because we’ve taken control of our genome. So there are some who have gills and swim under the sea, and there are some who have engineered out of themselves all capacity for language; there are some biological persons who value very different things than we do, who maybe have really deep insight into biochemistry and smell capacities that allow them to interact with a forest in a radically different way than we do. Support Five Books Five Books interviews are expensive to produce. If you're enjoying this interview, please support us by donating a small amount . This is a society without significant scarcity. In this society, people have immense freedom to consider what kind of life they want to live because they don’t need to hold down a job to draw an income. Furthermore, you can control your own values, especially if you’re one of the AI systems. You can just tinker with your settings, saying, ‘Okay, I think I’m going to really love math for a little while’, or you can change your values in many other ways. At one point in Diaspora , there’s an art installation. The people who want to go to the art installation get a special profile attached to their personality so that they will appreciate art in a certain way. One of the main characters adopts the profile, and suddenly she notices the world differently. The clouds in the sky become salient for her as they never would have before. You can voluntarily adopt a new worldview for a while, then shed it. It gives us the existentialists’ question in its purest form. I take central idea of existentialism to be that you have freedom to figure out what you value and pursue that. That freedom is limited in ordinary embodied human life. But much less so in this story. Right. You couldn’t really achieve it, and you’re limited by various practical necessities. But the AI systems in Diaspora aren’t nearly as constrained, with massive resources, vast lifespans, and vast abilities to control both their internal structure and their environment. Yes. And a capacity to control and construct desires beyond what we can even imagine. You could desire to become an artist in 16 dimensional space who works with smellscapes. Yes, I see the pull of that. I think you could take Egan’s work as moving a bit in that direction, although I don’t think he goes fully there. He does not commit, he paints the picture. This is one of the other wonderful things about science fiction. Different characters illustrate different possibilities. 
One character decides to install an outlook that is universally self-affirming, in the sense that once you adopt it, you can never decide that another outlook would be better. This outlook has an ethics and aesthetics with some resemblance to Buddhism. The character tweaks his settings so that he’s just going to be at peace with the now, and he enters into an inescapable meditative state, and his friends are like, ‘Okay, goodbye.’ It probably strikes the reader as disappointing, a mistake. But maybe there’s something to it.

Another character plays through all the possibilities that he sees in his personality, and then at the end of an immensely long life, he’s like, ‘I’ve pretty much done everything, so goodbye.’ Another character ends up giving herself over completely to exploring the beauties of mathematics.

It is breathtaking in that it gives you a sense of the amazing variety of possible ways of living once you lift the constraints that we normally take for granted. That’s part of what’s coming across in this book.

One dimension of difference is how much the different characters care about embodiment. There are biological humans who are highly constrained by the physical and biological realities of Earth and who risk dying in accidents. They choose this and see value in biologically constrained forms of life. Then there are robots who are embodied but not constrained in quite the same way, who can back themselves up and orbit around Saturn. Then there are AI people without conventional bodies at all, living in artificial environments, but who choose to experience themselves as having bodies in those environments, subject to virtual laws of physics like friction and gravity, though maybe they don’t need to use the toilet. And then you have AI people who just dispense with all of that—why would you want friction? Just forget that, it’s constraining. Who needs gravity? These computers are buried 200 meters under the ground in Siberia, so they don’t take up a lot of surface space on Earth!"
Science Fiction and Philosophy · fivebooks.com