Bunkobons


Tom Chatfield's Reading List

Tom Chatfield is a British tech philosopher and author, with a special interest in critical thinking and the ethics of technology. He has written a dozen books, published in over thirty languages. His latest, Wise Animals (Picador), explores the co-evolution of humanity and technology.


Computer Games (2010)

Scraped from fivebooks.com (2010-03-15).

Source: fivebooks.com

Raph Koster
"The reason I like A Theory of Fun is that it’s by a professional games designer, one who was responsible for one of the first MMOs (Ultima Online in 1997). It’s a book about games in general, not just about electronic games, and it reminds us that game-playing is a human activity that is at least as old as civilisation. Today, we are seeing a new form of it, but in order to understand it properly, we need to begin with this really deep evolutionary hold that games have on us. Koster looks at games as being, above all, about learning; they are, in his phrase, ‘chewy’ environments for our brains, where we are performing a task again and again to get better at beating the particular properties of a particular environment. He certainly gives the lie to the idea that gaming is new, at least in terms of being unprecedented. I love his eloquence in explaining that, first of all, you have to understand games in very old terms if you want to understand which aspects of our natures are being foregrounded and tapped into by this modern explosion in a particular form of game."
Johan Huizinga
"It was written in 1938, and it’s a study of what the author calls the ‘play element of culture’. So really it’s another book about the way that play precedes culture, and is a distinct and very complicated human phenomenon. The author sees play as something that has many interlocking facets, and that has given rise to much that we think of as civilisation: something that encodes a crucial set of human values, ideas and ways of being in the world. As he points out, all animals play; play is a bigger thing than mere culture, and an important counterbalance to those ‘serious’ elements we sometimes seek to reduce life to: the Darwinian business of work and struggle. A lot of people say play isn’t ‘real’ or ‘serious’ and mean this as a critique. But that’s the point of it – that human reality is richer and stranger than the animal business of survival."
Julian Dibbell
"This is where it gets really interesting in the present day. If we want to talk about the biggest problems and challenges around games, too, economics seems to me a much more fertile ground for worry and exploration than the vague fears of corruption that are doing the rounds. Play Money was written by an American author called Julian Dibbell, who decided he was going to spend a year trying to earn enough money to live off purely by trading in virtual goods. Virtual goods are things that only exist within a game world, but which people value so much that they are prepared to pay real world money for. It sounds uncanny and paradoxical, but makes perfect sense: game worlds exist to reward people, you have to put in a lot of effort to get stuff within them, and a lot of people wish to short-cut that effort. It’s simple supply and demand. There is a demand among millions of fairly affluent players for virtual things that take hundreds or thousands of hours to earn, and so the supply exists to meet this demand, especially in places like China, where the national average wage is not very high. At this point, virtual economics start to look astonishingly real. It’s possible to construct, out of nothing, a virtual arena that generates enormous notions of value from its players. It begins to blur the boundaries between what is playful and recreational and what is hard work, what is business. This, I think, is one of the reasons that games are a fascinating window into the world’s future, because the ways in which we are recreating and entertaining ourselves, and the activities that we think of as business and money, are blurring in some ways. After all, it’s easier to trace the chain of value of spending 100 hours of effort earning a virtual sword that then has a monetary value put on it by the market than it is to understand why a derivative is worth a certain amount of money. If you want to talk about the potential of game worlds, this is hugely important.
Economics has never been such a precisely measurable science before – these may be games, but they are also very real and very valuable economies, with real world economic phenomena appearing in them. Unlike real life, however, you can measure every single variable precisely down to the tiniest millimetre, and you can easily set up controls, comparisons, you can tweak variables. Down the line, this could have very wide applications."
Stefan Szymanski
"I really enjoyed this book. It’s a compelling account of the unusual economics that have grown up in the last century around the extraordinarily profitable arena of sports. It’s interesting because, in order to make sports work as an industry – in order to make the money – you have to indulge in all sorts of practices that defy conventional economics. For instance, a sports league needs to be competitive to be enjoyable, so, unlike in a business arena, crushing your opponents by too much damages everyone’s ability to make money. So leagues are highly redistributive in terms of the money; for instance, the value of the Premiership is not dependent on how good its best two clubs are. It’s dependent on how good the competition is. Theoretically, the top few clubs could claim almost all the money because they have the clout in terms of their following, players and bank balances. Instead, though, it’s tacitly understood that there has to be a good level of competition, there have to be good matches. However, the great story of making money out of sport is how staggeringly popular vicarious play is as an activity. The last football World Cup was the single most viewed activity in human history, and in terms of vicarious participation it dwarfs any religion. I am very interested in what happens when what has been a vicarious activity is made a mass participatory activity, because people, who have an intense curiosity about watching play, can now participate in it on a vast scale. If we look at the special economics, rules and circumstances that have grown up around sports, this is a very useful index of the play economics of the future."
Mihaly Csikszentmihalyi
"The notion of flow is the idea that there is a state that is characterised by complete immersion in an activity, by a constant response to stimuli, and a perfect match between your ability and the challenge in front of you. This combination puts people into a state that has often been described as feeling like ‘flow’, where you are learning and acting and responding at a super-efficient rate. This physiological phenomenon has been likened to many things – to what happens when a sportsman is hitting the perfect drive in golf, or a musician is performing a piece at the peak of their powers. It’s a notion that has really been taken up by the gaming community because games are an interactive medium in a way that nothing else is – you’re getting many thousands of tiny responses a second. And they’re a dynamic medium, in the sense that they can offer you an environment that adapts to your performance. Recently, an influential game called flOw was designed by a man called Jenova Chen, the idea behind it being that if you can come up with an adaptive environment that responds to what people are doing, they are learning at a much faster rate and their ability to respond to this environment is exponentially increased, as is their pleasure and immersion in it. ‘Flow’ is a term that can be over-used and isn’t always too precisely defined, but as an idea it’s one of the most exciting things around in games theory. Because we can actually start to measure this kind of state neurologically, and start to break down this resonant but imprecise word into a number of human phenomena relating to learning, to action and to memory. This gives you powerful insights into how we can make a whole spectrum of activities more intuitive, more appealing, and more open to a wider range of abilities – how we can draw in people from higher and lower ends of abilities, and give them both a satisfying environment.
It’s as relevant to schools and businesses as it is to entertainment companies, and it’s only just beginning. Super Mario Galaxy on the Wii is a beautifully designed game world – loving attention to detail, the quality of design, it’s just immaculate. I have also been really enjoying a clever, cheap game on the PS3 called PixelJunk Monsters, which is a tower defence game, because I think it’s a very simple, beautifully executed example of a game you just want to play again and again and again to get better at particular tasks: in this case, building towers to defend cute furry things against cute monsters. I have always had a very soft spot for the original SimCity game. I still play it. It’s a great example of how simple rules can give rise to amazing, emergent complexity and how satisfying it is to play with a city as a virtual toy and complex system. People often miss the fact that the most popular types of games are not violent ones, but rather systems-management games. The idea of a city, where you play god, try to make it grow, is very satisfying. I also always loved Super Metroid on the SNES. You could play it in different ways – either just complete it, or complete it and try to find every secret in the game, and it was so well designed it was an absolute pleasure to try and discover all the bits that the designers hid around the place. Oh, go on then, World of Warcraft. I have been playing since Beta – so for its total existence – and what’s interesting about it is what it allows people to do. You can go in and be a twat, you can go in and help people. I’ve made friendships within it, I play it with my wife, I play it with good friends, with a guild on the east coast of the States, whom we occasionally fly over and stay with. We’ve had people who’ve never left the States before come over and stay with us. It’s a delight. It’s a game that gives people a large selection of different things to do.
It doesn’t set out to suck out your life, rather just present players with a broad choice of activities. Only a few people play it obsessively at the top end; most people just enjoy it because it’s witty, post-modern, and well-designed. It is the daddy. I know you only asked for five, but I can’t finish without mentioning perhaps the cleverest single game of the last decade, Portal, which just has a brilliant concept – you fire a gun that allows you to create wormholes, instantly transporting you between any two surfaces in a maze-like 3D landscape. Brilliant gameplay, but also a wonderful script and amazing voice acting. The script is hilarious; it’s a comic masterpiece. It shows that games can be well-designed, fun to play and funny."

The Ethics of Technology (2024)

Scraped from fivebooks.com (2024-04-24).


Luciano Floridi
"I love Luciano Floridi—who is a philosopher of technology—because I think he has a breadth of vision, a genuinely systematic approach to the ethics of technology, but also he is deeply literate in history, and deeply interested in human nature. He’s not a consequentialist, in the sense of being interested in maximising some kind of uber-beneficial long-term outcome for humanity. A lot of tech philosophy naturally leans towards consequentialism in terms of mega payoffs and outputs. This can be a great tool for engaging with the outputs of particular systems, but it’s not a systematic philosophy of human nature or thriving. In this particular book, Floridi starts by referencing and updating Freud’s account of three historical revolutions in human consciousness. With Copernicus and the birth of heliocentric models of the universe, we gradually learned as a species that we weren’t the centre of the universe. It’s not all about us, in a cosmological sense. Then in the Darwinian revolution of the 19th century, we found evidence that suggested that we are not the unique pinnacle of creation—that there wasn’t this moment where humans were ‘made,’ the best and most intelligent at the top of a pyramid. In fact, we’re connected with and emergent from the rest of nature; and nature turns out to be far vaster and stranger than we previously imagined. The timespan within which we exist is immense while—incidentally—you no longer need a deity to explain our existence. Next, Freud argued that his own psychoanalytic revolution was another de-centring of human consciousness, because rather than the sublimely lucid self-knowledge of Descartes—I think therefore I am, cogito ergo sum—you can’t be at all sure of what’s going on when you introspect. You’re grasping at straws.
Floridi adds to this account what he calls a ‘fourth revolution’: a similar de-centring of human consciousness where suddenly, through artificial means, we’re creating entities that are capable of incredible feats of information processing. And by doing so we’ve been forced to re-conceive ourselves as informational organisms – and take on board the fact that even our intellectual capabilities may not be beyond replication. Our ability to be good and do good, in other words, is bound up with the systems through which we’re connected and interconnected. The subtle point he makes—which I think is a Kantian one—is to recognise that human dignity and thriving become more rather than less important in this informational context. For Floridi, the informational webs we weave across the world are themselves sites of ethical activity. Building on this, he discusses what it means for the technologies we use to be ‘pro-ethical’ by design, in the sense that they enhance or uplift our capacity for accountable, autonomous actions. In order to do so, they need – for example – to give us correct, actionable information. They need to help us arrive at decisions that are appropriate and genuinely linked to our needs and concerns, rather than manipulating or disempowering us. You can contrast this to what have elsewhere been described as ‘dark patterns,’ where you have systems that are opaque and exploitative: where the interface is more like a casino, and it doesn’t want you to make a good decision, or really to have any choice at all. Some forms of social media might be one example of that—where the incentives are to do with emotional arousal, getting people to act fast and without consideration, with little ability to apply filters of quality to what they are doing.
I didn’t list it among my recommendations, but I love Neal Stephenson’s novel Seveneves, about a hypothetical future in which social media so deranged people’s collective judgement that it nearly led to the human species being wiped out. In his story, social media is up there with gene editing or bioweapons as a forbidden technology, because the danger is so great when its seductions meet human fallibilities and vulnerabilities. I think intentions are important. But it’s very dangerous to ascribe too much foresight and brilliance to the people creating tools. I’ve spent quite a lot of time in the headquarters of tech companies dealing with very smart people, and it’s important to remember that even very smart people often have quite narrow knowledge and incentives. So, yes, intentions are important, but what you really need to be interested in is the blind spots, the lack of foresight, the capacity of people to pursue profit at the expense of other issues. The big thing for me is what I call the ‘virtuous processes’ of technology’s development and deployment. What I mean by that is forms of regulation and governance where you don’t get to ignore certain consequences, to move fast and break things and not worry about the result: where you are obligated to weigh up the wider impacts of a technology. You need to assume there are a lot of blind spots, lots of stuff that will emerge over time that you can’t anticipate. It’s one of the great lessons of systems thinking. I quote the French philosopher Paul Virilio in the book: “When you invent the ship, you invent the shipwreck.” This refers to those accidental revelations that will always come with technology. They’re inevitable; but this makes it all the more important to have feedback mechanisms whereby unintended or undesirable results—damage, injustice, unfairness—can be filtered back into the system, and checks and balances and mitigations created.
There’s been a lot of good stuff written recently about the Luddites. Brian Merchant has a great recent book recasting the Luddites as a sophisticated labour movement: not just people who didn’t like technology, who wasted everyone’s time by busting up factories and resisting the inevitable. He’s partly saying that, actually, all forms of automation potentially bring a great deal of injustice and disempowerment and exploitation. And much of the story of industrialisation has entailed, very gradually, societies working out collectively how industrial processes can be compatible with respect for human bodies and minds. Of course, there are urgent conversations we still need to have about how models of production and modern lifestyles can be made compatible with sustainability. It’s not about moving back to some mythical Eden before technology; I don’t think that’s possible or meaningful. But I do think that keeping faith with our ability to change and adapt rapidly is important. We have the tools and the compassion, the awareness and the empathy, to come up with solutions for a way to live together. And, ironically, best serving these values tends to mean focusing on the local and the practical and the incremental, not the grand and the hand-waving."
Alison Gopnik
"I love this book so much. Alison’s work has been so important to me. She brilliantly argues in this book and in her research that if you want to understand our species, in a psychologically and scientifically literate way, you have to be deeply interested in children and childhood. Childhood is key to our uniqueness as a species. Human childhood is a total outlier in biological terms in the mammal kingdom. We have this incredibly long period of enormous dependency, neuroplasticity and flexibility. Fairly obviously, to be a language-using and technology-crafting species, you need an incredible capacity for learning: to be able primarily to acquire skills through nurture rather than nature, through teaching and communication rather than through instinct. But how did this come about? Gopnik makes the case that over millions of years, our lineage doubled and then tripled down on this incredibly long, vulnerable, flexible childhood, because it conferred evolutionary advantages in terms of our ability to create tools, to build protections through technology. But she also points out that all this was necessarily bound up with an incredible capacity for mutual care, compassion, and for nurture. These traits are the absolute fundamentals for human survival and thriving. She calls children the R&D department of the human species. It’s a wonderful image; and it captures the entwined capacities for care and change that underpin technology and culture alike. The human child is born incredibly vulnerable. It’s very dangerous for the human mother to give birth. And then, once the child is born, it’s utterly dependent for months and months and months—like no other ape. It takes months to even learn to sit up. Reproductive maturity is a decade, a decade and a half away. The prefrontal cortex continues to grow and develop into our twenties. But it brings great gifts: this capacity for change and learning and teaching. 
So much of the time, discussions of technology are obsessed with notionally independent adults—perhaps men with disposable income. But that’s an impoverished description of our species, both how we got here and how we build a future together. Pretty obviously, the future—in the most literal of ways—is children. They are born with this incredible plasticity, a wonderful lack of instinctual lock-in. So, for me, any ethics of technology has to be deeply, deeply interested in childhood and children: in how we learn, how we teach, how we change, and how the knowledge of one generation is passed on and adapted. Gopnik has also written about AI as a social technology, as a kind of reification of the knowledge and understanding latent in language and culture—something more like the library to end all libraries than a mind. I think this is a very useful framing. AI isn’t, or shouldn’t be, our rival. It’s a resource: a cultural and technological compendium of our achievements. But it cannot do any of the most important stuff, in terms of nurture, care, teaching, hoping. If you are thinking about how to create systems that have values compatible with human and planetary thriving, children and child-raising are good models. If you want to look at the conditions a human needs to thrive, look at children. It’s crushingly obvious that children need love. Of course, they need to be safe and warm and fed more than anything; but it’s love that is the driving and connecting force. The love and support of family and friends and kin is more important than piles of stuff and gadgets. The idea that children might be taught and raised by super-intelligent AI is just delusional. It’s stupid.
There’s a lovely line in the philosopher Alasdair MacIntyre’s lecture series Dependent Rational Animals, where he says that his attempts in his earlier work to come up with an ethics independent of biology were just wrong and impossible. To paraphrase, you need an ethical self-conception rooted in the fact that you’re part of humanity, part of a species, not just a notionally independent adult. We have all been, and we all will be again, utterly reliant upon the care of others. Every single one of us was born into total dependency. We will all sicken, age and die. There’s nothing in our lives that is truly autonomous, when you think about it. Even the most libertarian forms of individualism are wholly predicated upon massive shared supplies of goods and services, money and trade, manufacture and technology. These people in their bunkers with their generators, what are they doing? It’s a weird denial of our profound inter-dependency. And technology is the most inter-dependent thing of all, even more so than us. Technology is incredibly needy. I think the desire to be wholly independent of others embodies a confusion between the enormously important freedom to pursue your goals, to live your life, and the fact that every opportunity and tool you have at your disposal is ultimately the product of countless others’ lives and labour. Gopnik makes this point very powerfully. She says: I am the child of countless minds. Everything around me—the light, the chair, the clothes I wear—is the product of century after century of human ingenuity and inheritance."
Stuart Russell
"Yes, this is a book by a computer scientist who has thought deeply about the history and future trajectory of artificial intelligence, and who is very literate in the cutting-edge discourse around it. He is no Luddite, in the pejorative sense, and in fact embraces many consequentialist ideas because they are extremely fit for some purposes: if you are talking about a system, you do need to take a close pragmatic interest in its inputs and outputs. One of the reasons I find this book important is that it’s very clearly written. It’s not a long or a difficult book. Russell zeroes in on the significance of doubt, and this is a fundamental point that I think a lot of people miss in the field of computer science, which is simply that any kind of real-world problem (what should we do? what might we do? what would be the best outcome?) is computationally intractable. In other words, there will always be some uncertainty. There is such a thing as a perfect game of chess. It’s very difficult to compute, but there is such a thing. A rules-based, deterministic game can be optimised. But human life can’t be optimised. It can be improved, but it can’t be optimised. ‘What should I have for lunch?’ is not a question that has an optimal answer. Given all the calculation time in the universe, you still couldn’t calculate the perfect, super-optimal lunch. Of course, we all know the world is probabilistic. Yet, when it comes to computation, a lot of people seem to forget this and start looking for ‘the answer’ when it comes to complex social challenges, the future of AI, and so on. Russell emphasises the significance of the fact that there will always be a plurality of different priorities, and you’ll never be able to come up with the priority. Then, crucially, he talks in a practical way about the importance of building doubt into machines. AIs, he argues, should be doubtful to the extent that we are doubtful about which goals are worth pursuing.
And they should also be willing to be switched off. The people who construct them should prioritise the existence of an off switch, and a process of constructive doubt, over trying to optimise technology towards a supreme, transcendent final goal for humanity. Human Compatible is a humane book about the challenges and opportunities of AI. It’s also a very non-hype book. By which I mean, it talks about the trajectory of artificial intelligence, the enormous potential of the vast-scale pattern recognition it offers, without indulging fantasies. AI has a great potential to do good and to help us solve problems, but the point is not that it or anybody will ever know best, but rather that we should ensure the values and tendencies encoded into powerful systems are compatible with human thriving, and the thriving of life on this planet. And that compatibility must in turn entail doubt, plurality, and an open interrogation of aims and purposes, not optimisation. The thing I worry about, perhaps more than Russell, is that some technologists seem determined to reason backwards from a hypothetical, imagined future. There’s the so-called ‘singularity,’ the point beyond which computers become self-improving, and potentially solve all human problems. So the only thing that matters is getting to the singularity and having good rather than bad super machines. Then, at the point of singularity or beyond it, death will be solved, the losses of our ecosystem will be redressed, there can be an infinite number of people living infinitely magnificent lives, and so on. Or, if we do things wrong, everyone will be in computer hell. The problem with all this is that you’re reasoning backwards from a fixed conclusion you’ve arrived at through hand-waving and unacknowledged emotion. You’re engaging in metaphysical, or even eschatological speculation, while insisting that it’s all perfectly logical and evidence-based.
It’s really important, I think, to resist focusing on imaginary problems or treating hypotheticals with high degrees of certainty. This is precisely the opposite of what we need to do, which is to focus on real problems and opportunities while preserving uncertainty and constructive doubt: while putting actual ideas and theories to meaningful, scientific tests."
Carissa Véliz
"This is both a philosophical and a polemical book: one that links big ideas to immediate actions in the real world. I see it as of a piece with important writing by philosophers like Evan Selinger and Evan Greer in the US, who have sounded the alarm around the normalisation of various kinds of surveillance and data collection. A lot of contemporary, AI-enabled technology is data hungry, with the broad promise that if you give it your data, it can do more and more: keep you safe, make employees more efficient and more profitable, track students and optimise their learning, stop accidents, catch thieves or dissidents; and so on. Some of this may be true, or even desirable. But privacy is incredibly important. It’s the space within which various kinds of trust, self-authorship, self-control and thriving can take place. And it’s also very important to the civic contract: the ability of different people to meaningfully control their lives and have agency within them. A line I like in Véliz’s book is that, contrary to the idea that philosophy is about dispassionate consideration, it’s appropriate and indeed important to protest and to show anger in the face of incursions upon your liberty. Resistance is not futile, but necessary. In democracies at least, we are lucky enough to be able to say: no, we do not want ubiquitous surveillance on college campuses; no, we do not want everyone’s face being recognised in public places; no, we do not want databases of everyone who goes to a protest. We don’t want certain features of people to be tracked at all. Because the kind of power that this gives to the state, or corporations, or other small bodies of people, is dangerous and corrosive of a lot of the values we need in human society. A loss of privacy makes it easy for other people to have power over you and to manipulate you. Therefore, having control over that thing called data—which sounds so neutral—is powerful. Once again, it’s not neutral at all.
It’s information about who you go to bed with, or what your children do, or who your God is. We’re back to the point I began with. Far from technology being a dispassionate expert arena, letting people harvest data about you in an unfettered way takes away your privacy and gives others power over you. And this doesn’t have to be the case. Some of us, at least, can push back against it, demand protections. It’s not inevitable. For all these reasons, this is an eloquent and important book, and it’s one I enjoyed a lot. It’s a practical call for action, with examples that make the case for action right now."
Gaia Vince
"Yes. In some ways this book has inspired me most directly in my own writing. It’s big, it’s beautiful, it’s well-written and unashamedly fascinated by the biggest of big pictures in terms of where we’ve come from as a species. And its themes are interwoven with deep readings of history, spirituality, language, belief. It’s almost dizzying, a compendium of fascinating ideas about humanity. I love the way Vince leaps between the particular—the details of, for example, how archaeological evidence shows us hominins gradually learning to control fire—and the vast sweep of history. Our capacity to combine these perspectives is itself a great human gift. As she notes, we exist at an unprecedented moment in history: we are a planet-altering species, we are transcending the purely biological. Humanity is utterly remarkable. Yet at the same time, we remain a part of nature. We are part of this planetary system; a strange, self-remaking part. And we can’t possibly understand ourselves without being deeply interested in this connectivity and the history of our own emergence. So, yes, I love the bigness of this book—its eloquence and insistence that there is a commonality between archaeology, sociology, biology; the spiritual, the mental, the computational. I put it down as a book of ethics, because for me, the great task of ethics is to connect the facts we know to the question of what we should do, and why. So: what does thriving look like for our species in the light of what we know about our biology, our technology, our history? I constantly want to make this connection. You know, I come back to someone like Kant, who is almost a byword for the generalised, the abstract, or the idealistic. He can seem an impossibly demanding ethicist, a philosopher’s philosopher. Yet he was famously awoken from his dogmatic slumber by Hume’s empiricism. He did all the things he did because he cared so deeply about facts—about us as beings, about nature, about truth.
He had some appallingly unscientific and ignorant ideas, especially around race. But even this emphasises that the great challenge for ethics is to address new forms of knowledge and old forms of ignorance: to help us face the facts of existence with clear eyes. So. Which of our ideas, right now, are ripe for overturning? You could say: the way we treat animals, the planet, even our children. The way we pretend technology may dissolve all our problems. The way we pretend that we are completely rational when actually we are, often, in the grip of unacknowledged emotions. So I love Vince’s book. It’s deeply attentive to our spiritual side, to our intellectual side—but biologically literate at the same time. I should also say that the author has written more recently about climate change and global migration trends, both incredibly important particularities in the 21st century. Any ethics worth its salt has to be deeply interested in the facts and politics of the present moment. The duty is to keep on trying to understand the world and ourselves – and, if necessary, to keep on changing our minds. I love that line. You see it quoted a lot; the Paris Review interview it’s taken from is a great one. And he wasn’t wrong. One of the most important facts about even the cutting edge of AI—transformer technologies, convolutional deep learning, all that jazz—is that its vast understanding simply compresses and queries huge amounts of our own knowledge. So, yes, we have wonderful machines that can endlessly give us answers. But only we can define the questions worth asking. This is one reason I keep coming back to ancient myths in Wise Animals: those stories that have immemorially helped us to structure and explore our longings, fallibilities, identifications. Myths often remind us of human potential and hubris in the same breath.
And one thing they say again and again is that if you have a tool—a gadget, a ring, a magical sword—that gives you the power to level mountains, to know answers, well, be careful what you wish for. Our species has this Promethean spark, this godlike power. Using it wisely is our defining challenge. And this means, for me, embracing the virtues of compassion and humility alongside our embodied, fallible humanity – and rejecting consequentialist fantasies of optimisation that may lead us to a terrible place at great speed."
