Bunkobons


If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All

by Eliezer Yudkowsky & Nate Soares


Recommended by

"If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All makes one of the starkest and most uncompromising arguments in the entire AI debate: that building superhuman AI is not just risky, but likely to be fatal for humanity. The core claim of the book is that once AI systems become more intelligent than humans across the board—what’s often called superintelligence—they will not reliably share our goals or values. Even small misalignments, the authors argue, could have catastrophic consequences, because a sufficiently powerful system would pursue its objectives with extreme efficiency, potentially at the expense of human survival.

A central idea in the book is that this is not a distant or abstract concern, but a near-term danger given the current pace of progress. Unlike more moderate voices in AI safety, Yudkowsky and Soares argue that we do not yet have a viable technical solution to the alignment problem—and that continuing to scale AI systems without solving it is reckless. This leads to their most controversial conclusion: that the world should consider slowing down or even halting the development of advanced AI until it can be made safe. The authors are explicitly sceptical of industry-led safety efforts and frame the situation as a global coordination problem, in which competitive pressures push actors to take risks that could affect everyone.

The book is deliberately provocative, but that is also why it’s valuable. It represents the strongest version of the existential risk argument, forcing readers to grapple seriously with the possibility that AI is not just transformative, but potentially irreversible in its consequences.

A common criticism is that the book makes a very strong claim—near-certain extinction—without sufficient empirical grounding. Critics broadly argue that while the risks the authors describe are logically possible, the book overstates their likelihood and inevitability: it assumes that superintelligent AI would almost certainly become catastrophically misaligned, without strong real-world evidence from current systems. Others push back on the all-or-nothing framing, suggesting that AI failures are more likely to be partial, manageable, or gradual rather than immediately existential. A further line of criticism focuses on feasibility, with many arguing that proposals to halt or drastically slow AI development are politically unrealistic in a competitive global landscape. More generally, reviewers often describe the book as deliberately one-sided and alarmist, presenting the strongest possible version of the existential risk argument while giving relatively little attention to alternative perspectives—such as the view that AI risks, though real, can be mitigated through incremental safety work, regulation, and adaptation."
The Best AI Books in 2026 · fivebooks.com