Hyperweapon
Book resources
Erratum
First editions are always tricky. Without the benefit of hindsight, small errors easily slip in between the manuscript, the layout, and the printing. In this book, the uncorrected versions of the Figures unfortunately ended up in the print file, and three of them contain typos.
Click the button below to get the corrected versions.
Resource 1
The column of tanks, visible in satellite imagery from Maxar Technologies.
Last consulted: 11/15/2024.
Resource 3
The Time Magazine cover of February 26, 2024, along with the associated article.
Last consulted: 11/15/2024.
Resource 4
Victoria Shi, the AI avatar spokesperson for the Ukrainian government.
Last consulted: 11/15/2024.
Resource 5
The M-G1, a robot dog presented by the Russian army, equipped with a rocket launcher.
Last consulted: 11/15/2024.
Resource 7
An explainer video on how Deep Learning works, by YouTuber Science Etonnante.
Last consulted: 11/15/2024.
Resource 8
The demonstration video of Kargu drones by STM, the company that manufactures them.
Last consulted: 11/15/2024.
Resource 9
The presentation page for Defense Llama, an LLM dedicated to recommending and planning military strategies.
Last consulted: 11/15/2024.
Resource 10
OpenAI's agents play hide and seek using reinforcement learning.
Last consulted: 11/15/2024.
Resource 11
The BlackMamba software, capable of calling GPT to modify its own code and become malicious.
Last consulted: 11/15/2024.
Resource 12
The AlphaFold family of AI systems, which revolutionized research in chemistry and biology.
Last consulted: 11/15/2024.
Resource 13
The website presenting the Rebuild the Arsenal initiative.
Last consulted: 11/15/2024.
Resource 14
A video presenting the concept of instrumental convergence through the Paperclip Maximizer thought experiment.
Last consulted: 11/15/2024.
Resource 16
The System Card for o1-preview, describing the alignment tests conducted before deploying the AI.
Last consulted: 11/15/2024.
Resource 17
A very comprehensive article on the risks posed by AI systems. The fifth part of the document ("Rogue AIs") is dedicated to alignment problems posed by AI autonomy.
Last consulted: 11/15/2024.
Resource 18
The evaluation designed by METR to measure the self-replication capabilities of AI systems.
Last consulted: 11/15/2024.
Resource 19
The strategy envisioned by the Superalignment team to tackle the problem. The premise is: a superintelligence cannot be aligned by a group of humans because the gap between them is too wide. The strategy therefore relies on a "staircase" approach: a superintelligence would be aligned by a slightly less intelligent AI, itself aligned by a slightly less intelligent AI, and so on, until reaching an AI close enough to human intelligence to be aligned by humans.
Last consulted: 11/15/2024.
Resource 20
The website for the Situational Awareness report, where it can be downloaded for free.
Last consulted: 11/15/2024.
Resource 21
The Colossus data center (a page generated by the author using the AI-powered search engine Perplexity).
Last consulted: 11/15/2024.
Resource 22
OpenAI's article introducing the scaling laws associated with the time given to an AI. Its first diagram actually reveals two new scaling laws: at training time (train-time compute), giving the model more time improves its intelligence, while at usage time (test-time compute), giving the model more time improves its responses.
Last consulted: 11/15/2024.
Resource 23
DeepMind's article Position: Levels of AGI for Operationalizing Progress on the Path to AGI, which introduces their taxonomy of AI systems.
Last consulted: 11/15/2024.
Resource 24
A video of Figure 01, a "generalist" intelligent robot powered by OpenAI technologies.
Last consulted: 11/15/2024.
Resource 25
The final scene from WarGames, in which the AI concludes that, when the weapon is existential, the best strategy is not to play.
Last consulted: 11/15/2024.
Resource 26
The Statement on AI Risk page with the list of signatories.
Last consulted: 11/15/2024.
Resource 27
The video of Rishi Sunak's speech at the Royal Society.
Last consulted: 11/15/2024.
Resource 28
Ilya Sutskever's tweet announcing the launch of SSI.
Last consulted: 11/15/2024.
Resource 29
AI governance researcher Zach Stein-Perlman studies and measures the actions taken by the most important AI companies regarding the safety of their technologies. His scores and all his analytical work are available on the AI Lab Watch website.
Last consulted: 11/15/2024.
Resource 30
The AI Act webpage, where you can find the text itself as well as many more accessible explanations.
Last consulted: 11/15/2024.
Resource 31
The Bletchley Declaration, signed by the countries that participated in the AI Safety Summit on November 1st and 2nd, 2023.
Last consulted: 11/15/2024.
Resource 33
The A Narrow Path plan, available in English, with its executive summary also available in French (translated by the author of this book and GPT-4o).
Last consulted: 11/15/2024.
Resource 34
The winning proposals from the Future of Life Institute competition.
Last consulted: 11/15/2024.
Resource 35
The European Network for AI Safety webpage presenting the network of European institutes dedicated to securing AI technologies.
Last consulted: 11/15/2024.
By the same author
HELO
The sci-fi graphic novel about Artificial General Intelligence
A hard-hitting graphic novel that explores the promises and dangers of General AI
