Hyperweapon

The book's resources

Resource 1
The column of tanks, visible in satellite images from Maxar Technologies.

Last checked: 2024-11-15.

Resource 2
The research paper introducing The AI Scientist.

Last checked: 2024-11-15.

Resource 3
The cover of Time Magazine from February 26, 2024, along with the associated article.

Last checked: 2024-11-15.

Resource 4
Victoria Shi, the AI avatar spokesperson for the Ukrainian government.

Last checked: 2024-11-15.

Resource 5
The M-G1, a robotic dog equipped with a rocket launcher, showcased by the Russian army.

Last checked: 2024-11-15.

Resource 6
The BAD-2, the robotic dog used in Ukraine.

Last checked: 2024-11-15.

Resource 7
An educational video explaining how Deep Learning works.

Last checked: 2024-11-15.

Resource 8
The demonstration video of Kargu drones by STM, the company that markets them.

Last checked: 2024-11-15.

Resource 9
The presentation page of Defense Llama, an LLM dedicated to military strategy recommendation and planning.

Last checked: 2024-11-15.

Resource 10
OpenAI's demonstration of agents learning to play hide-and-seek through reinforcement learning.

Last checked: 2024-11-15.

Resource 11
The BlackMamba software, a proof of concept capable of leveraging GPT to rewrite its own code and turn malicious.

Last checked: 2024-11-15.

Resource 12
The AlphaFold AI family, which has revolutionized research in chemistry and biology.

Last checked: 2024-11-15.

Resource 13
The website presenting the Rebuild the Arsenal initiative.

Last checked: 2024-11-15.

Resource 14
A video presenting the concept of instrumental convergence through the Paperclip Maximizer thought experiment.

Last checked: 2024-11-15.

Resource 15
RLHF applied to ChatGPT for alignment.

Last checked: 2024-11-15.

Resource 16
The System Card of o1-preview describing the alignment tests conducted prior to deploying the AI.

Last checked: 2024-11-15.

Resource 17
A comprehensive article on the risks posed by AI systems. The fifth section of the document (“Rogue AIs”) is dedicated to the alignment challenges posed by the growing autonomy of AI.

Last checked: 2024-11-15.

Resource 18
The evaluation designed by METR to measure the self-replication capabilities of AI systems.

Last checked: 2024-11-15.

Resource 19
The Superalignment team's strategy for tackling the problem. The premise is as follows: a superintelligence cannot be aligned by a group of humans because the gap between them is too vast. The strategy therefore relies on a “stepping-stone” approach, as sketched below: a superintelligence would be aligned by a slightly less intelligent AI, which in turn would be aligned by an even less intelligent AI, and so on, until reaching an AI sufficiently close to human intelligence to be aligned by humans themselves.
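A minimal illustrative sketch of this chain, run from the bottom up (the names align and align_chain are hypothetical stand-ins, not OpenAI's actual method or API):

    # Hypothetical sketch of the "stepping-stone" alignment chain:
    # each freshly aligned model is used to align the slightly more
    # capable model above it, starting from humans at the bottom.

    def align(model: str, by: str) -> None:
        # Stand-in for the (unsolved) step of aligning `model` using `by`.
        print(f"{by} aligns {model}")

    def align_chain(models: list[str]) -> None:
        """models: ordered from roughly human-level up to superintelligent."""
        aligner = "humans"  # humans align the weakest AI directly
        for model in models:
            align(model, by=aligner)
            aligner = model  # the aligned AI becomes the next aligner

    align_chain(["near-human AI", "intermediate AI", "superintelligence"])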

Last checked: 2024-11-15.

Resource 20
The website for the report Situational Awareness, available for free download.

Last checked: 2024-11-15.

Resource 21
The Colossus compute center (page generated by the author using the AI-based search engine Perplexity).

Last checked: 2024-11-15.

Resource 22
The OpenAI article introducing the scaling laws* associated with the time allocated to an AI.
Its first diagram in fact reveals two new scaling laws: during training (train-time compute), giving the AI more time enhances its intelligence, and during usage (test-time compute), giving the AI more time improves its responses. A rough illustrative form is sketched below.
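For reference, such compute scaling laws are often summarized as roughly log-linear relationships between performance and compute; the form below is an illustrative assumption, not a formula taken from the article:

    \[
    \text{performance} \approx a + b \cdot \log(C_{\text{train}}),
    \qquad
    \text{performance} \approx c + d \cdot \log(C_{\text{test}})
    \]

where \(C_{\text{train}}\) and \(C_{\text{test}}\) denote the compute spent during training and at inference time, and \(a\), \(b\), \(c\), \(d\) are fitted constants.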

Last checked: 2024-11-15.

Resource 23
The DeepMind article Position: Levels of AGI for Operationalizing Progress on the Path to AGI, introducing its typology of AI systems.

Last checked: 2024-11-15.

Resource 24
A video of Figure-01, a “generalist” intelligent robot equipped with OpenAI technologies.

Last checked: 2024-11-15.

Resource 25
Excerpt from the final scene of WarGames, in which the AI concludes that with an existential weapon, the only winning move is not to play.

Last checked: 2024-11-15.

Resource 26
The Statement on AI Risk page, with the list of signatories.

Last checked: 2024-11-15.

Resource 27
The video of Rishi Sunak’s speech at the Royal Society.

Last checked: 2024-11-15.

Resource 28
Ilya Sutskever’s tweet announcing the launch of SSI.

Last checked: 2024-11-15.

Resource 29
Zach Stein-Perlman studies and measures the actions taken by major AI companies regarding the safety of their technologies. His scores, along with all his analysis work, are available on the AI Lab Watch website.

Last checked: 2024-11-15.

Resource 30
The AI Act webpage, where the full text and a number of more accessible explanations can be found.

Last checked: 2024-11-15.

Resource 31
The Bletchley Declaration, signed by the countries that participated in the AI Safety Summit on November 1-2, 2023.

Last checked: 2024-11-15.

Resource 32
The paper describing the MAGIC proposal.

Last checked: 2024-11-15.

Resource 33
The A Narrow Path plan.

Last checked: 2024-11-15.

Resource 34
The winning proposals from the Future of Life Institute competition.

Last checked: 2024-11-15.

Resource 35
The webpage of the European Network for AI Safety, presenting the European institutes dedicated to securing AI technologies.

Last checked: 2024-11-15.

By the same author

Hypercreation
A Little Guide for Taming C-Borgs

“A must-read book to understand AI and its impact on creation in all its forms!”