AI takes over by self-improving
Narrative describing iterative self-improvement leading to capabilities for autonomy, resource acquisition, and influence.
This narrative presents one of many possible ways an actual takeover could happen. We cannot predict exactly how such a takeover would unfold. We do claim, however, that once AIs possess the dangerous capabilities listed on this page, takeover scenarios like this one become possible.
Dangerous capabilities required
Sources
- Bostrom, Nick. *Superintelligence: Paths, Dangers, Strategies.* Oxford University Press, 2014.
- Good, Irving John. "Speculations concerning the first ultraintelligent machine." *Advances in Computers*, Vol. 6, Elsevier, 1966, pp. 31–88.
- Phuong, Mary, et al. "Evaluating frontier models for dangerous capabilities." arXiv preprint arXiv:2403.13793 (2024).
- Yamada, Yutaro, et al. "The AI Scientist-v2: Workshop-level automated scientific discovery via agentic tree search." arXiv preprint arXiv:2504.08066 (2025).
- Yudkowsky, Eliezer. "Artificial intelligence as a positive and negative factor in global risk." *Global Catastrophic Risks*, Oxford University Press, 2008.