---
title: "Carnegie Mellon University: Two Types of AI Existential Risk – Decisive and Accumulative"
slug: "carnegie-mellon-university-two-types-of-ai-existential-risk-decisive-and-accumulative"
author: "Jeremy Weaver"
date: "2025-02-05 20:00:55"
category: "Premium"
topics: "Decisive vs Accumulative AI x-risk, The Perfect Storm MISTER Scenario, Systems Analysis of AI Risk Propagation, AI Risk Governance Reform, Unifying Ethical, Social, and Existential Risk Frameworks"
summary: "The content outlines two hypotheses on AI existential risk: one where a single catastrophic event from superintelligent AI causes collapse (decisive risk), and another where multiple smaller disruptions gradually erode societal resilience until a tipping point is reached (accumulative risk). It presents a \"MISTER\" scenario demonstrating how various AI-related threats interconnect and calls for a holistic, integrated approach to AI risk governance that combines ethical, social, and existential considerations."
banner: ""
thumbnail: ""
---

Carnegie Mellon University: Two Types of AI Existential Risk – Decisive and Accumulative



Summary of the Full Report

The report examines two contrasting hypotheses regarding existential risks from artificial intelligence. The decisive hypothesis posits that a single catastrophic event, likely caused by advanced AI, will lead to human extinction or irreversible societal collapse.

The accumulative hypothesis, conversely, argues that a series of smaller, interconnected AI-induced disruptions will gradually erode societal resilience, culminating in a catastrophic failure. The paper uses systems analysis to compare these hypotheses, exploring how multiple AI risks could compound over time and proposing a more holistic approach to AI risk governance. Finally, it addresses objections and discusses implications for long-term AI safety.

The paper challenges the conventional view of AI existential risk (x-risk) as a sudden, decisive event caused by superintelligent AI, proposing instead that AI x-risks can accumulate gradually through interconnected disruptions. This alternative, the "accumulative AI x-risk hypothesis," suggests that seemingly minor AI-driven problems can erode societal resilience, leading to a potential collapse when a critical threshold is crossed. Here are some of the most interesting points: