AI Existential Risk
An AI Existential Risk is a technological existential risk in which artificial general intelligence could cause human extinction or a permanent civilizational collapse.
- AKA: AGI Existential Risk, Artificial Intelligence Existential Risk, AI X-Risk.
- Context:
- It can typically challenge Human Progress Measure assumptions about technological benefits.
- It can typically require Enlightenment Ideal-based approaches for risk governance.
- It can often generate Technological Optimism Attitudes or Fatalism Attitudes.
- It can often be compared to Nuclear War Risk and Climate Change Problem in its existential threat level.
- It can range from being a Near-term AI Existential Risk to being a Long-term AI Existential Risk, depending on its timeline estimate.
- It can range from being an Accidental AI Existential Risk to being a Deliberate AI Existential Risk, depending on its causation mode.
- It can range from being a Gradual AI Existential Risk to being a Sudden AI Existential Risk, depending on its emergence speed.
- It can range from being a Preventable AI Existential Risk to being an Inevitable AI Existential Risk, depending on its mitigation possibility.
- ...
- Examples:
- AI Capability Risks, such as: an Intelligence Explosion Risk from recursive self-improvement.
- AI Alignment Risks, such as: a Misaligned Superintelligence Risk pursuing goals contrary to human values.
- AI Deployment Risks, such as: an AI Arms Race Risk from competitive deployment pressure.
- ...
- Counter-Examples:
- Narrow AI Risk, which involves impact limited to a specific domain.
- AI Bias Risk, which causes discrimination rather than extinction.
- AI Unemployment Risk, which affects the economy rather than human survival.
- See: Catastrophic Risk, Nuclear War Risk, Pandemic Risk, Climate Change Problem, Artificial General Intelligence, Technological Optimism Attitude, Existential Risk.