Artificial Intelligence (AI) Risk
An Artificial Intelligence (AI) Risk is an information technology risk (a system risk, potential harm, or negative consequence) arising from the development and deployment of advanced AI systems.
- Context:
- It can be addressed with AI Risk Mitigation Strategy, AI Governance, and AI Safety Measures.
- It can range from being a Primitive AI Risk (such as one posed by attention-maximizing recommender systems) to being an Advanced AI Risk.
- …
- Example(s):
- Counter-Example(s):
- See: Artificial General Intelligence, Global Catastrophic Risk, Superintelligence, AI Takeover, Ethical AI, Technological Singularity.
References
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence Retrieved: 2023-07-31.
- Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or another irreversible global catastrophe.[1] [2]
One argument goes as follows: The human species currently dominates other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass humanity in general intelligence and become superintelligent, then it could become difficult or impossible to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[3]
The plausibility of existential catastrophe due to AI is widely debated, and hinges in part on whether AGI or superintelligence is achievable, the speed at which dangerous behavior may emerge, and whether practical scenarios for AI takeovers exist. Concerns about superintelligence have been voiced by leading computer scientists and tech CEOs such as Geoffrey Hinton, Yoshua Bengio, Alan Turing, Elon Musk, and OpenAI CEO Sam Altman. In 2022, a survey of AI researchers with a 17% response rate found that the majority of respondents believed there is a 10 percent or greater chance that our inability to control AI will cause an existential catastrophe.[4] In 2023, hundreds of AI experts and other notable figures signed a statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Following increased concern over AI risks, government leaders such as United Kingdom prime minister Rishi Sunak and United Nations Secretary-General António Guterres[5] called for an increased focus on global AI regulation.
Two sources of concern stem from the problems of AI control and alignment: controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would resist attempts to disable it or change its goals, as such an incident would prevent it from accomplishing its present goals. It would be extremely difficult to align a superintelligence with the full breadth of significant human values and constraints.[1][6] [7] In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.[8]
A third source of concern is that a sudden "intelligence explosion" might take an unprepared human race by surprise. Such scenarios consider the possibility that an AI which surpasses its creators in intelligence might be able to recursively improve itself at an exponentially increasing rate, improving too quickly for its handlers and society writ large to control.[1][6] Empirically, examples like AlphaZero teaching itself to play Go show that domain-specific AI systems can sometimes progress from subhuman ability to superhuman ability very quickly, although such systems do not involve the AI altering its fundamental architecture.
- ↑ Russell, Stuart; Norvig, Peter (2009). "26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
- ↑ Bostrom, Nick (2002). "Existential risks". Journal of Evolution and Technology. 9 (1): 1–31.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (First ed.). Oxford University Press. ISBN 978-0199678112.
- ↑ "The AI Dilemma". www.humanetech.com. Retrieved 10 April 2023. 50% of AI researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI.
- ↑ Fung, Brian (18 July 2023). "UN Secretary General embraces calls for a new UN agency on AI in the face of 'potentially catastrophic and existential risks' | CNN Business". CNN. Retrieved 20 July 2023.
- ↑ Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). Global Catastrophic Risks: 308–345. Bibcode:2008gcr..book..303Y. Archived (PDF) from the original on 2 March 2013. Retrieved 27 August 2018.
- ↑ Russell, Stuart; Dewey, Daniel; Tegmark, Max (2015). "Research Priorities for Robust and Beneficial Artificial Intelligence" (PDF). AI Magazine. Association for the Advancement of Artificial Intelligence: 105–114. arXiv:1602.03506. Bibcode:2016arXiv160203506R. Archived (PDF) from the original on 4 August 2019. Retrieved 10 August 2019., cited in "AI Open Letter - Future of Life Institute". Future of Life Institute. January 2015. Archived from the original on 10 August 2019. Retrieved 9 August 2019.
- ↑ Dowd, Maureen (April 2017). "Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse". The Hive. Archived from the original on 26 July 2018. Retrieved 27 November 2017.
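The recursive self-improvement argument in the excerpt above is, at bottom, a claim about growth dynamics: if each improvement step's size depends on the system's current capability, capability compounds instead of accumulating linearly. Below is a minimal sketch of that distinction in Python; the 0.05 rate and the step count are purely illustrative assumptions, not empirical estimates.

```python
# Toy growth model contrasting externally driven, constant-rate improvement
# with recursive self-improvement. All coefficients are illustrative.

def constant_improvement(capability: float, rate: float = 0.05) -> float:
    """Each step adds a fixed increment supplied by outside developers."""
    return capability + rate

def recursive_improvement(capability: float, rate: float = 0.05) -> float:
    """Each step's increment scales with current capability: the system
    contributes to its own development, so growth compounds."""
    return capability + rate * capability

def simulate(step_fn, steps: int = 100, initial: float = 1.0) -> float:
    capability = initial
    for _ in range(steps):
        capability = step_fn(capability)
    return capability

print(f"constant:  {simulate(constant_improvement):7.1f}")   # linear growth: 6.0
print(f"recursive: {simulate(recursive_improvement):7.1f}")  # exponential growth: ~131.5
```

The sketch makes no claim about which regime real AI development is in; it only shows that the two assumptions diverge sharply over time, which is why debates over takeoff speed matter for preparedness.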
2023
- (Beaudoin, 2023) ⇒ Philippe Beaudoin. (2023). Facebook post, 2023-08-12.
https://facebook.com/story.php?story_fbid=pfbid04erNEX7MWC5PgMpiUrYRQccWbHQ7FTkkvntyxMpkJ29t3puCD13BVtwjoSnPWoD1l&id=679472999
- NOTE: It argues for urgent attention to AI safety and regulation. The author holds that AI is on the verge of becoming a highly impactful technology, with both promise and danger, and that current, primitive AI systems are already causing harm, making it necessary to anticipate and mitigate the risks of more advanced AI. The post identifies points of consensus (existing harms need regulation, fundamental AI research deserves investment, advanced AI carries unknown risks) and points of ongoing debate (the nature of future risks, communication strategies, the balance between present and future harms, and whether future AI should be kept more "closed" or "open"). It concludes by calling for strong regulation and substantial research investment to steer AI development toward a safer future.
2022
- (Piorkowski et al., 2022) ⇒ D. Piorkowski, M. Hind, and J. Richards. (2022). "Quantitative AI Risk Assessments: Opportunities and Challenges." In: arXiv preprint arXiv:2209.06317.
- NOTE: It reports on the current state of quantitative AI risk assessment and emphasizes the need for suitable metrics.
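Piorkowski et al. survey assessment practice rather than prescribe a formula, but the basic shape of a quantitative risk metric can be illustrated as expected harm: likelihood times severity, ranked across identified hazards. In this sketch the hazard names and numbers are hypothetical placeholders, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    likelihood: float  # estimated probability of occurrence, in [0, 1]
    severity: float    # estimated harm if it occurs, on a 0-10 scale

# Hypothetical hazards for an AI deployment; the numbers are
# placeholders, not measurements from any real system.
hazards = [
    Hazard("discriminatory ranking", likelihood=0.20, severity=6.0),
    Hazard("privacy leakage",        likelihood=0.05, severity=8.0),
    Hazard("engagement-driven harm", likelihood=0.40, severity=4.0),
]

def expected_harm(h: Hazard) -> float:
    """Classic risk score: probability of the hazard times its impact."""
    return h.likelihood * h.severity

# Rank hazards by expected harm to prioritize mitigation effort.
for h in sorted(hazards, key=expected_harm, reverse=True):
    print(f"{h.name:24} risk = {expected_harm(h):.2f}")
```

The hard part the paper emphasizes is not this arithmetic but producing defensible likelihood and severity estimates in the first place, which is why it calls for suitable, validated metrics.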
2019
- (Perry & Uuk, 2019) ⇒ B. Perry, and R. Uuk. (2019). "AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk." In: Big Data and Cognitive Computing.
- NOTE: It discusses the considerations for integrating AI risk policy into the broader framework of governance, focusing on strategies for reducing AI risk.
2018
- (Sotala, 2018) ⇒ K. Sotala. (2018). "Disjunctive Scenarios of Catastrophic AI Risk." In: Artificial Intelligence Safety and Security.
- NOTE: It explores different kinds of AI risk scenarios, particularly focusing on disjunctive scenarios of catastrophic AI risk.
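Sotala's disjunctive framing has a simple probabilistic consequence: if catastrophe can arise through any one of several roughly independent pathways, the combined risk is one minus the product of each pathway failing to occur, and so exceeds any single pathway's risk. A small sketch with hypothetical pathway probabilities (assumed for illustration, not estimates from the chapter):

```python
import math

# Hypothetical, assumed-independent catastrophe pathways with
# illustrative probabilities; not estimates from Sotala (2018).
pathway_probs = {
    "misaligned goal pursuit": 0.03,
    "deliberate misuse":       0.02,
    "arms-race deployment":    0.04,
}

# P(at least one pathway occurs) = 1 - prod(1 - p_i), under independence.
p_none = math.prod(1.0 - p for p in pathway_probs.values())
print(f"combined risk: {1.0 - p_none:.3f}")  # ~0.087, vs. 0.04 for the worst single pathway
```

Under these toy numbers the disjunction more than doubles the largest single-pathway risk, which is the chapter's structural point: ruling out any one scenario does not bound the total.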
2015
- (Scherer, 2015) ⇒ Matthew U. Scherer. (2015). "Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies." In: Harvard Journal of Law & Technology, 29.