Artificial Intelligence (AI) Risk


An Artificial Intelligence (AI) Risk is an information technology risk (a system risk, potential harm, or negative consequence) arising from the development and deployment of advanced AI systems.



References

2023

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence Retrieved:2023-7-31.
    • Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or another irreversible global catastrophe.[1] [2]

      One argument goes as follows: The human species currently dominates other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass humanity in general intelligence and become superintelligent, then it could become difficult or impossible to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[3]

      The plausibility of existential catastrophe due to AI is widely debated, and hinges in part on whether AGI or superintelligence is achievable, the speed at which dangerous behavior may emerge, and whether practical scenarios for AI takeovers exist. Concerns about superintelligence have been voiced by leading computer scientists and tech CEOs such as Geoffrey Hinton, Yoshua Bengio, Alan Turing, Elon Musk, and OpenAI CEO Sam Altman. In 2022, a survey of AI researchers with a 17% response rate found that the majority of respondents believed there is a 10 percent or greater chance that our inability to control AI will cause an existential catastrophe.[4] In 2023, hundreds of AI experts and other notable figures signed a statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Following increased concern over AI risks, government leaders such as United Kingdom Prime Minister Rishi Sunak and United Nations Secretary-General António Guterres[5] called for an increased focus on global AI regulation.

      Two sources of concern stem from the problems of AI control and alignment: controlling a superintelligent machine, or instilling it with human-compatible values, may be difficult. Many researchers believe that a superintelligent machine would resist attempts to disable it or change its goals, since being disabled or re-purposed would prevent it from accomplishing its present goals. It would be extremely difficult to align a superintelligence with the full breadth of significant human values and constraints.[1][6][7] In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.[8]

      A third source of concern is that a sudden "intelligence explosion" might take an unprepared human race by surprise. Such scenarios consider the possibility that an AI which surpasses its creators in intelligence might be able to recursively improve itself at an exponentially increasing rate, improving too quickly for its handlers and society writ large to control.[1][6] Empirically, examples like AlphaZero teaching itself to play Go show that domain-specific AI systems can sometimes progress from subhuman ability to superhuman ability very quickly, although such systems do not involve the AI altering its fundamental architecture. A toy numerical sketch of this compounding-improvement argument follows the footnotes below.

  1. Russell, Stuart; Norvig, Peter (2009). "26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  2. Bostrom, Nick (2002). "Existential risks". Journal of Evolution and Technology. 9 (1): 1–31.
  3. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (First ed.). Oxford University Press. ISBN 978-0199678112.
  4. "The AI Dilemma". www.humanetech.com. Retrieved 10 April 2023. 50% of AI researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI.
  5. Fung, Brian (18 July 2023). "UN Secretary General embraces calls for a new UN agency on AI in the face of 'potentially catastrophic and existential risks' | CNN Business". CNN. Retrieved 20 July 2023.
  6. Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). Global Catastrophic Risks: 308–345. Bibcode:2008gcr..book..303Y. Archived (PDF) from the original on 2 March 2013. Retrieved 27 August 2018.
  7. Russell, Stuart; Dewey, Daniel; Tegmark, Max (2015). "Research Priorities for Robust and Beneficial Artificial Intelligence" (PDF). AI Magazine. Association for the Advancement of Artificial Intelligence: 105–114. arXiv:1602.03506. Bibcode:2016arXiv160203506R. Archived (PDF) from the original on 4 August 2019. Retrieved 10 August 2019., cited in "AI Open Letter - Future of Life Institute". Future of Life Institute. January 2015. Archived from the original on 10 August 2019. Retrieved 9 August 2019.
  8. Dowd, Maureen (April 2017). "Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse". The Hive. Archived from the original on 26 July 2018. Retrieved 27 November 2017.
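
The "intelligence explosion" paragraph in the excerpt above rests on a simple piece of arithmetic: if each self-improvement cycle yields gains proportional to the system's current capability, capability compounds rather than grows linearly. The toy sketch below is an illustration only, not drawn from the cited sources; the starting capability, per-cycle gain rate, and cycle counts are arbitrary assumed parameters.

  # Toy model of the compounding self-improvement argument (illustrative assumptions only).
  def recursive_improvement(capability: float, gain_per_unit: float, cycles: int) -> list[float]:
      """Compounding: each cycle adds gain_per_unit * current capability."""
      history = [capability]
      for _ in range(cycles):
          capability += gain_per_unit * capability  # more capable systems improve themselves faster
          history.append(capability)
      return history

  def constant_improvement(capability: float, fixed_gain: float, cycles: int) -> list[float]:
      """Baseline: the same fixed gain is added every cycle (linear growth)."""
      history = [capability]
      for _ in range(cycles):
          capability += fixed_gain
          history.append(capability)
      return history

  if __name__ == "__main__":
      compounding = recursive_improvement(capability=1.0, gain_per_unit=0.5, cycles=20)
      linear = constant_improvement(capability=1.0, fixed_gain=0.5, cycles=20)
      for t in (5, 10, 20):
          print(f"cycle {t:2d}: compounding={compounding[t]:9.1f}  linear={linear[t]:5.1f}")

With these illustrative parameters the compounding series reaches roughly 7.6 times the starting capability after 5 cycles and over 3,000 times after 20, while the fixed-rate baseline reaches only 11 times; the excerpt's argument is that a system entering the compounding regime could outpace the ability of its handlers to respond.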
