Human-level Intelligence (AGI) Machine
A Human-level Intelligence (AGI) Machine is an intelligent machine whose capabilities approximate human-level intelligence.
- AKA: Strong AI, Artificial General Intelligence.
- Context:
- It can (typically) arise during an AGI Emergence Period.
- It can range from being a Disembodied AGI to being an Embodied AGI.
- It can range from being a Stand-Alone AGI to being a Networked AGI.
- It can range from being a Benevolent AGI to being a Malevolent AGI.
- It can be the focus of the AGI Research Community.
- It can be the outcome of a Race to Discover AGI.
- It could (likely) be one of the last Human Inventions.
- It could (likely) require more than 10 petaFLOPS [1].
- It could (likely) be a Conscious Machine.
- It can be a Technological Unemployment Cause.
- …
- Counter-Example(s):
- a Narrow (Weak) AI System, such as a domain-specific problem solver.
- See: Unintelligent Machine, OpenCog, Whole Brain Emulation, AI Arms Race, Top-500 HPCs.
References
2020
- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/artificial_general_intelligence Retrieved:2020-2-20.
- Artificial general intelligence (AGI) is the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI,[1][2] full AI, or general intelligent action. (Some academic sources reserve the term "strong AI" for machines that can experience consciousness.) Some authorities emphasize a distinction between strong AI and applied AI[3] (also called narrow AI[2] or weak AI): the use of software to study or accomplish specific problem-solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to perform the full range of human cognitive abilities.
As of 2017, over forty organizations were doing research on AGI.
2019
- https://openai.com/charter/
- QUOTE: OpenAI’s mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. …
2017a
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/strong_AI Retrieved:2017-2-27.
- Strong artificial intelligence or, True AI, may refer to:
- Artificial general intelligence, a hypothetical machine that exhibits behavior at least as skillful and flexible as humans do, and the research program of building such an artificial general intelligence
- Computational theory of mind, the philosophical position that human minds are, in essence, computer programs. This position was named “strong AI” by John Searle in his Chinese room argument.
- Artificial consciousness, a hypothetical machine that possesses awareness of external objects, ideas and/or self awareness.
2017c
- http://bigthink.com/videos/ben-goertzel-artificial-general-intelligence-will-be-our-last-invention
- QUOTE: … says Dr. Ben Goertzel – for better or worse. Humanity will always create and invent, but the last invention of necessity will be a human-level Artificial General Intelligence mind, which will be able to create a new AGI with super-human intelligence, and continually create smarter and smarter versions of itself. …
2017d
- https://www.wired.com/story/ray-kurzweil-on-turing-tests-brain-extenderstand-ai-ethics/amp
- QUOTE: ... Ray Kurzweil: ... You need the full flexibility of human intelligence to pass a valid Turing Test. There's no simple Natural Language Processing trick you can do to do that. If the human judge can't tell the difference then we consider the AI to be of human intelligence, which is really what you're asking. That's been a key prediction of mine. I've been consistent in saying 2029. …
2017e
- https://futureoflife.org/wp-content/uploads/2017/01/Yoshua-Bengio.pdf
- QUOTE: What’s Missing
- More autonomous learning, unsupervised learning
- Discovering the underlying causal factors
- Model-based RL which extends to completely new situations by unrolling powerful predictive models which can help reason about rarely observed dangerous states
- Sufficient computational power for models large enough to capture human-level knowledge
- Autonomously discovering multiple time scales to handle very long-term dependencies
- Actually understanding language (also solves generating), requiring enough world knowledge / commonsense
- Large-scale knowledge representation allowing one-shot learning as well as discovering new abstractions and explanations by 'compiling' previous observations
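The "model-based RL" item in the list above can be illustrated with a toy sketch: an agent unrolls a one-step predictive model over candidate action sequences and thereby avoids a dangerous state it has never actually entered. This is a minimal illustration of the idea only; all names (`step_model`, `plan`, `DANGER`) are hypothetical, not from any library or from Bengio's slides.

```python
from itertools import product

DANGER = 3   # state the agent should never enter (imagined, never experienced)
GOAL = 5     # rewarding state

def step_model(state, action):
    """Stand-in for a learned one-step predictive model on a 1-D chain.

    Actions are -1 (left) or +1 (right); states are integers 0..6.
    """
    return max(0, min(6, state + action))

def rollout_return(state, actions):
    """Unroll the model over a candidate action sequence; penalize danger."""
    total = 0.0
    for a in actions:
        state = step_model(state, a)
        if state == DANGER:
            total -= 100.0   # imagined catastrophe, reasoned about, not visited
        if state == GOAL:
            total += 10.0
    return total

def plan(state, horizon=3):
    """Search all action sequences up to `horizon`; return the best first action."""
    best_seq = max(product((-1, +1), repeat=horizon),
                   key=lambda seq: rollout_return(state, seq))
    return best_seq[0]
```

From state 2 the planner moves away from the danger state even though moving right would eventually reach the goal; from state 4 it heads straight for the goal. The model, not trial-and-error experience, supplies the knowledge about the dangerous state.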
2013b
- https://en.wikipedia.org/wiki/File:Estimations_of_Human_Brain_Emulation_Required_Performance.svg
2014a
- (Müller & Bostrom, 2014) ⇒ Vincent C. Müller, and Nick Bostrom. (2014). “Future Progress in Artificial Intelligence: A Poll Among Experts.” In: AI Matters Journal, 1(1). doi:10.1145/2639475.2639478
- QUOTE: In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular timeframe, which risks they see with that development and how fast they see these developing.
2014b
- (Brooks, 2014) ⇒ Rodney Brooks. (2014). “Artificial intelligence is a tool, not a threat.” In: Rethinking Robotics, November 10, 2014.
- QUOTE: … a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence. … Why so many years? As a comparison, consider that we have had winged flying machines for well over 100 years. But it is only very recently that people like Russ Tedrake at MIT CSAIL have been able to get them to land on a branch, something that is done by a bird somewhere in the world at least every microsecond. Was it just Moore’s law that allowed this to start happening? Not really. It was figuring out the equations and the problems and the regimes of stall, etc., through mathematical understanding of the equations. …
… If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools.
2012
- (Adams et al., 2012) ⇒ Sam S. Adams, Itamar Arel, Joscha Bach, Robert Coop, Rod Furlan, Ben Goertzel, J. Storrs Hall, Alexei Samsonovich, Matthias Scheutz, Matthew Schlesinger, Stuart C. Shapiro, and John F. Sowa. (2012). “[http://www.aaai.org/ojs/index.php/aimagazine/article/view/2322 Mapping the Landscape of Human-level Artificial General Intelligence].” In: AI Magazine, 33(1).
2011
- http://versita.com/jagi/
- Artificial General Intelligence (AGI) is an emerging field aiming at the building of “thinking machines", that is, general-purpose systems with intelligence comparable to that of the human mind. While this was the original goal of Artificial Intelligence (AI), the mainstream of AI research has turned toward domain-dependent and problem-specific solutions; therefore it has become necessary to use a new name to indicate research that still pursues the "Grand AI Dream". Similar labels for this kind of research include “Strong AI", “Human-level AI", etc.
The problems involved in creating general-purpose intelligent systems are very different from those involved in creating special-purpose systems. Therefore, this journal is different from conventional AI journals in its stress on the long-term potential of research towards the ultimate goal of AGI, rather than immediate applications. Articles focused on details of AGI systems are welcome, if they clearly indicate the relation between the special topics considered and intelligence as a whole, by addressing the generality, extensibility, and scalability of the techniques proposed or discussed.
Since AGI research is still in its early stage, the journal strongly encourages novel approaches coming from various theoretical and technical traditions, including (but not limited to) symbolic, connectionist, statistical, evolutionary, robotic and information-theoretic, as well as integrative and hybrid approaches.
2009
- (Moravec, 2009) ⇒ Hans Moravec. (2009). “Rise of the Robots--The Future of Artificial Intelligence.” In: Scientific American, 23.
2008
- (Zadeh, 2008) ⇒ Lotfi A. Zadeh. (2008). “Toward Human Level Machine Intelligence - Is It Achievable? The Need for a Paradigm Shift.” In: IEEE Computational Intelligence Magazine Journal, 3(3). doi:10.1109/MCI.2008.926583
- (Sandberg & Bostrom, 2008) ⇒ Anders Sandberg, and Nick Bostrom. (2008). “Whole Brain Emulation." Technical Report #2008-3, Future of Humanity Institute, Oxford University.
- QUOTE: Table 10: Estimates of computational capacity of the human brain. Units have been converted into FLOPS and bits whenever possible. Levels refer to Table 2.
- Source | Assumptions | Computational demands | Memory
- (Leitl, 1995) | Assuming 10^10 neurons, 1,000 synapses per neuron, 34-bit ID per neuron and 8-bit representation of dynamic state, synaptic weights and delays. [Level 5] | | 5·10^15 bits (but notes that the data can likely be compressed)
- (Tuszynski, 2006) | Assuming microtubule dimer states as bits and operating on nanosecond switching times. [Level 10] | 10^28 FLOPS | 8·10^19 bits
- (Kurzweil, 1999) | Based on 100 billion neurons with 1,000 connections and 200 calculations per second. [Level 4] | 2·10^16 FLOPS | 10^12 bits
- (Thagard, 2002) | Argues that the number of computational elements in the brain is greater than the number of neurons, possibly even up to 10^17 individual protein molecules. [Level 8] | 10^23 FLOPS |
- (Landauer, 1986) | Assuming 2 bits learned per second during conscious time; experiment-based. [Level 1] | | 1.5·10^9 bits (10^9 bits with loss)
- (von Neumann, 1958) | Storing all impulses over a lifetime. | | 10^20 bits
- (Wang, Liu et al., 2003) | Memories are stored as relations between neurons. | | 10^8432 bits (see footnote 17)
- (Freitas Jr., 1996) | 10^10 neurons, 1,000 synapses, firing 10 Hz. [Level 4] | 10^14 bits/second |
- (Bostrom, 1998) | 10^11 neurons, 5·10^3 synapses, 100 Hz, each signal worth 5 bits. [Level 5] | 10^17 operations per second |
- (Merkle, 1989a) | Energy constraints on Ranvier nodes. | 2·10^15 operations per second (10^13–10^16 ops/s) |
- (Moravec, 1999; Moravec, 1988; Moravec, 1998) | Compares instructions needed for visual-processing primitives with the retina, scales up to the brain at 10 times per second; produces 1,000 MIPS neurons. [Level 3] | 10^8 MIPS | 8·10^14 bits
- (Merkle, 1989a) | Retina scale-up. [Level 3] | 10^12–10^14 operations per second |
- (Dix, 2005) | 10 billion neurons, 10,000 synaptic operations per cycle, 100 Hz cycle time. [Level 4] | 10^16 synaptic ops/s | 4·10^15 bits (for structural information)
- (Cherniak, 1990) | 10^10 neurons, 1,000 synapses each. [Level 4] | | 10^13 bits
- (Fiala, 2007) | 10^14 synapses, identity coded by 48 bits plus 2×36 bits for pre- and postsynaptic neuron IDs, 1-byte states, 10 ms update time. [Level 4] | 256,000 terabytes/s | 2·10^16 bits (for structural information)
- (Seitz) | 50–200 billion neurons, 20,000 shared synapses per neuron with 256 distinguishable levels, 40 Hz firing. [Level 5] | 2·10^12 synaptic operations per second | 4·10^15–8·10^15 bits
- (Malickas, 1996) | 10^11 neurons, 10^2–10^4 synapses, 100–1,000 Hz activity. [Level 4] | 10^15–10^18 synaptic operations per second |
- (based on Izhikevich, 2004) | 10^11 neurons, each with 10^4 compartments running the basic Hodgkin-Huxley equations at 1,200 FLOPS each; each compartment has 4 dynamical variables and 10 parameters described by one byte each. | 1.2·10^18 FLOPS | 1.12·10^28 bits
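Several of the table's numbers follow directly from multiplying the row's stated assumptions. The sketch below is a quick arithmetic check of three such rows (Kurzweil 1999, Freitas Jr. 1996, Dix 2005); the function names are illustrative, not from the report.

```python
def kurzweil_1999_flops():
    # 100 billion neurons x 1,000 connections x 200 calculations/second
    return 100e9 * 1_000 * 200        # 2*10^16 FLOPS

def freitas_1996_bits_per_s():
    # 10^10 neurons x 1,000 synapses x 10 Hz firing rate
    return 1e10 * 1_000 * 10          # 10^14 bits/second

def dix_2005_synaptic_ops():
    # 10 billion neurons x 10,000 synaptic operations/cycle x 100 Hz cycle time
    return 10e9 * 10_000 * 100        # 10^16 synaptic ops/s
```

Note that even the low end of these estimates (2·10^16 FLOPS is 20 petaFLOPS) is consistent with the Context item above stating that an AGI would likely require more than 10 petaFLOPS.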
- ↑ Kurzweil, Singularity (2005) p. 260
- ↑ Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."
- ↑ Encyclopædia Britannica, "Strong AI, applied AI, and cognitive simulation"; or Jack Copeland, "What is artificial intelligence?" on AlanTuring.net