2012 The Singularity: A Reply


Subject Headings: Technological Singularity.

Notes

Cited By

Quotes

1 Introduction

I would like to thank the authors of the 26 contributions to this symposium on my article “The Singularity: A Philosophical Analysis”. I learned a great deal from reading their commentaries. Some of the commentaries engaged my article in detail, while others developed ideas about the singularity in other directions. In this reply I will concentrate mainly on those in the first group, with occasional comments on those in the second.

A singularity (or an intelligence explosion) is a rapid increase in intelligence to superintelligence (intelligence far beyond human levels), as each generation of intelligent systems creates still more intelligent systems in turn. The target article argues that we should take the possibility of a singularity seriously, and that there will be superintelligent systems within centuries unless certain specific defeating conditions obtain.

I first started thinking about the possibility of an intelligence explosion as a graduate student in Doug Hofstadter’s AI lab at Indiana University in the early 1990s. Like many, I had the phenomenology of having thought up the idea myself, though it is likely that in fact I was influenced by others. For example, I had certainly been exposed to Hans Moravec’s 1988 book Mind Children, in which the idea is discussed. I advocated the possibility vigorously in a discussion with the AI researchers Rod Brooks and Doug Lenat and the journalist Maxine McKew on the Australian TV show Lateline in 1996. I first discovered the term “singularity” on Eliezer Yudkowsky’s website in 1997, where I also encountered the idea of a combined intelligence and speed explosion for the first time. I was fascinated by the idea that all of human history might converge to a single point, and took that idea to be crucial to the singularity per se; I have been a little disappointed that this idea has receded in later discussions.

Since those early days I have always thought that the intelligence explosion is a topic that is both practically and philosophically important, and I was pleased to get a chance to develop these ideas in a talk at the 2009 Singularity Summit and then in this paper for JCS. Of course the main themes in the target article (the intelligence explosion, negotiating the singularity, uploading) have all been discussed at length before, but often in non-academic forums and often in non-rigorous ways. One of my aims in the target article was to put the discussion on a somewhat clearer and more rigorous analytic footing than had been done in previously published work. Another aim was to help bring the issues to an audience of academic philosophers and scientists who may well have much to contribute.

In that respect I am pleased with the diversity of the commentators. There are nine academic philosophers (Nick Bostrom, Selmer Bringsjord, Richard Brown, Joseph Corabi, Barry Dainton, Daniel Dennett, Jesse Prinz, Susan Schneider, Eric Steinhart) and eight AI researchers (Igor Aleksander, Ben Goertzel, Marcus Hutter, Ray Kurzweil, Drew McDermott, Jürgen Schmidhuber, Murray Shanahan, Roman Yampolskiy). There are also representatives from cultural studies (Arkady Plotnitsky), cybernetics (Francis Heylighen), economics (Robin Hanson), mathematics (Burton Voorhees), neuroscience (Susan Greenfield), physics (Frank Tipler), psychiatry (Chris Nunn), and psychology (Susan Blackmore), along with two writers (Damien Broderick and Pamela McCorduck) and a researcher at the Singularity Institute (Carl Shulman).

Of the 26 articles, about four are wholeheartedly pro-singularity, in the sense of endorsing the claim that a singularity is likely: those by Hutter, Kurzweil, Schmidhuber, and Tipler. Another eleven or so seem to lean in that direction or at least discuss the possibility of a singularity sympathetically: Blackmore, Broderick, Corabi and Schneider, Dainton, Goertzel, McCorduck, Shanahan, Steinhart, Shulman and Bostrom, Voorhees, and Yampolskiy. Three come across as mildly skeptical, expressing a deflationary attitude toward the singularity without quite saying that it will not happen: Dennett, Hanson, and Prinz. And about seven express wholehearted skepticism: Aleksander, Bringsjord, Greenfield, Heylighen, McDermott, Nunn, and Plotnitsky. About twelve of the articles focus mainly on whether there will or will not be a singularity or whether there will or will not be AI: the seven wholehearted skeptics along with McCorduck, Prinz, Schmidhuber, Shulman and Bostrom, and Tipler. Three articles focus mainly on how best to negotiate the singularity: Goertzel, Hanson, and Yampolskiy. Three focus mainly on the character and consequences of a singularity: Hutter, Shanahan, and Voorhees. Three focus mainly on consciousness: Brown, Dennett, and Kurzweil. Three focus mainly on personal identity: Blackmore, Corabi and Schneider, and Dainton. Two focus on connections to other fields: Broderick and Steinhart. Numerous other issues are discussed along the way: for example, uploading (Greenfield, Corabi and Schneider, Plotnitsky) and whether we are in a simulation (Dainton, Prinz, Shulman and Bostrom).

I will not say much about the connections to other fields: Broderick’s connections to science fiction and Steinhart’s connections to theology. These connections are fascinating, and it is clear that antecedents of many key ideas were put forward long ago. Still, it is interesting to note that very few of the science fiction works discussed by Broderick (or the theological works discussed by Steinhart) focus on a singularity in the sense of a recursive intelligence explosion. Perhaps Campbell’s short story “The Last Evolution” comes closest: in it, humans defend themselves from aliens by designing systems that design ever smarter systems, which finally have the resources to win the war. There is an element of this sort of recursion in some works by Vernor Vinge (originator of the term “singularity”), although that element is smaller than one might expect. Most of the other works discussed by Broderick focus simply on greater-than-human intelligence, an important topic that falls short of a full singularity as characterized above.

At least two of the articles say that it is a bad idea to think or talk much about the singularity, on the grounds that other topics are more important: environmental catastrophe followed by nuclear war (McDermott) and our dependence on the Internet (Dennett). The potential fallacy here does not really need pointing out: that it is more important to talk about topic B than topic A does not entail that it is unimportant to talk about topic A. It is a big world, and there are a lot of important topics and a lot of people to think about them. If there is even a 1% chance that there will be a singularity in the next century, then it is pretty clearly a good idea for at least a hundred people (say) to be thinking hard about the possibility now. Perhaps this thinking will not significantly improve the outcome, but perhaps it will; we will not even be in a position to make a reasoned judgment about that question without doing a good bit of thinking first. That still leaves room for thousands to think about the Internet and for millions to think about the environment, as is already happening.
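
Put schematically, with $p$ standing for the probability of a singularity in the next century and $V$ for the value of entering it better prepared (both stand-in symbols for the figures in the passage above), the expected value of preparatory work is

$$E \;=\; p \cdot V \;=\; 0.01\,V \quad (p = 0.01),$$

which for any substantial $V$ easily repays the attention of a hundred researchers.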

This reply will largely follow the shape of the original article. After starting with general considerations, I will spend the most time on the argument for an intelligence explosion, addressing various objections and analyses. In later sections I discuss issues about negotiating the singularity, consciousness, uploading, and personal identity.

2 The Argument for an Intelligence Explosion

The target article set out an argument for the singularity as follows.

  1. There will be AI (before long, absent defeaters).
  2. If there is AI, there will be AI+ (soon after, absent defeaters).
  3. If there is AI+, there will be AI++ (soon after, absent defeaters).
  4. Therefore, there will be AI++ (before too long, absent defeaters).

Here AI is human-level artificial intelligence, AI+ is greater-than-human-level artificial intelligence, and AI++ is far-greater-than-human-level artificial intelligence (as far beyond the smartest humans as humans are beyond a mouse). “Before long” means roughly “within centuries” and “soon after” means “within decades”, though tighter readings are also possible. Defeaters are anything that prevents intelligent systems from manifesting their capacities to create intelligent systems, including situational defeaters (catastrophes and resource limitations) and motivational defeaters (disinterest or a decision not to create successor systems).
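
Setting aside the temporal and defeater qualifiers, the argument is a simple implication chain, and the conclusion follows by two applications of modus ponens:

$$\mathrm{AI}, \qquad \mathrm{AI} \rightarrow \mathrm{AI^{+}}, \qquad \mathrm{AI^{+}} \rightarrow \mathrm{AI^{++}} \;\vdash\; \mathrm{AI^{++}}.$$

Anyone who denies the conclusion must therefore reject premise 1, premise 2, or premise 3 (or the qualifiers they carry).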

The first premise is an equivalence premise, the second premise is an extension premise, and the third premise is an amplification premise. The target article gave arguments for each premise: arguments from brain emulation and from evolution for the first, from extendible technologies for the second, and from a proportionality thesis for the third. The goal was to ensure that anyone who rejects the conclusion that there will be a singularity has to be clear about which arguments and which premises they are rejecting.

This goal was partly achieved. Of the wholehearted singularity skeptics, three (Bringsjord, McDermott, and Plotnitsky) engage these arguments in detail. The other four (Aleksander, Greenfield, Heylighen, and Nunn) express their skepticism without really engaging them. The three mild skeptics (Dennett, Hanson, and Prinz) all engage the arguments at least a little.

Greenfield, Heylighen, and Nunn all suggest that intelligence is not just a matter of information-processing, and focus on crucial factors in human intelligence that they fear will be omitted in AI: understanding, embodiment, and culture, respectively. Here it is worth noting that nothing in the original argument turns on equating intelligence with information-processing. For example, one can equate intelligence with understanding, and the argument will still go through. The emulation and evolution arguments still give reason to think that we can create AI with human-level understanding, the extendibility point gives reason to think that AI can go beyond that, and the explosion point gives reasons to think that systems with greater understanding will be able to create further systems with greater understanding still.

As for embodiment and culture, insofar as these are crucial to intelligence, they can simply be built in. The arguments apply equally to embodied AI in a robotic body surrounded by other intelligent systems. Alternatively, one can apply the emulation argument not just to an isolated system but to an embedded system, simulating its physical and cultural environment. This may require more resources than simulating a brain alone, but otherwise the arguments go through as before. Heylighen makes the intriguing point that an absence of values may serve as a resource limitation that slows any purported intelligence explosion to a convergence; but the only reason he gives is that values require a rich environmental context, so the worry does not apply to AI that exists in a rich environmental context.

3 Negotiating the Singularity

Three of the articles (by Goertzel, Hanson, and Yampolskiy) concern how we should best negotiate the singularity, and three (by Hutter, Shanahan, and Voorhees) concern its character and consequences. Most of these do not engage my article in much depth (appropriately, as these were the areas in which my article had the least to say), so I will confine myself to a few comments on each.

4 Consciousness

Although the target article discussed consciousness only briefly, three of the commentators (Brown, Dennett, and Kurzweil) focus mainly on that issue, spurred by my earlier work on that topic or perhaps by the name of this journal.

Kurzweil advocates a view of consciousness with which I am highly sympathetic. He holds that there is a hard problem of consciousness distinct from the easy problems of explaining various functions; that there is a conceptual and epistemological gap between physical processes and consciousness (requiring a “leap of faith” to ascribe consciousness to others); and that artificially intelligent machines that appear to be conscious will almost certainly be conscious. As such, he appears to hold the (epistemological) further-fact view of consciousness discussed briefly in the target article, combined with a functionalist (as opposed to biological) view of the physical correlates of consciousness.

5 Uploading and personal identity

The last part of the target article focuses on uploading: transferring brain processes to a computer, whether destructively, nondestructively, gradually, or reconstructively. Two of the commentaries (Greenfield and Plotnitsky) raise doubts about uploading, and three (Blackmore, Dainton, and Corabi and Schneider) focus on connected issues about personal identity.

6 Conclusion

The commentaries have reinforced my sense that the topic of the singularity is one that cannot easily be dismissed. The crucial question of whether there will be a singularity has produced many interesting thoughts, and most of the arguments for a negative answer seem to have straightforward replies. The question of negotiating the singularity has produced some rich and ingenious proposals. The issues about uploading, consciousness, and personal identity have produced some very interesting philosophy. The overall effect is to reinforce my sense that this is an area where fascinating philosophical questions and vital practical questions intersect. I hope and expect that these issues will continue to attract serious attention in the years to come.

References

  • Aleksander, I. 2012. Design and the singularity: The philosopher’s stone of AI? Journal of Consciousness Studies 19.
  • Blackmore, S. 2012. She won’t be me. Journal of Consciousness Studies 19:16-19.
  • Bostrom, N. 2003. Are we living in a computer simulation? Philosophical Quarterly 53:243-255.
  • Bringsjord, S. 2012. Belief in the singularity is logically brittle. Journal of Consciousness Studies 19.
  • Broderick, D. 2012. Terrible angels: The singularity and science fiction. Journal of Consciousness Studies 19:20-41.
  • Brown, R. 2012. Zombies and simulation. Journal of Consciousness Studies 19.
  • Chalmers, D.J. 1990. How Cartesian dualism might have been true.
  • Chalmers, D.J. 1995. Minds, machines, and mathematics. Psyche 2:11-20.
  • Chalmers, D.J. 1997. Moving forward on the problem of consciousness. Journal of Consciousness Studies 4:3-46.
  • Chalmers, D.J. 2005. The Matrix as metaphysics. In (C. Grau, ed.) Philosophers Explore the Matrix. Oxford University Press.
  • Chalmers, D.J. 2006. Perception and the fall from Eden. In (T. Gendler and J. Hawthorne, eds) Perceptual Experience. Oxford University Press.
  • Chalmers, D.J. 2010. The singularity: A philosophical analysis. Journal of Consciousness Studies 17:7-65.
  • Corabi, J. & Schneider, S. 2012. The metaphysics of uploading. Journal of Consciousness Studies 19.
  • Dainton, B. 2012. On singularities and simulations. Journal of Consciousness Studies 19:42-85.
  • Dennett, D.C. 1978. Where am I? In Brainstorms (MIT Press).
  • Dennett, D.C. 1996. Facing backwards on the problem of consciousness. Journal of Consciousness Studies 3:4-6.
  • Dennett, D.C. 2012. The mystery of David Chalmers. Journal of Consciousness Studies 19:86-95.
  • Goertzel, B. 2012. Should humanity build a global AI nanny to delay the singularity until it’s better understood? Journal of Consciousness Studies 19:96-111.
  • Greenfield, S. 2012. The singularity: Commentary on David Chalmers. Journal of Consciousness Studies 19:112-118.
  • Hanson, R. 2012. Meet the new conflict, same as the old conflict. Journal of Consciousness Studies 19:119-125.
  • Heylighen, F. 2012. Brain in a vat cannot break out. Journal of Consciousness Studies 19:126-142.
  • Hofstadter, D.R. 1981. A conversation with Einstein’s brain. In D.R. Hofstadter and D.C. Dennett, eds., The Mind’s I. Basic Books.
  • Hutter, M. 2012. Can intelligence explode? Journal of Consciousness Studies 19:143-166.
  • Kurzweil, R. 2012. Science versus philosophy in the singularity. Journal of Consciousness Studies 19.
  • Lampson, B.W. 1973. A note on the confinement problem. Communications of the ACM 16:613-615.
  • McCorduck, P. 2012. A response to “The Singularity,” by David Chalmers. Journal of Consciousness Studies 19.
  • McDermott, D. 2012. Response to ‘The Singularity’ by David Chalmers. Journal of Consciousness Studies 19:167-172.
  • Moravec, H. 1988. Mind Children: The Future of Robot and Human Intelligence. Harvard University Press.
  • Nunn, C. 2012. More splodge than singularity? Journal of Consciousness Studies 19.
  • Plotnitsky, A. 2012. The singularity wager: A response to David Chalmers’ “The Singularity: A Philosophical Analysis”. Journal of Consciousness Studies 19.
  • Prinz, J. 2012. Singularity and inevitable doom. Journal of Consciousness Studies 19.
  • Sandberg, A. and Bostrom, N. 2008. Whole brain emulation: A roadmap. Technical Report 2008-3, Future of Humanity Institute, Oxford University.
  • Schmidhuber, J. 2012. Philosophers & futurists, catch up! Journal of Consciousness Studies 19:173-182.
  • Shanahan, M. 2012. Satori before singularity. Journal of Consciousness Studies 19.
  • Shulman, C. & Bostrom, N. 2012. How hard is artificial intelligence? Evolutionary arguments and selection effects. Journal of Consciousness Studies 19.
  • Steinhart, E. 2012. The singularity: Beyond philosophy of mind. Journal of Consciousness Studies 19.
  • Tipler, F. 2012. Inevitable existence and inevitable goodness of the singularity. Journal of Consciousness Studies 19:183-193.
  • Voorhees, B. 2012. Parsing the singularity. Journal of Consciousness Studies 19.
  • Yampolskiy, R. 2012. Leakproofing the singularity: Artificial intelligence confinement problem. Journal of Consciousness Studies 19:194-214.


Author: David J. Chalmers · Title: The Singularity: A Reply · Year: 2012