Now that we have your probability distribution for when neuromorphic AI will be created, the next step is to pick a probability distribution for the creation of non-neuromorphic human-level AI — that is, human-level AI designed either by implementing a theory of intelligence that works or by reverse-engineering the brain at some broad level, rather than by just directly copying the brain in as much detail as possible and running it as software. Once again, assume no major disruptions to business as usual.
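To make this concrete, here is a minimal sketch (in Python) of one way such a distribution might be encoded. The lognormal shape, the 40-year median, and the spread below are illustrative assumptions only, not values drawn from the sources that follow.

    import math

    # Subjective distribution over the year in which non-neuromorphic
    # human-level AI is first created. Shape and all parameters are
    # assumptions for this sketch, not endorsed by the sources below.
    START_YEAR = 2010       # first year considered
    HORIZON = 2110          # last year considered
    MEDIAN_YEARS_OUT = 40   # assumed median arrival: 40 years out
    SIGMA = 0.8             # assumed log-space spread

    def lognormal_pdf(t, median, sigma):
        """Lognormal density over years-from-now t > 0."""
        mu = math.log(median)
        return math.exp(-(math.log(t) - mu) ** 2 / (2 * sigma ** 2)) \
            / (t * sigma * math.sqrt(2 * math.pi))

    # One bucket per year, normalized over the horizon.
    weights = {y: lognormal_pdf(y - START_YEAR + 0.5, MEDIAN_YEARS_OUT, SIGMA)
               for y in range(START_YEAR, HORIZON + 1)}
    total = sum(weights.values())
    pmf = {y: w / total for y, w in weights.items()}

    def cdf(year):
        """Cumulative probability of creation by the given year."""
        return sum(p for y, p in pmf.items() if y <= year)

    print("P(by 2033) = %.2f" % cdf(2033))
    print("P(by 2050) = %.2f" % cdf(2050))

The expert claims below can then be read as constraints on the shape of this curve.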
Claim: "Am
I disappointed by the amount of progress in cognitive science and AI
in the past 30 years or so? Not at all. To the contrary, I would
have been extremely upset if we had come anywhere close to reaching
human intelligence — it would have made me fear that our minds and
souls were not deep. Reaching the goal of AI in just a few decades
would have made me dramatically lose respect for humanity, and I
certainly don't want (and never wanted) that to happen. Do I still
believe it will happen someday? I can't say for sure, but I suppose
it will eventually, yes. I wouldn't want to be around then, though.
Indeed, I am very glad that we still have a very very long ways to
go in our quest for AI."
Implication: Human-level AI is not
likely to be developed in the next 60 years.
Source: Hofstadter, Douglas R. "An Interview with Douglas R.
Hofstadter, following 'I Am a Strange Loop'." Tal Cohen's
Bookshelf. 11 June 2008. Retrieved 9 Aug. 2008.
<http://tal.forum2.org/hofstadter_interview>.
Claim:
"Fundamental conceptual advances are required to
reach human level AI. Maybe we'll have it in five years, maybe it
will take 500 years, although I doubt it will take that
long."
Implication: Because uncertainty is so great,
we should use wide confidence bounds, though not unboundedly wide:
McCarthy himself doubts the 500-year figure.
Source: McCarthy, John. "Forrest Sawyer, John McCarthy
respond to Ray Kurzweil; Kurzweil answers McCarthy." The
Reality Club: Ray Kurzweil: The Singularity. 2002.
<http://www.edge.org/discourse/singularity.html>.
Claim:
"Fully intelligent robots before 2050"
Implication:
Probability mass for non-neuromorphic human-level AI should accumulate before 2050.
Source: Moravec, Hans.
Mind Children: The Future of Robot and Human Intelligence.
Cambridge, MA: Harvard UP, 1990.
Claim:
Extrapolations from the visual computation of the retina and Deep
Blue/Kasparov put human brain capacity at around 100 TFLOPS (10^14
floating-point operations per second), about one-tenth the speed of
the fastest supercomputer in 2008, IBM's Roadrunner, and about 400
times faster than a PlayStation 3.
Implication: We would already
be able to functionally simulate the human brain on today's
supercomputers, if we had the necessary knowledge.
Source:
Moravec, Hans. "When will computer hardware match the human
brain?"
Journal of Evolution and Technology 1 (1998):
1-12. <http://www.transhumanist.com/volume1/moravec.htm>.
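As a quick sanity check on the arithmetic (the Roadrunner and
PlayStation 3 figures below are approximate published performance
numbers added here for illustration; they are not from the source
paper):

    brain = 100e12        # Moravec's estimate: ~100 TFLOPS = 10**14 FLOPS
    roadrunner = 1.0e15   # IBM Roadrunner, ~1 PFLOPS sustained (2008)
    ps3 = 0.25e12         # PlayStation 3 Cell, ~250 GFLOPS (approximate)

    print("Roadrunner / brain: %.0fx" % (roadrunner / brain))  # ~10x
    print("brain / PS3:        %.0fx" % (brain / ps3))         # ~400x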
Claim:
"This paper outlines the
case for believing that we will have superhuman artificial
intelligence within the first third of the next century. I
would all-things-considered assign less than a 50% probability to
superintelligence being developed by 2033. I do think there is great
uncertainty about whether and when it might happen, and that one
should take seriously the possibility that it might happen by then,
because of the kinds of consideration outlined in this
paper."
Implication: There is a substantial but less
than 50% chance that human-level AGI will be developed by
2033.
Source: Bostrom, Nick. "How long before
superintelligence?" International Journal of Future Studies
2 (1998): 1-13. Retrieved 8 Aug. 2008.
<http://www.nickbostrom.com/superintelligence.html>.
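Point estimates like this one translate directly into consistency
checks on a candidate distribution. Reusing the illustrative cdf from
the sketch above:

    # Bostrom's bound: less than 50% of the mass at or before 2033.
    assert cdf(2033) < 0.50, "candidate distribution is too front-loaded"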
Claim: Non-neuromorphic AI will be built before neuromorphic AI,
because it is easier to build AI by understanding the general
principles of the brain (such as how the neocortex works) than by
painstakingly copying all of its detail.
Implication: Non-neuromorphic AI is likely to arrive before neuromorphic AI.
Source: Hawkins, Jeff. On
Intelligence. New York: Times Books, 2004.
Claim: In
the same way that it was easier to build an airplane (an abstract
implementation of flight) than an artificial bird (a precise copy of
an existing implementation of flight), it will be easier to build AI
from a theory of intelligence (non-neuromorphic) than by copying all
the complexity of the brain (neuromorphic).
Implication: Whenever
neuromorphic AI arrives, non-neuromorphic AI is likely to have been
completed first.
Source: Various.