How likely is it that any type of non-biological human-level Artificial Intelligence is possible in principle? Include simulations of the human brain ("neuromorphic AI"). Ignore whether or not it will actually be created in practice — just state whether it's possible in principle.
Claim: "'Could a machine think?' On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking."
Implication: AI systems that merely run programs cannot think, so non-biological AI in general may not be possible.
Source: Searle, John R. "Minds, brains, and programs." Behavioral and Brain Sciences 3 (1980): 417-57. 8 Aug. 2008 <http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html>
Claim: The human brain is not a computer, and therefore cannot be simulated by a digital computer.
Implication: Any kind of human-level AI is implausible, even given arbitrary computing power.
Sources: Michael Denton, William Dembski, and others in Are We Spiritual Machines?
Claim: "Some
people say that computers can never show true intelligence, whatever
that may be. But it seems to me that if very complicated chemical
molecules can operate in humans to make them intelligent, then
equally complicated electronic circuits can also make computers act
in an intelligent way. And if they are intelligent, they can
presumably design computers that have even greater complexity and
intelligence."
Implication: Artificial intelligence is possible.
Source: Stephen Hawking. (2007). http://www.singinst.org/summit2007/quotes/stephenhawking/
Claim: Human consciousness and creativity are based on quantum effects unique to our neurology.
Implication: Non-biological AI is unlikely.
Source: Penrose, Roger. (1989). The Emperor's New Mind. Oxford University Press.