Creativity: The Mind, Machines, and Mathematics: Public Debate

author: Ray Kurzweil, Kurzweil Technologies, Inc.
author: David Gelernter, Yale University
author: Rodney A. Brooks, Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT)
published: Dec. 16, 2011, recorded: November 2006, views: 5864



Two of the sharpest minds in the computing arena spar gamely, but neither scores a knockdown in one of the oldest debates around: whether machines may someday achieve consciousness. (NB: Viewers may wish to brush up on the work of computer pioneer Alan Turing and philosopher John Searle in preparation for this video.)

Ray Kurzweil confidently states that artificial intelligence will, in the not distant future, “master human intelligence.” He cites the “exponential power of growth in technology” that will enable both a minute, detailed understanding of the human brain, and the capacity for building a machine that can at least simulate original thought. The “frontier” such a machine must cross is emotional intelligence—“being funny, expressing loving sentiment…” And when this occurs, says Kurzweil, it’s not entirely clear that the entity will have achieved consciousness, since we have no “consciousness detector” to determine if it is capable of subjective experiences.

Acknowledging that his position will prove unpopular, David Gelernter launches his attack: “We won’t even be able to build super-intelligent zombies unless we approach the problem right.” This means admitting that a continuum of cognitive styles exists among humans. As for building a conscious machine, he sees no possibility of one emerging from even the most sophisticated software. “Consciousness means the presence of mental states strictly private with no visible functions or consequences. A conscious entity can call on a thought or memory merely to feel happy, be inspired, soothed, feel anger…” Software programs, by definition, can be separated out, peeled away and run in a logically identical way on any computing platform. How could such a program spontaneously give rise to “a new node of consciousness?”

Kurzweil concedes the difficulty of defining consciousness, but does not want to wish away the concept, since it serves as the basis for our moral and ethical systems. He maintains that reverse engineering the human brain will enable machines that act with such complexity that consciousness will somehow emerge from it.

Gelernter replies that believing this "seems a completely arbitrary claim. Anything might be true, but I don't see what makes the claim plausible." Ultimately, he says, Kurzweil must explain objectively and scientifically what consciousness is: "how it's created and got there." Kurzweil stakes his claim on our future capacity to model digitally the actions of billions of neurons and neurotransmitters, which in humans somehow give rise to consciousness. Gelernter believes such a machine might simulate mental states, but would not actually pass muster as a conscious entity. Ultimately, he questions the desirability of building such computers: "We might reach the state some day when we prefer the company of a robot from Walmart to our next-door neighbor or roommates."


Reviews and comments:

Comment 1: Morgan, February 18, 2016 at 6:32 a.m.:

I believe it is possible for fully conscious artificial intelligence to be built. However, a precursor to this feat must be the complete mastery of how the human brain works and why it works the way it does. If humans do not fully understand their own consciousness, or perceived consciousness, then it becomes impossible to replicate this so-called consciousness through artificial intelligence. Although we understand many of the brain's functions and processes, the scientific community is still very far from fully understanding the human brain, simply because of its vast complexity. This means the scientific community is similarly far from creating a fully conscious artificially intelligent being. The feat is possible, but it remains a distant idea rather than an upcoming formality.

Comment 2: Jeremy Grimm, June 2, 2017 at 12:20 a.m.:

I was disappointed by this "debate". It reminded me of debates about Global Warming. I wanted some discussion of the nature of consciousness, but the Turing Test is more than a little long in the tooth. My takeaway was a regard for Kurzweil's cat. I too perceive consciousness in cats and other non-human creatures, yet this debate fails to clarify for me just what consciousness is.

I agree with Morgan that a fully conscious artificial intelligence can be built. I also believe consciousness is a critical piece in building what I could regard as artificial intelligence.
