Wednesday, September 21, 2005

Superintelligence For Dummies

If you were dogged enough to wade through my interminable rant against pop science, you may recall that I singled out Ray Kurzweil for particular abuse.

Thus, I'm pleased to note that PZ Myers has torn Kurzweil a brand-new fundamental aperture, large enough for the installation of an artificial anus built by self-replicating nanotubes, and powered by epidermal temperature fluctuations.

What's my problem with Kurzweil? Well, here's a brief summation of his views, in his own words:

Within 25 years, we'll reverse-engineer the brain and go on to develop superintelligence. Extrapolating the exponential growth of computational capacity (a factor of at least 1000 per decade), we'll expand inward to the fine forces, such as strings and quarks, and outward. Assuming we could overcome the speed of light limitation, within 300 years we would saturate the whole universe with our intelligence.
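For scale, Kurzweil's "factor of at least 1000 per decade" amounts to roughly a doubling every year, since 2^10 = 1024. A quick sketch of the compounding (the function name and printed summary are mine, purely to illustrate the arithmetic he invokes):

```python
# Illustrative arithmetic only: the compounding behind a claimed
# "factor of at least 1000 per decade" in computational capacity.
# Note that 1000 per decade is close to a doubling per year, as 2**10 = 1024.

def capacity_multiplier(years, factor_per_decade=1000):
    """Total growth multiplier after `years` at the stated rate."""
    return factor_per_decade ** (years / 10)

for years in (10, 25, 50):
    print(f"after {years} years: x{capacity_multiplier(years):,.0f}")
```

At that rate, Kurzweil's 25-year horizon implies capacity growing by a factor of more than thirty million — which is the whole of his argument's engine: the extrapolation itself, not any account of what the capacity would be doing.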
Where does one start? Kurzweil assumes that because the brain is mechanical - a fact that no one denies - it can therefore be reverse-engineered and built to spec, and will accordingly generate consciousness, just like that.

People who make this claim tend to insist that those who disagree with them are in thrall to mystical or quasi-mystical or otherwise obscurantist views. But there's really no reason to look at things that way. There are all sorts of physical and cognitive boundary conditions that might preclude the acquisition of certain types of knowledge. One could argue that it might be possible to send a probe beyond the event horizon of a black hole and have it report back to us, but no one has any idea of how to do it, and there's nothing unscientific or mystical about the belief that we'll never manage to pull it off. They laughed at Galileo Galilei, true enough, but they also laughed at Trofim Denisovich Lysenko.

By the same token, compiling a complete topographic database of every planet in the universe is theoretically possible; topographic scanning is a simple mechanical operation, after all, and an armada of unmanned spacecraft could be sent to do the work. Nonetheless, it's not going to happen. The impossibility of traveling faster than the speed of light is one important obstacle (despite Kurzweil's ghastly assumption that we'll overcome it in order to "saturate the universe with our intelligence"); there are also logistical, financial, and epistemic obstacles.

Kurzweil and his ilk tend to dismiss such objections as arguments from incredulity, without conceding (at least, not to my satisfaction) that incredulity is often far more rational than belief. To my mind, when proponents of strong AI invoke the myriad advances in "artificial intelligence" made over the past few decades, it's not unlike proponents of faster-than-light travel pointing to the progress aviation has made between the Wright brothers' first flight and the advent of the space shuttle. It's simply a non sequitur.

Kurzweil also claims that in understanding the brain well enough to duplicate it, we'll come to understand ourselves. I find that notion incoherent. You might be able to understand how the brain physically generates a state of consciousness, but that doesn't mean that you'll then understand the subjective content - the meaning - of that state of consciousness. But meaning of this sort is precisely what we're concerned with when we talk about "understanding" ourselves. To use a common example, you can objectively analyze the mechanical process that gives rise to a subjective feeling of love, without reaching an objective understanding of why an individual has fallen in love with a specific person. Knowing the physical basis of a mental state doesn't explain subjective, self-reflexive experience, which is the only thing one really cares to understand about consciousness.

Suppose I read Oliver Twist; what I end up with in my brain is not a word-for-word copy of the book, buttressed by a thorough, accurate knowledge of the meaning of every word and reference in it, but a translation of the story into a wholly different form.

This is a mysterious process, to say the least. Among other things, it includes a transformation of the book into several different sorts of experience; this normally involves the generation of "visual" information (e.g., a sense of what Oliver "looks" like), and perhaps an empathic response (e.g., a sense of what it's "like" to be Oliver).

There's also the question of what it's like to read Oliver Twist (e.g., what it's like to read it while sick in bed with a cold, feeling mildly impatient with the discursive qualities of Victorian prose), and the question of what it's like to read about Oliver Twist, in the sense of one's individualistic response to the story's emotional content. Even if we assume that subjective experiences such as these are functionally definable, let alone reducible to specific physical processes that human ingenuity can feasibly mimic by means of technology, I think it's fair to say that no one has any idea how to go about doing it.

Some philosophers (e.g., Dennett) claim that qualia - subjective sensory experiences such as most of those I've invoked above - simply don't exist...a point of view I'm not alone in finding problematic (and, for that matter, weirdly dualistic). Other theorists believe that qualia are emergent, and will naturally be present in any AI system that can pass the Turing Test (or at least appear to be present, to such an extent that we must take their existence on faith, as most of us do when talking to other human beings). Both theories could be - and probably are - wrong, along with much of what we currently think we know about consciousness and cognition.

Now, the riddles of consciousness may simply comprise a few knotty problems, rather than eternal mysteries, but actually solving those problems - rather than wishing them away through the adoption of a dismissive philosophical stance - would seem to be the minimum requirement for designing a "conscious" machine, let alone a "spiritual" one. But Kurzweil continually invokes the increased storage space and processing speed of hardware, as though computational capacity has something obvious and coherent to do with those elusive phenomena which, in humans, we refer to as "consciousness" or "spirituality."

Ultimately, I think that Kurzweil's views - which include a belief in personal immortality through the "downloading" of the self - are little more than quasi-religious claptrap, and I sometimes suspect that the much-heralded "death of God" has caused many of us to invest our slack-jawed credulity - which is a human birthright, after all - in scientific rather than religious triumphalism. I'm not sure that this constitutes ethical or intellectual progress.

There's no doubt that Kurzweil's a smart man. But being smart and being wise are two different things. At Pharyngula, a commenter named Nix sums up the matter nicely:
The smarter you are, the more elaborate the castles you can build in the sky...supported by almost nothing at all but your own aspirations.


monkeygrinder said...

There is a concept in science fiction of a singularity -- which allows for some type of hyper-change to take place that changes all the rules of our local universe.

It has been described by other, unimpressed sf writers as "the turd in the punch bowl."

penguinrocket said...

Thank you for this. I read Kurzweil's What Will Be a couple of years ago and was similarly disturbed by what you aptly describe as his "absurd techno-triumphalist claptrap."

What bothered me even more was knowing that Kurzweil's panglossian fantasy is merely a reflection of the barely articulated assumption of the American illiterati--enamored as they are of Moore's Law and other "proven" indicators of neverending progress--that technology (in concert with "the market," naturally) will always save the day and thereby make it possible for us to continue to live our apocalyptically lavish lifestyles forever, with no need for pesky concerns about such "crunchy" concepts as "sustainability."

This is perhaps what happens in a society that watches lots of TV but has no time for history.

Phila said...


That's actually what Myers was attacking...this idiotic chart that Kurzweil cooked up to "prove" that we're on the verge of his idea of a singularity.


Well said.

monkeygrinder said...

Oh my -- that is funny. I hadn't checked the link.

Nittacci said...

Despite your distaste for popular science, I'm grateful for every issue of New Scientist that crosses my threshold. As someone who is NOT a member of the reality-based community (my last science class was freshman biology, decades ago), but rather an artist, New Scientist provides me with the greatest part of the little science information I get.
And this excellent blog, of course.

Phila said...


Oh, I'm not really dead set against pop science, and I completely understand where you're coming from. I think your attitude is a good one.

If science is going to be our new mythology, that's fine with me (except for its lack of a coherent ethics, that is). I just want people to acknowledge it.

WHT said...

So Kurzweil developed some speech recognition algorithms. He got lucky, was able to pick the low-hanging fruit on the AI tree, and that gives him the name recognition to bloviate like this?

All I can add: at least he is not David Gelernter.

Dr. Pedant said...

Thank you for this. Disturbingly, it seems almost a given among many who should know better, that human consciousness will be duplicated electronically soon. Every time I hear this, I'm reminded of the famous cartoon of two scientists looking at a blackboard filled with equations, in the middle of which are the words "and then a miracle occurs..."

Hard to know how we're going to duplicate consciousness, when we don't have the foggiest what it is.

I'd also just mention that the looming advent of AI has been trumpeted for at least 50 years. We seem no closer, however. All my computers remain dumb as dirt.

David Veksler said...

Allow me to defend Kurzweil:

The problem with comparing the development of AI to breaking the speed of light - and with your entire argument - is that whereas violating the speed of light is theoretically impossible, we have irrefutable evidence that intelligent machines - humans - exist. Before the Wright brothers, we didn't know how to fly - but we knew that heavier-than-air flight was possible for birds - it was just a question of engineering. One could have reasonably argued that flight would require a wing-like mechanism - and therefore be tremendously complex (a human-carrying ornithopter only recently took flight) - but it would be unreasonable to say that heavier-than-air human flight was technologically impossible for man.

We do, in fact, have some evidence that AI is technologically possible. It's not correct to say that AI has not progressed in 50 years. Weak AI is used in daily life in everything from search engines to auto-drying washing machines. One of the basic discoveries of neuroscience during the last 50 years is that intelligence is not a hugely complex process - it's an aggregation of many simple, semi-independent algorithms. We are now developing the foundational algorithms that will make true intelligence possible. The true complexity of those algorithms is an unknown - but research on animal behavior suggests that many of them are fairly basic.

Accelerating change is another essential concept you seem to have failed to grasp: if you think it's only about *faster* computers, you haven't been around for the last 10-20 years.