If you were dogged enough to wade through my interminable rant against pop science, you may recall that I singled out Ray Kurzweil for particular abuse.
Thus, I'm pleased to note that PZ Myers has torn Kurzweil a brand-new fundamental aperture, large enough for the installation of an artificial anus built by self-replicating nanotubes, and powered by epidermal temperature fluctuations.
What's my problem with Kurzweil? Well, here's a brief summation of his views, in his own words:
Within 25 years, we'll reverse-engineer the brain and go on to develop superintelligence. Extrapolating the exponential growth of computational capacity (a factor of at least 1000 per decade), we'll expand inward to the fine forces, such as strings and quarks, and outward. Assuming we could overcome the speed of light limitation, within 300 years we would saturate the whole universe with our intelligence.

Where does one start? Kurzweil assumes that because the brain is mechanical - a fact that no one denies - it can therefore be reverse-engineered and built to spec, and will accordingly generate consciousness, just like that.
People who make this claim tend to insist that those who disagree with them are in thrall to mystical or quasi-mystical or otherwise obscurantist views. But there's really no reason to look at things that way. There are all sorts of physical and cognitive boundary conditions that might preclude the acquisition of certain types of knowledge. One could argue that it might be possible to send a probe beyond the event horizon of a black hole and have it report back to us, but no one has any idea how to do it, and there's nothing unscientific or mystical about the belief that we'll never manage to pull it off. They laughed at Galileo Galilei, true enough, but they also laughed at Trofim Denisovich Lysenko.
By the same token, compiling a complete topographic database of every planet in the universe is theoretically possible; topographic scanning is a simple mechanical operation, after all, and an armada of unmanned spacecraft could be sent to do the work. Nonetheless, it's not going to happen. The impossibility of traveling faster than the speed of light is one important obstacle (despite Kurzweil's ghastly assumption that we'll overcome it in order to "saturate the universe with our intelligence"); there are also logistical, financial, and epistemic obstacles.
Kurzweil and his ilk tend to dismiss such objections as arguments from incredulity, without conceding (at least, not to my satisfaction) that incredulity is often far more rational than belief. To my mind, when proponents of strong AI invoke the myriad advances in "artificial intelligence" made over the past few decades, it's not unlike proponents of faster-than-light travel pointing to the progress aviation made between the Wright brothers' first flight and the advent of the space shuttle. It's simply a non sequitur.
Kurzweil also claims that in understanding the brain well enough to duplicate it, we'll come to understand ourselves. I find that notion incoherent. You might be able to understand how the brain physically generates a state of consciousness, but that doesn't mean that you'll then understand the subjective content - the meaning - of that state of consciousness. But meaning of this sort is precisely what we're concerned with when we talk about "understanding" ourselves. To use a common example, you can objectively analyze the mechanical process that gives rise to a subjective feeling of love, without reaching an objective understanding of why an individual has fallen in love with a specific person. Knowing the physical basis of a mental state doesn't explain subjective, self-reflexive experience, which is the only thing one really cares to understand about consciousness.
Suppose I read Oliver Twist; what I end up with in my brain is not a word-for-word copy of the book, buttressed by a thorough, accurate knowledge of the meaning of every word and reference in it, but a translation of the story into a wholly different form.
This is a mysterious process, to say the least. Among other things, it includes a transformation of the book into several different sorts of experience; this normally involves the generation of "visual" information (e.g., a sense of what Oliver "looks" like), and perhaps an empathic response (e.g., a sense of what it's "like" to be Oliver).
There's also the question of what it's like to read Oliver Twist (e.g., what it's like to read it while sick in bed with a cold, feeling mildly impatient with the discursive qualities of Victorian prose), and the question of what it's like to read about Oliver Twist, in the sense of one's individualistic response to the story's emotional content. Even if we assume that subjective experiences such as these are functionally definable, let alone reducible to specific physical processes that human ingenuity can feasibly mimic by means of technology, I think it's fair to say that no one has any idea how to go about doing it.
Some philosophers (e.g., Dennett) claim that qualia - subjective sensory experiences such as most of those I've invoked above - simply don't exist...a point of view I'm not alone in finding problematic (and, for that matter, weirdly dualistic). Other theorists believe that qualia are emergent, and will naturally be present in any AI system that can pass the Turing Test (or will at least appear to be present, to such an extent that we must take their existence on faith, as most of us do when talking to other human beings). Both theories could be - and probably are - wrong, along with much of what we currently think we know about consciousness and cognition.
Now, the riddles of consciousness may simply comprise a few knotty problems, rather than eternal mysteries, but actually solving those problems - rather than wishing them away through the adoption of a dismissive philosophical stance - would seem to be the minimum requirement for designing a "conscious" machine, let alone a "spiritual" one. Yet Kurzweil continually invokes the increased storage space and processing speed of hardware, as though computational capacity has something obvious and coherent to do with those elusive phenomena which, in humans, we refer to as "consciousness" or "spirituality."
Ultimately, I think that Kurzweil's views - which include a belief in personal immortality through the "downloading" of the self - are little more than quasi-religious claptrap, and I sometimes suspect that the much-heralded "death of God" has caused many of us to invest our slack-jawed credulity - which is a human birthright, after all - in scientific rather than religious triumphalism. I'm not sure that this constitutes ethical or intellectual progress.
There's no doubt that Kurzweil's a smart man. But being smart and being wise are two different things. At Pharyngula, a commenter named Nix sums up the matter nicely:
The smarter you are, the more elaborate the castles you can build in the sky...supported by almost nothing at all but your own aspirations.