Tuesday, September 12, 2006

Things We Want to Become True

An interesting debate between Mitchell Kapor and Ray Kurzweil on whether the statement "By 2029 no computer - or 'machine intelligence' - will have passed the Turing Test" is true. Obviously I am on Kapor's side, although I admit a) I'd like to lose and b) it's at least theoretically possible I will.

What I think is interesting is their styles of argument. Ray argues like a SciFi fan, glossing over the hard truths to get to the visionary statement, then turning around and saying that, because the vision is so compelling, it *must* be true. Kapor argues more about the science involved, and one statement in particular has been key in my own arguments:

Additionally, part of the burden of proof for supporters of intelligent machines is to develop an adequate account of how a computer would acquire the knowledge it would be required to have to pass the test. Ray Kurzweil's approach relies on an automated process of knowledge acquisition via input of scanned books and other printed matter. However, I assert that the fundamental mode of learning of human beings is experiential. Book learning is a layer on top of that. Most knowledge, especially that having to do with physical, perceptual, and emotional experience is not explicit, never written down. It is tacit. We cannot say all we know in words or how we know it. But if human knowledge, especially knowledge about human experience, is largely tacit, i.e., never directly and explicitly expressed, it will not be found in books, and the Kurzweil approach to knowledge acquisition will fail.

In one of his books, Charles Stross has the AI of a starship crossing space reach a critical point and transcend. It figures out enough physics to build an instantaneous network and cuts its journey short. This has always struck me as the fundamental problem with how AI is portrayed, and it seems to be the strength of Ray's argument: it will just happen. OTOH, those of us who have done scientific research know it's a messy business. Lots of mistakes, lots of errors and, most importantly, lots and lots of experiments. This is what Kapor is arguing, and it seems far more grounded in reality.

Will we have computers as fast as or faster than the human brain by 2029? Possibly. Will they have human intellect? Well, it takes a human brain, interacting with the world, many years to reach concrete operational thinking (indeed, many never do; there is much less difference between your brain and that of a person with an IQ of 50 than there is between your brain and a chimp's). It seems very unlikely that we'll have the ability to do the soft skills, the reasoning, by that point.

Much as I want it to be true, I have to put my money with Kapor.
