I read a great deal of Ray's book yesterday on the plane from Seattle to Boston, and came away, as usual, grudgingly impressed by Ray. I had been prepared to write the following one-line review of his book:
“Ray Kurzweil has written a Harlequin Romance novel for the technorati.”
This was based primarily on the conversations I used to have with him and how they generally went. At KAI it would usually be Vlad and me against Ray and Francis on these topics, and I'd bet my last dollar that each side believed it came out the better. Unfortunately I can't use that line for my review; it would be neither completely honest nor fair. There are, however, elements of the romance novel in it.
First, the highlights:
I mentioned the other day that, while looking at the book in the bookstore, I read an appendix where Ray tries to give a mathematical justification for the singularity. The basic issue I had with it is that he made an assumption I thought was unjustified and, if cast ever-so-slightly differently (and more honestly in a scientific sense), the effect goes away. I sat down afterward and wrote what I thought was the more objective equation, solved it for a family of solutions, and saw how Ray's solution was one member of that family. Ray's not wrong, just... judicious in his choice of parameters. Upon looking through the book at leisure, I found that Ray had addressed this criticism in full. Not only had he included a version of what I thought was the right general solution, he spent some time explaining why he chose what he considered a conservative solution and argued that the effect should be even stronger than his equation suggests.
Kudos to Ray for doing this.
However, his justification for that choice is circular. It basically boils down to: we know a singularity is coming; if I choose this parameter and do some math, I get an equation with a singularity; therefore a singularity is coming. Q.E.D.
The basic problem is how strongly "the rate of technological change" couples to "the amount of knowledge in the world". These are squishy concepts to be writing equations for in the first place. Ray assumes the coupling is at least directly proportional, and maybe proportional to the square or some higher power. I take the more conservative approach: while I think the coupling is proportional, the "amount of world knowledge" depends on a lot of other things besides, so I'd put it at some power law X^n where 0 < n ≤ 1, with n = 1 (straight proportionality) as the optimistic end. In Ray's solution you eventually get a singularity; in mine you get, at most, "merely" exponential growth. Given that nothing in nature (yes, even black holes) generates a true singularity, I think mine is the more prudent set of assumptions. Ray, always the optimist, disagrees.
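To make that concrete, here is a minimal sketch of the family of models I have in mind, in my notation rather than Ray's (the book's appendix uses different symbols, and the equation I actually scribbled down differed in detail). Write world knowledge as K(t) and let its growth rate couple to K as a power law:

\[
\frac{dK}{dt} = c\,K^{n}, \qquad c > 0, \quad K(0) = K_0 > 0,
\]

which solves to

\[
K(t) =
\begin{cases}
K_0\,e^{ct}, & n = 1,\\[4pt]
\left[K_0^{\,1-n} + c\,(1-n)\,t\right]^{1/(1-n)}, & n \neq 1.
\end{cases}
\]

For n > 1 the bracket hits zero at the finite time t* = K_0^{1-n}/(c(n-1)) and K blows up: Ray's singularity. At n = 1 you get clean exponential growth, and for 0 < n < 1 the growth is merely polynomial. So the whole dispute compresses into one parameter, and everything turns on whether you believe n stays above 1.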
Also, kudos to Ray for directly confronting a lot of critical arguments. He has a whole chapter on a dozen or so objections people raise (I had only thought of 3 or 4 of them). Some of them, like "computers don't have souls, so God will not recognize them as sentient beings", seem... too esoteric or philosophical for me. While I applaud him for raising them, I was not convinced by how he dismisses them, mostly with one version or another of "trust me, it will all be great".
The rest of the book is good, and if you're one of the folks who already believes this is going to happen, it will help you rationalize that belief. While I was unconvinced of the approach of a singularity in 2050, the consolation prize (exponential growth) is pretty good, and I do happen to believe in that. Ray does a superlative job of laying out the current direction of technology, innovation and growth. He does a less successful job with economics (i.e., scarcity is solved and we all live in a communist utopia in 2050), with law, with human nature and, surprisingly to me, with the nature of the eventual AIs.
This last point bears some explanation, as I learned everything I know about AI from working at Ray's company, but not from Ray. It's also important to note here that Ray himself did not work on research or code at KAI. While I think he understands AI very well, I don't think he quite gets what is going on in there. Basically his assumption is that neural nets will mimic the human brain and that the AIs that come in 2050 will all be human-based benevolent gods who (more or less) love us and keep us as pets. My view is different. I think the AIs will be as alien to us as the dolphins, chimps and whales are. Actually, more so, as the latter all have meaty brains, meaty urges, meaty lusts and, above all, meaty motivations. I think the AIs will be nothing like us at all, with no sense of nostalgia, no sense of history or parentage, etc. In short, from our perspective as humans, they will be functionally psychotic, with nothing to ground them in the meat world and no real allegiance to us. It's likely to be more like Skynet than the Eschaton. Humans are not likely to understand the monomania and motivations of a real AI-based superintelligence and, if it understands us at all, it's not likely to care.
However, while never in doubt, I am often wrong and hopefully that is the case here. Read the book, it's got a lot going for it despite my reservations.
And, like I said, exponential growth is a damn fine consolation prize.
If, in 2051, I'm reading this as a disembodied consciousness floating just outside Tau Ceti IV, I'll enthusiastically cop to having been a doubter.