There was a book written nearly 40 years ago that sits on my bookshelf only partially read. I read about a third of it about 25 years ago and got bored with it. If I had the time I would pull it off the shelf, give it another try, and see whether the things he said could never be done are still impossible with today's computers. He claimed they will never be able to do some tasks. I am not entirely convinced such strong bets can be placed, if for no other reason than that "never" is a very strong word. But I do know that some revolutionary changes will have to be made before he can be proven wrong.
Chris has contributed some fascinating information to the discussion on exobrains. But not everyone may find it as fascinating or understandable as some of us geeks do. I hope to give a couple of examples here that will be a little easier to relate to.
From a book I listened to recently I picked up an absolutely true insight that nearly caused a temporary meltdown of the control system of my musculoskeletal system. I nearly went deaf, blind, and limp as my brain dumped all I/O to the outside world and spent several seconds pondering the implications of that insight. What follows is a glimpse into the differences between computer "brains" and animal brains, and it highlights one class of tasks that is very, very difficult for today's computers and how much room for improvement there is. We may think computer hardware and software are really hot stuff. But when compared to meatware, for some tasks, it's not even in the race.
We know how fast signals travel in the human nervous system. It isn't electrical so much as it is chemical. People frequently say it's electrical but, as Sean Flynn would say, that is only true for some values of electrical. Instead of signals taking about one nanosecond to travel one foot, as is the case for true electrical signals, it is (IIRC) something on the order of one millisecond per foot. That is about one million times slower. The "logic elements" of the brain have known response times as well. Thus, by timing how long it takes you to respond to some input, we can determine how many "logic elements" were involved in making decisions based on that input.
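The speed gap above is easy to check with a quick calculation. The figures below are just the round numbers cited in the paragraph (one nanosecond per foot for electrical signals, one millisecond per foot for nerves), not measurements:

```python
# Propagation delay: electrical signal vs. nerve signal, per foot,
# using the round figures cited above (illustrative, not measured).
electrical_s_per_ft = 1e-9  # ~1 nanosecond per foot for a true electrical signal
neural_s_per_ft = 1e-3      # ~1 millisecond per foot for a nerve signal (IIRC)

ratio = neural_s_per_ft / electrical_s_per_ft
print(f"{ratio:.0e}")  # 1e+06 -- nerves are about a million times slower
```

The point of the million-fold gap is what comes next: with signals that slow, a fast human response can only pass through a short chain of "logic elements."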
For example, suppose we were to show you a set of random images, each containing either a picture of some sort of cat (including leopards, lions, tigers, and domestic shorthairs) or a picture of some sort of dog (including foxes, wolves, greyhounds, and Shih Tzus). You are to hit button "C" or "D" depending on whether the picture shows a cat or a dog. If the pictures were clear and the animal was in full view, normal, healthy humans could perform this task in well under one second. Even a three-year-old could do it. I've been programming computers professionally (admittedly in non-AI fields, so this isn't as strong a statement as I would like it to be) for about 25 years, but I think it would be possible to choose the set of pictures such that the best computer program, given the same time limits, would have an error rate little better than chance, yet the human would have an error rate of near zero.
In a similar vein, I once had someone at the CIA tell me they had spent millions and millions of dollars developing software that would analyze photographs and find objects of military interest: things like tanks, missiles, and AK-47s. When the CIA is interested in these kinds of objects, the pictures they get to work with were not taken with the full cooperation of the owners of said military hardware. Hence the objects of interest may be in less than full view of the person taking the picture, which makes this a considerably harder problem than the dog-and-cat problem outlined above. Said CIA employee told me that after spending all those millions of dollars, what they found worked best was to put the pictures on the wall above the urinal and leave a pencil nearby. By the end of the day all the objects of military interest would be circled. No computer could match that in processing time, accuracy, or cost.
Once we subtract the nerve transmission time from your eyes to your brain and from your brain to your fingers, we find that the maximum "depth" of the "logic elements" is about 200. There were many, many elements working in parallel, but no path exceeded 200 elements in length. There is no existing computer, no matter how massively parallel its "logic elements" are, no matter how sophisticated its algorithms for exploiting that parallelism, that can reach a decision for the given problem with that sort of depth. The depth of the computer's logic paths is going to be hundreds of thousands of times greater, and yet it would end up with far, far inferior decisions.
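The arithmetic behind that depth-of-200 figure can be sketched in a few lines. The only number taken from the text is the signal speed of about one millisecond per foot; the reaction time, nerve path length, and per-element response time below are illustrative assumptions chosen to show how the estimate works, not figures from the book:

```python
# Back-of-the-envelope estimate of the brain's maximum "logic depth"
# during a fast recognition task. All inputs except the signal speed
# are assumed round numbers, not measurements.

reaction_time_s = 0.5          # assumed: image appears -> button pressed
nerve_path_ft = 6              # assumed: eye-to-brain plus brain-to-finger
signal_speed_s_per_ft = 0.001  # ~1 ms per foot, as cited in the text

transmission_time_s = nerve_path_ft * signal_speed_s_per_ft   # 0.006 s
thinking_time_s = reaction_time_s - transmission_time_s       # 0.494 s

element_response_s = 0.0025    # assumed response time of one "logic element"

max_depth = thinking_time_s / element_response_s
print(round(max_depth))        # ~198: on the order of 200 sequential elements
```

Whatever the exact inputs, the structure of the estimate is the same: a sub-second response, minus transmission time, divided by a per-element response time of a few milliseconds, leaves room for only a couple of hundred sequential steps.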
That still gives me goose bumps.