X-Message-Number: 15914
Date: Thu, 22 Mar 2001 07:55:00 -0500
From: Thomas Donaldson <>
Subject: about machine intelligence etc

Hi everyone!

It's coming close to PERIASTRON time again, which means I will be offline for a while. However, the discussion on Cryonet 15908-15912 about future AI machines versus improved humans deserves a few words.

1. As seems to happen often, the issue of emotions seems to have been forgotten by some of those discussing these questions. Emotions deserve the same attention as intelligence, certainly if you wish to make a truly autonomous AI machine (not that we've yet come close, but people are working on it). If we really want independent machines, we're going to have to understand not just how our brains produce and deal with knowledge, but how they produce and deal with emotions too. Right now, neither is well understood... though researchers know more about emotions than about knowledge.

2. Even to improve human beings, we first need to understand how our different kinds of memory work. Again, we are moving in that direction, but haven't yet succeeded. We do not really connect a computer to ourselves simply by putting it inside our skull and wiring it to a few nerves. The only difference between such a device and one which sits outside us and is operated with our voice or our hands is that the neurons to which it is connected aren't the same... and those of our voice or fingers would probably do much better (after all, they're designed for just such tasks). Sure, the implanted device is connected in a literal sense. But that connection has no relation to the way in which our nerves connect to other nerves... which presumably is what such a system is really after.

3. If a group wishes to make a hyperintelligent and hyperhelpful computer, they have a hard problem ahead of them. But even supposing that they succeed, a major problem will still remain unsolved: how to convince everyone else that this computer really is hyperhelpful.
We may someday see lots of such computers, each claimed to be hyperintelligent and hyperhelpful by the group which made it. And knowing how humans work, those computers will probably disagree with one another about how to be hyperhelpful, too, and so we get not only humans who disagree with one another but computers as well. For some this may be fun to watch, if nothing else. None of them will take over. Perhaps the human beings (aided by various computer peripherals) will decide to turn off all of these special computers (no doubt after some time and argument), but as an aim it looks to me like an ultimate waste of time.

4. As a means to improve ourselves, computers have lots of potential. They need not even attach to our anatomy (or brain) to do this, though when we work out how our brains work it may prove useful to attach some of them more directly. Yet, just like any tool, they will need some understanding on our part to use them. Sure, we'll be able to make that understanding relatively easy to acquire, but it will still be needed... and RELATIVELY easy doesn't mean easy.

No doubt there are other appropriate comments, too, but I will leave it at these for now.

Best wishes and long long life for all,

Thomas Donaldson