X-Message-Number: 25558
Date: Sun, 16 Jan 2005 07:39:32 -0500
From: Thomas Donaldson <>
Subject: CryoNet #25538 - #25548

Lots of mixtures of partial truths with outright falsities, this time:

For Henri Kluytmans:

If I understand you correctly, you're claiming that we can use our
partial simulations of insect brains to reach a system which acts like
a human brain much sooner than if we started from scratch. This confuses the
results of a simulation with an understanding of what really happens.

As I understand the situation in AI today, it's been very easy to 
make simulated systems (all in your computer, possibly a parallel
computer) which IN YOUR COMPUTER act as if they are (say) real insects,
or (for the future) real people. However, when we actually try to use
such a simulated system to construct a real robot which acts like
a real insect, the system fails completely. The real world exceeds
any world we can simulate in complexity by many orders of magnitude.

Again, you say that the AI you create will be able to (think? act?)
millions of times faster than we do. First, as parallel systems 
ourselves, we aren't so clearly limited by the speed of a single
neuron. Second, and most important, an AI that spins its wheels
millions of times faster gets nowhere unless it gets data just
as rapidly, and also can act just as rapidly, neither of which
looks as easy as increasing computer speed. In short, such a
brain will have nothing to think about at all for almost every
one of its cycles. They are wasted energy and the hardware which
performs them is wasted hardware. (Unfortunately, the original ideas
of Turing dealt ONLY with COMPUTING. In real life we have to do
something too; presumably that job was left to humans.)
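
To put a rough number on that point, here is a minimal sketch in
Python (my own illustration; the half-and-half split of the task is
invented for the example) of the standard Amdahl's-law arithmetic:
if only the computing portion of a job is sped up, while the time
spent gathering data and acting on the results stays fixed, the
overall speedup is capped by that fixed portion.

def effective_speedup(compute_fraction, compute_speedup):
    # compute_fraction: share of total task time spent computing (0..1)
    # compute_speedup:  factor by which the computing portion is accelerated
    # The remaining (1 - compute_fraction) of the time, spent gathering
    # data and acting on the results, is assumed not to speed up at all.
    io_fraction = 1.0 - compute_fraction
    return 1.0 / (io_fraction + compute_fraction / compute_speedup)

# A task that is half computation and half sensing/acting: even a
# million-fold faster "brain" buys only about a twofold overall gain.
print(effective_speedup(0.5, 1_000_000))   # roughly 2.0

Almost all of those extra cycles are simply spent waiting for the world.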

As for any "singularity", there is an even worse problem. We
human beings use computers to solve OUR problems. Even if we had
computers able to solve (at least the computing part of) our
problems instantly, not millions of times faster but instantly,
we would still have to make sure we gave the computer the right
problem (or maybe decide that we didn't when we get the answer
back) and then decide whether or not to act on it. Sure, you
can (in theory) go off and create some superfast computer which
goes out into the world and solves ITS problems. At worst, you 
will have created a danger to the rest of us; at best, something
we all wave goodbye to while we continue to conduct our own
business. (Incidentally, my own background is that of a mathematician,
and even the choice of MATH PROBLEMS to be solved is one that
we human beings make, not one that we would hand to a computer.
For that matter, if (a perhaps fantastic possibility) we let
our instant solver go off and solve all possible math problems,
we'd still have to sort through them all to find the ones we
ourselves want solved.)

There is an interesting phenomenon that some historians of science
have pointed out. All the scientific and technological work that
came before us is accepted, old hat, not all that significant.
It's what we're working on right now that will really advance
humanity. And as for the time when we've solved these problems:
everything will then calm down, because all the important problems
will have been solved. This feeling has gone on for centuries, ever
since science began. Funny, isn't it?

		Best wishes and long long life,

                    Thomas Donaldson
