X-Message-Number: 8646
From: Thomas Donaldson <>
Subject: Re: CryoNet #8636 - #8641
Date: Tue, 30 Sep 1997 21:46:03 -0700 (PDT)

Hi again!

I shall be very brief, perhaps apologetically. If either John Clark or 
Peter Merel had read my description of these non-Turing computers, they would
surely have noticed that in each case the real number is carried out to 
a finite number of digits --- different each time, sure, but finite each
time, too. 
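
To make the point concrete, here is a toy sketch (in Python, and purely my
own illustration, not Siegelmann and Sontag's construction). The "real"
weight has unboundedly many digits, but any single run of the machine
consults only a finite prefix of them:

from decimal import Decimal, ROUND_DOWN

def truncate(x, digits):
    # keep only the first `digits` digits after the decimal point
    return x.quantize(Decimal(10) ** -digits, rounding=ROUND_DOWN)

weight = Decimal("0.1415926535897932384626")  # stands in for a real constant

# each run consults a finite --- though different --- number of digits
for digits in (3, 7, 12):
    print(digits, truncate(weight, digits))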

This means that they are no more physically impossible than a Turing machine
with a single infinite tape. Both suffer from some impracticality, but that
does not make one any more impractical than the other.

For that matter, I myself never claimed that Siegelmann and Sontag had or
had not shown that our brains were "real number Turing machines". Quite the
contrary: to show such a thing will take some deep knowledge of how brains
work, and possibly some new experiments.

However, if you want to discuss that issue, I will add the following:

We now have at least 2, probably many more, quite incommensurable models of
just what a "computer" might be. To claim that our brain "is" any kind of
computer therefore requires the person making such a claim to answer just
which kind. On a practical level, the suggestion given by these results
with "real-number Turing machines" is that the architecture of the machine
can make a LOT of difference. If we make everything finite, with some
upper bound, then we get to discuss the practical differences between
RN (== real number) Turing machines and IN (== integer) Turing machines.
If our brains are finite RN machines, we may be able to do some things 
much more easily than if they were finite IN machines.  Given finitude,
each machine could at least get the same results as the other; their
difference would consist in how efficiently they did so.
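
A crude sketch of what I mean (again mine, in Python; the six-digit bound
is an arbitrary assumption): once precision is fixed in advance, an integer
machine can reproduce the real-number machine's step by fixed-point
scaling --- the same answer, but with extra bookkeeping on every step:

SCALE = 10**6  # fix six digits of precision in advance

def rn_step(state, weight):
    # the "RN" machine does one multiply on fractional values
    return state * weight

def in_step(state_i, weight_i):
    # the "IN" machine emulates that step in fixed point:
    # a multiply plus a rescale where the RN machine used one operation
    return (state_i * weight_i) // SCALE

state, weight = 0.5, 0.333333
state_i, weight_i = round(state * SCALE), round(weight * SCALE)

print(rn_step(state, weight))              # prints 0.1666665
print(in_step(state_i, weight_i) / SCALE)  # 0.166666, equal to the chosen precision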

If I were to think about practical implications of that apparently 
abstract result, it is exactly that which I would look at. As a very 
rough guess, it does seem interesting that neural nets can very efficiently
learn recognition tasks which computers (using their normal architecture,
not just trying to simulate a neural net) find very difficult. What is
meant by efficiency here? Well, the time and energy expended seem a good
test. With zero energy, time goes up; with lots of energy, time goes
down: but that is for a single computer architecture. The curves may 
lie differently depending on just which architecture you choose. And it
does seem to me from simple performance results that neural nets are very
good for recognition (an ability fundamental to language itself, incidentally).
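
For those who have not played with these things, here is about the smallest
possible example (a toy of my own in Python, nothing more): a single
perceptron learning a trivial recognition task --- answer 1 exactly when
the second input exceeds the first:

import random
random.seed(1)

w, b = [0.0, 0.0], 0.0

def predict(x):
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0

# the right answer is 1 exactly when x2 > x1
points = [(random.random(), random.random()) for _ in range(200)]
labeled = [(x, 1 if x[1] > x[0] else 0) for x in points]

for _ in range(20):            # a few passes of the classic perceptron rule
    for x, target in labeled:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b    += 0.1 * error

right = sum(predict(x) == t for x, t in labeled)
print(right, "of", len(labeled), "correct")  # should get (nearly) all right

The point, of course, is not this little program but the comparison: the
net picks up the rule from examples alone, with no one programming it in.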

Do I "believe" what I've just written? Well, if you press me I'd say that
it's a research project rather than a connected, fully worked out theory.
But Siegelmann and Sontag are, in their abstract way, telling us something
important. We cannot so blithely model every kind of thinking as an 
instance of a Turing computer. Not any more. You might find a better model
if you chose something else.

And a special word to Peter Merel here: at one time you thought I was 
either "off the wall" or completely wrong to suggest that 200 years from
now our notion of a computer would have changed a good deal, and it would
not seem so obvious any more that any animal (or plant?) merely by showing
some ability to think (intelligence?) must necessarily follow only one
pattern. It is results such as those of Siegelmann and Sontag which make
me think what I said before. Tell me what kind of animal a computer is
and I will tell you whether or not our brains work like computers. And
then when we bring in quantum computers, that makes three kinds, too.

			Best and long long life to all,

				Thomas Donaldson
