X-Message-Number: 9326
Subject: Recursion (see Recursion)
Date: Fri, 20 Mar 1998 12:38:16 -0500
From: "Perry E. Metzger" <>

> From: Thomas Donaldson <>
> Date: Fri, 20 Mar 1998 00:28:32 -0800 (PST)
> 
> As for Turing machines etc I have certainly read about them. I will point
> out, as I have already done, that the classical Turing machine with an
> infinite tape can no more exist than could a Siegelmann machine.

You still seem to fail to understand, but that is hardly surprising.

A Turing Machine is a MAXIMAL MODEL.

As with a Carnot Engine, it cannot be built, but it states the
absolute limit on what can be built. It is an interesting model in so
far as, given enough storage, you can get "arbitrarily close" to what
it does. A Carnot Cycle Engine similarly gives interesting data on the
limitations of engines.
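To make the "maximal model" point concrete, here is a minimal sketch
of a Turing machine simulator (the machine, states, and symbols are my
own illustrative choices, not anything from this discussion). The
"infinite" tape is a dict that grows only as cells are visited, which
is exactly how a real, finite computer gets arbitrarily close to the
ideal machine:

```python
# A minimal Turing machine simulator. The "infinite" tape is a dict
# that grows only as cells are actually visited -- a finite computer
# approximating the ideal machine's unbounded storage.

def run_tm(program, tape, state="start", head=0, blank="_",
           max_steps=10_000):
    """program maps (state, symbol) -> (new_state, write_symbol, move)."""
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = program[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return tape

# Example program: append one more "1" to a unary number (successor).
successor = {
    ("start", "1"): ("start", "1", "R"),  # scan right over the 1s
    ("start", "_"): ("halt",  "1", "R"),  # write a 1 at the first blank
}

tape = {i: "1" for i in range(3)}         # unary 3: "111"
result = run_tm(successor, tape)
print("".join(result[i] for i in sorted(result)))  # "1111" -- unary 4
```

Nothing here requires an actual infinite tape; the program only ever
touches finitely many cells, and more storage just means a closer
approximation.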

A Siegelmann Machine is uninteresting in so far as it cannot even be
approached. You can't build a machine with a single infinite precision 
register, let alone a gaggle of them.

A machine "better" than a Turing Machine which cannot be built is no
more interesting than a machine "more efficient" than a Carnot Cycle
Engine. Since you can't even build a Carnot Cycle Engine but can only
approach it, what does it matter that you have another model around
that is somehow "better"?

A Carnot Cycle engine is interesting, though, in so far as you can get
"arbitrarily close" to it in efficiency, so it provides you with an
interesting benchmark. Similarly, the Turing Machine provides an
interesting benchmark to the theoretical computer scientist -- if
something can't be done on a Turing Machine, it can't be done at all,
and thus we can gain some information about the world by studying the
"Maximum Computer".

Sure, you can go out and come up with a theoretical model of
computation that is more powerful -- and as I've noted, people have
played that game a million times in the past. "What happens when you
build a TM out of reals" is an old question, not a new one. The point
is that it isn't interesting in revealing ANYTHING about the real
world.

As I've stated repeatedly, the question here is very real world: can
we build a computer that simulates a human brain? That computer
needn't be a Turing Machine, but if a Turing Machine couldn't do it,
obviously nothing could. It is not interesting, however, to note that
some theoretical construct could be more powerful than a Turing
Machine if a human brain isn't as powerful as that theoretical
construct!

So let's get back to what we were discussing: can a brain be simulated
with a computer?

I've noted, over and over again, that there is no magic in
neurons. You've yet to describe any behavior they engage in that can't
be modeled perfectly well with a modern computer. In ten to twenty
years, our computers should be powerful enough to model the whole
brain if need be.

You've yet to demonstrate to us even a single characteristic of
neurons that can't be modeled. You've made vague hand motions about
"molecules being in any one of an infinite number of positions", but
the fact is that doesn't really impact the operation of a neuron much, 
given that it is large enough that statistical mechanics governs much
of it, and not the position of the individual water molecules in the
protoplasm.

What we've heard are vague claims that somehow neurons perform
calculations impossible to perform on digital computers, but we have
yet to see a single example. You've yet to give us any convincing
argument. What you've done is wave your hands, vigorously.

> I won't quote any authorities here, but a slightly broader
> definition would work better: an algorithm is recursive if it
> repeatedly applies a simple function to the result of previously
> applying the function.

Sorry, but that isn't correct. It isn't even close.

That isn't "recursion".

"Recursion" is the definition of a function in terms of itself, as in: 
`The factorial of N is N times the factorial of N - 1'.

A properly defined recursive function can be translated into a
computable algorithm, although some recursive function definitions are 
infinitely recursive and thus cannot be computed.
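The factorial definition above translates directly into code, and the
contrast with iteration -- repeatedly applying a step to an
accumulated result, which is what the quoted "definition" actually
describes -- is easy to see. A minimal sketch:

```python
def factorial_recursive(n):
    # Recursion: the function is defined in terms of itself,
    # with a base case to keep it from being infinitely recursive.
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # Iteration: repeatedly apply a step to an accumulated result.
    # No self-reference, just a loop.
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print(factorial_recursive(5), factorial_iterative(5))  # 120 120
```

The two compute the same values, but only the first is a recursive
definition; the second is plain iteration.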

> Clearly Mr. Metzger likes stacks and has decided, perhaps with
> authorities behind him, that this is iteration rather than recursion.

No, I've just got an idea of what I'm talking about, and you don't.

> The important point here is that we repeatedly use our result,

That's iteration, Mr. Donaldson, not recursion.

I'm ending this particular branch of the discussion now. Why? Because
it is obvious that your grasp of the topic we are discussing is
sufficiently pathetic that you don't even understand the terminology
of the field.

Imagine having an argument with someone about mathematics, who insists 
that "integration" must mean something to do with having all the even
and odd numbers together in a single function and not segregating
them, or trying to have a discussion with someone about law who
insists that "tort law" is the law of how to bake Linzer torte cakes.

You aren't quite that bad, Mr. Donaldson, but it's bad enough that
having this discussion with you is painful. You believe that you don't 
need to know anything in order to talk about a topic. Well, sadly, you 
do.

The question of Turing Machines is, of course, a side issue. It is one 
where you have vigorously annoyed me largely because I happen to know
something about the topic and found your ignorant prattle about them
wrong on so basic a level that I felt it was necessary to correct
you.

Just to review the progress of this discussion, however:

1) Mr. Donaldson contended that perception was continuous. I noted
that neurons put out pulses. We never heard so much as a concession of
the point.

2) Mr. Donaldson repeatedly attempted to implicitly resurrect the
Cartesian theater. I repeatedly called him on it. The arguments
disappeared, but we never heard a single "well, maybe I'm wrong about
that."

3) Mr. Donaldson has made repeated claims about theoretical computer
science, none of which betray even a single day's worth of study of
the topic, and many of which are painful to read. No apologies are
ever heard for misstatements, inaccurate terminology, or anything
else inaccurate that he has said.

4) Mr. Donaldson has made claims to the effect that you couldn't
simulate a neuron because of "Chaos". I've repeatedly explained why
exactly matching outputs are not necessarily a criterion in this
instance for successful simulation -- I presented a fairly elaborate
argument about why statistically indistinguishable output was
sufficient. Mr. Donaldson stopped mentioning "Chaos" but never replied 
to the point or conceded it. As it was inconvenient, he apparently
decided to simply ignore it. That way he could get people not to
notice that it had been made, I suppose.
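The point can be illustrated with a standard chaotic system (the
logistic map is my example, not one from this discussion): two
trajectories from almost identical starting points diverge completely,
yet their long-run statistics agree.

```python
import statistics

def logistic_orbit(x, steps, r=4.0):
    # The logistic map at r = 4 is fully chaotic: nearby starting
    # points diverge exponentially fast.
    orbit = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.2,        5000)
b = logistic_orbit(0.2 + 1e-9, 5000)

# Point by point, the two orbits disagree wildly after a short time...
print(max(abs(x - y) for x, y in zip(a[100:200], b[100:200])))
# ...but the statistics of the two signals are nearly identical.
print(statistics.mean(a), statistics.mean(b))
```

Exact outputs will never match -- that is what chaos means -- but the
statistical character of the signal is reproduced, which is the
criterion that matters here.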

5) Mr. Donaldson has made repeated claims about how neural networks
are somehow not simulatable by Turing Machines because they somehow
produce non-Turing computable output. I've challenged him to provide
evidence for this claim. We've heard a claim to the effect that there
exists a theoretical model of computation more powerful than Turing
Machines (not really news, as such models were first proposed in the
1950s, and are uninteresting), but repeated requests for some sort of
mapping between this impossible-to-construct model and the human brain
have fallen on deaf ears.

> I wasn't interested in arguing about words or recursion, in any case. The
> point I was originally making was that doing things in sequence, as a 
> model of how the world works, is a very poor model.

You might as well be saying that since calculus is performed with
integral signs, and no integral signs are found in the average glass
of water, you can't calculate the volume of the glass using calculus.

The way that the computer works has nothing to do with the way that
the thing it models works. I've made this point repeatedly, and you
have repeatedly ignored it. I suppose this should be #6:

6) Mr. Donaldson has made repeated claims to the effect that because
computers don't behave like the things they are modeling they can't
model them. I have repeatedly debunked this by noting the lack of any
correspondence between successfully and usefully modeled devices (like
aircraft engines) and computers -- there is no one-to-one
correspondence, and yet the computer models the device just
fine. Mr. Donaldson has never addressed this. He simply repeats his
original claim, over and over, like a mantra.

> As someone with a DE background, and who has done actual research in that
> field, I can hardly fail to notice chaos.

See #4, above, for why you should either address my repeated notes
about why this is a canard or shut up about it. (I suspect you will
simply persist in ignoring what I've said.)

For the benefit of those paying attention, one more time:

Two physically identical neurons will not behave exactly the same
way. Fifty of them will produce a statistical behavior, but not
identical behavior. One neuron will not, no matter how often you try,
produce the same output in response to the same input -- it will only
produce a statistically clustered and characterizable signal.

Given that I couldn't produce an IDENTICAL output to that of a given
neuron with another identical neuron, or even with the same neuron, it
is a bit much to ask that a simulation produce identical output. It is 
also unnecessary. All I have to do is produce something that creates
statistically indistinguishable output.
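As a toy illustration of that criterion (the numbers and the "neuron"
are invented for the sketch, not a real model): treat a neuron's
response to a fixed input as a noisy firing rate, and compare two runs
of the same model. The individual outputs never match; the statistics
do.

```python
import random
import statistics

def noisy_neuron(stimulus, trials, rng):
    # Toy model: firing rate = gain * stimulus plus trial-to-trial
    # noise. The same input never yields the same output twice.
    return [10.0 * stimulus + rng.gauss(0.0, 2.0) for _ in range(trials)]

rng = random.Random(42)
real = noisy_neuron(1.0, 2000, rng)   # "the neuron"
sim  = noisy_neuron(1.0, 2000, rng)   # "the simulation"

# No trial-by-trial match...
print(real[:3], sim[:3])
# ...but means and spreads agree: the criterion is statistical.
print(statistics.mean(real), statistics.mean(sim))
print(statistics.stdev(real), statistics.stdev(sim))
```

Demanding that `real` and `sim` match trial for trial would fail even
for two runs of the same neuron; demanding matching statistics is the
test both can pass.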

Remember, our goal here is to produce something that cannot be
distinguished (except by "opening it up") from a human. It is not
necessary to create IDENTICAL responses to Thomas Donaldson, as Thomas 
Donaldson's instant molecular replica won't produce IDENTICAL
responses, either. It is only necessary to produce ones that are not
DISTINGUISHABLE.

When I place Thomas Donaldson and a brain simulator in a Donaldson
clone (replacing the original brain) into a room with you, if you
can't pick which is which in repeated trials more often than flipping
a coin would, then we've succeeded.

Given this, who cares about all this "Chaos" handwaving, anyway? Sure,
the output signals will be different, but that is not interesting. Our
goal is not IDENTICAL output, it is statistically indistinguishable
output, and the two are not the same. A molecular instaclone of
Donaldson would be indistinguishable but not identical.

> And now we come to neurons. First of all, they show no obvious 
> resemblance to computers, and they are also (individually) much slower.

See #6 above.

The fact that I can refer to arguments by number is making this
discussion boring.

> ... chemical circuits do behave differently from electrical
> circuits. First of all, chemicals diffuse.

See #6, above.

As I have said repeatedly, the fact that there is no isomorphism
between a computer and an automotive bearing does not mean that a
computer cannot simulate the bearing.

The brain is NOT a computer. It does not function like a computer, it
contains no CPU, RAM, etc. I have said this repeatedly. You have of
course ignored it. I also explained carefully that no one was claiming 
that a brain was a computer, people were making the entirely different 
claim that it could be SIMULATED by a computer.

> Since neurons do signal electrically, the fact that they also use
> different biochemicals to do so suggests to me that evolution has
> found that to be superior to using only electrical signals.

I suspect the fact that you can't build an electrical circuit out of
cells had more to do with it.

Your argument is rather silly. I mean, a diamondoid matrix would be a
far stronger and more durable material for bone than calcium compound
deposits. We are not built from diamondoid, not because diamondoids
are impossible or don't exist, but simply because it isn't a natural
material for biological systems to build.

Your argument is tantamount to "wheels must be inferior to legs,
because animals have legs, not wheels".

It is also more or less on the level of "space travel must not be a
good thing, because otherwise nature would have evolved creatures
capable of space travel using only their own biological mechanisms."

> This difference alone tells me that you'll
> have to use much more complex electrical circuits to emulate a
> neuron than you would if neurons were simple electrical processors.

Comparing the complexity of a neuron to that of a computer is like
comparing the aesthetic qualities of a book to that of a
painting. They are incommensurate. You cannot compare the two. Your
statement is thus devoid of semantic content.

> Furthermore, even neurons show self repair.

So?

This is utterly idiotic.

"Humans will never be able to make a robot that walks like an insect,
because insect legs can repair themselves."

(BTW, people at CMU have already done it.)

I've said this repeatedly, btw.

I'd make it #7, viz.:

7) Mr. Donaldson repeatedly handwaves about "self repair", and in message
after message I have said "what does that have to do with anything",
but thus far he has never answered.

> the fact that neuroscientists are still studying behavior of
> individual neurons suggests that we cannot simply decide that they
> can be realistically modelled by some simple set of electrical
> devices.

Let's say it is a complicated set of computer programs. Big deal. We
know enough about them already to know the general outline of how they
work, and it isn't magical. It is, in fact, fairly comprehensible, and
there is no evidence to date of anything we can't model to within the
tolerances I specify in point 4).

"Can we go home now, ma?"

> Sure, we could
> no doubt write a program which would, for a while, act like a neuron.
> It's far from obvious that this program could emulate a neuron for
> a long time (chaos again: it's turned out that only the simplest
> phenomena don't show chaos). 

See #4.

You really are boring.

> Furthermore, an argument that we can use digital devices to emulate
> ANYTHING requires more than just the observation that (say) it
> occurs in a discrete set of states.

Nope. Something doesn't need to be in a discrete set of states EVER to 
be modeled successfully.

See #4. Also see #6, since it appears you are still in the "the thing
modeled must be like a computer" fallacy.
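For instance, a system that is never in a discrete set of states --
say, a quantity decaying in continuous time -- can still be modeled
digitally to any desired tolerance just by shrinking the timestep. A
minimal sketch (the system and step sizes are my own choices):

```python
import math

def simulate_decay(x0, t_end, dt):
    # Euler integration of the continuous system dx/dt = -x.
    # The system's state is continuous; only our *model* of it
    # advances in discrete steps.
    x, t = x0, 0.0
    while t < t_end:
        x += -x * dt
        t += dt
    return x

exact = math.exp(-1.0)              # closed-form solution at t = 1
for dt in (0.1, 0.01, 0.001):
    approx = simulate_decay(1.0, 1.0, dt)
    print(dt, abs(approx - exact))  # error shrinks with the step size
```

The thing modeled never occupies discrete states; the discreteness is
entirely in the model, and it costs only accuracy that can be bought
back with smaller steps.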

[Speaking of digital images...]
> Yes, we could be fooled by such a scene.

You claimed we COULDN'T. How odd that you've changed your tune.

> But those oodles of pixels start to become a severe computational
> burden if the scene ceases to be static and starts to move.

I see you haven't been at SIGGRAPH lately.

I also see you haven't been to the movies lately, either. See
"Jurrasic Park" as an example of what was possible years ago. We've
gotten better since. We continue to get better. MUCH better.

(In fact, years ago when "Toy Story" was made, there was considerable
concern that the models not look too realistic so that they had an
"animated" look, and substantial work went in to making the output
look fake. I am not kidding you.)

> As for counterexamples, they remain important. The use of a counterexample
> is that it shows that one way of thinking has a problem in it. And a 
> counterexample providing a machine (not a Turing machine) capable of doing
> calculations which Turing machines cannot,

Mr. Donaldson, as I've noted, theoretical machines more powerful than
a Turing Machine do not constitute a "counterexample" to the
Church-Turing Thesis. The first such machines were thought about a
good forty or forty-five years ago. Please figure out WHY people
care what a theoretical construct like a TM can do before you bother
us with "let's suppose we made a computer that stored a bezier spline
in each slot instead of a finite symbol" crud. See the beginning of
this message.

Perry
