X-Message-Number: 11887
From: Thomas Donaldson <>
Subject: For Bob Ettinger and others: computers, symbols, and reality
Date: Sat, 5 Jun 1999 00:24:40 +1000 (EST)

Hi to everyone!

About computers and symbolic computation: one subtle point is easy to
forget. No, the lowest level operations of our brain are not symbolic
in any meaningful sense... although we certainly operate with symbols when
we think (most people think in a mixture of images and words).

The key here is the definition of "symbol". A symbol is an ARBITRARY 
thing or event which is associated with some other real thing or event.
When our brain works at the lowest level (i.e. before we get into language)
it is not working with symbols because the brain events are not arbitrary.
Our DNA, for instance, is not symbolic. It consists of a string of 
biochemicals with a relationship to other systems defined by chemistry,
and that relationship does not hold for arbitrary sets of biochemicals.
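
To see the arbitrariness concretely, here is a toy sketch (in Python,
purely my illustration): two completely different token sets carry
the same fact equally well, because nothing in the tokens themselves
ties them to their referents; the tie is imposed entirely from
outside.

    # Purely illustrative: a symbol is an arbitrary token.
    facts_english   = {"dog": "four-legged animal that barks"}
    facts_arbitrary = {"qx7": "four-legged animal that barks"}

    # Either token system "works" identically; only an outside
    # interpreter (us) connects either token to real dogs.
    assert facts_english["dog"] == facts_arbitrary["qx7"]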

As for our thinking, I distinguish the thinking of which we are
conscious from that which
actually goes on --- and sometimes finishes even before we become
conscious of it. After all, consciousness is sequential and our
brain is very highly parallel --- more parallel than any computer yet
built. A lot of our thinking about visual scenes and even relations
between things, animals, and people does not happen on a symbolic
level. We simply have systems which respond to events because of their
structure, not their components ... and that structure most certainly 
cannot be arbitrary. And if we think about how language REALLY works,
ultimately there is an association of a symbol with a thing, event, etc.,
which is NOT learned simply through a set of definitions of symbols.

This is why neural nets provide a much better foundation for "intelligent"
(and perhaps even conscious) robots than do ordinary computers. Ordinary
computers merely operate with symbols; we human beings, using those 
computers, decide on what their results mean. An ordinary computer
cannot do this, no matter how powerful. Neural nets, however, come much
closer to linking their own operations to events in the world in a way
which is NOT symbolic. And yes, because ordinary
computers can be programmed to tell us things in ordinary language it is
easy to think that their language works the same way as ours --- which is
quite false.
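
To make the contrast concrete, here is a minimal sketch (a toy
one-neuron net in Python, with made-up "sensor readings"; my
illustration, not anyone's real design): its response is fixed by
continuous weights shaped by examples from the world, not by the
identity of any token.

    # A one-neuron "net": its behavior comes from weights tuned
    # against readings from the world, not from symbol definitions.
    def train(samples, labels, epochs=20, lr=0.1):
        w = [0.0] * len(samples[0])
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = y - out
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    # Hypothetical light-level readings, not tokens with definitions:
    readings = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
    labels   = [0, 1, 0, 1]
    w, b = train(readings, labels)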

The one issue which Searle does not discuss, and which remains an open
question, is the one I've discussed before: Can a symbolic system be 
sufficiently involved and complex that it can have ONLY ONE possible
interpretation in the world? Basically that would make the symbolic
system NOT arbitrary. I personally strongly doubt that. But to
prove that this is possible or impossible requires much more than the kind
of handwaving that normally happens --- and has not yet been done
either way.
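
One toy way to see what is at stake (my illustration, in Python):
the same formal rule table is satisfied by two quite different
readings of its symbols, so the rules alone do not pin down what
the symbols are about.

    # One rule table, two interpretations that both satisfy it.
    table = {("a","a"): "a", ("a","b"): "b",
             ("b","a"): "b", ("b","b"): "a"}

    int1 = {"a": 0, "b": 1}     # reading 1: addition mod 2
    int2 = {"a": 1, "b": -1}    # reading 2: multiplication of signs
    for (x, y), z in table.items():
        assert (int1[x] + int1[y]) % 2 == int1[z]
        assert int2[x] * int2[y] == int2[z]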

Basically, I am saying that Searle had the beginning of a key point, and
the more we learn about how brains actually work, the stronger that point
becomes.
For those who want devices which don't work like ordinary computers, but
which ARE intelligent, this simply means that they should not use ordinary
computers to make their devices. (Even with lots of add-ons they will 
still fall down, basically because NO system of symbols can ever come
close to the real world.) And if we want these devices to approach
the abilities of brains we may even have to use neural nets unlike those
now used: NNs that grow and change constantly, even at the level of adding
new neurons (a side comment: our brains probably make new neurons, just
like those of many other animals, in two regions AT A MINIMUM: the dentate
gyrus of our hippocampus and our olfactory system. The new
neurons come not by division of old neurons, but from stem cells which
grow into neurons).
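
Here is a sketch of what I mean by a growing net (a Python toy with
simple threshold units; my illustration, and real designs would
differ): the network can literally gain a neuron after it is built.

    import random

    class GrowingLayer:
        def __init__(self, n_inputs):
            self.n_inputs = n_inputs
            self.units = []              # each unit is a weight vector
            self.add_unit()

        def add_unit(self):
            self.units.append([random.uniform(-1, 1)
                               for _ in range(self.n_inputs)])

        def activate(self, x):
            # simple threshold units
            return [1 if sum(w * xi for w, xi in zip(u, x)) > 0 else 0
                    for u in self.units]

    layer = GrowingLayer(n_inputs=3)
    layer.add_unit()                     # the net literally gains a neuron
    print(layer.activate([0.5, -0.2, 0.9]))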

And for computer people there is an extra kink with neural nets: it's
far from obvious that they can be imitated by Turing machines. The reason
I suggest this is simple: classical Turing machines respond to symbols
on a tape by moving the tape and changing the symbols. Those neural nets
able to learn on their own (not all neural nets can do this) DEPEND on
sensors telling them about the world, which is not a tape to be moved
or written upon. 
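
For contrast, here is the classical machine itself (a standard
textbook construction, written in Python; nothing here is novel):
everything it will ever respond to must already sit on the tape as
a symbol.

    # A classical Turing machine: it sees only the symbol under the
    # head, rewrites it, moves, and changes state.
    def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
        tape = dict(enumerate(tape))     # sparse tape
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            sym = tape.get(head, blank)
            new_sym, move, state = rules[(state, sym)]
            tape[head] = new_sym
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Example: flip 0s and 1s, halting at the first blank.
    rules = {("start", "0"): ("1", "R", "start"),
             ("start", "1"): ("0", "R", "start"),
             ("start", "_"): ("_", "R", "halt")}
    print(run_tm(rules, "0110"))         # -> 1001_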

I expect that some readers of this will answer by providing plans by which
a Turing machine might get inputs (the inputs to a Turing machine, i.e. what
is on the tape at the start, are not normally specified) and copy a neural
net in its behavior. I am raising a suggestion here, not offering any kind
of proof; but I will point out that those inputs to a computer trying to
simulate a neural net will almost inevitably be symbols rather than 
real events (the Turing machine has a tape, not eyes or ears).
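
That last point can be made concrete (an assumed sketch; the
encoding scheme is my own arbitrary choice): before any such
simulation begins, the world must already have been reduced to tape
symbols, and the machine touches only the tokens, never the light.

    # Digitizing a sensor reading in [0, 1) into one tape symbol.
    def digitize(reading, levels=16):
        return format(int(reading * levels), "x")   # one hex digit

    sensor_readings = [0.03, 0.51, 0.97]            # "the world"
    tape = "".join(digitize(r) for r in sensor_readings)
    print(tape)   # "08f": tokens chosen by us, not the events themselves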

			And best and long long life to everyone,

				Thomas Donaldson
