X-Message-Number: 8622
Date: Sun, 21 Sep 1997 09:58:49 -0700
From: Peter Merel <>
Subject: Smoking Rope

I wrote,

>If there are finite bounds on each dimension, then what you have is not in
>NP but P - so what?

Whoops, I meant NP-complete, not NP. P is a subset of NP; the NP-complete
problems are the hardest in NP and, unless P equals NP, lie outside P.
Sorry, it's been a while ...
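
To make the point concrete, here is a toy Python sketch of my own (names
and bounds invented for illustration, nothing from the thread): when every
dimension of a search problem carries a fixed finite bound, exhaustive
search visits a constant-sized space, so the usual combinatorial blow-up
never appears.

from itertools import product

def brute_force_search(bounds, predicate):
    # bounds    -- list of (low, high) integer bounds, one per dimension
    # predicate -- returns True on a satisfying candidate tuple
    for candidate in product(*(range(lo, hi + 1) for lo, hi in bounds)):
        if predicate(candidate):
            return candidate  # first hit in the bounded box
    return None

# With the bounds fixed in advance, the loop size is a constant,
# independent of how the rest of the problem is presented.
print(brute_force_search([(0, 9), (0, 9)], lambda p: p[0] + p[1] == 7))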

Thomas Donaldson raises some interesting points in his latest:

>A neural net LEARNS, it isn't programmed. 

Let's examine this distinction more closely. When I build a "traditional" 
computer system, it may have a persistent store to modify its range of
responses. When I build an NN-program, it does likewise. I've built
pattern-recognition systems that appeared to learn, to an extent, within
a limited domain. Though there were no "neurons" within these systems,
I'd have a hard time proving that there were no mechanisms you'd consider
equivalent to neurons. But NN-programs are also restricted to learning
within limited domains, so I wonder whether there is really a qualitative
difference here and, if so, how it might be characterized.
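
To illustrate what I mean, here is a toy sketch (the names and the store
are invented for the example) of a "learner" with no neurons at all: a
classifier that just reinforces a persistent lookup table from feedback.
Within its narrow domain it adapts its responses, which is all the word
"learns" demands of it.

import json, os

STORE = "patterns.json"   # hypothetical persistent store

def load_store():
    if os.path.exists(STORE):
        with open(STORE) as f:
            return json.load(f)
    return {}

def classify(features, store):
    # Answer with the label most often confirmed for these features.
    counts = store.get(repr(features), {})
    return max(counts, key=counts.get) if counts else None

def reinforce(features, label, store):
    # Record feedback; responses shift as the table accumulates.
    counts = store.setdefault(repr(features), {})
    counts[label] = counts.get(label, 0) + 1
    with open(STORE, "w") as f:
        json.dump(store, f)

store = load_store()
reinforce((1, 0, 1), "edge", store)   # teaching signal
print(classify((1, 0, 1), store))     # now answers "edge"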

>And since our consciousness is quite sequential, it's clear that all 
>the activity in our brain's neural nets goes on outside our consciousness. 

I notice no limitation to sequence in my thinking; I often think by 
juggling components within a mental space, considering many different
assemblages and dynamic arrangements at once. In spoken expression, of
course, I notice sequential limitations, but then it's straightforward
for people to "deserialize" and recollect expressions out of order when 
that suits them.

It may be that you're applying a more rigorous definition of "consciousness"
here than I do; on the other hand, it may be that there are qualitative 
differences between the mechanisms of our different minds. Should we assume
that the mind of Van Gogh and the mind of Picasso bore similarity when 
their paintings don't? That both minds were built of neurons tells us 
no more than that joints and ropes are both built of hemp.

>any computer not equipped with neural nets will fail
>that test: it knows only the verbal definitions of the words it uses, and
>might be trapped by getting it to fail to recognize just what a described
>object or feeling might be, without the tester giving it a name for it.

Are neural nets the only mechanism by which this may be achieved? For that
matter, perhaps neural nets are not an optimal embodiment of this mechanism
at all; certainly it is straightforward to trick them into failures.
The neural nets in your eyes, for example, are vulnerable to all manner
of optical illusions, sleight of hand and trompe l'oeil, reliably failing
to recognize well-described objects. Let's not get hung up on implementations
here; we don't know what the range of vehicles for intelligence might be, and
our other engineering successes strongly suggest that, one day, our
creations will outperform what evolution has afforded us.

>BUT if you consider such a device a computer, then 
>you're very very close to deciding that our brains are also computers,
>whereupon the Turing Test totally loses its meaning. Tell me then just
>what is a computer...)

The accepted definition in CS is that any process that can be emulated 
by a Turing Machine is a computer. There are no known examples of tasks 
that brains can perform that could not, in principle, be performed by a 
Turing Machine, so I should say that the burden of proof is on those who 
suggest that the brain is NOT a computer.
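
That definition is easy to make concrete. Here is a minimal textbook-style
one-tape machine simulator (the flip-the-bits transition table is just a
toy example of mine); anything whose behaviour such a loop can reproduce
counts as a computer under this definition.

def run_tm(tape, transitions, state="start", blank="_"):
    # Standard one-tape Turing Machine: read, look up (state, symbol),
    # write, move the head, repeat until the halt state.
    tape, head = dict(enumerate(tape)), 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Toy table: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_tm("0110", flip))   # -> 1001_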

Peter Merel.
