X-Message-Number: 11749
From: 
Date: Fri, 14 May 1999 13:09:47 EDT
Subject: Once more, with feeling--zombies etc.

Reiterating his case for consciousness in computers, Daniel Crevier writes
(in part):  

>We said that we could replace a piece of the brain by a simulation with
>the same input-output properties. It's pretty hard to argue against
>that, since what happens in any part of the brain are physical processes,
>and a computer can simulate any physical process. Now if we can do it for
>a piece of the brain, we can do it for the whole brain: just do it for
>all the pieces, connect the resulting simulations together, and voilà.


>The resulting simulated brain would therefore have the same input-output
>properties as the real brain. For a brain, the input is what the senses
>tell it, and the output is motor signals determining the person's words
>and deeds. The simulated person would thus react in exactly the same way
>as the real person. In particular, if you ask her whether she is  
>conscious, she will answer yes.    
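
Crevier's composition step can be sketched in code: if each component is
swapped for a stand-in with identical input-output behavior, then wiring the
stand-ins together preserves the input-output behavior of the whole. A
minimal Python sketch (the two "pieces" and the table-driven replacements
are invented for illustration, not a model of any real neural circuit):

```python
# Two "brain pieces" modeled as pure functions of their inputs.
def piece_a(x):
    return x * 2

def piece_b(y):
    return y + 1

def whole(x):
    # The original system: the pieces wired in series.
    return piece_b(piece_a(x))

# Replace each piece with an I/O-equivalent simulation -- here, a lookup
# table built by observing the original piece on every relevant input.
sim_a = {x: piece_a(x) for x in range(10)}
sim_b = {y: piece_b(y) for y in range(20)}

def simulated_whole(x):
    # Connect the simulations exactly as the pieces were connected.
    return sim_b[sim_a[x]]

# The composed simulation matches the original on every observed input.
assert all(whole(x) == simulated_whole(x) for x in range(10))
```

The sketch shows only what the premise grants: input-output behavior is
preserved under composition. Whether anything *beyond* input-output
behavior survives the swap is exactly the disputed point, and is what this
kind of argument cannot settle.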

We have been over this countless times, going back years, but let's try
yet again, since apparently I still fail to convey my points clearly.
Let's separate it into two parts.

1. Could there be "zombies"--systems that, seen from the outside, behave
just like people would, and in particular converse just like people
would, but nevertheless have no consciousness and are not people? If
there could be zombies, would any "paradoxes" or philosophical problems
arise?

Certainly there could be zombies. This is just the Turing Test revisited.
A sufficiently sophisticated scanner-cum-computer could analyze a person
and predict his actions, and then program a robot to follow that script.
To a very limited extent, THIS HAS ALREADY BEEN DONE. Conversational
programs already exist, and fool some of the people some of the time, yet
no one claims that these programs (or the computers in which they run)
are conscious--and making them more elaborate and more accurate will
change nothing in principle.
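
The point about conversational programs can be made concrete with an
ELIZA-style sketch: a handful of pattern-and-template rewrite rules can
sustain a superficially human exchange with no understanding anywhere in
the machinery. (The rules below are invented for illustration.)

```python
import re

# A few rewrite rules: (pattern, response template). Nothing here models
# meaning; the program only matches text and echoes fragments back.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r"are you conscious", "Yes, of course I am conscious."),
]

def reply(utterance):
    text = utterance.lower().strip(" ?!.")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please tell me more."

print(reply("I am worried"))        # Why do you say you are worried?
print(reply("Are you conscious?"))  # Yes, of course I am conscious.
```

Note that the program answers "yes" to the consciousness question by the
same mechanism it answers everything else: string matching against a
canned template.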

No unique philosophical problems arise, only practical ones. Certainly you
might worry, and could err, if you suspect that someone is a zombie--so
what? Either you find some way to decide (see below), or you live with
the uncertainty.

Mr. Crevier's position reminds us of the Behaviorist school in
psychology--remember B.F. Skinner? They thought we had to regard people as
black boxes, with only inputs and outputs as the allowable variables, and
internal structure off limits. This in turn recalls the Positivist school
of philosophy of science, which (roughly) held that, if you can't get an
answer to a question by performing an experiment, then the question is
meaningless. This school, too, is in decline; remember that, at a
particular time in a particular context, what you "can" or cannot do in
principle is generally not at all clear.

(Right now, for example, some people take the view that the mathematical
postulates of quantum mechanics represent the last word, and that it is
meaningless to ask whether those postulates can be explained by, or might
result from, deeper underlying realities, such as hidden variables. And I
might add that the "many worlds" or "multiverse" interpretation is really
just a form of hidden variables, or a metaphor for one possible system of
hidden variables.)

2. What is proven by thought experiments involving gradual replacement of 
brain parts by computer simulations?   

First, I repeat that thought experiments are useful only if the premises
are sound. Mr. Crevier assumes, as a premise, that if brain parts are
gradually removed and replaced by electromagnetic and chemical
inputs/outputs generated by a computer, the subject will notice nothing
at any stage. But this is not obvious at all. If indeed--as I
surmise--consciousness requires time-binding and space-binding activity
in the brain, then the computer could not supply that, and subjectivity
would be lacking. If the person's "self circuit" were removed or shut
off, then the person would lose consciousness.

The key question then becomes: would or could the subject nevertheless
REPORT that he was still conscious, and if so, are we obliged to consider
only that report and nothing else?

If the computer can simulate the details of the self circuit, and if it is
so fast that it doesn't need to be parallel or time-binding or
space-binding, but can produce the same input/output signals as the self
circuit, then we would indeed have a zombie, with consequences discussed
above. The subject would lose consciousness, but would report that he was
still conscious.

But we are NOT reduced to considering the report and nothing else. If a
decoy looks like a duck and walks like a duck and quacks like a duck
(these actually exist), we can still demand to feel its feathers, X-ray
its innards, etc., before admitting that maybe it really is a duck. We
cannot yet do the equivalent for consciousness in a person, because we do
not yet know the anatomy/physiology of consciousness. But when we do
learn the innards of consciousness in mammals, we will be justified in
skepticism about the consciousness of an artifact that lacks those
features.

Another reminder: if a computer can be conscious, so can a book. If
isomorphism is everything--and this is indeed the real, basic premise of
Mr. Crevier's school of thought--then the principle of isomorphism should
apply to time as well as to space and matter. If nothing is essential
except the eventual generation of the right sets of numbers, then the
pages of a book can correspond to the successive states of a Turing
computer, and therefore to a living person. I think this is a valid
reductio ad absurdum.
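
The book analogy can be made precise: a Turing machine's run is just a
sequence of configurations, and nothing stops us from printing each
configuration on a page. A minimal sketch (the machine below, a unary
incrementer, is an arbitrary example chosen for illustration):

```python
# A tiny Turing machine that appends one '1' to a unary number, with
# every configuration recorded as a "page" -- one page per step.

# transitions: (state, symbol) -> (new_state, symbol_to_write, head_move)
TRANS = {
    ("scan", "1"): ("scan", "1", +1),   # walk right over the 1s
    ("scan", "_"): ("done", "1", 0),    # write a 1 at the end, halt
}

def run(tape_str):
    tape = list(tape_str) + ["_"]       # blank cell at the right end
    state, head, pages = "scan", 0, []
    while state != "done":
        pages.append(f"state={state} head={head} tape={''.join(tape)}")
        state, write, move = TRANS[(state, tape[head])]
        tape[head] = write
        head += move
    pages.append(f"state={state} head={head} tape={''.join(tape)}")
    return pages

# Each "page" is a complete machine configuration; the bound sequence of
# pages is step-for-step isomorphic to the computation itself.
for page in run("111"):
    print(page)
```

If eventual generation of the right state sequence is all that matters,
the printed stack of pages carries the same isomorphism as the running
machine--which is exactly the absurdity being pointed to.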

Finally, I said there are no "unique" philosophical problems arising from
the "zombie" question. But there are still the usual problems of
duplication and continuity, which also exist, for example, with a
beam-me-up machine; these are still open questions, and anyone who thinks
he knows the answers is kidding himself.

And finally-finally, in the face of the uncertainties, the common-sense
course of action is to try to have ourselves cryopreserved if we die,
with minimal change, and not be side-tracked by the much more remote
possibilities of uploading.

Robert Ettinger
Cryonics Institute
Immortalist Society
http://www.cryonics.org
