X-Message-Number: 11724
Date: Mon, 10 May 1999 18:49:36 -0400
From: Daniel Crevier <>
Subject: thought experiment on uploading.

To Thomas Donaldson: thanks for answering my post, even if my name 
is neither David nor Robert :). You write:  

>One quite critical issue which your little story slides over is that of
>just how well we will ever be able to simulate a world. I am doubting
>that our simulations will do very well; sure, I believe in using them 
>for various kinds of training, etc --- but the idea of living in a
>simulation, given the weakness of such simulations compared to the 
>real world, seems quite ridiculous.

The question we are addressing is: can a mind simulated in a computer
be conscious? Now, I can look at a lousy simulation of the world, and
be conscious of it. I believe the answer to the question lies much more 
in how well you simulate the mind than in how you simulate the
world. Likewise, I believe that your objection about SHRDLU not
getting around in the real world misses the point. If SHRDLU had
very little consciousness, which I readily concede, it was because its
internal structure was too simplistic, not because of the quality of 
the world simulation it lived in.  

To Robert Ettinger: I had asked what you would think of 
the thought experiment if the circuits emulating the brains were
replaced by a serial computer. You replied 

>This is not completely clear, but I take it to mean that, instead of
>the  robot surgeon gradually replacing brain parts with inorganic 
>substitutes, the robot removes the brain parts and at the input and 
>output ends sends signals from the remaining brain to the computer and
>from the computer to the remaining brain.

Right. However, the brain parts wouldn't be taken away before the
subject had had a chance to evaluate the quality of the simulation.
Further, as the process continues, the terminals of the simulation
would move inwards in the brain, so that the original simulated
piece would become part of a larger simulation.
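The logic of this gradual procedure can be sketched in code. This is my own toy illustration, not anything from the original thought experiment: the brain is modeled as a list of regions, the interface between tissue and simulation moves inward one region per step, and each replacement is accepted only after a behavioral check passes (here the check is a placeholder that always succeeds).

```python
# Toy model of the gradual-replacement procedure described above.
# "behaves_identically" stands in for the subject's own verification
# that the simulated region responds exactly as the original did.

def behaves_identically(region_index):
    # Placeholder check; assumed to pass in this illustration.
    return True

def gradual_replacement(n_regions):
    brain = ["biological"] * n_regions
    for boundary in range(n_regions):        # terminals move inward
        if not behaves_identically(boundary):
            raise RuntimeError("simulation rejected at region %d" % boundary)
        brain[boundary] = "simulated"        # earlier pieces join a larger simulation
    return brain

print(gradual_replacement(5))
```

At every step the already-simulated pieces sit behind the new boundary, so each accepted replacement enlarges one continuous simulation rather than creating disconnected islands.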

>Well, first of all, the signals in the brain are not all 
>electronic; some are chemical, and the computer cannot produce chemical 
>signals except indirectly, which would require an ersatz brain part 
>after all. 

Well, this objection also applies to the electronic circuit version
of the thought experiment. So by that reckoning the resulting circuit 
would not be an emulation after all. 

But in fact the presence of neurotransmitters in the brain does not 
affect my point. What does matter, I believe, is that the
simulating circuit, or program, be able to behave, from the outside, 
like a web of neurons using neurotransmitters to influence
each other.  That should be possible because the neurons are 
physical objects that a computer can simulate. In order to preserve 
the gradual nature of the experiment, we can assume that at any stage,
the terminals between the rest of the brain and the simulation are 
able to generate neurotransmitters, so that the simulation can 
communicate with the rest of the brain. This kind of interfacing is, 
I believe, an ongoing research topic in neuroscience.
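To make the "web of neurons influencing each other" point concrete, here is a minimal leaky integrate-and-fire sketch. It is my own illustration, not a model from the post, and the parameter values are arbitrary: each neuron's membrane potential decays toward rest, and a spike delivers a weighted "transmitter" nudge to its neighbors.

```python
# Minimal leaky integrate-and-fire step: spiking neurons influence
# their neighbors through synaptic weights, playing the role that
# neurotransmitter release plays in a biological web of neurons.

def step(potentials, weights, inputs, leak=0.9, threshold=1.0):
    spikes = [p >= threshold for p in potentials]
    new = []
    for i, p in enumerate(potentials):
        if spikes[i]:
            p = 0.0                       # reset after firing
        p = leak * p + inputs[i]          # passive decay plus external drive
        # synaptic influence from every neighbor that just spiked
        p += sum(weights[j][i] for j, s in enumerate(spikes) if s)
        new.append(p)
    return new, spikes

# Two neurons; neuron 0 excites neuron 1 with weight 0.8.
w = [[0.0, 0.8], [0.0, 0.0]]
v = [1.2, 0.0]                            # neuron 0 starts above threshold
v, fired = step(v, w, [0.0, 0.0])
print(fired, v)                           # neuron 0 fires, neuron 1 is nudged
```

Nothing here depends on the substrate being biological: the same update rule runs whether the state lives in tissue or in a register, which is the point about neurons being physical objects a computer can simulate.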
 
>More generally, if in the end nothing is left but a computer, it
>probably fails because it cannot bind time and space the way a 
>physical brain can. The "information paradigm" is only a conjecture,
>not a proven principle. 

Here you are rejecting my conclusion on general principles: the 
simulation has to fail because it's a computer. But we said that at
any stage of the experiment, the subject could verify the integrity of
his/her consciousness. In this case, how could he/she be unconscious at
the end? In order to justify that conclusion, you have to assume that
consciousness is gradually lost during  the experiment. Yet,
if at every step the added simulation performs exactly the same 
functions as the removed brain material, and if the subject therefore
reacts exactly as before, how can that be?

Daniel Crevier
