X-Message-Number: 11713
From: 
Date: Sun, 9 May 1999 14:42:58 EDT
Subject: proper posts; thought experiments

First, another word about the propriety of posting messages of doubtful 
interest or marginal relevance. The justification is, first, that no one is 
compelled to read anything; culling and skimming are expected. Second, such 
questions as the correct criteria of survival, including uploading and 
emulation etc., do indeed pertain directly to the main interest of most 
readers here, namely personal survival or life extension. Not everyone wants 
to discuss "impractical" things, but those interested in the "big picture" 
are far from uncommon here.

Now a couple of brief comments on recent posts:

Daniel Crevier reminds us of the thought experiment in which brain parts are 
gradually replaced with ersatz ones, and the brain interacts either with the 
real world or with a virtual reality generator. He said that after 
substitution is complete, you would be a simulation. I replied that--if it 
worked or appeared to work--you would be a functional duplicate or analog, 
not a simulation. He then said:

>To Robert Ettinger: You replied in message #11656 that building 
>circuits in this way doesn't count because it constitutes a duplication
>and not a simulation. What would you say if, instead of building
>circuits, the robot installed terminals connected to a serial computer
>that simulates the circuits? In this way, the brain would end up as
>a numerical simulation in the computer: it would be software, not hard-
>ware. Would you still consider this a duplication?

This is not completely clear, but I take it to mean that, instead of the 
robot surgeon gradually replacing brain parts with inorganic substitutes, the 
robot removes the brain parts and, at the severed input and output ends, 
routes signals from the remaining brain to the computer and from the 
computer back to the remaining brain. Well, first of all, the signals in the 
brain are not all electrical; some are chemical, and the computer cannot 
produce chemical signals except indirectly, which would require an ersatz 
brain part after all. 

More generally, if in the end nothing is left but a computer, it probably 
fails because it cannot bind time and space the way a physical brain can. The 
"information paradigm" is only a conjecture, not a proven principle. 

In another post, Dr. Crevier writes:

>The classical
>position in philosophy holds that consciousness implies the ability to
>represent and reason about one's own mental states. For example, a
>conscious being should be able to explain the reasons for its actions.
>A classical example of this in computer science is the program SHRDLU,
>written by Patrick Winston [actually Terry Winograd] at MIT in the early 
>1970's. It manipulated simulated geometric objects at the request of a 
>human user, and could answer questions about its motivations, 

No, the essence of consciousness is not in representation or in reasoning--it 
is in feeling, qualia, subjectivity. A dog cannot reason about its mental 
states, but it is certainly conscious. Conversation is not the criterion, and 
passing the Turing Test is neither necessary nor sufficient.

Robert Ettinger
Cryonics Institute
Immortalist Society
http://www.cryonics.org