X-Message-Number: 14675
Date: Thu, 12 Oct 2000 13:09:53 +0200
From: Henri Kluytmans <>
Subject: Re: sentience in other media

David Pizer wrote :

>One thing that (I believe) most serious cryonicists on this forum are
>concerned about is not only what *is* sentience, but how can one be sure
>that if thoughts and other things are someday transferred to a new device (a
>new brain, a computer or whatever) and then the original brain is destroyed,
<snip>
>has the original person really survived, or is there just this new
>person-thing with the original person's memories? 

It seems logical that sane people will only have their minds transferred 
to a new device once the method of transfer has already been "proven" 
to work. I suppose every sane, rational person would otherwise object.

(By the way, it doesn't seem likely to me that frozen people will 
be chosen and used as test subjects for the development of such a 
procedure.)

Any technology of mind transfer, or uploading, will be developed 
gradually. First it will be tested on simple organisms, then on 
progressively more complex organisms, right up to the primates. 
Only after that will it be tried on human minds. 

Success will probably be determined by comparing the behavior 
of the uploaded mind with that of the original.

Let's presume that a non-destructive method of mind-scanning will 
be available. (This seems especially likely when probes created 
by a mature molecular nanotechnology can be used.) 

Then the original person can communicate with the uploaded 
version to establish whether it is an exact functional copy 
of the original person's mind. For me personally, this 
criterion would be sufficient "proof" of success. 

That is, in a conversation with a copy of myself, I think I would 
be able to determine whether that copy is a sufficiently exact 
functional copy of myself, because I can ask questions (and know 
the answers) that only I, or an exact copy, would be able to 
answer correctly.
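
As a toy illustration (just a sketch: the questions, the answers, 
and the ask_copy() stand-in for the actual conversation are all 
invented), such a test could be scored like this:

# Sketch: score how well a copy answers private questions that only
# the original (or an exact functional copy) should know.
def identity_score(private_qa, ask_copy):
    """Return the fraction of private questions answered correctly."""
    correct = 0
    for question, expected in private_qa:
        if ask_copy(question) == expected:
            correct = correct + 1
    return float(correct) / len(private_qa)

# Invented example data; ask_copy stands in for talking to the copy.
private_qa = [
    ("What did I hide in the attic in 1987?", "a tin box"),
    ("What was my private nickname for my first car?", "the tank"),
]
print(identity_score(private_qa, dict(private_qa).get))  # 1.0 for a perfect copy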

Now, in discussions like this, the "copy-paradox" problem 
always seems to pop up, because both the original and the 
uploaded version are "running".

However, for me, using an informational viewpoint of identity, 
there seems to be no paradox!  When non-destructively uploaded, 
my copy and my original both have the same identity. At least 
they do at the point in time when the transfer has been completed; 
after this, the two copies will diverge. (I'm neglecting the time 
the transfer itself takes.)

If you look at this identity problem from an information point 
of view, then suddenly the whole "paradox" vanishes. It becomes 
just as silly a question as asking, "Which program is the real 
one: the one on the original diskette, or the one copied onto 
the other diskette?"
(Ehh, ... this is just a philosophical analogy.)
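
To stretch the diskette analogy a little further: in software terms 
the two copies are simply bit-for-bit identical, which a trivial 
sketch (with made-up file names and contents) can show:

# Sketch: after copying, "original" and "copy" are bit-for-bit
# identical, so asking which one is "the real program" is meaningless.
import hashlib
import shutil

with open("program.bin", "wb") as f:   # made-up stand-in for a program
    f.write(b"\x01\x02\x03\x04")
shutil.copyfile("program.bin", "program_copy.bin")

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

print(file_hash("program.bin") == file_hash("program_copy.bin"))  # True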

Furthermore, using an informational viewpoint of identity, 
the question of identity doesn't need to be answered with a 
yes or no, but can have a fractional answer. The whole 
issue of identity seems to be a gradual one, anyhow.

Everyone has to decide for himself or herself how much loss 
of identity is acceptable. 

For example, if I were involved in a car accident, fell into 
a coma, and awoke having lost the last four days of memories, 
I would not consider that a big deal.

Losing the last 5 years of memories would be rather 
more dramatic. :(
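
One possible way (purely illustrative, and certainly not the only 
measure) to express such a fractional answer is as the overlap 
between two sets of remembered facts:

# Sketch: identity as a fraction rather than a yes/no, measured here
# as the overlap between two sets of memories (the Jaccard index).
def identity_fraction(memories_a, memories_b):
    shared = len(memories_a & memories_b)
    total = len(memories_a | memories_b)
    return float(shared) / total

before = {"childhood home", "first job", "the last four days"}
after_coma = before - {"the last four days"}  # memories lost in the coma
print(identity_fraction(before, after_coma))  # ~0.67, not a big deal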

>I feel that if you save everything and if the future animators 

I presume that by *everything* you mean all the matter (i.e. all 
the original atoms and molecules, and the molecular structure 
of the frozen brain).

>reanimate the original stuff - ALL the original neurons
>and original connections in place, then the original person 
>probably has survived.  

Hmm, I'm curious what your answer will be... Let's consider 
some hypothetical scenarios. Your body is frozen (in these 
hypothetical examples the vitrification is perfect, so no 
repairs will be necessary). 

Case 1:

Your body is taken apart atom by atom. All the atoms 
are labeled when they are stored away. The location 
of every atom is stored in a database. Then the body 
is built up again, atom by atom, to its original state. 
Every original atom is put back in its old place. The body 
is reanimated.

Would you mind?

Case 2:

Your body is taken apart atom by atom. The atoms 
are not labeled when they are stored away. Only the locations 
of every type of atom are stored in a database. Then 
the body is built up again, atom by atom, using the same 
bunch of atoms, but atoms of the same type are thus 
interchanged. (According to physics, however, this doesn't 
matter: atoms of the same type are indistinguishable.) The 
body is reanimated.

Would you mind?
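
The physical point behind Case 2 can be made concrete with a small 
sketch (the atom types and coordinates are invented): a structure 
recorded as a set of (type, position) entries is unchanged when two 
atoms of the same type trade places.

# Sketch: a structure described by (atom type, position) entries.
# Swapping atoms of the same type yields the very same description,
# which is why Case 2 is physically equivalent to Case 1.
original = {("H", (0.0, 0.0, 0.0)),
            ("H", (1.0, 0.0, 0.0)),
            ("O", (0.5, 0.8, 0.0))}

rebuilt = {("H", (1.0, 0.0, 0.0)),   # the two hydrogens have
           ("H", (0.0, 0.0, 0.0)),   # been interchanged
           ("O", (0.5, 0.8, 0.0))}

print(original == rebuilt)  # True: no physical difference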

===================

I wrote :

>Assuming that the fundamental functional basis of our brain/mind is 
>only the neural network (let's neglect details like the hormonal 
>system, etc.), then it's sufficient to understand exactly how the 
>building blocks (i.e. the neurons) work. In principle it's then 
>sufficient to replace all the building blocks with artificial 
>functional equivalents and keep the same interconnection 
>structure. So in this case a sentience could be transferred 
>without knowing how it works.

David Pizer replied :

>It seems to me that to try to define sentience is to ask "what it is" and,
>just as important, "who it is."  In other words, sentience may be a general
>thing, and/or, perhaps, each sentience is only one specific thing. 

What I was trying to make clear is that it's not required to 
understand how sentience works in order to be able to create 
sentience in an artificial medium.
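
A toy sketch of that idea (the "neurons", wiring, and threshold are 
all invented for illustration): replace every building block with a 
functionally equivalent implementation, keep the wiring, and the 
network as a whole behaves the same, without anyone analyzing what 
the network computes.

# Sketch: swap every building block for a functional equivalent while
# keeping the interconnection structure; overall behavior is preserved
# without understanding what the network as a whole computes.
def bio_neuron(inputs):          # the "biological" building block
    return 1 if sum(inputs) >= 2 else 0

def artificial_neuron(inputs):   # a functionally equivalent replacement
    total = 0
    for x in inputs:
        total += x
    return int(total >= 2)

wiring = {"c": ["a", "b", "a"]}  # node "c" listens to nodes "a", "b", "a"
sources = {"a": 1, "b": 0}

def run(neuron, wiring, sources):
    return dict((node, neuron([sources[i] for i in ins]))
                for node, ins in wiring.items())

print(run(bio_neuron, wiring, sources) ==
      run(artificial_neuron, wiring, sources))  # True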

>So by using the word "replace" above it seems that we are assuming that
>replacing things from the original into a new entity recreates the *same*
>original person.  If that were true (and I grant that it may be ...but..)
>then if this replacement process was done somehow without destroying the
>original and the original was sitting there looking at, and talking with,
>the duplicate, I think the original, at least, would not agree that he/she
>was the other person (the duplicate).  

Hmm, this is the "copy-paradox". However, as you inferred, I would 
agree that the other person would have the same identity. 

>>Just like a programmer can write an emulator for another computer 
>>system. He can then run any software written for that computer 
>>system on the emulator. And he doesn't need to understand how those 
>>programs function. The same should hold for the biological "computing 
>>system" the human mind "runs" in.

>I am not comfortable with computer analogies that explain human behavior
>because we don't have computers (yet?) that are exactly like people. 

Hmm, that statement did not use computers as an analogy for 
the human brain. It was an analogy for the issue of 
transferring an (information) process to another device without 
understanding the process itself.

>>The chief basis of sentience is FEELING or qualia, 
>>Is not feeling just an information process...

>Maybe.  If it is an information process, my guess is that the "feeling"
>information process is very different from the "memory" information
>process.  

OK

>Isn't there consensus among neuro-scientists that all the aspects of 
>our mind emerge only from the functioning of the neural networks 
>inside our brain?

>Concerning the statement: "... all the aspects of *our* mind ... inside
>*our* brain."  That's why I have doubts that if a future scientist were to
>put aspects of *my* mind into a duplicate's brain the duplicate would be
>me.  One of the things that make my mind mine, may not just be sentience,
>but the particular sentience of my one and only brain.

I will continue on this issue once I know your answers to my 
two hypothetical scenarios. :-)

Grtz,
Hkl
