X-Message-Number: 14647
Date: Mon, 09 Oct 2000 17:08:35 -0500
From: david pizer <>
Subject: Paul, don't read this. :=)

Very interesting thinking.  Some additional questions from David Pizer.
Mr. Kluytmans, you are obviously very bright and have some ideas worth
exploring.  I hope you (and others) will enjoy some questions and comments
and reflect more on this subject.  

In replying to Ettinger, Henri Kluytmans said:

snip

>However probably we will be able to "prove" it in the future, even 
>if we still dont understand what sentience is exactly.

One thing that (I believe) most serious cryonicists on this forum are
concerned about is not only what *is* sentience, but how one can be sure
that if thoughts and other things are someday transferred to a new device (a
new brain, a computer, or whatever) and the original brain is then destroyed
(perhaps by the very process of reanimating the frozen person, for instance
if the technique chosen, now talked about a lot, is to take the old brain
apart while taking measurements at the atomic level and then construct a
new thing that is supposed to have the same memory), whether the original
person IS now the new thing. To put it more specifically: has the original
person really survived, or is there just this new person-thing with the
original person's memories? 

>Of course, presumed that a consensus can be reached about a 
>definition for the presence of sentience...

The main life-or-death question of personal survival for people trying to
survive through cryonics, if they are allowed to instruct people of the
future as to what types of possible reanimation techniques they will accept
and what types they don't want used on them, is related to this. (I think a
cryonicist could put certain instructions in their file now.  Most of us
haven't done this because we don't know enough yet to have formed an opinion
on which potential future reanimation or duplication techniques
would NOT count as survival.  I feel that if you save everything and the
future reanimators reanimate the original stuff - ALL the original neurons
and original connections in place - then the original person probably has
survived.  Any technique less than this is questionable.  But that type of
"save it all - reanimate all the exact original stuff" procedure may never
be available, so we have to choose from what is/will be available.)

>Assuming that the fundamental functional basis of our brain/mind is 
>only the neural network (lets neglect details like the hormonal 
>system, etc..), then its sufficient to understand how the building 
>blocks (i.e. the neurons) work exactly. In principle its then 
>sufficient to replace all the building blocks by articifical 
>functional equivalents and keep the same interconnectional 
>structure. So in this case a sentience could be transferred 
>without knowing how it works.

It seems to me that to try to define sentience is to ask "what it is" and,
just as important, "who it is."  In other words, sentience may be a general
thing, and/or, perhaps, each sentience is only one specific thing. 

So by using the word "replace" above, it seems that we are assuming that
replacing things from the original in a new entity recreates the *same*
original person.  If that were true (and I grant that it may be ...but..),
then if this replacement process were somehow done without destroying the
original, and the original was sitting there looking at, and talking with,
the duplicate, I think the original, at least, would not agree that he/she
was the other person (the duplicate).  If this last example is true, then
we may be begging the question to use the word "replace" in such a fashion.

>Just like a programmer can write an emulator for an other computer 
>system. He then can run any software written for that computer 
>system on the emulator. And he doesnt need to understand how those 
>programs function. The same should hold for the biological "computing 
>system" the human mind "runs" in.

I am not comfortable with computer analogies that explain human behavior,
because we don't have computers (yet?) that are exactly like people. So we
may not really *know* whether what happens in any present computer is
anything at all like what happens in a human mind.  On the other hand,
computer analogies (I have to use them too) are the closest thing we may
have to use for now, until we can rise beyond inductive logic and find some
deductive evidence to explain our minds and how to have them survive death.

>Another option could be that artificial brains with sentience 
>can be evolved, and sub-systems constructed of neural nets 
>can be trained. 
>So its definitely not required to know what sentience is to 
>be able to create one. After all, didn't nature/evolution 
>create one without such understanding...

Again, this seems true only if sentience is not related to individuality in
some fundamental sense and you are trying just (:=)) to create a new human
being.  But if one is trying to move himself/herself from his/her original
brain into some other vehicle, then it seems that sentience has everything
to do with being a particular person, a particular sentient being. 

>>The chief basis of sentience is FEELING or qualia, 
>Is not feeling just an information process...

Maybe.  If it is an information process, my guess is that the "feeling"
information process is very different from the "memory" information
process.  I don't have enough information to explain that as well as I
should in order to make this comment.  But I don't think the information
needed to fully explain the subject exists anywhere yet, so we explain the
best we can with the limited knowledge we have.   In any case, it seems
that memory is what makes you feel that you are you when you do "feel"; and
*whatever/whoever* is "feeling" the memory is what actually makes you you.

>Isn't there consensus among neuro-scientists that all the aspects of 
>our mind emerge only from the functioning of the neural networks 
>inside our brain.

Concerning the statement: "... all the aspects of *our* mind ... inside
*our* brain."  That's why I have doubts that if a future scientist were to
put aspects of *my* mind into a duplicate's brain, the duplicate would be
me.  One of the things that makes my mind mine may be not just sentience,
but the particular sentience of my one and only brain.

> And that it looks like no further (still-unknown) 
>mechanisms (e.g. quantum coherence in the micro-tubules) are 
>needed to explain certain aspects of our mind.
>Grtz,
>Hkl

David Pizer
