X-Message-Number: 27633
From: "Valera Retyunin" <>
Subject: Lengthy, dull, but non-offensive reply to Daniel Crevier,
Date: Sun, 19 Feb 2006 08:59:27 +0300

>Actually, Plato's mentor Socrates was the one who had it right. He did not 
>state, as later thinkers came to believe, that horses are horses because 
>they are copies of an ideal form. He recognized that category making was a 
>pragmatic activity.... Socrates thought it made sense to define a "horse" 
>category because its members could be easily identified, and had 
>interesting properties: they ran fast, and could be domesticated.... The 
>problem of identity is one of categories: we are trying to draw the line 
>between the category of possible beings that are, say, me, and those that 
>aren't. In some respects, there are neat articulations where to carve, like 
>the obvious cleavage between "me" and "you".

I hope you would agree with me on the following two points:

(1) Your assignment of a category to an object does not physically 
affect that object (at least not directly).
Example: You can say that an exact biological copy of a horse *is* 
that horse for pragmatic reasons (it does what you expect from it), 
but such a category assignment cannot physically *make* the copy 
*that* horse, as they are two different physical entities.
(2) Your pragmatic goals strictly limit the choice of reasonable 
category assignments.
Example: I could say that I am you, but such a category assignment 
wouldn't make any sense given my pragmatic goals. On the other hand, 
an ogre could reasonably say that I am you if he had you for breakfast 
yesterday and me today - we would both fall into his category of a 
morning meal.

Likewise, you can say that your copy (recreated or uploaded) is you, 
but this category assignment would only be reasonable if your goal was 
to preserve someone or something with your memory and behavioural 
patterns - for future generations to enjoy. If your pragmatic goal is 
to preserve you-the-one-who-is-now-reading-these-words, you cannot 
reasonably consider your copy - a different physical entity - you.

>However, in discussing possible technological enhancements or replications, 
>we are faced with a continuum of possibilities. Most people would agree 
>that me 5 years from now would still be me, even if all my atoms will have 
>been replaced by metabolic activity. 

The fact that you are a dynamic physical system does not make your 
survival dependent on category assignment. That is, your survival over 
the next 5 years does not depend on your or other people's decision to 
assign or not assign the category "Daniel Crevier" to you at any time 
during that period. You will be the same person in 5 years if your 
physical properties, albeit changing, stay within certain limits at 
all times during those 5 years. That is how materialists define your 
identity. You could decide now, for some pragmatic reason, that you 
have not survived the past 5 years - such a decision would not change 
the fact of your survival.

I'm sure you know that the changes in you consistent with you 
remaining you are very limited - they have to be small enough, happen 
slowly enough, and so on. We don't know where exactly the limits lie, 
but we know for certain that they are very strict - we know that tiny 
changes in the brain can be fatal. There's a small chance that these 
limits extend somewhat beyond the live states of the brain. If that is 
true, you would still remain you even after death, provided the dead 
states of your brain always stayed within those limits - close enough 
to the live states. There's a small chance that good cryopreservation 
keeps the brain states within the limits - that's why cryonics is 
worth serious consideration. But to think that the limits extend so 
far as to include other physical entities with similar brain patterns 
is equivalent to religious belief.

>What about Kirk, who gets disassembled in one spot, and reassembled 
>elsewhere with other atoms? Some would say we're dealing with a different 
>Kirk.
	
You don't need an understanding of future technologies to conclude 
that Kirk does not survive disassembly, and you can reasonably assign 
the category "Kirk" to the newly created entity only if your pragmatic 
goal is to have someone who can perform Kirk's duties.

>But what if he were reassembled in the same spot, with the same atoms? What 
>if only part of him were disassembled, reassembled and reconnected to the 
>rest? How large a part would that have to be for there to be a different 
>Kirk? What if part of his brain were replaced by electronic components? How 
>much of it would have to be replaced?

By disassembling Kirk you bring him to a state where he stops being 
Kirk, that is, where his physical properties have values well outside 
the limits that define him as Kirk. Anything created afterwards is a 
different entity, even if you reassemble a "Kirk" in the same spot, 
with the same atoms.

If only part of Kirk were disassembled, he would most likely be dead. 
If it were a substantial part, he would definitely be dead. Again, we 
know from experience that you don't need a big change in the brain to 
kill it, and it's irrational to suggest that the states consistent 
with *your* survival extend far beyond the live states of your brain.

As for the replacement of Kirk's neurons with artificial components, 
it seems possible in principle if done slowly enough so that Kirk 
never stops being Kirk. Whether a natural neuron can ever be 
substituted with an artificial one is a big question though. I share 
RBR's doubt that a purely electronic component can do the trick - 
chemical activity seems to play a crucial role in brain functions 
(including consciousness).

Now, patternists usually challenge the notion of brain continuity 
with a certain hypothesis. They say that continuity is just an 
illusion, a macroscopic approximation: all atoms constantly disappear 
and reappear umpteen times a second, and we simply perceive this as 
continuity due to the limitations of our senses.

Patternists jump to the conclusion that, since atoms may disappear 
and reappear in an umpteenth of a second without any harm to you, they 
can also reappear in a second, or even umpteen seconds, and even in a 
different place, and you will still remain you. That's absolute 
rubbish. We know from experience that, even if the atoms comprising 
the brain do disappear and reappear at microscopic time intervals, we 
don't notice it at all, while the very same brain can easily be killed 
by a very small change over a macroscopic time interval. The inference 
from harmless microscopic gaps to harmless macroscopic ones simply 
does not follow.

>When there are no articulations, s.o.p., if we must carve, is to do it 
>where it's convenient.

You cannot change physical reality by drawing a line where it's 
convenient. You cannot make another physical entity you by a mere 
decision. The only thing your convenient decision can do is make you 
stop worrying and ease your fear of death.

>This is what legislators do: the legal limit for abortion is an integer 
>number of months, speed limits are round numbers, and so on.

Poor analogy. Legislators cannot change physical reality by their 
decisions, at least directly. They cannot, for example, change the way 
a foetus develops.

You could legislate yourself into survival, but it wouldn't have any 
direct physical effect on you. Whatever the legislative decision about 
your identity, it cannot change whether or not you wake up tomorrow. 
If you die in your sleep, you will never wake up in another person or 
artificial construct, regardless of any decision in this respect.

>I believe the convenient way to carve in the above continuum is to use an 
>empirical criterion: if it remembers like Kirk, and behaves like Kirk, then 
>it's probably Kirk.

Again, you can assign categories as you please, but it won't change 
the fact that Kirk's copy is a new, different entity. When Kirk is 
disassembled, he loses consciousness and never regains it. Your 
convenient empirical criterion will not save him.

>Please note that this post, as well as others I did, is written in neutral 
>language, designed neither to offend nor deride. I would ask anyone who 
>wants to reply to do it in the same way. Doing otherwise generates more 
>heat than light.

I never meant to offend you. I would like to generate some heat 
though. I am seriously concerned about the association of cryonics 
with uploading. In my view, this association discredits cryonics as 
uploading is just another New Age religion.

>Would Kirk die a thousand deaths in the teleporter?

No, just one death. The other 999 deaths would be of his copies.

>Would an uploaded copy of him executing in a computer still be him?

No. If such a copy were "sentient" software running on dead hardware, 
it wouldn't even have a physical existence. In other words, it would 
be no-one. A computer simulation remains a computer simulation even on 
the most powerful of supercomputers. However realistic it may be, a 
simulation only exists as what it simulates in your imagination. It 
cannot have its own inner world, sense of self and so on.

To create an artificial sentient physical entity, you need to build 
sentient hardware - sentient irrespective of anybody's interpretation, 
not just appearing sentient to you because you loaded into it some 
sentience-imitating software. That seems possible, at least in 
principle. I don't think it likely that a silicon construct can have 
the same type of sentience as humans do, but even if you managed to 
build one that behaved exactly like you and remembered everything you 
remember, it wouldn't help you survive. It would have its own, 
independent existence. You couldn't just pop up in the construct's 
silicon skull (or its equivalent) and start living there.
