X-Message-Number: 17016
Date: Wed, 18 Jul 2001 22:51:28 -0700
From: Mike Perry <>
Subject: Reanimation from Clone, Enlightened Self-Interest

Olaf Henny, #17011, writes
>...
>I would conjecture, that the data derived from stored memorabilia
>would be completely insufficient in complexity and detail to serve as
>ersatz memories, akin to a 8 Kb description of a 8 Mb image.

For me, if I ended up as a cell sample plus stored memorabilia (or less), I 
would want the missing information filled out by the best possible 
guesswork. Make a "reasonable approximation" of what was there before, 
rather than a person with massive amnesia. Invent memories as needed, 
always taking care to invent as little as possible, and never to contradict 
any known historical facts. (There are some other requirements too, 
outlined in my book, all of which should be feasible in principle.) Based 
on my multiverse views, you will not thereby be creating a fantasy 
individual, but reinstantiating someone who really existed, an authentic 
version of me. (But of course there is more than one authentic version.) 
Now, granted, I don't think this is as good as bringing someone back from a 
good cryosuspension (or other adequate preservation) with memories intact. 
That's why I remain a staunch advocate of biostasis. Nevertheless, it 
supports the conclusion that death is not an absolute, something I find 
essential.

Next, I'll comment on Lee Corbin's remark in #16984:

 >The fact that we have SOME concern for the interests (e.g. feelings)
 >of some animals is what is significant.  It is unpersuasive to claim
 >that our efforts to treat these animals humanely arises entirely from
 >our self interest, unless one starts down the idiotic slope of claiming
 >that everything that anyone does is for a selfish reason.

Well, I don't think it idiotic to maintain that all interests can be 
interpreted selfishly (agreeing with Robert Ettinger). However, in this 
particular case I don't think you need to go to great lengths at all to 
defend benevolence on self-interested grounds, *if* you think in terms of 
*enlightened* self-interest, and that from an *immortalist* perspective. As 
I see it, humans are not simply doomed to remain at their present level 
indefinitely, and neither are non-human creatures. Benevolence takes on a 
new meaning if you imagine each being as living far longer than once was 
possible (hopefully forever), *and* always, eventually, developing into 
something greater than it was, along with yourself. And I grant that 
sometimes you have to choose the lesser evil which is still an evil, and 
you can't rationally benefit all creatures all the time. But the prospect 
of becoming immortal will, I think, go far in resolving the issue of 
selfishness versus altruism.

Mike Perry