X-Message-Number: 25135
Date: Mon, 22 Nov 2004 22:52:52 -0700
From: Mike Perry <>
Subject: Pattern view, etc.

Robert Ettinger writes in part:

>In any case, the pattern view (especially, but not only, as a computer
>program) has at least five basic defects:
>
>   First, if it relies on the "identity of indiscernibles," there are no
>indiscernibles. Any "two" systems necessarily differ in some 
>way--otherwise we
>would not say "two."

The important differences, as I see them, would relate to the subjective 
experiences of the person(s) being "run" based on (or starting from) their 
description or "pattern." Two or more of these experiences could be the 
same, at least for a time, even though other differences may exist that 
are not discernible to the participants themselves. (This might especially 
follow if the starting descriptions were the same and the devices running 
the persons were similar, but it could also occur under more general 
conditions.) In such a case I'd say you had multiple instantiations of one 
individual rather than separate persons. Duplicate consciousness, which 
would occur here, I would take as shared consciousness. This is, of 
course, a "mere assertion," as some would say, but I see no way it could 
be disproved scientifically, and adopting it is a choice I make about what 
I consider important versus unimportant, so I think I am justified in 
making it.

>   Secondly, you can't have it both ways on the quantitative issue. If the
>"same" people or systems must be absolutely identical in all respects, 
>then no
>two systems can ever be the same.

Fortunately, it is not necessary to go that far.

>Third, if the pattern is capable of more than one interpretation, then it
>is a mere mathematical metaphor--which a computer program basically is in all
>known cases--and cannot be given a presumption of life.

I think chunks of information can be constructed so as to be reasonably 
self-descriptive and unambiguous in their meaning, albeit with some loss 
in efficiency of representation. A recent posting described this in more 
detail: encoding a movie so that space aliens could guess the format and 
see it for what it was. Such a movie might also describe how to build or 
create certain hardware devices, not excepting biological organisms with 
specified imprinting, which could further serve to enlighten. Other 
self-descriptive formats besides the movie could also be imagined.
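
To make the idea concrete, here is a minimal sketch of what I mean by a 
self-descriptive chunk of information (the code, the field names, and the 
layout are assumptions invented for illustration, not anything from the 
posting referred to): the data opens with a plain-text header that itself 
states how the bytes after it are laid out, so a reader with no outside 
documentation has a reasonable chance of decoding it correctly.

    # A self-describing "movie": a plain-text header explains the layout
    # of the raw bytes that follow.  Field names and layout are made up
    # for this sketch.
    HEADER = (
        "SELF-DESCRIBING MOVIE\n"
        "After the line reading END-OF-HEADER come raw bytes.\n"
        "Each byte is one pixel brightness, 0 (black) to 255 (white).\n"
        "Pixels are stored row by row, frames one after another.\n"
        "frames={frames}\n"
        "width={width}\n"
        "height={height}\n"
        "END-OF-HEADER\n"
    )

    def encode(frames):
        """Pack frames (lists of rows of 0-255 ints) behind the header."""
        height, width = len(frames[0]), len(frames[0][0])
        header = HEADER.format(frames=len(frames), width=width, height=height)
        body = bytes(p for frame in frames for row in frame for p in row)
        return header.encode("ascii") + body

    def decode(blob):
        """Recover the frames using only what the header itself says."""
        text, _, body = blob.partition(b"END-OF-HEADER\n")
        fields = dict(line.split("=", 1)
                      for line in text.decode("ascii").splitlines()
                      if "=" in line)
        f, w, h = (int(fields[k]) for k in ("frames", "width", "height"))
        pixels = list(body)
        return [[[pixels[(i * h + r) * w + c] for c in range(w)]
                 for r in range(h)]
                for i in range(f)]

    if __name__ == "__main__":
        movie = [[[0, 255], [255, 0]], [[255, 0], [0, 255]]]  # two 2x2 frames
        assert decode(encode(movie)) == movie

The price, as noted, is some loss of efficiency: the header spells out 
what a more compact format would leave implicit. A real interstellar 
message would of course have to work much harder to be guessable, but the 
principle is the same.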


>   Fourth, for the foreseeable future, every computer program will be 
> KNOWN to
>be unrealistic.

No argument there.

>   Fifth, if my hypothesis of the "self circuit" and the nature of qualia is
>correct, then a person IN PRINCIPLE cannot be computerized--a restatement of
>the old, "The map is not the territory." No representation or description 
>of a
>physical system can capture or embody ALL of the elements of the physical 
>system.

But again, we have to ask which elements are *important* and which are 
unimportant. In particular, we have to ask whether a simulated something 
is, in fact, an actual instance of that something. A simulated rainstorm 
in a computer is not a rainstorm. A simulated computation, however, is a 
computation. I can be more definite and say that a simulated computation 
of a specific thing, mathematically speaking (multiplying 6 by 5, say), is 
that very thing.
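
As an aside, that point can be put in code (a small illustration of my 
own, not anything from the original discussion; the toy interpreter and 
its token format are invented for the sketch): a program that merely 
*simulates* evaluating 6 * 5, one step at a time, still arrives at the 
same product as doing the multiplication directly.

    def evaluate(tokens):
        """Step through a flat expression like [6, "*", 5], left to right."""
        result = tokens[0]
        for op, operand in zip(tokens[1::2], tokens[2::2]):
            if op == "*":
                result = result * operand
            elif op == "+":
                result = result + operand
            else:
                raise ValueError("unknown operator: " + repr(op))
        return result

    direct = 6 * 5                     # the "native" multiplication
    simulated = evaluate([6, "*", 5])  # the same multiplication, simulated
    assert direct == simulated == 30   # the simulated computation is the computation
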
What about a simulated consciousness? Here opinions will vary. Suppose 
the simulation extends down to the atomic level and fully and accurately 
models what is going on in the brain from that level on up (which should 
be possible in principle). The simulated brain may reside in a robot body 
that can see and hear and otherwise sense its surroundings, and is capable 
of voluntary movement. So the brain is tied to the world outside as part 
of a system resembling a natural organism. Then I for one would tend to 
give the benefit of the doubt and accept the simulation as true
consciousness. This follows because (1) I see no way of ever refuting the 
claim scientifically, and (2) it reflects a choice I make about what I 
consider important. It might be objected that the modeling, however 
accurate, would probably deviate in some way from what happens in the real 
world. So the simulated brain, over a very long time, may differ somewhat 
in its functioning from a natural one. I don't see this as a major 
obstacle, though (no two biological brains will function exactly alike 
anyway), and would still be inclined to say the consciousness is real. To 
me this is an exciting thought.

>   The basic question is seldom addressed--viz, what OUGHT we to want, or 
> what
>survival criteria ought to satisfy us? What are the criteria for criteria?
>
>   As far as I can see, my remote predecessors (a fish ancestor, "myself" as a
>one-day embryo, or "myself" as a one-year infant) and remote successors
>are--and ought to be--of very little interest to me.

What ought to interest us? This is a good question to raise. I certainly 
don't have a definitive answer, but will offer some brief remarks from a 
personal perspective. I am interested in helping "build Heaven" through 
scientific means, the only means I see as having a chance of actually 
working. An interest in the immortality of the inhabitants is of course an 
important part of this, as is making sure I am one of them, in some 
reasonable sense, which means I am also interested in the more distant 
future as well as the near-term and present. But there are other interests 
that seem closely associated and thus also important, including the history 
and welfare of sentient beings extending even to remote prehistory, and 
covering the different stages of each individual life, not excepting my 
own. In short, to aspire to an exalted, deathless state, which I think is 
reasonable and desirable, we must strive for interests and good intentions 
that would reasonably be associated with enlightened self-interest, 
extrapolated to the scale of eternity.

Mike Perry
