X-Message-Number: 22374
Date: Mon, 18 Aug 2003 00:59:12 -0700
From: Mike Perry <>
Subject: "Simulations", Time Isomorphism

Robert Ettinger, #22364, and my comments:

>Getting back to descriptions and simulations--a description IS a simulation,
>and a simulation IS (merely) a description.

It would be good to clarify what is really meant by a "simulation." If 
people were clockwork machines and perfectly predictable, it might be 
practical to have an exact simulation, say in different materials, or as a 
running computer program that evolved a succession of state descriptions 
over time. (Or we might just have a static record of the successive 
states.) But people--and processes more generally--are unpredictable, so 
what I think is mainly of interest are systems that exhibit certain 
similarities in their functioning to people, modeling the brain and its 
functioning for example. (For now I limit consideration to actively running 
systems rather than static records.) The modeled brain in this case would 
not, except in very unlikely circumstances, correspond to any real brain, 
but would perhaps correspond to a "typical" hypothetical brain. You could 
say the modeling is a "simulation," but in general how would you even 
establish that? Supposing for a moment that it did exactly model some organic 
brain, you could say the organic brain simulated *it*. Why not? The organic 
brain could actually be constructed after the other system is built, as a 
simulation of it. Again, I doubt that anything approaching a true, 
event-for-event simulation would occur. But you could imagine making an 
artificial brain first, not an attempt to copy any existing brain but just 
to make something very like existing brains in certain basic ways. Then you 
make an organic brain that models the artificial brain as closely as you 
can. The organic brain is very like other organic brains but is not a twin 
of any. But it is a twin of what the inorganic brain models. So which is 
the "original" and which the "simulation"?

>Dr. Badger and others reject time isomorphism, but I know of no good 
>reason to reject isomorphism for time while accepting it for other 
>purposes.

I have expressed what I think is a good way to deal with the time 
isomorphism problem, and others have offered their own approaches. Here 
I will try again. But first I have to assume an isomorphism is possible.

>Also, as I have said many times with many examples, a running computer 
>simulation is NOT fully isomorphic to a person, and cannot be.

I'm not sure why, in principle, full isomorphism could not hold. It seems 
that significant events in the real world are discrete, just as they are 
in a computer, and are Turing computable at the description level. 
(Certain complications, such as unpredictability, need to be dealt with, 
but I think they can be.) This should be enough for any finite time's 
worth of full isomorphism. Present physical theories may be inadequate, 
but something that 
rests on discreteness should be adequate. As long as the significant, 
discrete events (finite in number) are all modeled in their appropriate 
interrelationships, we could say our isomorphism is complete. (Of course 
this is only in a thought experiment and far beyond the abilities of 
present day computers.)
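To make the point concrete, here is a toy sketch (in Python; the update 
rule is purely hypothetical and nothing like a real brain model). It 
shows the sense in which any finite stretch of a deterministic 
discrete-event system's evolution reduces to a finite description--a 
list of successive states--which is all the isomorphism needs to cover:

```python
def step(state):
    """Advance the toy system by one discrete event (hypothetical rule)."""
    return (state * 31 + 7) % 1009

def history(initial, n_events):
    """The full, finite record of successive states: a static description
    of n_events' worth of the system's evolution."""
    states = [initial]
    for _ in range(n_events):
        states.append(step(states[-1]))
    return states

# Any finite time's worth of evolution is a finite record:
record = history(42, 10)
print(len(record))  # 11: the initial state plus one state per event
```

The record and the running process contain the same finite set of 
events in the same interrelationships; that is the (thought-experiment) 
sense of "complete isomorphism" intended above.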

Let's go back now to the time isomorphism problem. Imagine you have a 
static description of happenings at the subatomic, discrete-events level. 
All significant events are covered (all particle interactions that is). The 
description extends over a large volume of space-time, and includes many 
years' worth of the life of some conscious, human agent. The agent, that 
is, and the world he is part of, are embedded in your description. This 
world, really a history of a hypothetical world, becomes the "frame of 
reference" for that agent, and it is reasonable to say (in my view) that he 
is conscious relative to that frame of reference. In particular we could 
tell what that agent is doing, sensing, thinking, and feeling at different 
times by inspecting the record. We could observe that certain parts of the 
agent's brain are active at certain times, and so on, just as we would 
expect with a human in our world. Needless to say, this would have to be a 
very big record and we could imagine it is very far away from anything we 
can now observe, yet still extant somewhere. (Maybe you would have to tweak 
cosmology a fair amount for this, but let's allow it for the sake of 
argument.) Our own world, then, is clearly separate from this static-record 
world. To say that the agent is conscious relative to the static world does 
not have to imply he is conscious relative to our world. On the other hand, 
we can assume that there is some straightforward modeling of time in the 
static record, in the form of saying that event such and such has the 
following space-time coordinates (with respect to some chosen coordinate 
system) so time is represented isomorphically. So here we don't reject time 
isomorphism but it does not force us to conclude that the modeled agent is 
conscious, as we usually understand consciousness.

To further clarify, we can now imagine an *active* system somewhere, in 
which events unfold in time rather than just being statically represented. 
So time is represented as we understand time, perhaps with a slowdown or 
speedup factor, but nothing beyond that. Again a world is modeled and our 
agent is conscious relative to it. Perhaps again this world is very far 
away but maybe we can reach it in Star-Trek fashion. At this point we 
should be able to interact with the agent, carry on a conversation and 
such. So this agent's frame of reference really is now our own. If in fact 
the "simulation" is being done in a certain way, we might immediately 
conclude the agent is conscious, as when the brain is organic and very 
similar to known cases of human brains. But even otherwise, if we accept 
the information paradigm, we would conclude on the basis of the isomorphism 
*and* the essential coincidence of the frames of reference, that the agent 
is conscious as we usually understand the term. In other words, when you 
can talk to the guy and he seems conscious by various tests, you have 
good grounds to accept his consciousness as real.

If, on the other hand, a space voyage uncovered the big static record 
instead, it still would not be possible to talk to the agent embedded in 
the description, and his frame of reference would still, by reasonable 
criteria, be different from and incongruent with ours. So again you would not 
have to consider him conscious as we usually understand the term, though 
still conscious in some other sense.
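The contrast between the active system and the static record can be put 
in miniature (again a hypothetical Python sketch, not a claim about real 
minds). The two versions below generate identical information; the 
difference is purely in the temporal embedding--only the "live" version 
has moments at which an outside observer can interact mid-run:

```python
def step(state):
    """One discrete event of a toy deterministic system (hypothetical rule)."""
    return (state * 31 + 7) % 1009

def run_live(initial, n_events, observer=None):
    """'Active' version: events unfold one at a time in our time, and an
    outside observer can be notified (and in principle intervene) mid-run."""
    state = initial
    for t in range(n_events):
        state = step(state)
        if observer is not None:
            observer(t, state)  # interaction is possible at this moment
    return state

def static_record(initial, n_events):
    """'Static' version: the whole history laid out at once. The same
    information, but no moment at which anyone could interact with it."""
    states = [initial]
    for _ in range(n_events):
        states.append(step(states[-1]))
    return states

# Identical content, different temporal embedding:
assert run_live(5, 8) == static_record(5, 8)[-1]
```

The isomorphism between the two is exact, yet only the live run shares 
a frame of reference with us--which is the distinction being drawn 
between the two discovery scenarios above.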

>Once more, the assertion that a simulation would "be" a person is
>nothing but dogma, with nothing whatever to back it up except the perceived
>elegance of the concept.

I don't see it that way, at least if we allow for possible future 
developments. It seems reasonable that a future, advanced but still 
inorganic robot could *seem* very conscious and human, both in its behavior 
and in the way its artificial brain was carefully modeling an organic 
counterpart at a deep and detailed level. In such a case I see no way in 
principle of establishing, through observational tests, that the robot was 
*not* conscious and really experiencing the feelings it seemed to have. So 
I would have no trouble accepting it as a person.

Mike Perry
