X-Message-Number: 15016
Date: Fri, 24 Nov 2000 21:34:18 -0700
From: Mike Perry <>
Subject: Consciousness and Context

Robert Ettinger, in #15003, again raises the issue of "permissible and
impermissible uploading scenarios" and thinks that some of us, myself
included, "draw an arbitrary and ultimately indefensible line" between the
two. I can't speak for the others (Lee Corbin is mentioned) but will address
this issue from my own perspective. I think it is a worthwhile issue but
that it has a fairly simple resolution that may have so far been overlooked,
despite some effort on my part.

The problem is illustrated by two scenarios. In one we have a robot of the
future with a brain consisting of a programmable device. The brain is
non-protoplasmic, but its software models a real human brain very carefully.
So carefully we could imagine a real human brain (even if none meeting the
exact requirements is presently extant in our universe) that functions
isomorphically to the artificial one. Suppose the robot also looks human and
acts human in all respects. So we uploaders say it's a human, only
expressed in a medium different from the one we of today are familiar with,
that is to say, biological tissue. This robot, we would say, has
consciousness and feeling, just like a human, and we would base that
conclusion on the implied isomorphism with a biological brain which, if it
existed, we would also say is conscious and has feeling. The absence of
biological tissue, we say, is no problem, so long as the isomorphism holds:
it's the bits not the atoms that count.

In the second scenario we are simply presented with a detailed record of the
behavior of the above system, as a function of time, recording all the
successive states or internal configurations of the artificial brain. This
record, say, is stored in a big book (again, "Turing Tome" is a reasonably
apt term) in which individual pages correspond to specific points in time
or very short time intervals, so that the brain's recorded activity is
completely represented. (We may assume our Tome pages altogether cover a
sizable interval of time, say several decades.) Again we can establish an
isomorphism with a living, protoplasmic brain, by mapping the pages to the
condition of the (hypothetical) meat brain at the corresponding points in
time. If "isomorphism is everything" as we uploaders seem forced to
conclude, we must then regard this static record as conscious too--which
most of us (myself for one) find untenable. So here's how I would resolve
this problem.
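The distinction between the two scenarios can be made concrete with a toy sketch (my own illustration, not anything from the original post): a "brain" reduced to a single deterministic update rule, and a "Turing Tome" reduced to the list of every state that rule visits. The function names and the update rule itself are arbitrary stand-ins.

```python
def step(state):
    """Toy update rule standing in for the brain's dynamics."""
    return (state * 3 + 1) % 17

def run_live(initial, ticks):
    """The 'active system': states produced one after another in time."""
    state = initial
    for _ in range(ticks):
        state = step(state)
    return state

def make_tome(initial, ticks):
    """The 'static record': every successive state written down at once,
    one 'page' per tick."""
    pages = [initial]
    for _ in range(ticks):
        pages.append(step(pages[-1]))
    return pages

tome = make_tome(5, 10)

# The isomorphism: page t of the tome matches the live system at tick t.
# Yet once written, the tome just sits there; nothing in it moves.
assert all(run_live(5, t) == tome[t] for t in range(11))
```

The mapping from page number to tick is exactly the kind of isomorphism the paragraph above describes, which is why "isomorphism is everything" would seem to force the same verdict on the inert list as on the running process.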

Consciousness, we might say, is a kind of mental motion. Just like its
physical counterpart, it must be defined with respect to a frame of
reference or context. It doesn't exist in total isolation. An isomorphism,
on the other hand, can involve a system that does exist in total isolation;
for example, we could establish an isomorphism between an active system in
the real world and a mathematical abstraction giving its description. (Note
that this abstraction is not the same as a static record, which in turn is a
physical entity, that is, a printed copy in some form. The abstraction
itself, like the number 5 or the cosine function, is not a physical entity.)
Here the non-physical entity (description) has no context or frame of
reference. I would call it "not conscious" but in saying that I realize I
have a specific context in mind (my own). So, relative to my world this
abstraction is not conscious, and that seems a reasonable judgment, even if
an isomorphism should exist with something that *is* conscious.

In dealing with a static record we can apply the same standards. Here we do
have a physical entity but it is still reasonable to deny that it is
conscious or actively expresses consciousness. It's interesting that here,
unlike the case of the abstraction, there is an implied context, the
surrounding world, within which the record is "static": in effect, the
context becomes part of the definition of the record.

In a similar way we can rule out other entities being conscious in the
context of our world as we usually understand it, even when they are
isomorphic to systems we would consider conscious. This, for example, might
cover the case of many computers spaced light years apart, which thus are
causally disconnected, that collectively produce the activity of a conscious
being over a substantial time but individually only do rudimentary things
such as flash a single image on a monitor screen. You couldn't talk to or
otherwise interact with such a being, so it is not conscious in our frame of
reference.

On the other hand, a system might be so structured that it is reasonable to
say that within it there are beings that are conscious relative to a context
established by the system itself. It is easy to see how this could happen
with a static record, if we assume it not only contains the brain states of
some particular individual but a description of the surrounding environment,
other beings, and so on. So, relative to the happenings depicted, these
beings would experience consciousness, though not relative to us. 

So now we *seem* to have reached the point where consciousness itself must
be considered an entirely relative phenomenon, something that will certainly
seem counterintuitive if you think about it. ("I *know* I'm conscious, no
matter what 'context' I may be in.") So we may ask, isn't there a more
absolute notion of consciousness, that is not context-dependent? Here I must
confess I don't have the full answer, but two thoughts stand out. One is
that sometimes a context is implied, as in the static record we just
considered, that rules out consciousness.  The other is that, if one accepts
the idea of the multiverse (as I do) very many scenarios must have a real
existence somewhere. We can say (in very many cases at least) that when we
have something that is isomorphic to a *possible* conscious system but one
that doesn't actually exist in our universe, it does exist somewhere; thus
it is "real" even if not in our world. So we might say that consciousness
truly happens, independently of context, whenever it happens, in a relative
sense, for *some* universe in the multiverse. But this does not force us to
conclude that a static record (once again, by implication, present in and
static from our frame of reference), or an abstraction (lacking any real
context), is conscious. 

Ettinger in his posting says, "I still think that the isomorphism postulate
leads to reductio ad absurdum, even if at first we tentatively allow it."
The foregoing to me seems a good starting point for resolving the problem,
though in some cases perhaps much more would need to be said. However, I
would now like to (again) bring up a problem for the non-uploaders that to
me seems to have no reasonable resolution from their point of view. We
imagine that some property has been discovered for protoplasmic brains,
maybe a standing wave pattern or something else, that always correlates with
consciousness. As a last resort we could simply note that conscious,
protoplasmic brains are always made of protoplasm, a property that will not
be shared by non-protoplasmic entities whether conscious or not. In our
robot above, then, some property of biological brains is lacking. So is the
robot conscious or just imitating it? I see no good way out of this, from a
non-uploader's point of view. To simply say, by fiat, that only protoplasmic
brains can be conscious won't do--it smacks of solipsism. So how in
principle would we determine if a system that looks and acts conscious is
not just faithfully faking it and really not conscious at all? What
empirical test would decide the question? None, as far as I can see.

Best to all,

Mike Perry
