X-Message-Number: 11684
Date: Thu, 06 May 1999 01:49:29 -0700
From: Mike Perry <>
Subject: Isomorphism, consciousness

In some recent postings I've been enthusiastic about hypothetical systems
that would be isomorphic to humans and contain corresponding working parts
at a deep level, including functioning parts of the brain. As a thought
experiment we might imagine such a system in the future being constructed
from very nonbiological components, yet still functioning in a way analogous
to a human and consequently, in my view, possessing feeling and consciousness
like a human. This could include such variants as fast sequential processing
replacing slower parallel processing, so long as, again, there was an
isomorphism between the functioning device and a flesh-and-blood human. I've
argued that space-binding of interior elements, of the type Bob Ettinger
imagines, should not be necessary for the device to exhibit true
consciousness, this being basically the position of strong AI advocates like
myself. However, it seems that the isomorphism idea, if we push it far
enough, would require us to recognize a static artifact as "conscious." For
example, we could record the successive internal states of an active device
we consider conscious, and mathematically, the unmoving record would be
isomorphic to the real behavior!
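
To make this concrete, here is a small sketch in Python--purely a toy of
my own devising, not a model of any real brain or device. The live run and
the frozen list of its states are mathematically isomorphic, with time
modeled by list index, yet only the run is doing anything in our time:

# Toy deterministic "device" whose state evolves by a fixed update rule
# (the rule is arbitrary, chosen only to give nontrivial dynamics).
def step(state):
    return (3 * state + 1) % 17

def run_live(initial, n_steps):
    """Run the device 'actively', yielding each successive state."""
    state = initial
    for _ in range(n_steps):
        yield state
        state = step(state)

# The static record: the same state sequence frozen into a list, with
# time modeled purely by list index ("page number").
record = list(run_live(initial=5, n_steps=10))

# The isomorphism: step t of the live run corresponds to index t of the
# record, and the update rule is respected under that mapping.
assert all(record[t + 1] == step(record[t]) for t in range(len(record) - 1))
print(record)  # [5, 16, 15, 12, 3, 10, 14, 9, 11, 0]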

After thinking about this, I came to the conclusion that there are really
two issues here (at least): (1) consciousness as a property of active
constructs in our universe, and (2) consciousness in a more general sense,
supposing we allow for such possibilities as other universes besides ours,
mathematical "worlds," etc. In both cases the notion of an isomorphism
between functioning systems can provide useful insight into what we should
regard as a system exhibiting consciousness. In either case we might begin
with ourselves, whom we accept as conscious (presumably!), and then compare
ourselves with other systems. Others similar to ourselves, then, are
conscious too. Proceeding a little further, I would accept that other
animate creatures that appear to have awareness really do (vertebrates,
mollusks, insects, etc.), though in varying amounts. Then finally, I think
that robots suitably designed and programmed have awareness, although this
is more controversial. But it seems to me that our smartest robot devices
are more or less up to the level of insects in processing power (if not a
bit beyond), and can also exhibit rudimentary "feelings" (e.g. a robot gets
"hungry" and recharges its batteries). If two robots seem equally
responsive, and their hardware/software is different but isomorphic in a
reasonable sense, e.g. one has a sequential processor and the other a
parallel device doing equivalent things, then I would accept both as equally
conscious. And it seems reasonable to generalize this principle, so long as
we retain the idea that time is basically what it is to us in our universe.
So all robots and other constructs with awareness as we usually understand
it are running on time that is also as we usually understand it, not some
other ordered set that is, at least mathematically, isomorphic to time. This
could apply to robots/creatures that run more slowly or run faster than we
are used to, so long as, again, their time is basically our time too.
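
To give the sequential/parallel point something definite to stand on
(again a toy of my own, in Python, not anyone's actual robot design):
below, a serial loop that writes into a fresh buffer steps through exactly
the same sequence of global states as a conceptually simultaneous parallel
update, so the two are isomorphic in the sense I mean.

def rule(left, center, right):
    # Arbitrary local rule on a ring of binary cells.
    return (left + center + right) % 2

def parallel_step(cells):
    # Conceptually simultaneous: every new cell depends only on the
    # old configuration.
    n = len(cells)
    return [rule(cells[i - 1], cells[i], cells[(i + 1) % n])
            for i in range(n)]

def sequential_step(cells):
    # One cell at a time, written into a separate buffer so the old
    # state stays intact -- the usual trick for emulating a parallel
    # machine on a serial one.
    n = len(cells)
    out = [0] * n
    for i in range(n):
        out[i] = rule(cells[i - 1], cells[i], cells[(i + 1) % n])
    return out

state_p = state_s = [0, 1, 1, 0, 1, 0, 0, 1]
for _ in range(20):
    state_p = parallel_step(state_p)
    state_s = sequential_step(state_s)
    assert state_p == state_s  # identical global states, step for step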

Now, if we want to go beyond this, and consider systems whose time component
does not correspond to our actual time, well, I think this is possible too,
but then we are in the realm of systems whose consciousness is not
"consciousness" in *our* world. A static record is like this. If you think
of the record as simply a model of happenings over time, with time itself
modeled by "page number" or some other reasonable subdivision--fine. It
would, under appropriate conditions, be reasonable to regard some recorded
entity as "conscious" relative to the domain in which it is expressed, say a
human whose brain activity is captured in detailed fashion in some recorded
form. The existence of such a record, however, would not require us to
consider the recorded human as conscious in *our* world. So I think that the
notion of extending consciousness through isomorphism can be defended,
provided we recognize that, if pushed far enough, we will have to accept
a certain relativity principle too. In the most general sense, in deciding
whether a system should be considered conscious, we have to consider *in
what domain* it might be conscious, which involves a frame of reference, and
how time is being modeled.
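
Continuing the earlier toy sketch, the same recorded states can be
re-labeled by any ordered set of "page numbers" at all. The dynamics hold
within the record's own domain, and nothing in the labels ties them to our
time:

# Re-index the record (from the sketch above) by hypothetical "page
# numbers" p(t) = t/1000.0 -- any order-isomorphic labels would do.
pages = {t / 1000.0: state for t, state in enumerate(record)}
labels = sorted(pages)
assert all(pages[labels[i + 1]] == step(pages[labels[i]])
           for i in range(len(labels) - 1))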

But going back to *our* world, and functioning systems within it, I base my
judgment on basic intuition. If a creature or construct *behaves* in a way
that seems conscious, I am inclined to accept it as conscious and *then*
consider reasons why it should not be so, if there are any. So if, somehow,
I could meet someone who seemed like a person with feelings, but eventually
found that this "person" was a robot with, say, a superfast sequential
processor inside that maintained, but only isomorphically, a fully
functioning, normal human brain, I would give the benefit of the doubt. The
"person" really is that, because I have no good reason for thinking
otherwise. If I were to call into question whether such a being had real
feelings, I could raise that issue about myself too. I also think, once
again, that running speed alone should not be relevant to whether
consciousness exists. If our robot, then, is conscious, it would remain
so if speeded up or slowed down, other factors equal. Similarly, if we
represent our robot in another running system that is faster or slower, but
otherwise executes isomorphically in our time, that too would exhibit
consciousness relative to our world.  
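
In the same spirit, the point about speed can be put in code too (same toy
update rule as before, with a hypothetical wall-clock delay): the delay
parameter changes how fast the trace is produced, never the trace itself.

import time

def run_timed(initial, n_steps, dt):
    """Run the toy device at wall-clock interval dt between steps."""
    trace = []
    state = initial
    for _ in range(n_steps):
        trace.append(state)
        time.sleep(dt)  # dt sets the speed; it never enters the states
        state = step(state)
    return trace

# A fast run and a slow run yield the identical state sequence.
assert run_timed(5, 10, 0.0) == run_timed(5, 10, 0.01)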

Mike Perry
