X-Message-Number: 22308
Date: Fri, 08 Aug 2003 01:20:21 -0700
From: Mike Perry <>
Subject: Re: More on Simulation

Robert Ettinger, #22303, and my responses

>Mike Perry asks again how we could be sure a system isn't conscious (if it
>has functions isomorphic to those of feeling).
>
>This is just another way of asking: "How can we know that isomorphism isn't
>everything?"
>
>One answer that makes sense to me is that in other areas of life and thought
>we reject the idea that isomorphism (or partial isomorphism) is sufficient.
>For example, a mechanical analog computer can do definite integrals which 
>could
>be interpreted as the charge accumulated on a capacitor--but we do not
>therefore say that the mechanical computer "is" a capacitor accumulating 
>charge.

The reference here, if I understand it right (and I thank Hugh Hixon for 
clarifying this), is to *two* types of analog computer. The mechanical 
computer represents a numerical value (a definite integral) as the position 
of a physical pointer--essentially, a physical distance--while the 
electrical computer represents it as a charge on a capacitor.
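The shared mathematics can be made concrete. A capacitor's charge is the 
integral of the current flowing into it (Q = the integral of I dt), a 
disk-and-stylus integrator displaces a pointer in proportion to the same 
integral, and a digital machine can approximate it numerically. The 
following sketch (mine, not from the original exchange; the trapezoid rule 
and the example waveform are illustrative choices) computes one such 
definite integral, a number that could be read off as coulombs on a 
capacitor or as centimeters of pointer travel:

```python
def integrate(f, a, b, steps=100000):
    """Approximate the definite integral of f over [a, b] (trapezoid rule)."""
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        total += f(a + i * h)
    return total * h

# A current waveform I(t) = 2t amperes flowing into a capacitor for 3 seconds.
current = lambda t: 2.0 * t

# The same number, under different physical interpretations: charge on a
# capacitor (electrical analog) or displacement of a pointer (mechanical
# analog).  All three devices carry out isomorphic computations of one
# mathematical object.
charge = integrate(current, 0.0, 3.0)
print(round(charge, 3))  # → 9.0
```

None of the three systems "is" any of the others physically; what they share 
is the structure of the computation.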

>It seems rather odd that upmorphists are willing--even eager--to accept a
>simulation as a person, but not (for example) willing to accept a 
>disk-and-stylus
>analog computation as an accumulation of charge.

True, a physical distance is not the same as an electrical charge, but both 
computers are doing integration. Similarly, a suitable robot brain would be 
non-meat, but both the robot brain and the organic brain would at least 
*appear* to be conscious. Are consciousness and feeling basically 
computational in nature? If so, then an imitation of consciousness would 
itself be consciousness, in the same way that an imitation of a computation 
is itself a computation. It is yet to be demonstrated but plausible, I 
think, that consciousness at least could be well-imitated 
computationally--this would involve internal as well as external congruences 
with natural brains. If so, we would be confronted with an artificial system 
that seemed to be conscious, with, I think, no way in principle to 
demonstrate that it was not conscious. Consciousness, in other words, is not 
a property that is testable in the same way that being a physical distance 
versus being an electrical charge is testable.
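The point that an imitation of a computation is itself a computation can be 
illustrated with an ordinary program (a sketch of my own, for illustration 
only): two structurally different procedures for the factorial function. 
Neither merely mimics the other; each simply *is* a factorial computation.

```python
def factorial_recursive(n):
    """Factorial by recursion: n! = n * (n-1)!"""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    """Factorial by iteration: accumulate the product 2 * 3 * ... * n."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# Different internal structure, identical computation.
assert all(factorial_recursive(n) == factorial_iterative(n)
           for n in range(10))
```

Whether consciousness enjoys the same substrate-independence is, of course, 
exactly the question at issue.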

>Or you could put the
>computation on an ordinary digital computer, with symbols for charge as 
>well as time
>and current and anything else you deem important. What it boils down to,
>again, is the ASSUMPTION by the upmorphists that information processing is
>everything. As I believe Mike has acknowledged, this is an untestable 
>assumption,
>hence many would regard it as meaningless.

That I survive sleep (rather than being replaced by another, similar 
individual) is an untestable assumption, but far from meaningless.

>As part of this, remember that only a fraction of the stored bits in the
>computer correspond to coordinates of the simulated system in phase space. 
>MOST of
>the bits relate to intermediate calculations, and which is which is
>arbitrary, a matter of labeling and understanding the labels. But this 
>understanding is
>in the mind of the beholder or programmer, who tells the computer which items
>he wants highlighted or displayed.

It may be that many simulations would be inefficient, as a price to be paid 
for accuracy. As a practical concern, though, I would ask about the 
feasibility of replacing certain parts of the brain with artificial 
components that do equivalent things but are more durable. I understand 
there is work being done on an artificial hippocampus right now. If 
successful, it could treat the severe memory disorder known as Korsakoff's 
syndrome. More generally, the use of artificial brain components, or 
outright uploading of the personality elements into some sort of 
programmable device, could be a fast track to physical immortality, not to 
mention a means of facilitating cryonic resuscitation. Something to think 
about.

Mike Perry
