X-Message-Number: 8111
From: 
Date: Sat, 19 Apr 1997 22:54:08 -0400 (EDT)
Subject: CRYONICS more on values, emulation

1. I see I still haven't made my point about values clear to Mike Perry, and
therefore no doubt to many other people.

My contention is that VALID values are not arbitrary--in general, if you
start from solid premises (your own anatomy/physiology and history) and apply
logic and a sound knowledge of natural law (physics), then disregarding
borderline cases you will always find one value or set of values, or one goal
or set of goals, to be correct while the others are incorrect (or less
correct). The basic criterion is the desired maximization of personal
satisfaction over future time, with the obvious necessity of appropriately
discounting potential gain in the distant future relative to the near future.
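The discounting criterion above can be made concrete with a small sketch. This is my own illustration, not anything from the post: it assumes a simple exponential discount rate (the `rate` parameter and the payoff lists are hypothetical) and compares two courses of action by their discounted totals.

```python
# Hypothetical illustration: comparing two courses of action by their
# expected satisfaction, discounted exponentially over future periods.
def discounted_value(payoffs, rate=0.05):
    """Sum payoff_t / (1 + rate)**t over future periods t = 0, 1, 2, ..."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payoffs))

# A small near-term gain vs. a larger gain three periods in the future:
near = discounted_value([10, 0, 0, 0])   # payoff now
far  = discounted_value([0, 0, 0, 40])   # payoff later, discounted
```

With these made-up numbers the distant payoff still wins (about 34.55 vs. 10), but a steeper discount rate would reverse the ranking, which is exactly the near-term vs. long-term tension discussed below.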


Dr. Perry says that personal preference cannot always be shown to be right or
wrong, in part because risk/reward considerations (calculations of expected
satisfaction) themselves depend on personal preference. But the whole point
of the exercise is to determine (each individual for himself) whether in fact
his present values or preferences are the right ones, or whether he would be
better off trying to modify his values. Nothing new about trying to improve
yourself, including your attitudes and habits, which may be accidental,
inflicted, ill-chosen, or/and conflicting. 

Some of the most obvious questions will arise in connection with short-term
satisfaction vs. long-term enlightened self-interest; and in the (partly
overlapping) arena of "selfishness" vs. "altruism" or "social consciousness."
There are both evolutionary and personal reasons for "self-serving" and for
"self-sacrifice." Part of the task is to make sure you are not being
victimized by evolutionary or social or conditioning pressures that
subordinate your personal benefit; another part is to make sure you are not
sacrificing your own future to save yourself present discomfort. 

2. Perry Metzger (#8106) has so many things wrong that I'll just bother with
one clarification, mainly for the benefit of newcomers.

Metzger says my "self circuit" implies a Cartesian-Theater/homunculus
viewpoint, requiring an infinite regress of homunculi.

Nonsense.

I define the "self circuit" (or "subjective circuit") as the
portion(s)/aspect(s) of the anatomy/physiology of the brain giving rise to
feeling, consciousness, experience, qualia (related but not synonymous
terms). The aptness or usefulness of this term may be debatable, but the
reality of its referent is not--unless you doubt that feeling is in the
brain. 

What might be the physical nature of the self circuit? I have only
conjectured, vaguely enough, that it might involve something like a standing
wave or a semihomeostatic, reverberating feedback circuit. (At least one
similar conjecture has appeared in the literature, according to
neuroscientist Joe Strout.) 

When a sensory signal enters the brain (e.g. from the retina after
stimulation by photons), a series of intermediary signals may be formed and
modified by various parts of the brain, until finally we have a modulation of
the self circuit.  And the self circuit is the end of the line, buck stops
here, no regress. Note carefully: the modulation does not "represent" the
quale; it CONSTITUTES the quale. The modulation IS the feeling.

3. Emulations come in many varieties. At one extreme, you might (perhaps) in
principle be emulated by a super-beam-me-up-machine which would copy you atom
for atom, all configured precisely right. Whether this copy would be "you"
remains an open question, but it certainly would be a human being and could
live in our world.

At the other extreme, you might be emulated by a computer--even a Turing tape
or Chinese-Room-type setup. I think it unlikely that such a simulation would
be a person, and will again try to clarify some of the reasons.

In order for a computer simulation putatively to be you, and to carry on with
your life, the requirements are enormous and impossible in practice in any
foreseeable future, perhaps even impossible in principle, but the thought
experiment could still be useful.

It would be necessary to know the initial values of every variable or
parameter in your anatomy and physiology, and also in your environment--to
the desired degree of completeness and accuracy. The "you" simulated on a
Turing tape or in a Chinese Room of course could not live in our world, but
only (if at all) in a simulated world, with vastly scaled time (if indeed
time is scalable; that's another story). For the purposes of the thought
experiment we bypass the question of how to simulate the world when we lack
so much knowledge of the laws of nature; we'll assume we can get an
acceptable approximation. 

Now a computer (with its program and data store) just produces (after various
intermediary steps) a sequence of new numbers, which are to be associated
with the successive values of the variables or parameters in the person and
his environment. Probably most people will say right here that the info
proposition is silly--that a succession of numbers on a tape or on sheaves of
paper can't be a person, can't have subjective experiences for example. But
the info people have the courage to accept whatever they think is logical,
regardless of plausibility or intuition. As I have said many times, the
information paradigm has NOT been proven, but some info folk seem willing to
accept it as an axiom, not seeing any acceptable alternative. So let's look
at another weak spot.
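The picture of a simulation as "a sequence of new numbers" can be sketched in a few lines. This toy is my own construction, not anything proposed in the post: the `step` update rule is a placeholder standing in for the (unknown) laws of nature, and the state list stands in for the variables of the person and his environment.

```python
# Toy sketch: a simulation as a pure state-update loop. Each tick maps
# the current list of state-variable values to the next list -- exactly
# the "succession of numbers" described above.
def step(state):
    # Placeholder update rule; a real simulation would encode physics here.
    return [v + 1 for v in state]

def run(state, ticks):
    history = [list(state)]
    for _ in range(ticks):
        state = step(state)
        history.append(list(state))
    return history  # the successive values of every variable
```

Nothing in the loop cares whether the numbers live in RAM, on a Turing tape, or on sheaves of paper; that substrate-independence is precisely what the information paradigm asserts and what the post goes on to question.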

In our world, there is a physical separation between different kinds of
systems. For example, a person cannot say an incantation and suddenly
transform into a leopard, let alone change history or the laws of nature.

A simulated person, in a simulated world, might be able to do such things.
After all, in the simulated world, people and leopards and the laws of nature
and the data of history are all made of the same "stuff"--bits in the store
and program, or marks on paper. To be sure, the programmer for the simulation
would try to prevent such capabilities, but there is no guarantee he would be
successful. After all, the simulated people (as the programmer will be the
first to say, proudly) are just as smart as anybody, and will in time become
smarter. They will realize the possibilities of their situation. The program
necessarily allows for its own modification, and the simulated people may
find a way to make such modifications, despite attempted safeguards. After
all, they only need COMMUNICATE cleverly enough with the existing program and
stores! (Some types of hardware might be safer, but remember we are talking
about just a Turing Tape or a Chinese Room.)  
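The self-modification worry above rests on a real property of stored-program machines: when instructions and data share one store, running code can rewrite its own instructions just by writing data. A minimal toy (my construction; the instruction set and cell layout are invented for illustration) shows the mechanism:

```python
# Toy sketch: program and data share a single store, so an ordinary
# "write" instruction can overwrite a later instruction before it runs.
# Each store cell holds an (op, arg) pair.
def execute(store, max_steps=10):
    pc = 0  # program counter
    for _ in range(max_steps):
        op, arg = store[pc]
        if op == "HALT":
            break
        if op == "SET":              # write a new instruction into the store
            target, new_instr = arg
            store[target] = new_instr
        pc += 1
    return store

# A program that rewrites its own third instruction before reaching it:
store = [
    ("NOP", None),
    ("SET", (2, ("HALT", None))),    # overwrite cell 2 with HALT
    ("NOP", None),                   # replaced before it ever executes
]
execute(store)
```

Whether simulated people could actually exploit such access through the program's safeguards is, of course, the open question; the sketch only shows that the shared-store architecture makes the attack surface exist at all.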

I wonder if this will shake anybody's confidence in the more extreme versions
of the information paradigm?

Robert Ettinger
