X-Message-Number: 3212.2
Date: 04 Oct 94 17:37:59 EDT
From: John de Rivaz <>
To: "Kevin Q. Brown" <>
Subject: Brain Backup Report part 2

The schizo-world contains an infinite number of dimensions; if a distance
is defined in this world, it forms a Hilbert space similar to the one of
quantum mechanics. Yes, the Martian astronef was only a topological defect
(a particle) in a quantum field! Next time ask for fewer dimensions and
you'll be in a less schizophrenic domain.

All of that takes place in a single processor; there may also be
information links between processors all over the world. This is another
kind of travel. Some processor packs could be put in inhospitable domains:
on the ocean bottom, in space, on another world (for example in the Jovian
atmosphere), or on a comet-like asteroid at the limit of the solar
system... This could be the most cost-effective and efficient way to
colonise outer space. Space travel from one world to another would always
be done at the speed of light, regardless of whether "world" refers to a
bounded uploaded space or a planetary object in infinite Euclidean
three-dimensional space.

Even interstellar travel could be envisioned with that technology. This is
definitely not a second-class way of life, and it deserves more attention
than it has got up to now. Then come the possibilities with heavy
emotional and philosophical charges:

An artificial brain can be built by merging many brains, the ultimate in
the matter of Communism. A brain can be built by taking selected parts
from many different brains; do you want to sell your soul? A brain may be
built from parts coming from different species; have you a chimeric
friend? A super brain can be built from many replicas of an original one;
do you know the guy with ten billion ant brains? Well, a brain can be
copied into many duplicates; have you met my 3620th clone? All of them may
be independent individuals, or merge at some time, or continually exchange
information on a more or less restricted basis. How many lives are you
living?

The sci-fi end?

The last side of the subject is the effect of the virtual world on the
Earth domain. There are so many possibilities in the uploaded domain that
most information processing must slip into it. Outside the primary
production tasks, everything in the civilisation would eventually move to
the new domain. Yesterday, the stock exchange worked on a scale of hours;
today, it runs at the speed of computers; when there is a way to control
it from the uploaded world, any operator there will be able to react one
million times faster than any human in the Earth domain. Any profit will
then be reserved for "uploaders".


Because MRI scanning is not a destructive process, when the technology
becomes available it will turn into a commodity of life. Many copies would
both act as a backup in case of accident and make a living in domains
outside the reach of any Earth-space being. What is the professional
activity of your guardian angel?

If brain reading has a reverse function, a biological brain could benefit
from the computing power of the uploaded world. It could gain the memory
and mathematical capabilities of computers in the same way as an uploaded
personality. The question is then: what is the use of a biological brain?
The answer could be none, beyond the biological command activity. The best
species is then no longer a mammal but something like a dinosaur, a body
with minimal brain function. One body could be rented for some time and
then passed to another consciousness. How much for one month of Earth life
in a dragon zombie? Mammals are too brainy to accept that kind of
treatment, but they could have their own upload/computer system. Yes, my
cat got its fifth doctorate yesterday. Some people put their pet on ice
without thinking about what they are really doing. If you do that with
your parrot or your dog, don't be too unjust with it today; it could
recall that a century or so from now! How did you recover from cryonic
suspension? Who paid for your recovery? No problem, my three cats paid for
me.

Well, all of that could be the basis of a set of sci-fi novels, and nobody
can tell if there will be anything like it one or two centuries from now.
The main purpose here is to give a glimpse of a civilisation far stranger
than anything we are accustomed to expect. The reality may be stranger
still, probably more rather than less.


THE USE OF UPLOADING.

The biological link.

Devising a biological link between computers (with or without neural nets)
and the brain looks promising for many technologies, uploading among
them. The first application would be to link an electronic prosthesis to
the central nervous system. High on the list is the artificial eye with an
electronic retina. The medical use for blind people is plain. On a longer
time scale, the same device could give anybody a visual link with the
information world.

If there were such a link, computers could be used to supplement the
memory or anything outside mere intelligent activity (if that can be
defined). A link between brains would go well beyond a technological
materialisation of telepathy: it would give the first possibility of
communicating with other species from "inside". The social implications
are certainly far wider reaching than giving sight to some blind people.

The capability to work continuously with a computer-brain link would blur
the location of the personality; part of it could spill over onto the
computer. The computer could even be programmed to accumulate as much
information as possible from the brain and simulate its function. This
would be a continuous uploading process, a possibility near the one
envisioned by M. V. Soloviov in 1990. In a society where that technology
is a common tool, the idea of full uploading would certainly be better
received than it is today.

Even without a direct brain link, the information gathering could start
with a camera taking, for example, one or two frames per minute. That
system, with the necessary memory and (fractal?) data compression, could
fit into a package no bigger than a matchbox. A continuous record of all
life experience would make it possible to rebuild a large part of the
personality, or personalities if we admit there is a continuous
personality shift. New Scientist has published some papers on such work in
progress.
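A back-of-envelope estimate suggests the matchbox claim is not absurd. The
frame rate comes from the text above; the per-frame compressed size and
the recording span are my own illustrative assumptions, not figures from
the original:

```python
# Rough storage estimate for a lifelong two-frames-per-minute camera record.
# Assumed values (not from the original text): 5 KB per compressed frame,
# an 80-year recording span, and round-the-clock capture.
frames_per_minute = 2
frames_per_day = frames_per_minute * 60 * 24          # 2880 frames per day
total_frames = frames_per_day * 365 * 80              # ~84 million frames
kb_per_frame = 5                                      # assumed compressed size
total_gb = total_frames * kb_per_frame / 1_000_000    # ~420 GB

print(round(total_gb))
```

A few hundred gigabytes of solid-state storage already fits in far less
than a matchbox today, so the real difficulty is the compression and
indexing, not the raw capacity.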

After these first examples of brain supplementation by an external memory
would come the true brain link. On an experimental basis, or for medical
use, a surgical approach seems the best possibility. The main component
could be a comb, each of whose teeth would look like a very small
hypodermic needle. At one end, some neuron dendrites would pick up an
electric signal; at the other, a small pocket would retain some cells
producing a chemical attractor for neuron connections. The needles could
be made from metallised glass; an electronic transducer would receive,
decode and distribute the incoming information over the needle array.
Energy could come from an induction coil. Put in contact with a brain
area, this system would quickly establish links with the nearby neurons.

On the technology side, pockets of secretory cells have been studied
extensively as cures for some illnesses. The making of microneedles may
seem difficult. It is not: the author has made some of them as part of a
biochemical training course, and with the right glass it is really not
difficult. In fact, every element seems at hand to make such a device with
some hundreds of electrodes. Sight restoration of good quality may call
for more, but no technology can start fully matured.

A more evolved system would use micrometre-scale receptors with a
molecular coating, so that each would end up glued to a specific kind of
neuron. They would work as local antennas for an external signal source.
Different receptors would react to different signals, so the broadcasting
would need nothing like cell-diameter precision beaming. That
communication system could be implanted without any surgery.

Today, uploading is largely rejected because it stands as the ultimate
tool for breaking privacy. A system able to read a brain can access
anything in it; this is a kind of technological doomsday. For the general
population living today, there is little hope of seeing the results of
uploading development; at the moment it is a totally abstract subject. For
the coming generations, uploading will permeate slowly into the social
system and there will be no shock:

First will come a monitor system for supplementing the memory (do you
recall, minute by minute, your holidays ten years ago?.. And the training
to repair the car?... The surgery course?...). With this added memory,
anybody could turn into an expert in many practical fields. No doubt there
would be a market for this product.

Why only record information? Why not work it into an artificial
personality, continuously checking its behaviour against that of the
original person? This shadow personality could find many applications.

From here, the idea of truly uploading a brain's contents onto a computer
system should pose no psychological problem.

There is nevertheless a population fringe where problems may, and even
must, arise: the cryonics community. There, 20th-century persons are
projected in a second from our world into a new one after, say, a cardiac
arrest or a car accident. Therein lies perhaps the main interest of this
kind of paper: to give a taste of what a world with fully developed
uploading technology could be. The schizophrenic world of many dimensions
and its strange properties could be only some years away, no more than our
life expectancy.

One possible start towards creating the basis of the uploaded world would
be to develop neural network technology. That cannot be done with an
exclusive eye on far-reaching applications; the ideal approach would seek
a near-term application. For example, highly parallel supercomputers are
cheap for the power they deliver. On the bad side, ordinary software does
not run on them, and rewriting all of it for the new computers is a
gigantic task. What if each part of an application was not rewritten, but
simply simulated with a small neural net? An application would then look
like a pile of neural nets, and the whole process of cutting, pasting and
simulating each part could be automated. Such a translator programme would
be both very useful and a good money-maker in the short term. On a longer
basis, it would promote the use of supercomputers and the implementation
of neural nets on them.
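The translator idea can be sketched in miniature. Everything below is an
illustrative toy of my own, not a design from the text: the "program
component" is a trivial boolean routine, and a single perceptron is
trained on its exhaustive input/output behaviour until it reproduces it.

```python
def component(a, b):
    # The original program fragment to be "simulated" by a small net.
    return a & b

# Training set: exhaustive input/output pairs sampled from the component.
samples = [((a, b), component(a, b)) for a in (0, 1) for b in (0, 1)]

# A single perceptron: two weights, a bias, and a threshold activation.
w = [0.0, 0.0]
bias = 0.0

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + bias
    return 1 if s > 0 else 0

# Perceptron learning rule; AND is linearly separable, so this converges.
for _ in range(20):
    for x, target in samples:
        err = target - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        bias += 0.1 * err

# The trained net now reproduces the component on every input.
assert all(predict(x) == y for x, y in samples)
```

A real translator would face components with internal state and far larger
input spaces; the point of the sketch is only that "cut out a piece,
sample its behaviour, fit a net to it" is mechanisable.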

Artificial Metempsychosis: A Search for a Methodology for Personality Simulation

by Dr Michael V. Soloviov
St Petersburg, Russia, 1990

Abstract

A new conceptual approach (called Artificial Metempsychosis) to
investigations carried out on the border between psychology and artificial
intelligence is introduced. The kernel of this approach is the transfer of
the particular features of a concrete personality into a computer. The
methodology for this approach is discussed.

1. Basic conceptions

Artificial Metempsychosis (AM) is a conception similar to Artificial
Intelligence (AI). Both AM and AI explore human intelligence in order to
incarnate its essential features in the machine. The main difference
between them is that AI deals with the common features of intelligence,
while AM deals with the particular ones. It is important to try to explore
precisely the particular features because:
(1) there are no appreciable results from AI studies towards
    understanding and realization of human intelligence;
(2) AM realizes an approach which is complementary to AI, an attempt
    to resolve the problem from the opposite end: understanding how a
    concrete person solves a concrete task could sooner lead to an
    understanding of the essence of intelligence than attempting to recover
    common principles of task-solving by humans.

The author assumes that in the exploration of the particular in human
intelligence the main research method has to be computer simulation with
correction feedback from the person being tested, because:
  (1) it is very probable that there is no way besides simulation to
      predict the behaviour of a system as complex as a personality;
  (2) simulation gives us the best understanding of a problem: in
      simulation we are forced to detail (analyse) a problem (system), so
      we can better understand all its elements (which is simpler than
      understanding the whole system at once), and then we must
      reconstruct (synthesise) the system, which gives us a better
      understanding of its organization;
  (3) simulation allows us to use the computer as a mirror for personality
      reflection, with means to correct this reflection.
Before simulation starts it is necessary to create a model and, before
that, to get data about personality structure from a tested person. In
summary, AM investigations should consist of three stages:
      (1) getting data about personality structure;
      (2) creation of a computer model of the personality;
      (3) correction of the model by feedback from the simulated person.
Note that it is essential for AM to simulate the integral personality, as
opposed to AI models, which usually deal with separate features of
personality. But because of insufficient development in computer
technology and psychological investigation, AM does not take the
personality intact, but with partial senso-motor deprivation, called the
"reduced personality". The possible levels of personality reduction are
shown in the table below.
The possible levels of personality reduction are shown in the table below.

Level   Accessible experience     Possible realization
0       Introspective             Personal Super Computer
1       + Speech                  + Speech Processing Device
2       + Visual                  + Image Processing Device
3       + Manipulative            + Manipulator
4       + Motive                  + Mobility
5       All human experience      Mobile Integral Robot

Level 0 is called "minimal personality".

From the above it is possible to point out that AI approaches lie within
the bounds of the so-called "computer metaphor". Extracts about the
computer metaphor and its connection with introspection are cited below to
clarify the AI approaches.

"Computer metaphor: it is the analogy between cognitive processes and
information processing in a universal computing machine." (Velichkovsky,
Kapitsa, 1987). "That which this metaphor has to reflect, the central
nervous system and its functions, is a biological large system. That which
this metaphor uses as an image, the computer, computational processes and
data bases, is a technogenic large system. Metaphorization in itself is a
dynamical, creative and long-term activity of comparing, fitting and
estimating our ramified and precise knowledge about technological
information processing against biological processes. Any intermediate
result of this activity may be (and as a rule will be) refused. But during
this activity new knowledge is slowly crystallizing." "The ideology of
this short essay is linking introspection and the computer metaphor.
Introspection is our sole direct evidence about the only consciousness
available to us, our own; the computer metaphor is the sole source of
really complex and dynamic conceptual models of the psyche which are
capable of self-development and, in the future, of comparison with
neurophysiological data. Using this analogy we do not try to answer the
question "what is consciousness?", but only try to recover those of its
features which are essential." (Manin, 1987).

2. Acquiring data about personality

It is possible to get data about personality in three main ways: (1)
introspection; (2) self-description; (3) personality and intelligence
testing. The third way can be wholly standard (using such tests as MMPI,
IQ, etc.); the first and second ways present some problems, discussed
briefly below.

2.1. Introspection

2.1.1. Uncertainty principle

Introspection should mainly be used to recover "physical" (e.g. size) and
semantic (e.g. time sequence) characteristics of visual and verbal
patterns developing in a person's brain as a result of mental processes,
in order to understand the mechanisms underlying them. Usually
psychotechnics is used to investigate personality by introspection. But it
is not the best way, because it requires teaching the self-tested persons
and hence changes their thinking. In the general case,
(1) any attempt to concentrate on your internal world considerably
    distorts the development of its processes, and
(2) while concentrating on one selected internal-world process you miss
    the others; this resembles the uncertainty principle in physics.

The author's preliminary experiments were carried out in the following
way: concentrating (with closed eyes and in silence) on his own internal
world for a few minutes, followed by reconstruction of the remembered
internal events (consisting of visual patterns and word chains). For more
precise recovery of internal-world processes, many such introspective
sessions should be carried out over a long period of time, so that ever
newer features will be differentiated.

Also, to recover introspective experience it is possible to use
questionnaires (Gostev, 1986) and pharmacological drugs (Spivak, 1986).
But of course the uncertainty principle is also valid there.

2.1.2. Gestalt-logic dichotomy

This dichotomy begins with perception. In the theory of image recognition
(Gleser, 1985) a visual image is analysed and recognized structurally
(logically) and statistically (holistically). Another dichotomy is
vision-hearing (hearing as speech perception and analysis). Furthermore,
thinking is divided into the imaginal and the logical. "Explanation" of
thinking to the consciousness (i.e. introspective experience, which arose
from the necessity of explaining oneself to other individuals (Piaget,
1950)) also consists of verbal and visual components. One can therefore
see the dichotomy at all levels of human information processing:
perception - thinking (subconsciousness) - explanation (consciousness,
introspective experience). Here the author holds the hypothesis that all
processes of thinking are located in the subconsciousness, that
consciousness is only a result of the working of the explanation
mechanism, and that the feedback influence of the consciousness on the
subconsciousness is an illusion of introspective experience.

On the other hand, parapsychological experiments (thought reading)
demonstrate that a thought could be read regardless of its representation
(visual or verbal) and of the language used. This means that the deep
mechanisms of knowledge representation and processing in the human psyche
are essentially homogeneous; hence, as a structure for knowledge
representation in a computer model it is worth using homogeneous
structures, e.g. semantic networks. In semantic networks the gestalt-logic
dichotomy could be reflected in some kind of conjunctive connections. In
the author's opinion this dichotomy reflects only a difference in genesis
(speech/vision perception, logical/associative thinking) of identical
concepts (i.e. it is the homologous concepts that are to be conjugated).

Another open question is the discreteness of the consciousness flow.
Perception is essentially discrete, but could the internal world flow be
divided into separate "shots"? Related to this, the next computer metaphor
could be of interest: gestalt thinking is an analogue process, and logical
thinking is a discrete, "digital" process. Using this metaphor, Nalimov's
hypothesis (Nalimov, 1974) about continual consciousness flows (the
thinking of an individual person being part of such united flows) and
about their translation by logical (verbal, discrete) thinking could be
interpreted in the following way: the structure of brain processes (e.g.
the dynamics of neural electromagnetic fields) is isomorphic to the
structure of matter at its deep levels (e.g. superstrings), and because of
this the brain processes, at low energy expenditure, can change this deep
structure of matter (a phenomenon that might be called psychic catalysis,
by analogy with biochemical catalysis). In such a way various
paraphenomena could be explained. In other words, an analogue part of the
brain supercomputer is used for interaction (perception, processing,
generation) with the continual consciousness flows, and its discrete part
(the two may be structurally identical: the same structure or process can
take part in both analogue and discrete computations) is used for the
interpretation of these processes, for supporting communication between
persons, for providing processes which have to be independent of the
continual consciousness flows, and for processing information represented
in discrete form.

2.2. Self-description

Below several concepts are introduced:

(1)  personality description: information using which it is possible to
recreate the personality; this information can be divided into two
components: structural or passive (memory of experience) and functional
or active (mechanisms, based on perception and memory, that organize the
personality's behaviour);
(2)  personality reconstruction: recreating a personality using its
description;
(3)  knowledge about personality (in the sense of knowledge
representation): the ordered passive component of the personality
description; information about a personality could be rendered in an
amorphous representation (e.g. an autobiography) or in a predefined scheme
(based on a hypothesis of memory organization), as filling slots in a
frame or answering a questionnaire;
(4)  inference mechanism: an active component of the personality
description; an inference mechanism is strongly dependent upon memory
organization;
(5)  personality verification: evaluation of the correspondence between a
reconstructed personality and its prototype.
(The following thought experiments could be taken as evidence of the high
dynamics of personality and the fuzziness of its borders: "Try to evaluate
how similar you are to the person you were 5, 10, 15, 20 years ago.
Greater than 50%? What will remain of your present self after 10, 20
years? What would happen if you were duplicated and two identical copies
were placed in different environments? After some time, would these copies
be quite different personalities or very similar ones? Could you remain
yourself (keeping your self) if you lost, e.g. in an accident, half (75%,
90%) of your memory, or your motor skills? Where is the border separating
self from not-self?" These experiments illustrate the hypothesis that
personality is defined by some kernel (regions of memory, individual
features of inference mechanisms), while personality parts outside this
kernel can vary greatly; for personality reconstruction it is necessary
to render this kernel correctly, and one of the AM goals is an attempt to
recover this kernel.)
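As a purely hypothetical sketch (the class, the names and the crude
similarity measure are mine, not Soloviov's), concepts (1), (2) and (5)
above might be rendered as:

```python
# Hypothetical sketch: a personality description with a passive component
# (memory of experience) and an active one (behaviour-organizing rules),
# plus reconstruction and a deliberately crude verification measure.

class PersonalityDescription:
    def __init__(self, memories, rules):
        self.memories = set(memories)   # structural / passive component
        self.rules = dict(rules)        # functional / active component

def reconstruct(description):
    # (2) recreate a personality from its description; here a trivial copy.
    return PersonalityDescription(description.memories, description.rules)

def verify(reconstructed, prototype):
    # (5) correspondence between reconstruction and prototype, measured
    # here only as the overlap of their memories (Jaccard similarity).
    shared = reconstructed.memories & prototype.memories
    union = reconstructed.memories | prototype.memories
    return len(shared) / len(union) if union else 1.0

original = PersonalityDescription({"snow at 10", "bicycle"}, {"greet": "hello"})
copy = reconstruct(original)
```

A faithful copy verifies perfectly; a copy missing part of the memory
scores lower, which is the kind of gradation the "kernel" hypothesis needs.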

To the author's mind, a personality description could be obtained in an
immediate way: reading it directly from the brain using methods of
biocontrol, thermovision or tomography (Ivanov-Muromskiy, 1983), or future
achievements in nanotechnology; but for today that is rather fantastic.
Another way is to get this information indirectly, for example by methods
based on neuropharmacology or parapsychology, methods based on integral
aura registration at the moment of death, or the transmission of sacred
texts in Indian culture by personality transfer from teacher to student
(Sementsov, 1988). Here, indirect, psychology-based methods are proposed,
using a combination of the following approaches:

(1)  amorphous: record information about yourself by non-structural
methods, for example a diary, an autobiographical novel or a film;
(2)  structural: self-description by filling in special forms;
(3)  test: testing by test batteries (questionnaires) to recover the
personality structure and inference mechanisms;
(4)  introspective: reconstruction of the personality structure based on
descriptions of introspective experience;
(5)  simulation: updating information about the personality by simulation
with feedback from the tested person.

3. Computer model of personality

3.1. Personality conceptualization

It is possible to represent a personality as consisting of two components:
a personality structure and the inference mechanisms working over it. The
personality structure could be described by a semantic net. There would be
the following inference mechanisms:

(1)  simple mechanisms for working with a large memory (specimen search,
     association etc.) to realize memory-based reasoning (Waltz, 1987);
(2)  production mechanisms to realize heuristic reasoning;
(3)  mechanisms to realize analogy-based reasoning (Waltz, 1987; Lenat, 1984);
(4)  mechanisms to realize simulation-based reasoning;
(5)  metamechanisms to control the concurrent work of the other mechanisms.

In addition to the long-term memory (the semantic net) there should be a
short-term or working memory.
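A minimal sketch of the semantic-net memory with the first of these
mechanisms, association search by spreading activation, may make the idea
concrete (the class and method names are my own illustrative choices):

```python
# Hypothetical sketch of a semantic net (long-term memory) with one simple
# inference mechanism: association search by spreading activation outward
# from a starting concept.
from collections import defaultdict, deque

class SemanticNet:
    def __init__(self):
        self.edges = defaultdict(set)   # concept -> set of related concepts

    def relate(self, a, b):
        # Store an undirected association between two concepts.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def associate(self, start, depth):
        # Breadth-first spread of activation up to `depth` links away.
        seen = {start}
        frontier = deque([(start, 0)])
        while frontier:
            node, d = frontier.popleft()
            if d == depth:
                continue
            for nxt in self.edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
        return seen - {start}

net = SemanticNet()
net.relate("snow", "winter")
net.relate("winter", "holidays")
net.relate("holidays", "sea")
```

Widening the activation depth retrieves progressively more remote
associations, which is one way a "specimen search" over a large memory
could be graded.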

3.2. A model for neuronet computation

First of all, a model for neuronet computation should reflect the basic
known facts about neocortex organization:

(1)  the neocortex consists of 10,000-100,000 modules connected "each with
each" (the number of channels connecting the Nth module with the Mth
varies; it is defined by the commutation channel scheme for neocortex
modules);
(2)  each module is an elementary processing unit with 1000-10,000 inputs
and outputs and consists of up to a million nodes;
(3)  each node gets 2 inputs from other nodes (which ones is defined by
the commutation scheme for module nodes), possesses a small piece of
memory, and performs a number of simple operations: logical, arithmetical,
table transformations, memory read/write.
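Point (3) can be sketched as a toy node type; the operation names and the
interface below are my own assumptions, meant only to make the description
concrete:

```python
# Hypothetical sketch of a single node as described in point (3): two
# inputs, a small piece of local memory, and a few simple operations
# (logical, arithmetical, table transformation, memory read/write).

class Node:
    def __init__(self, op, table=None):
        self.op = op              # which simple operation this node performs
        self.memory = 0           # the node's small piece of memory
        self.table = table or {}  # optional lookup table for transformations

    def step(self, a, b):
        if self.op == "and":      # logical operation
            return int(bool(a) and bool(b))
        if self.op == "add":      # arithmetical operation
            return a + b
        if self.op == "table":    # table transformation
            return self.table.get((a, b), 0)
        if self.op == "store":    # memory write; pass first input through
            self.memory = b
            return a
        if self.op == "recall":   # memory read combined with an input
            return self.memory + a
        raise ValueError(self.op)

adder = Node("add")
latch = Node("store")
```

A module would then be a wiring of up to a million such nodes, with the
commutation scheme deciding which two outputs feed each node.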

A model should also allow the embedding of the semantic net into the
neural net and realize the mechanism of knowledge activation. In addition,
of course, a model should allow the realization of concepts of brain
functioning at higher levels.

This model could properly be realized by the digital (or combined
digital-analog) optical processors currently being designed at many
laboratories around the world.

4. Conceptual scheme for feedback simulation of minimal personality

Firstly, it is necessary to create a model of the minimal personality (the
prototype model). The creation process should include the following
stages:
(1)  generation of a hypothesis about personality structure and inference
mechanisms;
(2)  creation of a computer model for neural net computation;
(3)  computer realization of hypothesis (1) on model (2), and the design
of a proper user interface.
Secondly, computer programs for the recovery of personality structure and
individual features of inference mechanisms by various tests should be
worked out; the work of such programs will result in filling the prototype
model with the contents of a concrete personality (generation of the
animated model).

And thirdly, the feedback simulation system for on-line correction of the
animated model by the tested person, and experimenter-mediated correction
of the prototype model, should be created.

References

Gleser V.D. Vision and thinking. Nauka, Leningrad, 1985 (in Russian)
Gostev A.A. Individual features of mental images: results, problems and
  perspectives. In: Cognitive psychology. Nauka, Moscow, 1986, p.121-131
  (in Russian)
Ivanov-Muromsky K.A. Neuroelectronics, brain, organism. Naukova Dumka,
  Kiev, 1983 (in Russian)
Lenat D.B., Brown J.S. Why AM and EURISKO appear to work. Artificial
  Intelligence, 1984, vol.23, p.269-294
Manin Yu. I. To the problem of early stages of speech and consciousness
  (phylogenesis). In: Intellectual processes and simulation of them. Nauka,
  Moscow, 1987, p.154-178 (in Russian)
Nalimov V.V. The probabilistic model of language. Nauka, Moscow, 1974 (in
  Russian)
Piaget J. The psychology of intelligence. Routledge-Paul, London, 1950
Sementsov V.S. The problem of traditional culture translation in the
  example of the Bhagavadgita. In: East-West. Researches. Translations.
  Publications. Nauka, Moscow, 1988, p.5-32 (in Russian)
Spivak D.L. The linguistics of altered states of consciousness. Nauka,
  Leningrad, 1986 (in Russian)
Velichkovsky B.M., Kapitsa M.S. Psychological problems of intelligence
  investigation. In: Intellectual processes and simulation of them. Nauka,
  Moscow, 1987, p.120-141 (in Russian)
Waltz D.L. Applications of the Connection Machine. Computer, 1987, vol.20,
  p.85-97

Metamorphosis
An Alternative To Uploading

by Thomas Donaldson
reprinted from Cryonics May 1990 by kind permission of the author

This article presents some thoughts based on the growing, but still
incomplete, understanding of human thinking now being developed by
neuroscientists. It's all tentative. What I aim to do is to focus on the
experimental issues involved in this question. The answers seem to me to
move slowly toward the statement of the title; but after all, everything
has turned around more than once, and we won't really know until the game
is over.

Yet the notion of uploading incorporates a complete metaphor about how we
think, remember, and exist. The idea is that we are (very complex)
computer programs, running in more or less identical machines. This is not
an unreasonable idea, and it's had a lot of use. And in fact it would
imply that we can take this program and run it on more powerful machines.

Yet even scrutinizing the computer program metaphor, any honest hacker
would raise problems with the easy porting of an arbitrary program. We
can't, after all, simply take the very same Macintosh program in 68020
code, load it onto a DOS 3.3 80386 machine, and expect it to run.

It doesn't even follow that programs written for 80386's will run on every
machine using that chip. That means we can't move it without changing it.
If we try to move it to more exotic computers, say from an 80386 over to
an Ncube, the needed change becomes far more violent. Any honest hacker
would wonder if it remained the "same" program in any useful sense at all.

In some of these exotic cases, porting isn't even a serious problem. Some
of them use (though very differently) the same kinds of chips we have in
our own computers. They may even run special versions of UNIX. Fine, so we
can move the program. But then we smash into a second issue: so near and
yet so far. Sure, we can run the program on this machine. It doesn't run
any better than it did before, though, because it's quite incapable of
using ANY of the extra power. (Apple people, by which I mean not Mac but
Apple, see this every day. An Apple IIGS will run every Apple II+ program
ever written. If you were a II+ program wanting to see the world in
high-resolution colour, this would be cold comfort. I'm sure the Mac world
sees the same problem.)

Even this consequence of the analogy should tell us something important.
Our minds are adapted to run in one particular computer, with a particular
speed and peripherals. It's not enough just to make it run in another
computer; it might even fail to work if we simply increase the speed.
(Game programs give a simple common example). If programs (or minds) are
ported, they often have to go through extensive changes. The more
resources available to the target computer, the more changes needed. 

Many people in cryonics, and (if you allow longer time spans) even myself,
think one kind of technology or another will someday let us achieve things
people only dream of now. Yet we do and will learn that some things are
impossible: just like a technological optimist of 1790, firmly convinced
that someday everyone will buy and use bottled phlogiston. Uploading may
very well end up like phlogiston. 

So far we have accepted the program metaphor. And someone could always
say: well, what about upgraded versions of programs? Isn't it reasonable
to say that they are developments from the original parent, at least as
identical as you are now to the you of ten years ago?

Yet in some very important senses we may not be programs at all. One
fascinating fact about brains is that they change, all the time.

Neuroscientists have examined individual neurons in living (animal)
brains, and seen their dendrites and axon move about within the brain.
Some major genes activated with adult learning are those activated during
growth and development. One major question about memory, still unanswered
(basically because we just haven't worked up to it yet) is that of really
long term memory. Forget LTP (long-term potentiation) for a moment. LTP,
involving chemical changes to synapses, with structural changes following
on closely as a consequence, very likely does encode memory; the question
comes from the obvious fact that we have no reason to think that these
changes will last for more than a few months at most. How is it, then, that
I can still
remember playing in the snow at age 10? Or again, in terms of skills, I
haven't ridden a bicycle for three years but have no doubt that I could
ride one immediately if I wanted.

The implication (I don't want to say this is fact because it hasn't
reached that status and may never) is that our learning itself is a kind
of development, continuous with what we went through as children. That is,
something grows and changes. That would mean that at some level what we
learn affects our brain anatomy. It is because of this effect that the
memory stays with us so long. This would mean, of course, that we would
all differ from one another quite significantly in our wiring, looked at
closely enough. I could not think your thoughts because I am only hooked
up to think my own thoughts. 

If learning and processing change our actual anatomy, and our anatomy
affects how we respond to learning and processing, the fundamental idea of
a program vanishes like an ancient genie. The fundamental idea is that the
program is separable from the machine on which it runs. We have a
computer, and then on this computer there is a program, which could
certainly run just as well on another computer of the same kind. Suppose
though that the program itself, from the moment it began, started rewiring
(and changing chips on!) the computer on which it was running, in response
to its input data so that it would work better and better on the incoming
data. Each such system would very soon become quite incompatible with the
others, even if they had begun as twins in infancy.
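
A toy sketch may make this thought experiment concrete (every name and
number here is invented for illustration): two "machines" begin as exact
twins, but because each input mutates their internal wiring, different
experience leaves them structurally incompatible.

```python
class SelfRewiringMachine:
    """Toy model of a program that rewires the machine it runs on.

    The 'wiring' table stands in for hardware structure; every input
    creates or strengthens a pathway, so the program and the machine
    co-evolve and can no longer be cleanly separated.
    """

    def __init__(self):
        self.wiring = {}  # every 'twin' starts with identical wiring

    def process(self, symbol):
        # Structural change, not just stored data: the pathway for
        # this symbol is created or strengthened on each use.
        gain = self.wiring.get(symbol, 1.0)
        self.wiring[symbol] = gain * 1.1
        return gain

# Two machines begin as exact twins...
a, b = SelfRewiringMachine(), SelfRewiringMachine()

# ...but different experience rewires them differently.
for s in "abracadabra":
    a.process(s)
for s in "xylophones":
    b.process(s)

# Their internal structure has diverged: state dumped from one no
# longer matches the wiring of the other.
print(set(a.wiring) == set(b.wiring))  # False
```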

Some computer languages, such as LISP, don't enforce a strong distinction
between the program and the data, so computers are writing self-modifying
programs right now. It's not even surprising that such programs can start
to show a glimmer of intelligence, even if only a faint flicker. Yet
programs that physically rewire the computers in which they are running
take this self-modification off into another dimension. The suggestion
(still only a suggestion) about brains is that this is the way they work.
And brains, with their biological circuitry, certainly show us a kind of
machine which could very well work this way. That is, the hardware to build such a
computer certainly exists, even if it turns out after all that these
capacities aren't used in our brains.
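
Python, like LISP in this respect, lets a running program treat its own
instructions as ordinary data; a minimal, purely illustrative sketch:

```python
# The 'program' here is an ordinary data structure, so the interpreter
# running it can inspect and rewrite it mid-execution -- the weak
# program/data distinction the text attributes to languages like LISP.

program = [
    ("double", lambda x: x * 2),
    ("inc", lambda x: x + 1),
]

def run(program, value):
    for step, (name, op) in enumerate(list(program)):
        value = op(value)
        # Self-modification: after its first use, the program rewrites
        # its own 'double' step into a 'square' step.
        if name == "double":
            program[step] = ("square", lambda x: x * x)
    return value

print(run(program, 3))  # first pass: double, then inc -> 7
print(run(program, 3))  # after rewriting itself: square, then inc -> 10
```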

Uploading ourselves into another more powerful computer assumes, just like
the idea of a program itself, that we are separable from our brains. If we
use this rewiring in any essential way, conceivably even if we only use it
in core areas, any simple ideas about uploading find themselves in severe
trouble. You are your brain, you're not just a program running on your
brain.

I don't want any mistakes here. It remains clearly possible, and someday
it must even become easy, to store the complete structural information
about a brain in a computer. The issue in uploading is not storage, but
that of somehow making a functioning, real version in that computer. Storage
encoded in some kind of media, in multiple copies, will someday become an
ultimate form for cryonic suspension.

I would like to spend the rest of this article raising some ideas about
how we can respond to this. For after all, when somebody wants to "upload,"
they have aims in mind that uploading seems to them a way to achieve. I do
not intend this article at all to argue against these aims, which I share
myself. My arguments so far only mean to raise problems with some methods
proposed. Certainly it is right and proper to want to grow, dealing
mentally and physically with more and more of the world, and with deeper
and deeper understanding. 


Please understand: we come from a long evolution, which has pressed us to
optimize ourselves for our current way of life (I don't mean Palaeolithic,
I mean now. Evolution didn't stop when we became human; bone shapes
and strengths have changed between Palaeolithic men and ourselves). The
same evolution that acted on Homo erectus acts on us now. Evolution (and
economics) will both apply to immortal superbeings. And this evolution
works regardless of the origin of the changes on which it acts. But it is
NOT static. We don't live now even as people did 200 years ago. (We don't
die as soon, among other differences!). One way to see immortalism itself
is that we are trying to use technology to hasten our adaptation to the
new way of life we've already adopted.

How could we do this? One way might come from using ideas from
nanotechnology to allow us to expand the number of processors in our
brains. The idea, of course, would be to miniaturize the processing and
wiring still more, possibly to allow multiplication of neurons too. The
process would move by an extension of existing forms of growth and
development.

The advantage of miniaturization is that we can remain mobile in more or
less our present form. Clearly, though, the amount of brain power we can
keep inside our skulls is limited. But we don't have to keep our brains
inside our skulls! Nothing keeps us from having peripherals. Just as our
eyes and ears are I/O ports, we might develop other kinds of I/O ports:
special senses to link to the pieces of our different brains. Perhaps
we'll migrate into these peripheral brains, with bodies like our own
turning into the peripherals. Perhaps not. Someday we will know how far
that may go and what kind of creatures we've become. And I propose an
alternative to uploading: metamorphosis.

Comment by Robert Ettinger:
(reprinted from The Immortalist, June 1990, by kind permission of the
author)

An article by Thomas Donaldson in the May, 1990 issue of Cryonics (organ
of Alcor) deals in an interesting way with certain aspects of the
"information paradigm" -- the idea that everything important about us can
be represented as a store of information, suggesting (among other things)
that in principle we could be "uploaded" into almost any kind of computer,
and that running the appropriate program with the right data would
constitute new or continued life for a person.

One of the interesting things about Dr. Donaldson's comments is the
source: he is not only a long-time cryonicist, but a professional
mathematician presently working on advanced computer software, and one who
has studied the types of computers and programs believed most nearly to
approximate some aspects of human thinking. In addition, as our readers
know, he faces the possibility of relatively early cryostasis because of a
malignant brain tumour, and has a personal as well as academic interest in
the nature and survivability of the self.

In short, he does not buy the information paradigm.

What I want to try to do today is briefly indicate his reasons, as best I
understand them, and inquire whether they ought to impress the uploaders.

His main point seems to be that a person cannot be neatly divided into two
parts, "hardware" and "software"; that the "software" cannot be neatly
divided into a program store and a data store; and that, even when this
can be done (with ordinary, present-day systems), an old program will not
necessarily work on a new computer. The program must fit the computer, and
vice-versa.

It isn't just that, in some systems, the program is hard-wired into the
computer. It isn't just that some programs can modify themselves (and
their data stores) by feedback. In living systems, the "program" can
modify not only itself but also the rest of the hardware! Therefore -- for
example -- it is not obvious that a human mind could run at all in an
electronic computer, let alone at electronic speeds.

Now, what will the uploaders respond?

Their first reaction will be that there is only a language difficulty. No
matter how you label the parts and functions, it is still possible, in
principle, to understand the functioning of a brain and to describe this
functioning in complete detail -- if necessary by describing/predicting
every state of every particle and field in the system, under all
conditions of interaction with the environment. Then you can make
transitions from Computer State A to Computer State B, corresponding to
the transitions from Brain State A to Brain State B, by appropriate
manipulation of the symbols.
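
The correspondence claimed in this response can be written down as a
commuting diagram; a toy version, with invented state labels (nothing
here is neuroscience):

```python
# Toy version of the uploaders' claim: an encoding f from brain states
# to computer states such that the computer's transition T_c mirrors
# the brain's transition T_b step for step, i.e. f(T_b(s)) == T_c(f(s)).

T_b = {"calm": "alert", "alert": "focused", "focused": "calm"}
f = {"calm": 0, "alert": 1, "focused": 2}

# Build the computer's transition table so that it mirrors T_b exactly.
T_c = {f[s]: f[t] for s, t in T_b.items()}

# The square commutes for every state: simulating the computer tracks
# the brain step for step.
print(all(f[T_b[s]] == T_c[f[s]] for s in T_b))  # True
```

The argument in the text is precisely over whether such an encoding f
exists for real brains, and whether tracking the transitions would amount
to being the person.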

But -- if I may presume to speak for Thomas' viewpoint -- there are two
inadmissible assumptions in this response.

One assumption is the one I have endlessly argued against, involving what
we might call Turing's black box. The uploaders -- or the extremists among
them -- believe that if two black boxes have identical inputs and outputs,
they should be accepted as the "same" or equivalent. The most extreme of
these extremists think that, even if we limit input and output to digital
conversation, indistinguishable conversation means indistinguishable
"people." Anything that can mimic human conversation closely enough, in a
sufficiently wide range of circumstances, should be accepted as
essentially human, they claim. The implication is that internal states
have no meaning or importance beyond their input/output
symbol-manipulation function, which is patently absurd. Our internal
states constitute our existence.

The second inadmissible assumption of the uploaders is that a
super-computer brain or brain-surrogate is physically possible, one that
can do everything our brains can do (as well as much more) in real time
and space. Thomas didn't put it just this way, but if I read him correctly
he was making, in part, much the same point I have insisted upon -- that
we still lack a great deal of information about brain function, especially
feeling and consciousness, and cannot assume that the necessary states, or
successions of states, can be reproduced in an arbitrary medium.

Let me make still another effort to clarify this point. Maybe part of the
problem is insufficient attention to the meaning of "information
paradigm."

The uploaders think that only the "information" and its processing are
important. Furthermore, the processing procedure (algorithm) is itself
"information." Nothing matters except the appropriate manipulation of
symbols and numbers. The particular symbols used, and even the physical
mechanism of manipulation, are unimportant. Leaving aside the absurdities
this leads to (see e.g. Moravec's Mind Children), let's look at the hidden
questions.

Information? What information? And must we focus only on included
information, or also on excluded information? For the most obvious
absurdity, look again at Turing's black box. Only one type of information
is deemed all-important, or at least sufficient -- the digital language
output algorithm, as response to digital language input.

Surely unbiased people must agree that the internal information also
matters, or at the very least might matter. Is it not possible that two
different internal mechanisms can both produce the same external language
output, yet only one produce the internal states that constitute feeling?
And doesn't it matter what is going on inside when there is minimal
communication with the environment?

If we reject Turing's black box, it still is not clear what kinds of
internal information and processing are necessary or sufficient to
constitute a living (feeling) brain.

For those accustomed to thinking in terms of "isomorphism," the question
is whether it is possible, and whether it is necessary, to have a
one-to-one mapping of an organic brain onto an inorganic brain. In other
words, precisely what information (or succession of information states)
must be included, and what (if any) must be excluded? Are digital "frames"
adequate, or do we need continuous analogue dynamics? And so on.
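
One concrete face of the "digital frames vs. continuous dynamics"
question is sampling: frames taken too slowly cannot distinguish two
genuinely different continuous signals. A small sketch (the frequencies
are chosen only for illustration):

```python
import math

def frames(freq_hz, rate_hz=8, n=8):
    """Sample a sine of the given frequency at rate_hz, taking n frames."""
    return [round(math.sin(2 * math.pi * freq_hz * k / rate_hz), 9)
            for k in range(n)]

# Aliasing: a 9 Hz sine sampled at 8 Hz produces exactly the same
# frames as a 1 Hz sine -- the discrete record cannot tell them apart.
print(frames(9) == frames(1))  # True
```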

I assert that it is humongously clear that these questions have not been
answered, and therefore any claims for uploading are premature.

In addition, I remind readers, there remains the related but separate
question of criteria of self and of survival. 

Reminder: The following are just notes and musings, not carefully crafted
essays. There will be considerable overlap and repetition, but this is
probably useful.

Adherents of the "information paradigm," I believe, are deceived in part
by glibness about "information" and hasty ways of looking at it.

One of the purest examples of "information" is the data store in a
computer. Yet even here the information must have a physical
representation, and is only accessible with appropriate hardware. Even
Turing's tape requires a gadget to make the marks, read the marks, and
move the tape along. This is part of what Dr. Donaldson referred to in his
comments.

A typical digital computer program store is also information -- again with
some specific physical representation. But is the execution of the program
"information?" We're not talking about a description of the execution, but
the execution itself. This has to be physically implemented with material
parts made up of particles/fields.

Apparently it needs to be said again and again: a description of a thing
or a process -- no matter how accurate and how nearly complete -- is not
the same as the thing or the process itself.

Let's look at it in a slightly different way. Occasionally a description
can be more compact, in some sense, than the thing described. Newton's
laws describe with extreme succinctness certain aspects of the behaviour
of matter and energy throughout the known universe. Certain concise
fractal formulae can generate results of amazing complexity. The
information in an acorn determines in many ways the anatomy and physiology
of the oak. Nevertheless, in general a description is much bulkier and
clumsier than the thing or process described.
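
The point about concise formulae can be seen in one line of code: the
logistic map x -> r*x*(1-x) is about as short as a rule can get, yet at
r = 4 (the standard textbook choice) its orbits are chaotic.

```python
def orbit(x, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) and record the orbit."""
    xs = []
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

# Two starting points differing by one part in a billion...
a = orbit(0.300000000)
b = orbit(0.300000001)

# ...soon disagree completely: a one-line description, enormously
# complex behaviour.
print(max(abs(x - y) for x, y in zip(a, b)) > 0.1)  # True
```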

For example, it is hard to envision any way in which everything about an
atom could be encoded in a space as small as the atom. In fact, we might
conjecture that any physical object or process represents the most compact
possible expression of all of the properties of that object or process.

Now, uploaders are fond of saying that, in principle, we could -- some day
-- describe a human brain in complete detail, then reproduce the relevant
parts and processes in another medium (with improvements), and this would
be the person still (or again). Their shorthand is the "information
paradigm" -- meaning that everything important about us is encoded in
properties and relationships, and that these can be expressed arbitrarily,
if we maintain an isomorphism or close analogy.

But even though (for example) a computer program can in principle describe
or predict the behaviour of a water molecule in virtually all
circumstances, a water molecule for most purposes cannot be replaced by
its description or program. If you pile up 6.02 x 10^23 computers with
their programs, you will not have 18 grams of water, and you will have a
hard time drinking it or watering your plants.

We don't know yet which parts and processes in our brains are crucial (the
source of feeling). Even when we do know, it may turn out that these
particular parts and processes cannot be effectively emulated in other
media. It's as simple as that.

Isn't it?




Letter from Dr Thomas Donaldson:

I've consistently felt that talk about uploading bordered on the
superficial, considering that we presently have very little knowledge of
how our brains work or how our personalities and selves work either. (I
stress KNOWLEDGE here. The number of theories on the subject is no more
than a sign of our ignorance).

I do want to emphasize, as I recall I did in the article, that I was not
arguing against making improvements. But again, until we know a good deal
more we'll not be able to come up with any but trivial improvements ---
which may well, in the end, turn out not to be improvements at all. (This
is again an argument for learning more, not an argument to do nothing).
Long long life,               Thomas

[ end of part 2 of 2 parts ]
