X-Message-Number: 3212.1
From att!compuserve.com!100431.3127 Tue Oct  4 17:49:02 1994 remote from
whscad1
Received: from att!compuserve.com by ig2.att.att.com id AA15352; Tue, 4
Oct 94 17:49:02 EDT
Received: by gw1.att.com; Tue Oct  4 17:48:32 EDT 1994
Received: from localhost by arl-img-1.compuserve.com (8.6.4/5.940406sam)
	id RAA04432; Tue, 4 Oct 1994 17:47:55 -0400
Date: 04 Oct 94 17:40:33 EDT
From: John de Rivaz <>
To: "Kevin Q. Brown" <>
Subject: Brain Backup Report part 1
Message-Id: <>
Status: RO

                            Brain Backup Report

                       Published by Longevity Books
         Westowan  Porthtowan  Truro TR4 8AX  Cornwall  United Kingdom
                          CompuServe 100431,3127

Contents:

Introduction
Uploading                                             Yvan Bozzonetti
The Running of a Brain                                Yvan Bozzonetti
Artificial Metempsychosis                             Dr Michael V. Soloviov
A Search of Methodology for Personality Simulation    Dr Thomas Donaldson
Comment on Personality Simulation                     Robert Ettinger

Introduction

Mission Statement:

To debate the issues around the concept of backing up the human brain by
scanning and recording the program and data therein, including practical
techniques.

Introduction:

Also known (confusingly) as downloading or uploading, this is regarded as
a means to immortality similar to cryonic suspension, permafrost burial,
morphostasis and similar processes. Adherents believe that they can
achieve personal immortality by scanning and recording their brains and
relying on future science to restore them in some way, either within a
computer or within an artificial body (prosthesis).

Cryonicists are disregarded by mainstream science, and downloading is
similarly disregarded by cryonicists. Brain Backup Report will therefore
pay particular attention to the arguments between cryonicists and
advocates of this concept.

Call for articles and subscribers.

If this newsletter is to get beyond one issue, more articles, letters and
other material are required for future issues. One or more articles in
any one volume are rewarded with the following volume free of cost. We
also need paying subscribers! Each volume of four 32-page issues will
cost £20 per year. All potential subscribers are asked to send cheques
for £20 payable to "RTL", or checks for $34 payable to "J. de Rivaz",
leaving the date blank. Cheques or checks will not be presented unless it
has been decided to proceed. This issue represents issue 1 of volume 1,
and although it is distributed free (like shareware), if you like it and
want to support it you are invited to pay for a subscription to issues 2,
3 and 4 of volume 1.

Of particular interest would be computer experiments that readers can try
for themselves. No, I am not recommending connecting your brain to your
serial port! However, programs and techniques exploring related ideas
will be considered. An example would be software to analyse a text in
order to identify its writer; similar projects (you think of them!) would
be welcome. Another example would be a program to visualise text, that is
to say one which, when fed a text file, produces a (pretty?) pattern and
relies on the human brain to correlate different patterns with particular
authors.

The preferred format for submission of articles is email, or an MSDOS
disk sent by post, containing plain text or WordPerfect-readable text.
Typed material should be typed with a new ribbon, with the "o"s and "e"s
cleaned out etc, so that our OCR can read it properly. Handwritten
material will go to the bottom of the pile and is unlikely to be
accepted. All letters will be assumed to be for publication unless stated
otherwise.

                                Uploading

                            by Yvan Bozzonetti

The "uploading" term refers to reading the brain content and its copy to
an artificial support, for example a computer. The brain copy is then run
on this artificial system to live an artificial life. This perspective is
rejected or seen with some fear by most cryonicists. Many false arguments
are put forward to justify that attitude. I have read for example than
brain reading would imply an X-ray pulse generating so much heat than the
brain would be incinerated, the only X-ray generator on the "market" would
be a nuclear bomb detonated in space ...

All of that demonstrates a lack of information on the subject (including
about nuclear bombs). The objective of this paper is not to form the
basis of a research project on the subject, but simply to give some
background information to anyone interested. In the coming months or
years, this data base will be updated more or less regularly.

The content is divided into four sections:

1) Introduction. 2) Brain reading. 3) Running of a brain. 4) The use of
uploading.

1. INTRODUCTION.

Reading a brain can be useful for different purposes: to repair it before
thawing in a cryonics process, to copy some of its parts for replacing
them in the original organ, to reconstruct it fully, or to put it on an
artificial support, a computer or hard-wired neural network. The
technology could also be useful for communicating with other species, for
running a backup brain extension, for saving the brain content
periodically as a safeguard against total destruction in an accident, or
as a mode of life for travelling to inhospitable places.

Brain reading can use short waves such as X and gamma rays. The real
problem here is not thermal dissipation, but the production of lenses for
the optical system. For the X spectrum there are two options: Fresnel
lenses or graphite ones. The gamma domain is even more complicated to
tame.

Nuclear magnetic resonance (or magnetic resonance imaging, MRI) is
another way: the use of helium-3 and nanocantilevers can meet the
picture-sharpness requirement. Biphoton interferometry, or intensity
interferometry, is another potential approach.

Running a brain, or a part of it, on an artificial support may be done in
a great number of ways. The two to hand now are simulation on an
electronic computer and analogue electronic neurons. Superconducting
technology may enter the "market" in a few decades, and optoelectronic
systems could outperform everything we know today.

The use of uploading is seen today as a poor way to get out of a state of
cryonic suspension, but there may be an entire way of life based on it,
from consciousness running multiple bodies to experimenting in virtual
space. It may be a solution to overcrowding for a large population with
readily available indefinite longevity. Biological life could be
fragmented into short periods of some centuries at a time, over a global
time span of many millions of years.

If the brain is on an artificial support, the body linked to it need not
itself carry a big brain. Biological life could use many species, some
big, some small, some produced by natural evolution, some specifically
engineered. All of that will not come at the same time; some
possibilities will become practical many centuries after the first ones.

2. BRAIN READING.

The gamma ray way.

Gamma rays are very short wavelength electromagnetic radiation. A
visible-light photon carries about 3 electron-volts of energy (the
electron-volt is the energy gained by an electron accelerated through a
potential difference of one volt; in the SI system it is 1.602 x 10^-19
joule, a joule being one watt maintained for one second). A gamma ray, by
contrast, can pack one million electron-volts (one mega-electron-volt, or
1 MeV for short). Because the wavelength is inversely proportional to the
energy, we go from half a micrometre for visible light to about one
picometre for such a gamma ray: 1/100th the diameter of an atom. The
sharpest detail we can resolve from a distance with electromagnetic
radiation is of the order of the wavelength. Clearly, visible light is
too coarse to see atoms, and gamma rays are sharper than needed.
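
As a small computer experiment of the kind this newsletter asks for, the
short Python sketch below checks the two figures quoted above from the
standard relation wavelength = hc/E. The constants are textbook values;
nothing else is assumed.

# Check of the photon energies and wavelengths quoted in the text,
# using wavelength = h*c / E.

H = 6.626e-34      # Planck's constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # one electron-volt in joules

def wavelength_m(energy_ev):
    """Photon wavelength in metres for a given energy in electron-volts."""
    return H * C / (energy_ev * EV)

print(wavelength_m(3.0))    # visible light: ~4.1e-7 m, about half a micrometre
print(wavelength_m(1.0e6))  # 1 MeV gamma ray: ~1.2e-12 m, about one picometre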

On the other hand, gamma rays can produce three-dimensional pictures
called holograms at relatively low cost. Producing a hologram needs a
coherent, monochromatic source. In a coherent source all the photons have
their waves in phase: they are at the same point of their sinusoid at the
same time. This synchronisation is maintained over a given distance only,
and that distance must be at least equal to the thickness of the object
under scrutiny. Gamma-ray lasers (grasers) can, by their very
technological basis, produce a very long coherence length, well beyond
what a brain hologram needs. The optical system reduces to a beam
splitter and a mirror, both of which can be built for gamma rays.

To make a hologram, a beam of light (or any coherent wave) is split into
two parts: one goes to the object to be holographed and the other to a
mirror sending it on a course crossing the first beam. At the beams'
crossing point, a photographic "film" records the interference pattern
produced. For gamma rays, many monomers polymerise readily under the
gamma beam, and a polyester block comes out with all the fine details of
the holographed object, down to the molecular level.
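
The recording medium must resolve the interference fringes where the two
beams cross. As an illustrative aside (not part of the original
argument), the Python lines below use the standard two-beam fringe
spacing d = wavelength / (2 sin(theta/2)); the one-picometre wavelength
and the crossing angles are assumed example values.

import math

def fringe_spacing_m(wavelength_m, crossing_angle_deg):
    """Spacing of the fringes recorded where two coherent beams cross."""
    theta = math.radians(crossing_angle_deg)
    return wavelength_m / (2.0 * math.sin(theta / 2.0))

GAMMA_WAVELENGTH = 1.0e-12   # ~1 pm, the gamma wavelength quoted earlier
for angle in (1.0, 10.0, 90.0):
    print(angle, "degrees:", fringe_spacing_m(GAMMA_WAVELENGTH, angle), "m")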

Soft (not too highly energetic) gamma rays are reflected by very smooth
grazing-incidence mirrors. A mercury surface is one of the best and
cheapest ways to build such a device.

The beam splitter is more challenging at first sight; one of the simplest
is a "Y"-shaped metal monocrystal, again with a mercury-wetted surface.
This is nothing more than the optical-fibre analogue in the gamma-ray
domain.

The GRASER, or gamma-ray amplifier by stimulated emission of radiation (a
gamma-ray laser), is more subtle for this work. In a laser, excited atoms
fall back to the ground energy level under the influence of a passing
photon of the right energy. Because the passing photon can just as well
be absorbed by a ground-state atom as stimulate an emission from an
excited one, there must be more atoms in the excited state than in the
ground one. This is the so-called population inversion. The trick used to
get it is to use two excited states: first, an energy source pumps
electrons up to a readily attainable upper level; that very unstable
state decays quickly to another one. This may be the ground state, in
which case the energy is lost, or an intermediate state. The laser
transitions are chosen so that this intermediate state is both favoured
and slow to decay to the ground state. Many electrons are then locked in
the intermediate state and the population inversion is formed.
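
For readers who like to see this working, here is a minimal Python sketch
of the three-level scheme just described: a pump fills a short-lived
upper level, which empties into a long-lived intermediate level, building
the inversion between the intermediate and ground levels. The rates are
arbitrary illustrative values, not data for any real laser or graser
medium, and for simplicity all of the fast decay is routed to the
intermediate level.

def simulate(pump=0.5, decay_upper=10.0, decay_intermediate=0.01,
             dt=0.001, steps=20000):
    # Fractional populations of the ground, upper and intermediate levels.
    n_ground, n_upper, n_inter = 1.0, 0.0, 0.0
    for _ in range(steps):
        pumped = pump * n_ground * dt                 # ground -> upper
        relaxed = decay_upper * n_upper * dt          # upper -> intermediate
        returned = decay_intermediate * n_inter * dt  # intermediate -> ground
        n_ground += returned - pumped
        n_upper += pumped - relaxed
        n_inter += relaxed - returned
    return n_ground, n_upper, n_inter

g, u, i = simulate()
print(f"ground={g:.3f} upper={u:.3f} intermediate={i:.3f} inversion={i - g:+.3f}")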

The more energetic the laser photon, the more unstable the intermediate
states become and the harder the inversion is to achieve. That is why
low-energy infrared radiation produces many laser beams and ultraviolet
light very few. X-rays are very difficult and gamma rays impossible by
this channel. All of that concerns electronic lasers, where the emission
process takes place in the linear electromagnetic domain of orbital
electrons. In the nucleus, proton transitions can also produce laser
radiation, but the energy levels are heavily disturbed by the nuclear
forces. The electromagnetic structure is then forced into the non-linear
domain. Any transition generates not one but a whole bunch of photons,
all with the same energy and phase but with different emission
directions. That kind of photon production is very hard to obtain, and an
excited state remains excited for a long period, from seconds to days or
more, not the few microseconds seen in common lasers working in the
visible spectrum.

It is this property that accounts for the technological possibility of
the graser. If all the atoms in the graser can be lined up in the same
direction by a magnetic field, then the emission directions in each atom
will match what happens elsewhere and a true system of laser beams can be
built. Outside the graser source, a "sea urchin" of metallic crystals
channels the beams towards their target. The whole system can be made on
a bench top. There have been some tests but no more, because there is no
declared market for that technology. Brain reading may be one of the
first.

THE X-RAY WAY.

X-ray wavelengths are a better match to the size of an atom than gamma
rays are. That is to say, they are less damaging for a given level of
information recovery. Unfortunately, they fall under the low-energy limit
of nuclear orbitals and remain firmly in the realm of electronic lasers.
That is why they are so hard to obtain in a coherent form suitable for
holography.

Four technologies are envisioned: the Star Wars-like X-bomb, the giant
laser, the atom-cluster laser and the micro-bomb; the last may be the
best.

The least promising approach seems to be the giant laser: a short pulse
of infrared radiation amounting to many terawatts is focused on an X-ray
lasing medium, for example an aluminium dust grain. Some laser
amplification has been obtained this way, but the system is not cost
effective, to say the least.

The next system is the cluster laser. Here the laser radiation is not
targeted at individual atoms in vaporised aluminium, but at a group of
atoms. The key is to produce a laser pulse so short that the vaporisation
process has no time to proceed. The energy is absorbed by the atom group
more efficiently, allowing its transmission to the inner-shell electrons
where the X-ray production takes place. This work is in its early stages,
but the hope is to get a pulsed X-ray laser on a bench top some years
from now, at a price affordable to many laboratories. The drawback for
uploading is the short lifetime of the electron energy in the atoms. That
short lifetime translates into an ill-defined X-ray frequency through
quantum uncertainty (the product of the uncertainty in time and the
uncertainty in energy cannot be smaller than Planck's constant h).
Energy, frequency and wavelength are different yardsticks for the same
physical quantity, and a badly defined wavelength turns into a low
coherence length. It is not possible to get a hologram of a thick object
with such radiation.
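
The point can be put in numbers. As a rough, illustrative estimate (the
lifetimes below are assumed, not measured figures), a lifetime-limited
source has a frequency spread of about 1/(2 pi tau), which gives a
coherence length of roughly c times tau:

C = 2.998e8  # speed of light, m/s

def coherence_length_m(lifetime_s):
    """Rough coherence length for an emission lifetime: L_c ~ c * tau."""
    return C * lifetime_s

print(coherence_length_m(1e-15))  # femtosecond electronic lifetime: ~0.3 micron
print(coherence_length_m(1.0))    # second-long nuclear state: ~3e8 m

A femtosecond-scale electronic lifetime gives a coherence length far
shorter than the thickness of a brain, while the long-lived nuclear
states discussed earlier give far more than is needed.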

The Star Wars X-bomb is more promising on this ground but unworkable in
practice for brain reading, mainly on cost grounds. It is nevertheless
interesting as a first approach to a more advanced device. It is itself
the latest incarnation of the nuclear device family. To understand it,
some basic knowledge of nuke making is in order.

The first generation was the fission bomb; the most primitive was a
hollow sphere of plutonium with a thick covering of powerful explosive
burning (not detonating) from the exterior to the inner part. The
reaction force generated by the expanding gas compresses the plutonium
until its exterior surface becomes too small to allow the neutrons
generated by spontaneous fission to escape. That design was used in the
Nagasaki bomb in 1945 and later in some Chinese devices, one of which
killed many people in a premature detonation.

In the next-generation fission bomb, the uranium or plutonium was covered
by a conducting aluminium blanket and an electromagnet with a
tennis-ball-weave coil. An explosive produced a puff of hot gas passing
through a magnetic field. The electrical current generated by this device
was fed into the electromagnet; the rapid surge of the magnetic field
induced in the aluminium coating a current whose own associated magnetic
field cancelled the first inside the shell. A magnetic field generates a
pressure much like a gas: when there is a wall with a high magnetic field
on one side and none on the other, the wall undergoes a powerful push.
This was the system used to implode the fissile material. Even today,
French A-bombs work this way. US and Russian ones have no
magneto-explosive electric generators; the energy is stored in a special
kind of capacitor using surface effects. (The USSR was the first to
exploit this technology.)

The third generation is the so-called fusion bomb, or
fission-fusion-fission device (nuke makers never use the word "bomb").
The 3F system exploits U-238 fission to produce the bulk of its energy.
That nucleus needs fast neutrons, generated by a fusion process, to
split. The fusion reaction is produced in a mixture of deuterium-tritium
(US) or deuterium-lithium (the old Soviet way). When two atomic nuclei
have their spins lined up, there is a small probability that they behave
as a single nucleus in a very excited state. A passing X-ray can then
stimulate an emission of radiation from the system. This laser-like
process produces very good coherent radiation with a long coherence
length, but that is not the objective in a weapon. After the X-ray
emission, the atoms cannot find the energy to separate again; they have
no other choice than to complete the fusion process, releasing the much
sought-after fast neutrons. In the first experiment, on the Bikini atoll,
a powerful magnetic field was imposed for many hours on solidified D-T at
very low temperature. The device was bulky and very heavy.

In military versions, a fission system produces X-ray radiation in its
aluminium coating, and part of that radiation is channelled by a mirror
towards a metal cylinder containing the fusible product. The X-rays
evaporate the metal at high speed, and the reaction force compresses the
fusion element at supersonic speed. The reduced volume would normally
generate a higher temperature, but the supersonic process does not allow
equilibrium conditions to be reached. The heating then comes from the
thermal energy associated with nuclear spin disorder, not from atom
collisions. The result is matter with hot atoms and cold nuclei. Cold
here means not disordered, that is to say, the nuclear spins get lined up
just in the way needed to start fusion reactions.

The fourth generation is a 2F system: the U-238 blanket is removed so
that neutrons can escape freely into the environment. The power is far
less but the radiation-induced damage is enormous.

The 5th generation exploits a deuterium/helium-3 fuel; because helium is
chemically unreactive, these systems are limited to experimental work at
very low temperature. There are no neutrons: that clean system produces
only a powerful X-ray flash from the starting steps of the fusion
process. In the compression technology, not all spins get lined up in the
same direction; there are many domains with a homogeneous spin direction,
but not total order. On the contrary, the very low temperature associated
with the use of helium allows us to go back to the technology of the
first fusion experiment, with magnetically aligned spins. There is then
only one direction for all the spins in the whole charge. The
multi-photon process characterising nuclear electromagnetic radiation
comes out in an orderly way, along well-defined paths. For an outside
observer, the radiation comes out in a number of beams with well-defined
limits. Each beam can then be guided by a monocrystalline metal "light
pipe". Because there is no fast mechanical compression, a fission device
is no longer a requirement for X-ray flashers. If a small X-ray laser
could be built, it would suffice to give the first starting radiation
spark. Such a system could be exploited on Earth without fallout or
induced radioactivity; no radioactive products are used or generated. It
is sad that the work on these systems has now been stopped.

The sixth and last generation is now only on the drawing board; it could
be nicknamed the micro-nuke. My information on the subject is much more
limited than for the other cases. I learned most of the technology of the
3F system from an article published in La Recherche, a scientific
magazine similar to Scientific American. The physics of spin cooling was
explained to me by a physics professor at the Arts and Metiers school in
Paris. The neutron nuke was the subject of yet another paper in La
Recherche. The X-system comes from a jigsaw of information in New
Scientist, Nature and Scientific American; the electromagnet compression
system was shown in a TV broadcast some years ago about the French
strategic forces. The 6th generation is mostly a personal reconstruction,
because such research must have been kept secret up to now.

There was a publication in Nature on the use of a UV laser for starting
fusion reactions (no mention of the required conditions). A Scientific
American brief discussed, some years ago, the properties of soliton-like
phonons in long chains of deuterated polyethylene. The solitons were said
to orient some unspecified category of spins in the atoms. The phonon
pulse was initiated by a laser discharge...

Maybe I will write a sci-fi novel, but I see the following when I put all
of that together: long chains of polyethylene can be oriented in a given
direction by stretching them in a wire-making process. A UV laser can
orient the atomic spins. This spin cooling is then destroyed by the
thermal energy associated with the nuclear spins. In high-energy physics,
the so-called polarized targets have their nuclear spins lined up by this
process, which transmits spin order from the atoms to the nuclei. A
second UV pulse could then start the fusion reaction. The system, a
chemical laser working with aluminium, fluorine and hydrogen and no
larger than a pill, could detonate a bunch of polyethylene fibres. A
pocket nuke of this kind would be a "good" neutron device. It remains to
be seen whether helium could be introduced into this device to get a mini
X-ray explosive laser generator. The cost would be only that of a bunch
of plastic fibres. Put in a low-pressure chamber to suppress any shock
wave, everything could fit in a room and be used repetitively at nearly
zero cost.

The radiation dose undergone by a brain holographed in this way would be
of the same order as that given by present-day tomography. This is far
from the incinerating effect of the supposed space bombs. Nuke research
has a bad press, but it could be very useful for recovering the most
sought-after brain information. Even the 5th-generation cryogenic system
could be miniaturized and give a good generation of X-ray lasers with a
long coherence length. Even if it is not as cheap as the plastic version,
it could be interesting because it has worked in some experiments. Can a
charity be set up to finance nuke research?

THE MAGNETIC RESONANCE IMAGING (MRI) WAY.

Atoms with an odd number of particles in the nucleus display a spin to an
outside observer. That spin can be oriented in a magnetic field; the
nucleus then behaves as a small top and has the possibility of precessing
around the field direction. That possibility is effectively realised if a
radio wave at the natural precession frequency is present. If the
magnetic field is inverted and the radio wave shut down, the atomic tops
are turning the wrong way; to comply with the new order they must lose
their rotational energy by emitting a radio wave. The radiated frequency
depends on the local magnetic field intensity, one part of which comes
from the applied field and another from the effect of nearby atoms. That
neighbour dependence is characteristic of a molecule: MRI is a chemical
analyzer at the molecular level.
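
The precession (Larmor) frequency is simply gamma x B / (2 pi), where
gamma is the gyromagnetic ratio of the nucleus. As a small aside readers
can run, the Python lines below use standard gyromagnetic ratios for the
three nuclei discussed in this section; the 1.5 tesla field is an assumed
example value typical of a clinical scanner.

import math

GAMMA = {              # gyromagnetic ratio (magnitude), rad/s per tesla
    "1H":   267.52e6,
    "3He":  203.79e6,
    "129Xe": 73.99e6,
}

def larmor_hz(nucleus, field_tesla):
    """Precession frequency in Hz for a nucleus in a given magnetic field."""
    return GAMMA[nucleus] * field_tesla / (2.0 * math.pi)

for nucleus in GAMMA:
    print(nucleus, round(larmor_hz(nucleus, 1.5) / 1e6, 1), "MHz at 1.5 T")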

If the applied magnetic field contains a gradient, so that its intensity
varies from place to place, then it becomes possible to pinpoint the
position of the emitting atom in space. The result is a chemical picture
of the analysed object. Real MRI systems use some further refinements,
but they add nothing to the basic idea.
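
In other words, with a gradient G along one axis the frequency shifts by
gamma x G x x / (2 pi), so a frequency offset maps back to a position.
The gradient strength and offset below are assumed example numbers, used
only to illustrate the scale:

import math

GAMMA_H = 267.52e6     # 1H gyromagnetic ratio, rad/s per tesla
GRADIENT = 0.02        # assumed gradient strength, tesla per metre

def position_m(freq_offset_hz):
    """Position along the gradient axis implied by a frequency offset."""
    return 2.0 * math.pi * freq_offset_hz / (GAMMA_H * GRADIENT)

print(position_m(850.0))   # ~1 mm, roughly the pixel scale quoted below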

Current medical MRI systems have a pixel dimension somewhat under one
millimetre across. Getting a better definition calls for more signal;
that constraint may be met in two ways: increase the scanning duration or
get more atoms into the polarized state. Not even one atom in a thousand
is polarized by the magnetic field, so there is some room for progress.
Gaining an order of magnitude in picture definition reduces the pixel
volume by a factor of 1,000, so the recovered signal gets 1,000 times
weaker.
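
The arithmetic behind this scaling is simply the cube law on the pixel
volume, as the short sketch below shows (a rough illustration only, since
real scanners juggle several other factors as well):

def signal_factor(resolution_gain):
    """Relative signal left after shrinking the pixel edge by a given factor."""
    return 1.0 / resolution_gain ** 3

print(signal_factor(10))    # 10x finer pixels -> 1/1,000 of the signal
print(signal_factor(100))   # 100x finer -> one millionth of the signal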

With the most powerful superconducting electromagnets, the new MRI
scanners can go down to 0.1 mm. Magnet technology cannot give much more
hope: at this level, the energy in the magnetic field starts to excite
some neurons directly and produces bizarre sensations in the subject.
Going further could damage the brain structure, a major drawback for a
technology using no ionising radiation and no destructive process.

Observing time is limited to some tens of minutes for a living patient;
in the cryonics case some days would be acceptable, the limit being set
by the availability of the apparatus. Stretching things to the maximum, a
factor of 1,000 seems possible with a month-long scan. That puts the
pixel dimension at 0.01 mm, or 10 microns, approximately the dimension of
a typical brain cell.

What is called for in a brain reader may be divided into two levels:
first we want to recover the wiring geometry of the neurons, and second
the biochemical state of the synaptic complexes at a resolution better
than their dimension, near 0.2 micron. The first step needs a mapping at
the micron level and the second a ten-times-better map. For a small
object, the magnetic gradient can be very high without requiring a
macroscopically giant field. Using that property, it has been possible
for some years now to get micron-sharp pictures of some cells. If we
accept cutting a brain into fine slices, then the first brain-reading
step is at hand. Reconstructing the full brain picture on a computer is
not a problem, but then we can no longer speak of a non-destructive
information-recovery technology.

In a thin slice, ultrasound moving the molecules to and fro at different
speeds in different parts of a sample would open the way for the slice
technology towards the second step. That kind of experiment would work on
the complete neural systems of small animals, for example insects.
Learning to read a bee "brain" would be very interesting as a first step
towards human brain-reading technology. So we have the technology to
embark on an experimental track, but not, at that level at least, the
capacity to read bigger brains without destroying them.

Big fields and big gradients, giving better polarization and
localization, are not sufficient for a large brain. All MRI scanners look
at the hydrogen atoms in water molecules; this is not the best atom for
the technology, nor the happiest choice for looking at protein and
membrane structure. Carbon and oxygen, in their most common forms, have
an even number of particles in the nucleus, so their nuclear spin cannot
be observed; that leaves only minor atomic species as possible targets
for an MRI system. Another possibility would be to introduce into the
organism a dedicated atomic "tool" specifically chosen for its MRI
properties.

Xenon-129 can be hyperpolarized in a special device so that it reacts
very strongly to MRI; the polarization holds for some minutes and allows
the xenon to be introduced through the lungs by simple breathing (for
living subjects). Helium-3 is more than one hundred times as powerful,
has no membrane toxicity and a very high diffusion speed. With week-long
scanning, hyperpolarized He-3 would reach without problem the 0.1-micron
target for full brain reading of a human organ. That technology was
tested, with xenon only, in the first half of 1994 and was reported in
Nature in the summer of 1994. Work on He-3 is at its very beginning.

Reading a brain has never been done, but now we know how to do it with
current technology. The most important feature of the envisioned
technology is its non-destructive nature. Whatever the final objective,
an uploading process or an assessment of the freezing damage before a
nanomachine repair process, brain reading is the first step to undertake.
At the experimental level, it would be a definitely required capability
for discovering how the brain works in its normal state. Without that
knowledge, any attempt to repair a damaged organ would be mere pie in the
sky. Putting that technology in place would seem to command a
first-priority rating.

BIPHOTON INTERFEROMETRY.

If MRI with hyperpolarized He-3 seems the best near-term choice, it
nevertheless retains a big sensitivity drawback: the scanning time must
expand from days to weeks to get the required picture sharpness.
Theoretically, there is a far better solution: the intensity
interferometer or, more generally, the biphoton interferometer.
Interference patterns stem from wave interactions, and it seems
impossible under this constraint to get any information on objects far
smaller than the wavelength. This is true (with some reservations) for
single-photon interferometers, where what interferes is the wave
amplitude. In the intensity interferometer, what we look at is the
probability distribution of the squared amplitude. That quantity, for
systems not depending on time, is merely the wave energy. The
"interferences" are displayed by correlations between the arrival times
of two photons in the detectors.
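
A toy computer experiment (mine, not part of the original instrument
descriptions) shows the flavour of such arrival-time correlations: two
detectors watching the same fluctuating source give a normalised
correlation <n1 n2> / (<n1><n2>) above one, while two independent sources
give about one. The exponential intensity statistics and sample size are
arbitrary simulation choices.

import random

random.seed(0)
SAMPLES = 200_000

def correlation(pairs):
    """Normalised correlation <ab> / (<a><b>) over a list of (a, b) pairs."""
    mean_a = sum(a for a, _ in pairs) / len(pairs)
    mean_b = sum(b for _, b in pairs) / len(pairs)
    mean_ab = sum(a * b for a, b in pairs) / len(pairs)
    return mean_ab / (mean_a * mean_b)

shared = [(i, i) for i in (random.expovariate(1.0) for _ in range(SAMPLES))]
independent = [(random.expovariate(1.0), random.expovariate(1.0))
               for _ in range(SAMPLES)]

print("same source:        ", round(correlation(shared), 2))       # ~2.0
print("independent sources:", round(correlation(independent), 2))  # ~1.0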

The first interferometer of this kind was built in the early 1950s at
Jodrell Bank by Hanbury Brown for astronomical observations in the radio
spectrum. An optical counterpart was put into service a few years later,
and a larger system was operated in Australia for many years. The most
extraordinary feature of this kind of detector is its insensitivity to
the wavelength used. The sharpness of the recovered information rests
solely on the time and spatial precision of the apparatus. From radio to
gamma rays, the picture quality remains the same. A radio-wave system of
this kind, used in microscope mode, would map a brain in seconds or less.
Even if the interferometer remains costly, each scan would be very cheap.
As in the MRI case, this information recovery is non-destructive and
exploits no ionising radiation. Its usefulness goes beyond the cryonics
domain, as it could produce a real-time movie of a living brain or be
used to upload a biological brain repetitively.


The Running Of A Brain

by Yvan Bozzonetti
                                General outlook.

Depending on which scanning system is used to recover the information, we
are provided with a hologram from gamma rays or X-rays, a computer file
from an MRI system, or another kind of computer file from an
interferometer. The hologram could be good at recovering the geometrical
structures of a brain, but it remains to be seen whether the
molecular-level information can be recovered this way without too much
radiation damage.

The low-mass atoms of living cells are a bad target for high-energy
radiation, which is better suited to looking at metal or any heavy
material. That technology is therefore better suited to pinpointing the
location of a swarm of micromachines than to looking at brain contents.
If the information is to serve to upload the brain onto an artificial
support medium, this is not interesting. If the ultimate objective is
rebuilding the brain with the help of nanotechnology, then X-ray systems
must be developed. That leaves MRI as the sole runner in the race to
brain reading in the near future. Before looking at the technological
prospects, it is worth seeing why this effort would be worth making.

It can all be summarised as a matter of faith: if we believe the main
common religions word for word, there is no need to do anything. If we
have some doubts, we can think that some day someone will be able to use
time travel to reconstitute the past or recover the biological and brain
information at the relevant period. With less faith we can turn to
biological preservation: permafrost is the cheapest and maybe the most
robust, if in the far future there is both the technology and the will to
recover these people... Freeze drying protects more information, and the
second life period would come sooner. Cryonics places even less reliance
on future abilities and seeks the earliest revival ... with some faith in
the good will of specialized organizations and their long-term
durability.

The uploading option requires the minimum amount of faith: it assumes no
progress in the conservation or freezing process, no breakthrough in
brain-reading or computer technology, and no good will from anyone in
giving a second life.

Uploading is interesting because it asks only for technological
development at an affordable cost when spread over some tens of years. It
would be cheap to use and maintain, and the uploaded person could do many
things in a virtual world; not the least interesting is that it could
make a living from information processing and so itself pay for the
biological second life of the stored body. The idea is simple: if you
want to live again, leave to nobody else the task of doing the work
needed to turn that eventuality into reality. Uploading is not an end in
itself; it is a step towards a final objective. Recovering brain
information thus has two or even three objectives: to assess the
biological state of the brain so that action can be taken to repair it
when the technology allows, to upload a brain copy into an
information-processing system, and to keep a copy as a protection against
local destruction of the body.

The Computer Solution.

The first step is to turn the data file produced by the MRI system into a
brain map describing all components with their information-processing
capability. If current MRI systems are any hint, mapping each neuron may
ask for up to ten million floating-point operations (flop). In ten days
or so, a 100-gigaflops machine (a hundred billion operations per second)
could do the work. To put that in perspective, the graphic processor
GLINT from 3Dlabs can run at 2.5 billion operations per second, and the
supercomputer processor R8000 from MIPS Technologies processes four
instructions per clock cycle, 300 million times per second. A
128-processor array (seven-dimensional hypercube architecture) would
suffice to recover a brain map. GLINT is sold at $150 apiece; the full
computer could be built right now for some tens of thousands of dollars.
There are today some supercomputers faster than that, but they cost far
more.
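
Readers can check the estimate themselves; the neuron count of ten
billion is the figure used in the next paragraph, the other numbers are
the ones quoted above:

NEURONS = 10e9                # neurons in a human brain (figure used below)
FLOP_PER_NEURON_MAP = 10e6    # up to ten million flop to map each neuron
MACHINE_FLOPS = 100e9         # a 100-gigaflops machine

total_flop = NEURONS * FLOP_PER_NEURON_MAP
print(total_flop / MACHINE_FLOPS / 86_400, "days")   # about 11.6 days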

Running a brain on a computer asks for some 10,000 flop per neuron per
second. With something like ten billion neurons, a brain would need a
100,000-gigaflops system, or near 50,000 GLINTs. Thinking Machines has
built highly parallel computers with more than 65,000 processors, so
there is no technical difficulty. On the economic side it would be wise
to wait for some time. If prices continue to drop by half every 2.5
years, then 25 years would put such a computer at the price of a new car
today.
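
The same kind of back-of-envelope check works for running a brain and for
the price projection; all the inputs are the figures given above:

NEURONS = 10e9
FLOP_PER_NEURON_RUN = 10_000      # flop per neuron per second
GLINT_FLOPS = 2.5e9               # one GLINT chip
GLINT_PRICE = 150.0               # dollars apiece

brain_flops = NEURONS * FLOP_PER_NEURON_RUN       # 1e14 flop/s
chips = brain_flops / GLINT_FLOPS                 # ~40,000 chips
cost_now = chips * GLINT_PRICE                    # ~6 million dollars
cost_in_25_years = cost_now / 2 ** (25 / 2.5)     # halving every 2.5 years

print(f"{brain_flops:.0e} flop/s, {chips:,.0f} chips, "
      f"${cost_now:,.0f} now, ${cost_in_25_years:,.0f} in 25 years")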

All of that assumes continuous progress in chip power. The commercial
incentives are there, and so are the technological possibilities. In
historical perspective, the first processor generations worked out one
instruction in some tens of elementary clock cycles; this is the
well-known CISC technology exploited by Intel in its x86 family. Graphic
processors often use long-word instructions, where one word contains more
than one instruction; a related idea, superscalar execution, is exploited
in Intel's Pentium. Another approach, the so-called RISC (reduced
instruction set), owes its success in fact to the pipeline organization:
a new instruction is started at each clock cycle and then passed to the
next step down an assembly line. The first instruction still needs some
tens of clock cycles to be completed, but then each new clock "tick"
delivers a newly completed task. Today, RISC processors are coming down
from workstations to the basic PC with the PowerPC (Motorola, Apple, IBM)
and the Alpha (DEC)... The next revolution would be to pack a vectorized
processor into a chip. Vector systems use one instruction with different
data, each data block being a component of a vector. Big computers can
exploit vectors with up to 200 components, all pipelined. There is also
the array processor on wafer-scale integration. That product is not
commercialized today; it was worked out for the SDIO, now the Ballistic
Missile Defense Organization. BMDO uses wafers with more than 100
elementary processors.

That short summary is mostly historical, even if it sketches the
commercial race for the years to come. Both the reduction of line widths
and the move from silicon to silicon-germanium alloys would push clock
frequencies into the gigahertz band.

Beyond the assured evolution of electronics, there is the possibility of
an opto-electronics revolution. In opto-electronics, information is no
longer carried by electrons flowing in conductors, but rides on a beam of
light. What is interesting in "optronics" holds in one word:
multiplexing. When a light wave travels in an optical fibre, it can
propagate in different polarization modes. The larger the fibre, the more
modes are possible. When the fibre diameter expands beyond a half
wavelength, the number of modes expands very swiftly. Depending on the
injection angle, one mode or another can be selected; it never mixes with
the other ones and can carry its own cargo of information.
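
To give a feel for how fast the mode count grows with fibre size, the
sketch below uses the standard estimate for a step-index fibre, roughly
V^2/2 guided modes with V = 2 pi a NA / wavelength. The numerical
aperture and wavelength are assumed example values, not a description of
any particular device mentioned in this article.

import math

NA = 0.2                 # assumed numerical aperture
WAVELENGTH = 1.5e-6      # assumed wavelength, metres

def mode_count(core_radius_m):
    """Rough number of guided modes in a step-index multimode fibre."""
    v = 2.0 * math.pi * core_radius_m * NA / WAVELENGTH
    return v * v / 2.0

for radius in (1e-6, 5e-6, 25e-6, 250e-6):
    print(f"core radius {radius * 1e6:5.0f} um -> ~{mode_count(radius):,.0f} modes")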

Experimentally, General Electric has produced an optronics processor able
to separate up to 2,000 modes. Each physical device there can process
2,000 different data items with the same instruction: this is a vector
processor with 2,000 dimensions. So-called multimode optical fibres can
display up to ten billion modes; they are nevertheless exploited with
only one information channel, because today's technology does not allow
the separation of such a great number of modes. The million figure is
probably within the technological range of the coming 25 years. A 10,000-
to 100,000-mode processor could be on the market in the same period.

Optronics systems have another valuable quality: switching speed. The
best silicon-germanium switch can change its state in a hundredth of a
billionth of a second; light does ten to one hundred times better. An
optronics processor could work with a clock running between 10 and 100
GHz. Ten billion operations per second on one million channels would give
the power to run 100 human brains simultaneously. Stretching the
technology to its limit, one billion modes and a 100 GHz clock would
allow one million brains to be piled into a single processor. That will
be a reality half a century from now, uploading or not. If no brain is
uploaded onto such systems, the question is how such devices can be
controlled in a meaningful way when they outperform the capacities of
their builders by such a margin.
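
The brain counts follow directly from the earlier figure of roughly 1e14
operations per second for one human brain:

FLOPS_PER_BRAIN = 1e14      # the article's own estimate for one brain

def brains(channels, clock_hz):
    """How many brains a given channel count and clock rate could run."""
    return channels * clock_hz / FLOPS_PER_BRAIN

print(brains(1e6, 10e9))    # one million modes at 10 GHz  -> ~100 brains
print(brains(1e9, 100e9))   # one billion modes at 100 GHz -> ~1,000,000 brains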

With optronics processors, not only would cryonics pose no overpopulation
problem, but there would be plenty of room for expanding the brain's
capabilities. A very complex virtual world could be run there,
independent of the uploaded brains. That is to say, the virtual world
would not be a simple input of information to the information-gathering
ends of the brain, but an independent model computed in common for a
number of brains. That virtual reality would exist and evolve even
without uploaded consciousness.

Is this too much to be acceptable? Then look back fifty years, when there
were only mechanical adding machines at the bureau of the census. Would
morphing software, transforming a picture on a computer screen at ten
million operations per second, not have qualified then as too much?
Contrary to space travel, for example, computers do not ask for large
quantities of energy, and any number of "operators" can create them. That
is why we can make predictions about this technology: if one maker won't
do it, another will.

The neuron way.

Putting one million brains on a computer is not cost effective. Computers
are a solution only for a starting technology of uploading; the true
solution is to build special analogue or digital neurons. At an
equivalent technological level, that solution is up to one thousand times
more effective. On the bad side, it needs specially designed components
and cannot benefit from the mass-produced chips of the computer industry.
When "brains on machines" becomes itself a large consumer of components,
it can afford to buy its own specially designed products. Going straight
to the limit, a neuron processor could run one billion brains in the
volume of a matchbox.

There have been some proposals for far more powerful three-dimensional
systems, but it is of no concern if the systems envisioned here are not
the limit of the optronic technology. The virtual world contemplated here
is nearly as complex and powerful as the real Earth biosphere. Living in
one or the other may not make much difference. On the practical side, it
is certainly simpler and cheaper to expand the virtual world than to go
into space to find new worlds. That is not to say the two solutions can't
or must not be pursued simultaneously.

Uploaded world.

Strangely, there seem to be practically no science fiction novels about
uploaded worlds. Given the relatively near-term technological possibility
of such a society, this seems very strange. It seems that domain will
come as the pocket calculator, the personal computer, the world network
and some other information technologies did: without anyone thinking
about it in advance.

Foretelling information technology is fairly simple: RISC will supersede
CISC, vector processors will overcome pipelined RISC, and optronic
multimode devices will take over from electronic vector chips. From
there, uploading will become so cheap that its market will allow the
production of specially designed circuits simulating neuron functions
directly. The computing power will then outperform any brain or set of
brains, so much of the room will be allotted to an "uploaded space"
independent of any consciousness.

When that world gets big enough, internal communication becomes a
problem. It would be very interesting to have the possibility of
instantaneous travel, the so-called transpace concept. A neuron-based
space is a three-dimensional grid. From the theory of differential
equations, this may be mapped onto a finite portion of Euclidean space.
But looked at topologically, it is one realisation of the projective
plane. Another view of the projective plane is a sphere surface where
each point in the northern hemisphere is associated with another point in
the southern part; the equator is a Moebius band. A transpace is then
simply a function going from one three-dimensional representation of the
projective plane to a two-dimensional one and back. In the
two-dimensional situation, the point-pairing effect produces the
instantaneous travel.

This function is an inbuilt, natural function of a neuron space. If some
day you find you are in a somewhat bizarre world, ask for transpace. If
you get a ticket, you are in an uploaded space. If you end up in a
psychiatric hospital, you are in an infinite Euclidean space.

If the space gets divided into a number of domains, it is schizophrenic.
Beyond that, it can run its own transpace function independently on each
domain. Taken as a whole, that space has many projective planes and so a
very complicated geometry with new possibilities. These geometries are
associated with higher-dimensional spaces, so the schizo-space has more
than three dimensions. Differential geometry defines on each space a
natural unit of length (the quantum-mechanical Planck's constant h is the
action, energy x time, associated with that unit length in unbounded
three-dimensional space). When the number of dimensions goes up, the unit
length goes down and everything shrinks. The more observable domains
there are in the space, the more complicated the geometry becomes, the
bigger the number of dimensions and the smaller the objects. Ask to go to
the small world; if you get to the psychiatrist, you are not in a
schizophrenic uploaded world. A fully

[ end of part 1 of 2 parts ]
