
  <<< For a more up-to-date version of this paper, see URL: >>>
  <<<       http://merkle.com/merkleDir/techFeas.html       >>>

Newsgroups: sci.cryonics
From:  (Ralph Merkle)
Subject: The Technical Feasibility of Cryonics; Part #1
Date: 22 Nov 92 21:12:17 GMT

The Technical Feasibility of Cryonics

PART 1 of 5.

by

Ralph C. Merkle
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304


A shorter version of this article appeared in:
Medical Hypotheses (1992)  39, pages 6-16.

ABSTRACT

Cryonic suspension is a method of stabilizing the condition of someone 
who is terminally ill so that they can be transported to the medical 
care facilities that will be available in the late 21st or 22nd century.   
There is little dispute that the condition of a person stored at the 
temperature of liquid nitrogen is stable, but the process of freezing 
inflicts a level of damage which cannot be reversed by current medical 
technology.  Whether or not the damage inflicted by current methods can 
ever be reversed depends both on the level of damage and the ultimate 
limits of future medical technology.  The failure to reverse freezing 
injury with current methods does not imply that it can never be reversed 
in the future, just as the inability to build a personal computer in 
1890 did not imply that such machines would never be economically built.  
This paper considers the limits of what medical technology should 
eventually be able to achieve (based on the currently understood laws of 
chemistry and physics) and the kinds of damage caused by current methods 
of freezing.  It then considers whether methods of repairing the kinds 
of damage caused by current suspension techniques are likely to be 
achieved in the future.

INTRODUCTION

Tissue preserved in liquid nitrogen can survive centuries without 
deterioration[ft.1].  This simple fact provides an imperfect time 
machine that can transport us almost unchanged from the present to the 
future:  we need merely freeze ourselves in liquid nitrogen.  If 
freezing damage can someday be cured, then a form of time travel to the 
era when the cure is available would be possible.  While unappealing to 
the healthy, this possibility is more attractive to the terminally ill, 
whose options are somewhat limited.   Far from being idle speculation, 
this option is in fact available to anyone who so chooses.  First 
seriously proposed in the 1960s by Ettinger[80], cryonic suspension 
services are now offered by three organizations in the U.S.

Perhaps the most important question in evaluating this option is its 
technical feasibility:  will it work?

Given the remarkable progress of science during the past few centuries, 
it is difficult to dismiss cryonics out of hand.  The structure of DNA 
was unknown prior to 1953;  the chemical (rather than "vitalistic") 
nature of living beings was not appreciated until early in the 20th 
century; it was not until 1864 that spontaneous generation was put to 
rest by Louis Pasteur, who demonstrated that no organisms emerged from 
heat-sterilized growth medium kept in sealed flasks; and Sir Isaac 
Newton's Principia established the laws of motion in 1687, just over 300 
years ago.  If progress of the same magnitude occurs in the next few 
centuries, then it becomes difficult to argue that the repair of frozen 
tissue is inherently and forever infeasible.

Hesitation to dismiss cryonics is not a ringing endorsement and still 
leaves the basic question in considerable doubt.  Perhaps a closer 
consideration of how future technologies might be applied to the repair 
of frozen tissue will let us draw stronger conclusions - in one 
direction or the other.    Ultimately, cryonics will either (a) work or 
(b) fail to work.  It would seem useful to know in advance which of 
these two outcomes to expect.  If it can be ruled out as infeasible, 
then we need not waste further time on it.  If it seems likely that it 
will be technically feasible, then a number of nontechnical issues 
should be addressed in order to obtain a good probability of overall 
success.

The reader interested in a general introduction to cryonics is referred 
to other sources[23, 24, 80].   Here, we focus on technical feasibility.

While many isolated tissues (and a few particularly hardy organs) have 
been successfully cooled to the temperature of liquid nitrogen and 
rewarmed[59], further successes have proven elusive.   While there is no 
particular reason to believe that a cure for freezing damage would 
violate any laws of physics (or is otherwise obviously infeasible),  it 
is likely that the damage done by freezing is beyond the self-repair and 
recovery capabilities of the tissue itself.  This does not imply that 
the damage cannot be repaired, only that significant elements of the 
repair process would have to be provided from an external source.   In 
deciding whether such externally provided repair will (or will not) 
eventually prove feasible, we must keep in mind that such repair 
techniques can quite literally take advantage of scientific advances 
made during the next few centuries.  Forecasting the capabilities of 
future technologies is therefore an integral component of determining 
the feasibility of cryonics.  Such a forecast should, in principle, be 
feasible.  The laws of physics and chemistry as they apply to biological 
structures are well understood and well defined.  Whether the repair of 
frozen tissue will (or will not) eventually prove feasible within the 
framework defined by those laws is a question which we should be able to 
answer based on what is known today.

Current research (outlined below) supports the idea that we will 
eventually be able to examine and manipulate structures molecule by 
molecule and even atom by atom.  Such a technical capability has very 
clear implications for the kinds of damage that can (and cannot) be 
repaired.  The most powerful repair capabilities that should eventually 
be possible can be defined with remarkable clarity.  The question we 
wish to answer is conceptually straightforward:  will the most powerful 
repair capability that is likely to be developed in the long run 
(perhaps over several centuries) be adequate to repair tissue that is 
frozen using the best available current methods?[ft. 2]

The general purpose ability to manipulate structures with atomic 
precision and low cost is often called nanotechnology (other terms, such 
as molecular engineering, molecular manufacturing, molecular 
nanotechnology, etc. are also often applied).  There is widespread 
belief that such a capability will eventually be developed [1, 2, 3, 4, 
7, 8, 10, 19, 41, 47, 49, 83, 84, 85, 106] though exactly how long it 
will take is unclear.  The long storage times possible with cryonic 
suspension make the precise development time of such technologies 
noncritical.  Development any time during the next few centuries would 
be sufficient to save the lives of those suspended with current 
technology.

In this paper, we give a brief introduction to nanotechnology and then 
clarify the technical issues involved in applying it in the conceptually 
simplest and most powerful fashion to the repair of frozen tissue.



NANOTECHNOLOGY

Broadly speaking, the central thesis of nanotechnology is that almost 
any chemically stable structure that can be specified can in fact be 
built.  This possibility was first advanced by Richard Feynman in 1959 
[4] when he said: "The principles of physics, as far as I can see, do 
not speak against the possibility of maneuvering things atom by atom."  
(Feynman won the 1965 Nobel prize in physics).

This concept is receiving increasing attention in the research 
community.  There have been two international conferences directly on 
molecular nanotechnology[83,84] as well as a broad range of conferences 
on related subjects.  Science [47, page 26] said "The ability to design 
and manufacture devices that are only tens or hundreds of atoms across 
promises rich rewards in electronics, catalysis, and materials.  The 
scientific rewards should be just as great, as researchers approach an 
ultimate level of control - assembling matter one atom at a time."   
"Within the decade, [John] Foster [at IBM Almaden] or some other 
scientist is likely to learn how to piece together atoms and molecules 
one at a time using the STM [Scanning Tunnelling Microscope]."

Eigler and Schweizer[49] at IBM reported on "...the use of the STM at 
low temperatures (4 K) to position individual xenon atoms on a single-
crystal nickel surface with atomic precision.  This capacity has allowed 
us to fabricate rudimentary structures of our own design, atom by atom.  
The processes we describe are in principle applicable to molecules also.  
In view of the device-like characteristics reported for single atoms on 
surfaces [omitted references], the possibilities for perhaps the 
ultimate in device miniaturization are evident."

J. A. Armstrong, IBM Chief Scientist and Vice President for Science and 
Technology[106], said:

I believe that nanoscience and nanotechnology will be central to 
the next epoch of the information age, and will be as 
revolutionary as science and technology at the micron scale have 
been since the early '70's....  Indeed, we will have the ability 
to make electronic and mechanical devices atom-by-atom when that 
is appropriate to the job at hand.

The New York Times said[107]:

Scientists are beginning to gain the ability to manipulate matter 
by its most basic components - molecule by molecule and even atom 
by atom.

That ability, while now very crude, might one day allow people to 
build almost unimaginably small electronic circuits and machines, 
producing, for example, a supercomputer invisible to the naked 
eye.  Some futurists even imagine building tiny robots that could 
travel through the body performing surgery on damaged cells.

Drexler[1,10,19,41,85] has proposed the assembler, a small device 
resembling an industrial robot which would be capable of holding and 
positioning reactive compounds in order to control the precise location 
at which chemical reactions take place.  This general approach should 
allow the construction of large atomically precise objects by a sequence 
of precisely controlled chemical reactions.

The foundational technical discussion of nanotechnology has recently been 
provided by Drexler[85].

     Ribosomes

The plausibility of this approach can be illustrated by the ribosome.   
Ribosomes manufacture all the proteins used in all living things on this 
planet.  A typical ribosome is relatively small (a few thousand cubic 
nanometers) and is capable of building almost any protein by stringing 
together amino acids (the building blocks of proteins) in a precise 
linear sequence.  To do this, the ribosome has a means of grasping a 
specific amino acid (more precisely, it has a means of selectively 
grasping a specific transfer RNA, which in turn is chemically bonded by 
a specific enzyme to a specific amino acid), of grasping the growing 
polypeptide, and of causing the specific amino acid to react with and be 
added to the end of the polypeptide[14].

The instructions that the ribosome follows in building a protein are 
provided by mRNA (messenger RNA).   This is a polymer formed from the 
four bases adenine, cytosine, guanine, and uracil.  A sequence of 
several hundred to a few thousand such bases codes for a specific 
protein.  The ribosome "reads" this "control tape" sequentially, and 
acts on the directions it provides. 
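
To make the "control tape" analogy concrete, here is a minimal Python 
sketch (purely illustrative; only a handful of codons from the standard 
genetic code are included) of the ribosome as a machine that reads the 
tape three bases at a time and appends one amino acid per codon:

    # Illustrative sketch of the ribosome as a tape-reading machine.
    # Only a handful of codons from the standard genetic code are shown.
    CODON_TABLE = {
        "AUG": "Met",   # methionine (also the start codon)
        "UUU": "Phe",   # phenylalanine
        "GGC": "Gly",   # glycine
        "GCU": "Ala",   # alanine
        "UAA": None,    # stop codon
    }

    def translate(mrna):
        """Read the mRNA "control tape" three bases at a time, adding
        one amino acid per codon until a stop codon is reached."""
        peptide = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i+3])
            if residue is None:        # stop codon (or unlisted codon)
                break
            peptide.append(residue)
        return "-".join(peptide)

    print(translate("AUGUUUGGCGCUUAA"))    # Met-Phe-Gly-Ala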

     Assemblers

In an analogous fashion, an assembler will build an arbitrary molecular 
structure following a sequence of instructions.  The assembler, however, 
will provide three-dimensional positional and full orientational control 
over the molecular component  (analogous to the individual amino acid) 
being added to a growing complex molecular structure (analogous to the 
growing polypeptide).  In addition, the assembler will be able to form 
any one of several different kinds of chemical bonds, not just the 
single kind (the peptide bond) that the ribosome makes.
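
The contrast with the ribosome can be made concrete with a small sketch.  
The Python fragment below shows what a single step in an assembler's 
instruction stream might contain; the field names and values are 
invented purely for illustration and are not part of any proposed 
design:

    from dataclasses import dataclass

    # Hypothetical sketch of one step in an assembler's instruction
    # stream (all field names invented for illustration).  Unlike the
    # ribosome, which only appends amino acids via peptide bonds, each
    # step specifies a full position, a full orientation, and the kind
    # of bond-forming operation to perform.
    @dataclass
    class AssemblerStep:
        position_nm: tuple    # tip position (x, y, z) in nanometers
        orientation: tuple    # tip orientation (roll, pitch, yaw), radians
        moiety: str           # reactive group currently held by the tip
        operation: str        # e.g. "abstract hydrogen", "form C-C bond"

    step = AssemblerStep(
        position_nm=(12.4, 3.1, 7.8),
        orientation=(0.0, 1.57, 0.0),
        moiety="1-propynyl radical",
        operation="abstract hydrogen",
    )
    print(step)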

Calculations indicate that an assembler need not inherently be very 
large.   Enzymes "typically" weigh about 10^5 amu (atomic mass units[ft. 
3]), while the ribosome itself is about 3 x 10^6 amu[14].  The smallest 
assembler might be a factor of ten or so larger than a ribosome.  
Current design ideas for an assembler are somewhat larger than this:  
cylindrical "arms" about 100 nanometers in length and 30 nanometers in 
diameter, rotary joints to allow arbitrary positioning of the tip of the 
arm, and a worst-case positional accuracy at the tip of perhaps 0.1 to 
0.2 nanometers, even in the presence of thermal noise[18].   Even a 
solid block of diamond as large as such an arm weighs only sixteen 
million amu, so we can safely conclude that a hollow arm of such 
dimensions would weigh less.  Six such arms would weigh less than 10^8 
amu.

     Molecular Computers

The assembler requires a detailed sequence of control signals, just as 
the ribosome requires mRNA to control its actions.  Such detailed 
control signals can be provided by a computer.  A feasible design for a 
molecular computer has been presented by Drexler[2,19].  This design is 
mechanical in nature, and is based on sliding rods that interact by 
blocking or unblocking each other at "locks."[ft. 4]  This design has a 
size of about 5 cubic nanometers per "lock" (roughly equivalent to a 
single logic gate).  Quadrupling this size to 20 cubic nanometers (to 
allow for power, interfaces, and the like) and assuming that we require 
a minimum of 10^4 "locks" to provide minimal control results in a volume 
of 2 x 10^5 cubic nanometers (.0002 cubic microns) for the computational 
element.  This many gates is sufficient to build a simple 4-bit or 8-bit 
general purpose computer.  For example, the 6502 8-bit microprocessor 
can be implemented in about 10,000 gates, while an individual 1-bit 
processor in the Connection Machine has about 3,000 gates.  Assuming 
that each cubic nanometer is occupied by roughly 100 atoms of carbon, 
this 2 x 10^5 cubic nanometer computer will have a mass of about 2 x 
10^8 amu.
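
The arithmetic behind this estimate is easily checked.  The short Python 
calculation below (illustrative only) uses exactly the assumptions just 
stated - 20 cubic nanometers per lock, 10^4 locks, and roughly 100 
carbon atoms per cubic nanometer:

    # Check of the molecular-computer estimate, using the assumptions
    # stated in the text.
    nm3_per_lock = 20        # 5 nm^3 per lock, quadrupled for power etc.
    locks = 10**4            # enough gates for a simple 4- or 8-bit CPU
    atoms_per_nm3 = 100      # rough carbon packing density assumed above
    amu_per_carbon = 12

    volume_nm3 = nm3_per_lock * locks
    mass_amu = volume_nm3 * atoms_per_nm3 * amu_per_carbon

    print(volume_nm3)          # 200000 nm^3, i.e. 0.0002 cubic microns
    print("%.1e" % mass_amu)   # 2.4e+08 amu, i.e. about 2 x 10^8 amu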

An assembler might have a kilobyte of high speed (rod-logic based) RAM 
(similar to the amount of RAM used in a modern one-chip computer) and 
100 kilobytes of slower but more dense "tape" storage - this tape 
storage would have a mass of 10^8 amu or less (roughly 10 atoms per bit 
- see below).  Some additional mass will be used for communications 
(sending and receiving signals from other computers) and power.  In 
addition, there will probably be a "toolkit" of interchangeable tips that 
can be placed at the ends of the assembler's arms.  When everything is 
added up a small assembler, with arms, computer, "toolkit," etc. should 
weigh less than 10^9 amu.
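
The tape-storage figure can be checked the same way, again using only 
the assumptions stated in the text (100 kilobytes of tape, roughly 10 
atoms per bit, and carbon at about 12 amu per atom):

    # Check of the tape-storage estimate from the assumptions in the text.
    tape_bytes = 100 * 10**3      # 100 kilobytes of "tape" storage
    bits = tape_bytes * 8
    atoms_per_bit = 10            # roughly 10 atoms per bit, as in the text
    amu_per_atom = 12             # assume carbon

    tape_mass_amu = bits * atoms_per_bit * amu_per_atom
    print("%.1e" % tape_mass_amu) # 9.6e+07 amu, i.e. roughly 10^8 amu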

Escherichia coli (a common bacterium) weighs about 10^12 amu[14, page 
123].  Thus, an assembler should be much larger than a ribosome, but 
much smaller than a bacterium.

     Self Replicating Systems

It is also interesting to compare Drexler's architecture for an 
assembler with the Von Neumann architecture for a self replicating 
device.  Von Neumann's "universal constructing automaton"[45] had both a 
universal Turing machine to control its functions and a "constructing 
arm" to build the "secondary automaton."  The constructing arm can be 
positioned in a two-dimensional plane, and the "head" at the end of the 
constructing arm is used to build the desired structure.  While Von 
Neumann's construction was theoretical (existing in a two-dimensional 
cellular automaton world), it still embodied many of the critical 
elements that now appear in the assembler.

Further work on self-replicating systems was done by NASA in 1980 in a 
report that considered the feasibility of implementing a self-
replicating lunar manufacturing facility with conventional 
technology[48].  One of their conclusions was that "The theoretical 
concept of machine duplication is well developed.  There are several 
alternative strategies by which machine self-replication can be carried 
out in a practical engineering setting."  They estimated it would 
require 20 years to develop such a system.  While they were considering 
the design of a macroscopic self-replicating system (the proposed "seed" 
was 100 tons) many of the concepts and problems involved in such systems 
are similar regardless of size.

     Positional Chemistry

Chemists have been remarkably successful at synthesizing a wide range of 
compounds with atomic precision.  Their successes, however, are usually 
small in size (with the notable exception of various polymers).  Thus, 
we know that a wide range of atomically precise structures with perhaps 
a few hundreds of atoms in them are quite feasible.  Larger atomically 
precise structures with complex three-dimensional shapes can be viewed 
as a connected sequence of small atomically precise structures.  While 
chemists have the ability to precisely sculpt small collections of atoms 
there is currently no ability to extend this capability in a general way 
to structures of larger size.  An obvious structure of considerable 
scientific and economic interest is the computer.  The ability to 
manufacture a computer from atomically precise logic elements of 
molecular size, and to position those logic elements into a three-
dimensional volume with a highly precise and intricate interconnection 
pattern would have revolutionary consequences for the computer industry.

A large atomically precise structure, however, can be viewed as simply a 
collection of small atomically precise objects which are then linked 
together.  To build a truly broad range of large atomically precise 
objects requires the ability to create highly specific positionally 
controlled bonds.  A variety of highly flexible synthetic techniques 
have been considered in [85].  We shall describe two such methods here 
to give the reader a feeling for the kind of methods that will 
eventually be feasible.

We assume that positional control is available and that all reactions 
take place in a hard vacuum.  The use of a hard vacuum allows highly 
reactive intermediate structures to be used, e.g., a variety of radicals 
with one or more dangling bonds.  Because the intermediates are in a 
vacuum, and because their position is controlled (as opposed to 
solutions, where the position and orientation of a molecule are largely 
random), such radicals will not react with the wrong thing for the very 
simple reason that they will not come into contact with the wrong thing.

Note that the requirement for hard vacuum can be met even when dealing 
with biological structures by keeping the temperature sufficiently low.

Normal solution-based chemistry offers a smaller range of controlled 
synthetic possibilities.  For example, highly reactive compounds in 
solution will promptly react with the solution.  In addition, because 
positional control is not provided, compounds randomly collide with 
other compounds.  Any reactive compound will collide randomly and react 
randomly with anything available.  Solution-based chemistry requires 
extremely careful selection of compounds that are reactive enough to 
participate in the desired reaction, but sufficiently non-reactive that 
they do not accidentally participate in an undesired side reaction.  
Synthesis under these conditions is somewhat like placing the parts of a 
radio into a box, shaking, and pulling out an assembled radio.  The 
ability of chemists to synthesize what they want under these conditions 
is amazing.

Much of current solution-based chemical synthesis is devoted to 
preventing unwanted reactions.  With assembler-based synthesis, such 
prevention is a virtually free by-product of positional control.

To illustrate positional synthesis in vacuum somewhat more concretely, 
let us suppose we wish to bond two compounds, A and B.  As a first step, 
we could utilize positional control to selectively abstract a specific 
hydrogen atom from compound A.  To do this, we would employ a radical 
that had two spatially distinct regions:  one region would have a high 
affinity for hydrogen while the other region could be built into a 
larger "tip" structure that would be subject to positional control.  A 
simple example would be the 1-propynyl radical, which consists of three 
co-linear carbon atoms and three hydrogen atoms bonded to the sp3 carbon 
at the "base" end.  The radical carbon at the radical end is triply 
bonded to the middle carbon, which in turn is singly bonded to the base 
carbon.   In a real abstraction tool, the base carbon would be bonded to 
other carbon atoms in a larger diamondoid structure which provides 
positional control, and the tip might be further stabilized by a 
surrounding "collar" of unreactive atoms attached near the base that 
would prevent lateral motions of the reactive tip.

The affinity of this structure for hydrogen is quite high.  Propyne (the 
same structure but with a hydrogen atom bonded to the "radical" carbon) 
has a hydrogen-carbon bond dissociation energy in the vicinity of 132 
kilocalories per mole.  As a consequence, a hydrogen atom will prefer 
to be bonded to the 1-propynyl hydrogen abstraction tool rather than to 
almost any other structure.  By positioning the 
hydrogen abstraction tool over a specific hydrogen atom on compound A, 
we can perform a site specific hydrogen abstraction reaction.  This 
requires positional accuracy of roughly a bond length (to prevent 
abstraction of an adjacent hydrogen).  Quantum chemical analysis of this 
reaction by Musgrave et al.[108] shows that the activation energy for 
this reaction is low, and that for the abstraction of hydrogen from the 
hydrogenated diamond (111) surface (modeled by isobutane) the barrier is 
very likely zero.
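
The energetics can be made explicit.  In the sketch below, the 132 
kilocalorie per mole figure is taken from the text; the substrate 
carbon-hydrogen bond energies are typical literature values, quoted 
only for illustration:

    # Rough energetics of hydrogen abstraction by the 1-propynyl tool.
    # The tool's C-H bond energy is the 132 kcal/mole figure from the
    # text; the substrate values are typical literature figures, quoted
    # only for illustration.
    BDE_TOOL = 132.0    # kcal/mole, acetylenic C-H of propyne

    substrate_ch_bde = {
        "methane (CH4)":            105.0,
        "ethane (primary C-H)":     101.0,
        "isobutane (tertiary C-H)":  96.0,
    }

    for name, bde in substrate_ch_bde.items():
        # R-H + propynyl radical -> R radical + propyne
        delta_h = bde - BDE_TOOL   # negative: abstraction is favored
        print("%s: dH ~ %+.0f kcal/mole" % (name, delta_h))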

Having once abstracted a specific hydrogen atom from compound A, we can 
repeat the process for compound B.  We can now join compound A to 
compound B by positioning the two compounds so that the two dangling 
bonds are adjacent to each other, and allowing them to bond.

This illustrates a reaction using a single radical.  With positional 
control, we could also use two radicals simultaneously to achieve a 
specific objective.  Suppose, for example, that two atoms A1 and A2 
which are part of some larger molecule are bonded to each other.  If we 
were to position the two radicals X1 and X2 adjacent to A1 and A2, 
respectively, then a bonding structure of much lower free energy would 
be one in which the A1-A2 bond was broken, and two new bonds A1-X1 and 
A2-X2 were formed.  Because this reaction involves breaking one bond and 
making two bonds (i.e., the reaction product is not a radical and is 
chemically stable) the exact nature of the radicals is not critical.  
Breaking one bond to form two bonds is a favored reaction for a wide 
range of cases.  Thus, the positional control of two radicals can be 
used to break any of a wide range of bonds.

A range of other reactions involving a variety of reactive intermediate 
compounds (carbenes are among the more interesting ones) are proposed in 
[85], along with the results of semi-empirical and ab initio quantum 
calculations and the available experimental evidence.

Another general principle that can be employed with positional synthesis 
is the controlled use of force.  Activation energy, normally provided by 
thermal energy in conventional chemistry, can also be provided by 
mechanical means.  Pressures of 1.7 megabars have been achieved 
experimentally in macroscopic systems[30].  At the molecular level such 
pressure corresponds to forces that are a large fraction of the force 
required to break a chemical bond.  A molecular vise made of hard 
diamond-like material with a cavity designed with the same precision as 
the reactive site of an enzyme can provide activation energy by the 
extremely precise application of force, thus causing a highly specific 
reaction between two compounds.
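
To give a feeling for the magnitudes involved, the short calculation 
below converts the 1.7 megabar figure into a force applied over a 
roughly bond-sized area; the 0.2 nanometer length scale and the 
comparison with the few nanonewtons needed to rupture a single covalent 
bond are rough, illustrative assumptions:

    # Convert 1.7 megabars into a force applied over a bond-sized area.
    # The 0.2 nm length scale and the nanonewton comparison are rough,
    # illustrative assumptions.
    pressure_pa = 1.7e6 * 1e5          # 1.7 megabars in pascals
    area_m2 = (0.2e-9) ** 2            # a patch about 0.2 nm on a side

    force_nanonewtons = pressure_pa * area_m2 * 1e9
    print(round(force_nanonewtons, 1))
    # ~6.8 nN, a large fraction of (indeed comparable to) the few
    # nanonewtons needed to mechanically rupture a single covalent bond.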

To achieve the low activation energy needed in reactions involving 
radicals requires little force, allowing a wider range of reactions to 
be caused by simpler devices (e.g., devices that are able to generate 
only small force).  Further analysis is provided in [85].

Feynman said: "The problems of chemistry and biology can be greatly 
helped if our ability to see what we are doing, and to do things on an 
atomic level, is ultimately developed - a development which I think 
cannot be avoided."   Drexler has provided the substantive analysis 
required before this objective can be turned into a reality.  We are 
nearing an era when we will be able to build virtually any structure 
that is specified in atomic detail and which is consistent with the laws 
of chemistry and physics.  This has substantial implications for future 
medical technologies and capabilities.

     Repair Devices

A repair device is an assembler which is specialized for repair of 
tissue in general, and frozen tissue in particular.  We assume that a 
repair device has a mass of between 10^9 and 10^10 amu (i.e., we assume 
that a repair device might be as much as a factor of 10 more 
complicated than a simple assembler).  This provides ample margin for 
increasing the capabilities of the repair device if this should prove 
necessary.

A single repair device of the kind described will not, by itself, have 
sufficient memory to store the programs required to perform all the 
repairs.  However, if it is connected to a network (in the same way that 
current computers can be connected into a local area network) then a 
single large "file server" can provide the needed information for all 
the repair devices on the network.  The file server can be dedicated to 
storing information: all the software and data that the repair devices 
will need.  Almost the entire mass of the file server can be dedicated 
to storage; it can service many repair devices, and it can be many times 
the size of one device without greatly increasing system size.  
Combining these advantages implies the file server will have ample 
storage to hold whatever programs might be required during the course of 
repair.  In a similar fashion, if further computational resources are 
required they can be provided by "large" compute servers located on the 
network.
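
The division of labor described here is simply the familiar 
client/server pattern.  The toy Python sketch below illustrates it; the 
class names and sizes are invented for illustration and do not 
correspond to any specific proposed design:

    # Toy client/server model of the repair network (names and sizes
    # are invented for illustration only).
    class FileServer:
        """Holds the bulk of the repair software and data."""
        def __init__(self):
            self.library = {}              # program name -> program data

        def fetch(self, name):
            return self.library[name]

    class RepairDevice:
        """Keeps only a small local memory; fetches programs on demand."""
        def __init__(self, server):
            self.server = server
            self.local_ram_bytes = 1024    # ~1 kilobyte of fast local RAM

        def run(self, name):
            program = self.server.fetch(name)
            return "running %r (%d bytes)" % (name, len(program))

    server = FileServer()
    server.library["membrane-repair"] = b"\x00" * 50000   # placeholder
    devices = [RepairDevice(server) for _ in range(1000)]
    print(devices[0].run("membrane-repair"))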

     Cost

One consequence of the existence of assemblers is that they are cheap.  
Because an assembler can be programmed to build almost any structure, it 
can in particular be programmed to build another assembler.  Thus, self-
reproducing assemblers should be feasible, and in consequence the 
manufacturing costs of assemblers would be primarily the cost of the raw 
materials and energy required in their construction.  Eventually (after 
amortization of possibly quite high development costs), the price of 
assemblers (and of the objects they build) should be no higher than the 
price of other complex structures made by self-replicating systems.  
Potatoes - which have a staggering design complexity involving tens of 
thousands of different genes and different proteins directed by many 
megabits of genetic information - cost well under a dollar per pound.


DESCRIBING THE BRAIN AT THE MOLECULAR AND ATOMIC LEVEL

In principle we need only repair the frozen brain, for the brain is the 
most critical and important structure in the body.  Faithfully repairing 
the liver (or any other secondary tissue) molecule by molecule (or 
perhaps atom by atom) appears to offer no benefit over simpler 
techniques - such as replacement.   The calculations and discussions 
that follow are therefore based on the size and composition of the 
brain.   It should be clear that if repair of the brain is feasible, 
then the methods employed could (if we wished) be extended in the 
obvious way to the rest of the body.

The brain, like all the familiar matter in the world around us, is made 
of atoms.  It is the spatial arrangement of these atoms that 
distinguishes an arm from a leg, the head from the heart, and sickness 
from health.  This view of the brain is the framework for our problem, 
and it is within this framework that we must work.  Our problem, broadly 
stated, is that the atoms in a frozen brain are in the wrong places.  We 
must put them back where they belong (with perhaps some minor additions 
and removals, as well as just rearrangements) if we expect to restore 
the natural functions of this most wonderful organ.

In principle, the most that we could usefully know about the frozen 
brain would be the coordinates of each and every atom in it (but see 
footnote 5).  This knowledge would put us in the best possible 
position to determine where each and every atom should go.  This 
knowledge, combined with a technology that allowed us to rearrange 
atomic structure in virtually any fashion consistent with the laws of 
chemistry and physics, would clearly let us restore the frozen structure 
to a fully functional and healthy state.

In short, we must answer three questions:

1.)     Where are the atoms?
2.)     Where should they go?
3.)     How do we move them from where they are to where they should be?

Regardless of the specific technical details involved, any method of 
restoring a person in suspension must answer these three questions, if 
only implicitly.  Current efforts to freeze and then thaw tissue (e.g., 
experimental work aimed at freezing and then reviving sperm, kidneys, 
etc) answer these three questions indirectly and implicitly.  
Ultimately, technical advances should allow us to answer these questions 
in a direct and explicit fashion.

Rather than considering these questions all at once, we shall first 
consider a simpler problem:  how would we go about describing the 
position of every atom if somehow this information was known to us?  The 
answer to this question will let us better understand the harder 
questions.

     How Many Bits to Describe One Atom

Each atom has a location in three-space that we can represent with three 
coordinates:  X, Y, and Z.  Atoms are usually a few tenths of a 
nanometer apart.  If we could record the position of each atom to within 
0.01 nanometers, we would know its position accurately enough to know 
what chemicals it was a part of, what bonds it had formed, and so on.  
The brain is roughly .1 meters across, so .01 nanometers is about 1 part 
in 10^10.  That is, we would have to know the position of the atom in 
each coordinate to within one part in ten billion.  A number of this 
size can be represented with about 33 bits.  There are three 
coordinates, X, Y, and Z, each of which requires 33 bits to represent, 
so the position of an atom can be represented in 99 bits.  An additional 
few bits are needed to store the type of the atom (whether hydrogen, 
oxygen, carbon, etc.), bringing the total to slightly over 100 bits[ft. 
5].
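
This bit count is easy to reproduce.  The calculation below assumes, as 
in the text, a brain roughly 0.1 meters across recorded to within 0.01 
nanometers, plus a few bits to identify the element:

    import math

    # Bits needed to record the position and type of one atom, using
    # the figures given in the text.
    brain_size_m = 0.1          # the brain is roughly 0.1 meters across
    resolution_m = 0.01e-9      # record positions to within 0.01 nm

    positions_per_axis = brain_size_m / resolution_m   # 1 part in 10^10
    bits_per_axis = math.log2(positions_per_axis)      # ~33.2 ("about 33")
    bits_for_type = 5                                  # 2^5 = 32 element types
    total_bits = 3 * bits_per_axis + bits_for_type

    print(round(bits_per_axis, 1))   # 33.2
    print(round(total_bits))         # ~105, i.e. slightly over 100 bits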

Thus, if we could store 100 bits of information for every atom in the 
brain, we could fully describe its structure in as exacting and precise 
a manner as we could possibly need.  A memory device of this capacity 
should quite literally be possible to build.  To quote Feynman[4]: "Suppose, to 
be conservative, that a bit of information is going to require a little 
cube of atoms 5 x 5 x 5 - that is 125 atoms."  This is indeed 
conservative.  Single stranded DNA already stores a single bit in about 
16 atoms (excluding the water that it's in).  It seems likely we can 
reduce this to only a few atoms[1].  The work at IBM[49] suggests a 
rather obvious way in which the presence or absence of a single atom 
could be used to encode a single bit of information (although some sort 
of structure for the atom to rest upon and some method of sensing the 
presence or absence of the atom will still be required, so we would 
actually need more than one atom per bit in this case).   If we 
conservatively assume that the laws of chemistry inherently require 10 
atoms to store a single bit of information, we still find that the 100 
bits required to describe a single atom in the brain can be represented 
by about 1,000 atoms.  Put another way, the location of every atom in a 
frozen structure is (in a sense) already encoded in that structure in an 
analog format:  each atom encodes its own position in the analog values 
of its three spatial coordinates.  Converting this analog encoding into 
a digital one inflates the storage required by a factor of perhaps 
1,000.  If we digitally encoded the location of every atom in the brain, 
we would therefore need roughly 1,000 times as many atoms to hold the 
encoded data as there are atoms in the brain, and hence roughly 1,000 
times the volume.  The brain is somewhat over one cubic decimeter, so it 
would require somewhat over one cubic meter of material to encode the 
location of each and every atom in the brain in a digital format 
suitable for examination and modification by a computer.

While this much memory is remarkable by today's standards, its 
construction clearly does not violate any laws of physics or chemistry.  
That is, it should literally be possible to store a digital description 
of each and every atom in the brain in a memory device that we will 
eventually be able to build.
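
Stated as a calculation, and taking the brain's volume to be about 1.4 
liters (an assumed figure consistent with "somewhat over one cubic 
decimeter"):

    # Volume of a digital, atom-by-atom description of the brain, using
    # the assumptions above.  The 1.4 liter brain volume is an assumed
    # illustrative figure ("somewhat over one cubic decimeter").
    bits_per_atom_described = 100
    atoms_per_stored_bit = 10
    inflation = bits_per_atom_described * atoms_per_stored_bit   # 1,000x

    brain_volume_m3 = 1.4e-3
    storage_volume_m3 = brain_volume_m3 * inflation

    print(inflation)                       # 1000
    print(round(storage_volume_m3, 1))     # ~1.4 cubic meters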

     How Many Bits to Describe a Molecule

While such a feat is remarkable, it is also much more than we need.  
Chemists usually think of atoms in groups - called molecules.  For 
example, water is a molecule made of three atoms:  an oxygen and two 
hydrogens.  If we describe each atom separately, we will require 100 
bits per atom, or 300 bits total.  If, however, we give the position of 
the oxygen atom and give the orientation of the molecule, we need:  99 
bits for the location of the oxygen atom + 20 bits to describe the type 
of molecule ("water", in this case) and perhaps another 30 bits to give 
the orientation of the water molecule (10 bits for each of the three 
rotational axes).   This means we can store the description of a water 
molecule in only 150 bits, instead of the 300 bits required to describe 
the three atoms separately.  (The 20 bits used to describe the type of 
the molecule can describe up to 1,000,000 different molecules - many 
more than are present in the brain).

As the molecule we are describing gets larger and larger, the savings in 
storage gets bigger and bigger.  A whole protein molecule will still 
require only 150 bits to describe, even though it is made of thousands 
of atoms.  The canonical position of every atom in the molecule is 
specified once the type of the molecule (which occupies a mere 20 bits) 
is given.  A large molecule might adopt many configurations, so it might 
at first seem that we'd require many more bits to describe it.  However, 
biological macromolecules typically assume one favored configuration 
rather than a random configuration, and it is this favored configuration 
that we will describe[ft. 6].

We can do even better:  the molecules in the brain are packed in next to 
each other.  Having once described the position of one, we can describe 
the position of the next molecule as being such-and-such a distance from 
the first.  If we assume that two adjacent molecules are within 10 
nanometers of each other (a reasonable assumption) then we need only 
store 10 bits of "delta X," 10 bits of "delta Y," and 10 bits of "delta 
Z" rather than 33 bits of X, 33 bits of Y, and 33 bits of Z.  This means 
our molecule can be described in only 10+10+10+20+30 or 80 bits.
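
The bookkeeping of the last two paragraphs is summarized below, using 
exactly the bit widths given in the text:

    # Bit budget for describing a water molecule, using the widths in
    # the text.
    per_atom = 100
    water_as_three_atoms = 3 * per_atom                  # 300 bits

    absolute_position = 99      # three 33-bit coordinates
    molecule_type = 20          # up to ~1,000,000 molecule types
    orientation = 30            # 10 bits per rotational axis
    water_as_molecule = absolute_position + molecule_type + orientation

    delta_position = 3 * 10     # neighbor within 10 nm at 0.01 nm steps
    water_delta_coded = delta_position + molecule_type + orientation

    print(water_as_three_atoms, water_as_molecule, water_delta_coded)
    # 300 149 80   (149 is the "only 150 bits" figure, rounded)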

We can compress this further by using various other clever stratagems 
(50 bits or less is quite achievable), but the essential point should be 
clear.  We are interested in molecules, and describing a molecule takes 
fewer bits than describing an atom.

     Do We Really Need to Describe Each Molecule?

A further point will be obvious to any biologist.  Describing the exact 
position and orientation of a hemoglobin molecule within a red blood 
cell is completely unnecessary.  Each hemoglobin molecule bounces around 
within the red blood cell in a random fashion, and it really doesn't 
matter exactly where it is, nor exactly which way it's pointing.  All we 
need do is say, "It's in that red blood cell!"  So, too, for any other 
molecule that is floating at random in a "cellular compartment:"  we 
need only say which compartment it's in.  Many other molecules, even 
though they do not diffuse freely within a cellular compartment, are 
still able to diffuse fairly freely over a significant range.  The 
description of their position can be appropriately compressed.

While this reduces our storage requirements quite a bit, we could go 
much further.  Instead of describing molecules, we could describe entire 
sub-cellular organelles.  It seems excessive to describe a mitochondrion 
by describing each and every molecule in it.  It would be sufficient 
simply to note the location and perhaps the size of the mitochondrion, 
for all mitochondria perform the same function: they produce energy for 
the cell.  While there are indeed minor differences from mitochondrion 
to mitochondrion, these differences don't matter much and could 
reasonably be neglected.

We could go still further, and describe an entire cell with only a 
general description of the function it performs: this nerve cell has 
synapses of a certain type with that other cell, it has a certain shape, 
and so on.  We might even describe groups of cells in terms of their 
function: this group of cells in the retina performs a "center surround" 
computation, while that group of cells performs edge enhancement.  
Cherniak[115] said:  "On the usual assumption that the synapse is the 
necessary substrate of memory, supposing very roughly that (given 
anatomical and physiological 'noise') each synapse encodes about one 
binary bit of information, and a thousand synapses per neuron are 
available for this task: 10^10 cortical neurons x 10^3 synapses = 10^13 
bits of arbitrary information (1.25 terabytes) that could be stored in 
the cerebral cortex."
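
Cherniak's arithmetic is easy to reproduce:

    # Cherniak's synaptic storage estimate.
    cortical_neurons = 10**10
    synapses_per_neuron = 10**3
    bits_per_synapse = 1

    total_bits = cortical_neurons * synapses_per_neuron * bits_per_synapse
    print(total_bits)              # 10^13 bits
    print(total_bits / 8 / 1e12)   # 1.25 terabytes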

     How Many Bits Do We Really Need?

This kind of logic can be continued, but where does it stop?  What is 
the most compact description which captures all the essential 
information?  While many minor details of neural structure are 
irrelevant, our memories clearly matter.  Any method of describing the 
human brain which resulted in loss of long term memory has rather 
clearly gone too far.  When we examine this quantitatively, we find that 
preserving the information in our long term memory might require as 
little as 10^9 bits (somewhat over 100 megabytes)[37].  We can say 
rather confidently that it will take at least this much information to 
adequately describe an individual brain.  The gap between this lower 
bound and the molecule-by-molecule upper bound is rather large, and it 
is not immediately obvious where in this range the true answer falls.  
We shall not attempt to answer this question, but will instead 
(conservatively) simply adopt the upper bound.
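
For comparison, the lower bound works out as follows; the synaptic 
estimate from the previous section is included only to show how wide 
the range of estimates is:

    # Long-term-memory lower bound versus the synaptic estimate above.
    long_term_memory_bits = 10**9      # lower bound cited in the text [37]
    synaptic_estimate_bits = 10**13    # Cherniak's figure, for comparison

    print(long_term_memory_bits / 8 / 1e6)   # 125.0 megabytes
    print(synaptic_estimate_bits // long_term_memory_bits)
    # 10000 -- the two estimates differ by four orders of magnitude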

