X-Message-Number: 7883
Date: Tue, 18 Mar 1997 17:35:31 -0800 (PST)
From: Joseph Strout <>
Subject: Re: misc. uploading notes [long]

I don't want to belabor uploading here, since it's not really the focus of
cryonet -- but it does relate, since there are some (like me) who are only
involved in cryonics because uploading suggests a way out of it on the
other end (not believing that nanotech is likely to be able to repair
biological brains so thoroughly damaged).

So, to respond to a flurry of interesting comments...


> Message #7875
> From:  (Thomas Donaldson)
> 	...
> 2. Computers, computers, what then are computers?...

This is a fair gripe.  I call the brain a computer because it computes --
that is, it does complex information processing.  These terms are only
only slightly less vague than "computer" itself.  But it's still useful to
point out that this is basically what our brain does.  If you say a brain
can't be emulated by a computer, you're making some implicit assumptions
about what *kind* of computer you refer to.  In general, I place no
restrictions on what kind of computer an upload might use; it will
certainly be much more like the thing in your head than the thing on your
desk, however.

>     Since attempts to emulate nondigital phenomena such as
>    weather, or even the motion of the planets (in detail) with a digital
>    computer will go awry over time, there is one problem with uploading into
>    a DIGITAL machine.

I agree.  However, analog integrated circuitry appears to be on the
upswing.  See Mead et al.'s "neuromorphic engineering" artificial retinas
and such, for example.  An artificial brain will quite likely be an
analog/digital hybrid device.

>    Furthermore there is a serious question at the heart of the whole idea
>    of emulation or simulation. A simulated Model T takes you nowhere. It
>    fails in the basic purpose of a Model T.

That's because the basic purpose of a Model T is not processing
information.  Contrast this with, say, a simulated Macintosh (running on
some Unix supercomputer or whatever).  If the simulation is good enough,
then it's just as good as the "real" thing.

>    The simulation doesn't even think, it just simulates thought.

I think you have to justify that this dichotomy exists for thinking.  It
exists for "carrying people down the road", since that's a physical act.
Thinking is a nonphysical act; it is essentially information processing.
So it may not even make sense to say "it just simulates thought".

>    Not only that, but when we look at
>    its insides we find a computer program running the simulated neurons 
>    so that they simulate the action of neurons. We do not have a real 
>    brain but instead a very large computer program.

People raised similar objections to materialism itself.  If you look
inside the skull, where is the "you"-ness?  Surely there's more to us than
a bunch of neurons?  A neuron doesn't have any "selfhood", so how could a
bunch of them have it?

Another tack: suppose you look inside the upload, and instead of a serial
general-purpose computer running code, you find hardware neurons like
Carver Mead's silicon retina on a grand scale?  Would that satisfy you?
If not, what would?

>    Somehow this does not
>    look to me to be enough, even if (theoretically, remember!) such a
>    simulation were good enough to simulate YOU and fool me into believing
>    it was you. As it stands now, I think that this possibility is theoretical
>    only and will remain so indefinitely, for the reasons I've already 
>    given. And if it does not, then YOU can go first. 

I will.  ;)  This has been addressed by David Chalmers in a very
interesting thought experiment.  The upshot of it is that if one system is
functionally equivalent to another, then it must have the same conscious
experience as the other as well.  So you know an emulated you will feel
just like you; it will have the same conscious experience you have.

The only remaining question, then, is whether it is you, or a new person
with your memories.  And whether the latter option even makes any sense
depends on your definition of personal identity.  To me, it makes no
sense; any creature with your mental structure IS you.  That's what you
are; you're a being with a certain mental structure (and other attributes,
like shoe size, which are not important).

>    if that information is then used to BUILD another brain rather than
>    simulate it, it will revive you or me, close enough that I would not
>    complain.

Oops, I see you anticipated my question.  We are basically in agreement,
then.

> Message #7876
> From: 
> 
> First, when Joe says brains are computers, presumably he means that all the
> essential functions of the brain (outside of housekeeping) are computational.
> This is by no means self evident, and probably not true. In particular, it is
> not necessarily true that FEELING (or subjectivity) is just a kind of
> computation, any more than a temperature is just a computation.

As you say (cut for brevity), we've been over this before and there's not
much new to say.  But Chalmers's thought experiment applies here too; if
the upload is functionally equivalent to the original, its subjectivity is
the same too.  Perhaps we should dedicate a thread to this thought
experiment; I'll fetch the reference and post it in a few days if you
like.

> 1. Depending on how strictly you define "emulation," you may run into
> problems of real time vs. computer time.

True.  Today, a single fairly simple neuron runs thousands of times slower
than real time when simulated on a digital computer with reasonable
accuracy.  Dedicated, specialized hardware will certainly be needed.

> possible, in other words, that ONLY an organic substrate can support the
> necessary interdependency of functions. Thus--conceivably--a computer might
> be able to PREDICT or describe the detailed function of a brain, and still
> not BE a brain.

I think this is highly unlikely.  We already have programs which can
emulate neurons with high accuracy, and there is nothing in these
simulations that couldn't be implemented with more solid stuff.
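As a toy illustration of what "emulating a neuron in software" means, here
is a minimal leaky integrate-and-fire model in Python.  All parameters are
generic textbook values, not taken from any particular simulator, and a
real research model would be far more detailed:

```python
# Minimal leaky integrate-and-fire neuron: a toy sketch of neuron
# emulation in software (generic textbook parameters).

def simulate_lif(i_input, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Euler-integrate membrane voltage; return spike times in seconds."""
    v = v_rest
    spikes = []
    for step, i in enumerate(i_input):
        # dV/dt = (-(V - V_rest) + R_m * I) / tau
        v += dt * (-(v - v_rest) + r_m * i) / tau
        if v >= v_thresh:            # threshold crossed: record a spike
            spikes.append(step * dt)
            v = v_reset              # and reset the membrane
    return spikes

# A constant 2 nA input for 100 ms drives the cell past threshold
# repeatedly, producing a regular spike train.
spikes = simulate_lif([2e-9] * 1000)
```

Nothing in a loop like this depends on the substrate being silicon rather
than "more solid stuff" -- it's just arithmetic on state variables.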

> 2. Presumably we are talking about digital computers. We don't know whether
> the brain is (or can be) digital, which leaves many open questions.

I am sorry if it seemed I implied digital computers.  Analog computers are
far more powerful in some respects; the chief advantage of digital
computers is that they are easily configured to different tasks on the
fly.  A brain only has to do one task, i.e. be a brain.

> 3. Even if computer emulation were perfect, spacetimewise as well as
> otherwise, there would still remain the unresolved question of survival

Unresolved to whom?  ;)  This issue has been settled quite to my
satisfaction.  Your criteria for personal identity may differ -- but if
they do, experience suggests that you'll have a great deal of difficulty
defending them from logical attack.

> If a perfect organic duplicate of you were somehow created, at
> some arbitrary distance in spacetime, would that be you?

Yes.  Organic or not, if it has the same mental structure, it's you.

> If the duplicate were created in the future, would that constitute your
> survival or reincarnation? 

Survival, yes.

> What if duplication were imperfect?

Then it would MOSTLY be you, and you'd MOSTLY survive.


> What if the duplication were of you as you were at some earlier stage in life?
> Or as you would become at some later stage in life?

Same -- mostly you, mostly survival.

> These questions have many possible answers, but none that is agreed upon.

True, but no one has been able to find a hole in my reasoning, and the
intuitive off-the-cuff theories that people often come up with do not
withstand scrutiny.  I think the lack of agreement mostly reflects poor
communication, or (in a few cases) poor logical skills.

> Would you bet your life on any of the answers, if you were not forced to
> do so? 

Yes.  As far as I can tell, fuzzy-memory theory (for lack of a better
name) is rock solid.  I'd bet my life on it (and I am, via cryonics,
though not until I have to!).

> Message #7879
> From: Arkady Elgort <>
>
> telephone stations.;)
> I have no problem with AI concept. It is conceivable (though not proven) 
> that AI could outperform natural intelligence like a plane can outrun a bird.

This is a different situation, though.  Or rather, AI may be to brains
like a plane is to a bird, but then AI is not uploading.

> I'd like to read more on a concept, but your server didn't reply.
> Anyway, my feeling is that so far it's purely hypothetical.

Please try the server again -- it's busy sometimes.  And yes, this is
technology that won't be around for a century or so, but it's important to
consider it now.  This is because (1) it will encourage some people to
give cryonics a chance, and (2) it's obviously such a strange idea that
people need a lot of time to think about it.

> Message #7881
> From: Mike Perry <>
> 
> Some questions that can be raised are (1) whether a computer of
> the future will be powerful enough to carry out an emulation of the 
> brain in pure software as I've indicated, (2) if so, whether it can 
> do it in realtime or better, and (3) whether such an emulation, at 
> whatever speed, will be made so as to have "real" feeling or
> emotion. My gut feeling is a "yes" to all three.

These are interesting questions.  But probably academic; once uploading
approaches, it will be far more economical to build special purpose
artificial brains (which hopefully fit within something the size of an
artificial body) than to sell every customer a 3-ton general-purpose
supercomputer!

> Parting thought: the ENIAC became operational in 1946, just 51 
> years ago. We've come a long way in computers since then, but we 
> have a long way to go before any serious prospect of being able
> to upload ourselves could develop. Another 51 years?

A very interesting point, indeed.

> Message #7882
> From: John K Clark <>
>
> That's because we usually only have a picture of a very thin cross section 
> of a cell, if you could see the entire cell in 3D things would be different, 
> even if the resolution was no greater than that found in a good visible light 
> microscope. The new popular visible light confocal microscope certainly helps
> with this depth of focus problem...

But even with a top-notch confocal (which I use all the time), you can't
make out individual synapses.  They're just too small to see with visible
light.  You can just barely make out spines under good conditions.

> I'll bet you could tell if a synapse had undergone LTP or not, you probably  
> could not tell how it was using neurotransmitters, but maybe we'd get lucky 
> and find there is a gross physical change.

Yes, I doubt you really need to know the chemical state of a synapse, but
you DO need to know where the synapses are and what's connected to what. 
You can't even do that well on a confocal with a 100 um slice.  It seems
unlikely you'll do better when imaging an entire brain from outside the
skull!

> Yes, that would be much better and I know that would work, but I was hoping 
> something could be found before we had Nanotechnology.

Reconstruction from serial sections doesn't require nanotech; it mainly
requires a LOT of tissue handling.  Microtechnology will be vital, but
nanotech shouldn't be necessary.

> If you had 200 nanometer resolution in 3D you could certainly tell what  
> synapse went where

Nope.  A confocal gives you that (ideally), and it's not good enough.
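The arithmetic behind that "not good enough" is the standard Abbe
diffraction limit.  A quick check in Python, using typical textbook values
for the optics and for synaptic dimensions (assumptions, not measurements
from this thread):

```python
# Rough diffraction-limit check using the standard Abbe formula:
#   d = wavelength / (2 * NA)
# All dimensions below are typical textbook values.

WAVELENGTH_NM = 500   # green visible light
NA = 1.4              # high-end oil-immersion objective

abbe_limit_nm = WAVELENGTH_NM / (2 * NA)   # best lateral resolution

synaptic_cleft_nm = 20    # typical synaptic cleft width
spine_neck_nm = 100       # typical dendritic spine neck diameter

print(f"Best visible-light resolution: ~{abbe_limit_nm:.0f} nm")
print(f"Synaptic cleft: {synaptic_cleft_nm} nm; spine neck: ~{spine_neck_nm} nm")
```

So even ideal visible-light optics bottom out near 180 nm laterally (and
worse axially), while the structures you need to trace are several times
smaller than that.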

> In 1956 J.A. O'Keefe proposed a "scanning near field microscope" that would 
> break the Abbe barrier.

Right, if you go to scanning microscopies, there are a lot of options, but
they all require slicing the brain... might as well do serial EM.

,------------------------------------------------------------------.
|    Joseph J. Strout           Department of Neuroscience, UCSD   |
|               http://www-acs.ucsd.edu/~jstrout/  |
`------------------------------------------------------------------'

