X-Message-Number: 22300
Date: Tue, 05 Aug 2003 23:31:18 -0700
From: Mike Perry <>
Subject: Responses on Simulation

Robert Ettinger's comments from #22297, interleaved with my responses:

>Mike Perry writes in part:
>
> > Well, let's suppose we find that human brains produce a standing wave when
> > and only when the subject is conscious. So we make a device that only
> > simulates this standing wave; it is not the real thing. But the simulation
> > results, as expected, in behavior that appears in all outward respects to
> > involve consciousness. Perhaps a robot with this device as its brain will
> > shout, "hey, I'm conscious!" and generally behave exactly as you'd expect
> > if it really was conscious. So how do you know it is *not* conscious?

I notice this question is not answered. It may be that the "simulation" I 
refer to is imperfect in some way, but the assumption I am making is that, 
at minimum, it is *good enough* to achieve the desired effects. So again, 
how do you know the system is *not* conscious?

>He also talks about simulation at the level of "elementary" particles such as
>electrons and neutrons.

And a simulation like that could well be a very good one--though I'll 
concede it is not guaranteed. In fact, the great complexity such a 
simulation would require could fatally magnify small discrepancies with 
the system being simulated. But we don't know that it would.

>There are several possibilities for making objective observations of
>subjective conditions (or the lack of them), but for the moment I'll just
>insert this reminder:
>
>No simulation of the near future is possible, and no simulation of the
>intermediate future is likely, that will fully and accurately represent a
>real system, for the simple reason that we KNOW the current theories of
>physics are incomplete and have uncertain domains of even approximate
>applicability. Electrons and neutrons don't cut it. You may need strings
>or branes and 10 or 12 dimensions including extra dimensions of time,
>blah blah blah.

Mainly, what you need is "good enough" rather than "perfect."

>And yet again the reminder that a computer is the realization of a formal
>system, with syntax but no semantics. No formal system stands on its own; it
>needs the context of a metasystem. Where is the metasystem for the simulation?

One possible response is that a formal system need not be simulating some 
other system; that is to say, it could support a universe all its own, as 
in Conway's Life or other a-life domains. (If determinism is a problem, 
you could easily incorporate randomness.) You have creatures that interact 
and evolve, starting from what may be a very simple configuration of 
states and simple rules (syntax); a minimal sketch of such rules appears 
below. In principle you could have intelligent, self-evolved creatures 
that practice everything from math to marriage to politics. Lots of places 
for "meaning"--semantics, not just syntax. The system constitutes its own 
metasystem (if I understand the intended meaning here). Is it possible 
that reality as a whole is also a type of formal system, and clearly its 
own metasystem, only we just don't have all the rules yet?

In any case, the main issues about consciousness really don't concern 
simulations as such but behavior that in important ways corresponds to 
what we accept as consciousness. That is to say, we aren't interested in 
exactly, isomorphically duplicating the functioning of some particular 
brain, but in something "sufficiently similar" to what brains in general do.

Mike Perry
