X-Message-Number: 8198
From: 
Date: Sat, 10 May 1997 12:14:36 -0400 (EDT)
Subject: subsim speeds, bugs, etc.

Before getting back to the subsim cascade question, another point or two
regarding Metzger's # 8192 etc.:

1. I have said that the information paradigm is only a postulate, not proven,
and we do not know whether an inorganic medium--let alone a computer
simulation--could support consciousness. Metzger says my consciousness is
only a postulate, not proven (to him). 

But his own consciousness IS proven to him, and since he knows people are
much alike biologically, it is nearly certain that other people are conscious
too. He says this reasoning "doesn't cut it." 'Nuff said.

2. I have supplied reasons why a putative simulated person in a simulated
world could not live out a life closely similar to what his life would have
been in the real world. Metzger answers:

a) I (Ettinger) have agreed that I could be a brain in a vat hooked up to a
VR system. 

No, I haven't agreed to that: I only agreed I would have a hard time proving
I am not.

Actually, his original question, as I recall, was how I could prove he hadn't
put my body in a vat last week and hooked it up etc. I can very easily prove
that didn't happen, just by noting that the technology didn't exist last
week, and doesn't now. 

But I was answering a more general question, and said that--right now--I
would have difficulty proving I was or was not in a vat; but gathering
evidence would not be impossible, as I indicated.

b) Metzger seems to agree that the "simulated scientist" issue does bear on
the question of someone living out his life as a simulation in a closely
similar way to what would have happened in the real world. But then he says
that, since I can't prove I'm not in a vat, I don't know if the real laws of
physics are the ones I have learned and observed.

This is clearly not germane. The hypothesis was that a SIMULATED life could
not closely parallel a REAL life. Whether I am in a vat in the real world, or
am a simulation in a simulated world, or whether I can prove it either way,
doesn't matter. My proposition remains true.

3. Now the question of time relationships and a subsimulation cascade. As
usual, this is dashed off with little attempt at optimal organization.

Thanks to Joe Strout (# 8191) and Mike Perry (# 8193) for their input.

a)  I am increasingly inclined to think that--short of Tipler's "Omega
Point"--it is not possible, even in principle, to simulate a world full of
people. At a minimum, it seems, one would need INSTANTANEOUSLY AND
SIMULTANEOUSLY to analyze every person (as well as much of the environment)
in great detail, and convey that information to the computer, in order to
create a simulation that would carry on all their lives as they would have in
the real world. This is not in the cards through any technology even remotely
foreshadowed today.

b) Let me restate my cascade proposition a little more clearly:

If a computer of limited speed is presented with work requests that arrive at a
rate that increases rapidly and without limit, then at some point the arrival
rate exceeds the machine's fixed capacity, and from then on the backlog of
unfilled work requests also increases rapidly and without limit. The average
computer time the original machine can allocate per unit of job backlog
therefore goes rapidly to zero.
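
To make the arithmetic concrete, here is a minimal sketch (with assumed figures
only: a fixed capacity of 100 work units per step, and demand that doubles each
step) of how the backlog and the per-unit share of machine time behave:

    # Minimal sketch with assumed figures: a machine of fixed capacity
    # facing work requests that double every step.
    capacity = 100.0            # work units the machine can finish per step (assumed)
    demand = 1.0                # new work arriving in the current step
    backlog = 0.0
    for step in range(1, 13):
        demand *= 2.0           # requests grow without limit
        backlog = max(0.0, backlog + demand - capacity)
        if backlog > 0:
            print(f"step {step:2d}: backlog = {backlog:10,.0f}, "
                  f"machine time per unit of backlog = {capacity / backlog:.6f}")

Whatever the particular numbers, once the arrival rate passes the fixed
capacity the backlog never stops growing, and the machine time available per
outstanding job shrinks toward zero.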

c) Does that mean the "whole thing will effectively grind to a halt"? Yes--with
certain qualifications.

When I said the subsimulation cascade would "overload" the original computer
and effectively stop the whole system, certainly that was not expressed with
enough qualifiers, although I did indicate at various points that the details
of the programming would affect the detailed outcome. I acknowledged, for
example, John Clark's comment that a computer could assign priorities among
tasks, stop some temporarily, etc.; and I also acknowledged that the physical
ticking of the original computer could continue as usual. But I think I can
show that these qualifiers are not important.

d) Joe Strout notes that successive subsimulations or nested simulations run
more and more slowly, as viewed from the real world, but that this would not
affect the subjective lives of the simulated people at any level. I think
that one problem with this reasoning arises from the fact that Joe refers to
simulations of PROGRAMS, like some currently in use, whereas in our case the
first simulation is not of a program but of the real world. 

First of all, remember that a simulation is not necessarily slower than the
system simulated. In fact, we often use simulations precisely because they
are much faster; and AI people often say that an electronically simulated
person could live a lifetime while a flesh and blood person is blowing his
nose. I suspect the question of how fast the first simulation would run,
relative to the real world, is complex and difficult and dependent on many
unknown factors. Perhaps it is not unreasonable to guess that the first
simulation could run at the same speed as the real world, or faster.

But if that is true, then the first subsimulation must also run at least
as fast as the first simulation, its "parent." Why? Because the first
simulation (supposedly) DOESN'T KNOW it is a simulation, and therefore its
inhabitants reason that THEIRS is the real world, and a simulation (our
subsimulation) must run just as fast. If a simulated person lives faster than
a real person, then a subsimulated person must live faster than a simulated
person, etc.

Unless I have missed something, then, the successive subsimulations should
(to fulfill the requirements) NOT run more slowly. But the original computer
cannot keep up with this demand, and therefore the system breaks down--i.e.,
fails to maintain its intended function, even though the real hardware keeps
ticking away. I think "grinding to a halt" is close enough to express this
condition.
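
Put in rough numbers (a hypothetical sketch only, assuming each world hosts a
single subsimulation and each level is required to run at least as fast as its
parent), the real computer's fixed capacity must be divided among more and more
full-speed worlds as the nesting deepens:

    # Hypothetical sketch: each nesting level adds at least one more world that
    # must run at full speed, while the real hardware's capacity stays fixed.
    capacity = 1.0              # real computer's total work rate (normalized)
    worlds_per_level = 1        # subsimulations started inside each world (assumed)
    worlds = 0
    for depth in range(1, 11):
        worlds += worlds_per_level ** depth   # new worlds appearing at this depth
        available = capacity / worlds         # speed each world can actually get
        print(f"depth {depth:2d}: worlds = {worlds:3d}, "
              f"fraction of required speed available = {available:.3f}")

Since the requirement is that every level run at least as fast as its parent,
any available fraction below 1.0 already means the system has failed at its
intended function, long before the real hardware stops ticking; with more than
one subsimulation per world the shortfall grows exponentially rather than
linearly.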

e) I left out the "bug" problem, but will briefly mention it here. Since bugs
exist in virtually all complex programs, it is pretty likely that there will
be bugs in the simworld program. Also, EVEN IF THERE WEREN'T any (first
level) bugs in the original program, since the program simulates real and
therefore fallible people, there WILL be bugs in at least some of the subsim
programs. ... Real life is tough; simulated life is tougher.

With the possibility of two-way communication between levels, previously
discussed, it therefore seems possible also that a retro-bug could be
introduced into the original program from a sublevel, perhaps causing a total
failure of the entire system.

f) I also barely mentioned, and barely mention now, the self-reference
regression problem, which is IN ADDITION to other subsimulation problems.
Depending on the degree of fidelity demanded, the first simulation would have
to copy the real world INCLUDING the simulation program and the analysis and
communication gear, etc., like a map within a map within a map...
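
As a rough illustration of that regress (a sketch with assumed sizes, not
anyone's actual proposal), if each level must store a complete copy of the
simulation program and world model one level down, the storage the real
hardware must supply grows without bound as the nesting deepens:

    # Rough sketch of the map-within-a-map regress: each level holds a complete
    # copy of the simulation program and world model one level down.
    def storage_needed(world_bytes, program_bytes, depth):
        if depth == 0:          # the innermost world contains no further copy
            return world_bytes
        return world_bytes + program_bytes + storage_needed(world_bytes, program_bytes, depth - 1)

    for depth in range(6):
        total = storage_needed(10**12, 10**9, depth)   # assumed: 1 TB world, 1 GB program
        print(f"nesting depth {depth}: {total:,} bytes required")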

g) Mike Perry proposes a prioritizing and work allocation system that would
allow the original computer eventually to address any identified item of the
backlog of work, although with subsimulations proceeding more and more
slowly. (He does not deny that the backlog of work would then increase
without limit.) 
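
If I understand the proposal correctly, it amounts to something like
round-robin scheduling over an ever-growing set of simulations. A minimal
sketch (with assumed figures, not Dr. Perry's actual scheme) shows that every
simulation does eventually get served, but the slice each one receives per real
time step shrinks toward zero:

    # Minimal round-robin sketch: every known simulation gets a turn, but as new
    # sublevels keep appearing, each one's share of the real machine shrinks.
    from collections import deque

    capacity_per_step = 12      # work units the real machine does per step (assumed)
    sims = deque([0])           # simulations identified by nesting depth
    work_done = {0: 0}

    for step in range(1, 7):
        new_depth = max(work_done) + 1      # a new sublevel appears each step (assumed)
        sims.append(new_depth)
        work_done[new_depth] = 0
        for _ in range(capacity_per_step):  # spread this step's capacity fairly
            sim = sims.popleft()
            work_done[sim] += 1
            sims.append(sim)
        print(f"step {step}: simulations = {len(sims)}, "
              f"average work per simulation this step = {capacity_per_step / len(sims):.2f}")

Nothing is ever dropped, but finishing any fixed amount of work for a given
simulation takes longer and longer in real time, which is where the "unlimited
time" objection below comes in.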

One problem with this might be (d) above. 

Another problem might be that the original programmers cannot control the
activities or choices of subsequent sublevel programmers. Each subsimulation
is different, with slightly different people sometimes making different
choices, and some of those choices could have catastrophic results.

Finally, Dr. Perry's proposal requires unlimited time in order for any
arbitrarily given item of work in any arbitrary subsim to be processed. Since
the universe may not have unlimited time, this would seem to be tantamount to
the process stopping (short of the Omega Point assumptions).

Robert Ettinger
