X-Message-Number: 8193
Date:  Wed, 07 May 97 13:04:05 
From: Mike Perry <>
Subject: Simulation and System Slowdown

Bob Ettinger, #8185, wrote:

> I had said that the rapidly branching subsimulations would produce
>an unbounded, rapidly increasing load on every level of computers (one
>or two or a few real and the rest simulated).

This and the gist of his related comments appear to be saying that an 
infinite or very large cascade of nested simulations would necessarily 
cause the speed of execution of the top-level process to approach zero. 
(Correct me if I'm wrong here!) However, a counterexample to this seems 
straightforward, even if we set aside the possibility of doing 
simulations efficiently--which I will do here. Let's suppose, then, that we have 
a computer-based system, call it system A, that keeps a record of all jobs
submitted, including jobs in progress that may be 
interrupted temporarily for one reason or another and then resumed.

For whatever reason, every 100 days the system takes 1 day off 
from its usual work and devotes that 24 hours to emulating (not just 
simulating) what it, system A, has been doing since it started 
operations. More specifically, it takes up an ongoing emulation
of itself where it left off 100 days before and carries it forward as
far as it can in the 24 hours it has set aside for this purpose. Let's
assume, moreover, that due to overhead or just plain stupid
inefficiency, the emulation runs 10 times more slowly than the
real thing, so overall the emulation is running 1000 times more
slowly than the original. 
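
To spell out the arithmetic: the emulation gets 1 day out of every 100 
(a duty cycle of 1/100), and during that day it advances at only 1/10 
real speed, so it covers just 0.1 day of emulated time per 100 real 
days--a combined factor of 1/100 x 1/10 = 1/1000.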

Of course, as time goes on the emulation will lag 
farther and farther behind the parent system (A, that is),
but will nevertheless replicate all the parent system's behavior
given unlimited time. Whatever the speed or complexity
of the emulation, however, it isn't slowing the parent system 
down by much, since it is only being run 1% of the time. 

On the other hand, since the emulation itself is of 
system A, it too will run an emulation of system A, during 1% of 
*its* run time, and so on to arbitrary nested levels. Each emulation 
slows its parent emulator down by only 1%, and the slowdown is not 
back-propagated to previous (higher-level) emulators in the chain.
*Within* an emulation, of course, a process proceeds at what 
would seem to it a normal rate (if we grant it "awareness"), 
notwithstanding that it is running only a thousandth as fast as its
copy on the previous level.
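
To make the bookkeeping concrete, here is a small sketch in Python--my 
own illustration, using only the numbers above (1% of each level's time 
given over to emulation, a 10x overhead). It shows that level k runs at 
(1/1000)^k of real-time speed, yet every level, including the real 
system at the top, still spends a fixed 99% of its own running time on 
its ordinary workload:

DUTY = 0.01       # 1 day in every 100 is given over to emulation
OVERHEAD = 10.0   # the emulation runs 10 times more slowly than the real thing

def wall_clock_speed(level):
    # Speed of nesting level `level` relative to real time.  Level 0 is the
    # physical system A; each deeper level runs at DUTY / OVERHEAD (= 1/1000)
    # of its parent's speed.
    return (DUTY / OVERHEAD) ** level

def useful_fraction():
    # Fraction of a level's own running time spent on its normal workload
    # rather than on emulating the next level down.  It is the same at every
    # level, so no slowdown accumulates up the chain.
    return 1.0 - DUTY

for k in range(4):
    print(f"level {k}: {wall_clock_speed(k):.0e} of real-time speed, "
          f"{useful_fraction():.0%} of its own time on real work")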

So on this basis, I claim you could have even an infinite 
cascade of nested emulations that would not slow the whole system, or 
any one of the emulations, to a halt. Note that this "infinite" is 
really a potential infinite, not an actual one. At any given time at 
most a finite number of emulations will be active or will have been 
activated since the whole process began. It is only in the limit as 
time goes to infinity, assuming the whole system is stable long term, 
that we would approach the condition of infinitely many nested 
emulations being run.
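
As a rough illustration of that potential infinite (my own 
back-of-the-envelope model, again using the numbers above and ignoring 
the lumpiness of the 1-day-in-100 schedule), the sketch below estimates 
when each nesting level would first come into existence. The activation 
times grow geometrically, so at any finite time only a handful of levels 
have ever been started:

RATE = 1.0 / 1000.0   # each level's speed relative to its parent
PERIOD = 100.0        # days of a level's own time before it spawns the next level

def activation_day(level):
    # Real (level-0) day on which the given nesting level first starts.
    # Level 0 is active from day 0; level k+1 starts once level k has lived
    # through PERIOD days of its own time, running at RATE**k of real speed.
    day = 0.0
    for k in range(level):
        day += PERIOD / RATE ** k
    return day

for k in range(1, 5):
    print(f"level {k} first activates around real day {activation_day(k):,.0f}")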

Another point worth making, perhaps, is that I've assumed a linear 
relation between the execution time of an emulation and that of its 
parent process (a factor-of-1000 slowdown in this case), but this too is 
not really necessary. The time lag could be worse than linear, and still 
everything would go through okay, i.e., all necessary processing would 
be completed in the limit of time, without greatly slowing the parent 
process. 
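
As a toy illustration of this last point (the cost model here is my own 
arbitrary choice, not something specified above): even if emulating the 
parent's n-th day cost n real days, so that the lag grew quadratically 
rather than linearly, any finite stretch of the parent's history would 
still be fully emulated after some finite real time:

def real_days_to_emulate(parent_days):
    # Total real days needed to emulate the parent's first `parent_days` days,
    # assuming (arbitrarily) that emulating the parent's n-th day costs n real
    # days, i.e. the lag grows quadratically rather than linearly.
    return sum(range(1, parent_days + 1))

for d in (10, 100, 1000):
    print(f"emulating the parent's first {d} days takes "
          f"{real_days_to_emulate(d):,} real days")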

Mike Perry
