X-Message-Number: 5529
Date:  Tue, 02 Jan 96 14:42:00 
From: mike <>
Subject: many-worlds, ETs

Bob Ettinger, #5519, says,

>1. Mike Perry (#5513) reiterates his interpretation of many-worlds theory,
>implying that "all possible" future continuers (of ourselves and of our
>present universe) will be realized as the "random" variations unfold; and
>(apparently) that this means that all possible variations of  ourselves as
>continuers will be realized. Nevertheless (in his past writings) he thinks
>good outcomes are more likely than bad, because of the motivations of our
>superhuman future selves or successors. I am still uncomfortable with the
>logic of some of these ideas.

I do think good outcomes (in the long run at least) more likely than 
bad. In a certain, restricted sense, "all possible" outcomes will be 
realized. In another, grander sense, they won't be--my opinion at 
least. I see I haven't been clear, elsewhere, in making the distinction 
between the "restricted" sense and the "grander," so I'll have a go at 
it now. The "restricted" sense refers to events taking place over a 
finite interval of time, while with the "grander" sense the interval 
of time can (and hopefully will) be infinite.
 
To illustrate the restricted sense, suppose I do an electronic coin-toss 
experiment ("electronic" so I can invoke "real" quantum randomness, 
though I don't think this is really necessary). I make a thousand 
tosses. According to many-worlds, I must split many times into 
different versions of myself that observe all the (conceptually and 
physically) possible outcomes, ranging from all heads to all tails. 
If we assume that heads and tails are equally likely, then all the 
outcomes are equally likely (though each one, individually, is not 
very likely). On the other hand, all outcomes will be possible even 
if, say, the chance of heads is 99.9999% and the chance of tails 
only 0.0001%. In each case I can calculate the nonzero probability 
of the given outcome, and that probability must be reflected in the 
results of the trials. Conceptually and physically possible outcomes
need not be equally likely, even if all occur. "Equipossible" is not
equivalent to "equiprobable."
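
The arithmetic behind "equipossible but not equiprobable" can be made 
concrete. The sketch below (in Python; the function name is mine, the 
toss count and bias come from the example above) computes the 
probability of one particular 1000-toss sequence:

```python
# Every specific sequence of 1000 tosses has nonzero probability --
# all are *possible* -- but with a biased coin they are far from
# *equiprobable*.

def sequence_prob(n_heads: int, n_tails: int, p_heads: float) -> float:
    """Probability of one particular sequence of tosses."""
    return p_heads ** n_heads * (1 - p_heads) ** n_tails

# Fair coin: every 1000-toss sequence is equally likely (2**-1000 each).
assert sequence_prob(500, 500, 0.5) == sequence_prob(1000, 0, 0.5)

# Heavily biased coin (heads 99.9999%): all-heads is near certainty,
# while a sequence containing a single tail is possible but rare.
p = 0.999999
print(sequence_prob(1000, 0, p))   # about 0.999
print(sequence_prob(999, 1, p))    # about 1e-06
```

Each outcome's probability is nonzero, so many-worlds realizes every 
one of them somewhere, yet their weights differ enormously.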

By analogy with the coin-toss experiment, I think a reasonable 
argument can be made that "all possible" continuers are created in 
various parallel universes over a finite period of time. However, 
this will include only those continuers that *could* have resulted in 
the finite period, i.e. not what is "impossible." This then would
seem to preclude such outcomes as 
an infinitely advanced continuer, for which infinitely many quantum 
events would have to occur over finite time.
(And there is some complication here because, according to such people 
as Frank Tipler, an infinite amount of time within a universe could 
correspond to a finite amount of time on the outside--but I think this 
conundrum can also be resolved.)

Of the possible outcomes involving continuers, not all would be 
expected to be equally likely (though of course even very unlikely 
outcomes must be realized in *some* of the parallel worlds). This, I 
think, can serve as the basis of hope that our own actions in the 
course of our lives make a difference, and that good outcomes are 
realistic goals to work for. If, in particular, the notion of 
"outcome" is expanded to the "grander" sense I referred to, in which 
we consider infinite intervals of time (as immortality demands), then 
I find reason for optimism.

An eternally bad outcome for some sentient being, 
everlasting torture, for example, will, I conjecture, never 
happen--it has probability zero. Or in other words, the probability 
of experiencing an interval of torture should fall to zero with the length 
of time involved. This is not something that I can prove (and I'm 
sure it can't be "proved" or "disproved" in a mathematical sense), but 
I think it would follow from sentient beings pursuing enlightened 
self-interest over infinite time. If every being ultimately 
experiences a good, everlasting outcome, whatever the privations on a 
lesser scale, then we could reasonably say that good predominates 
over bad--at least in the limit of time. But this predominance 
would apply to an infinite time scale, and would not preclude bad 
sometimes outweighing good over finite time scales. In 
particular, this means, again, that *it makes a difference* what you 
do here and now, even if you can expect good times to eventually 
follow. (If the choice is between the good times starting now or not 
until after 1 million years of bad times, it shouldn't be hard to choose!)
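
The conjecture can be put in toy mathematical form. In the sketch 
below, the per-epoch escape probability `eps` is an invented 
placeholder, not a physical estimate; the point is only the shape of 
the arithmetic:

```python
# Toy model of the conjecture above: if a being in a bad state has any
# fixed nonzero chance `eps` of escaping it in each epoch, then the
# probability the bad state survives n epochs is (1 - eps)**n, which
# shrinks geometrically toward zero -- so an *everlasting* bad outcome
# has probability zero, though long finite bad stretches still occur.

def prob_bad_lasts(n_epochs: int, eps: float) -> float:
    return (1.0 - eps) ** n_epochs

eps = 0.01  # hypothetical 1% escape chance per epoch
for n in (10, 100, 1000):
    print(n, prob_bad_lasts(n, eps))
# the printed probabilities fall toward zero as n grows
```

This is of course a sketch of the logic, not a derivation; whether real 
sentient histories behave like independent epochs is exactly the part 
that can't be proved.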

I'd now like to comment briefly on "randomness" and many-worlds. At 
the quantum level, we see events that to us are "random" and we often 
refer to them as such. An example would be the photon encountering a 
half-silvered mirror and "at random" either passing through or 
bouncing off. But true randomness is objectionable scientifically, 
because it suggests effects without causes, e.g. there is nothing to 
account for why a photon passes through a mirror rather than 
bouncing off. The answer of many-worlds is that there is no 
true randomness--we always know what is going to happen in advance. 
In this case, the observer and surroundings split into two. One 
observes the photon being transmitted, the other reflected. But to 
each observer individually, an apparently random event has occurred.
And so it is in general.
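
A toy enumeration (purely illustrative; the three-photon setup is my 
own choice) shows how deterministic branching can look random from 
the inside:

```python
from itertools import product

# Toy picture of the many-worlds account above: send 3 photons at a
# half-silvered mirror. Viewed from "outside," the branching is fully
# deterministic -- every branch occurs, nothing is random. Yet each
# individual branch-observer inherits one particular record of
# outcomes, which to them looks like a random sequence.
branches = list(product(["transmitted", "reflected"], repeat=3))
print(len(branches))  # 8 -- all possible records exist deterministically
for record in branches:
    print(record)  # the "random-looking" history seen inside one branch
```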

>[snip] 2. You want nightmares? I'll give you nightmares. [snip]

Bob is here invoking the Fermi paradox. We don't see evidence of 
other civilizations more advanced than ourselves (discounting some 
wild claims). Where are they? Does this speak optimistically about 
our own prospects for developing into happy, more-than-human 
immortals, who in particular will be benevolent and care about the 
fate of other sentient beings? Bob says,

>It is exceedingly difficult to account for the absence of such 
>superhuman interveners in any way consistent with BOTH optimism about 
>our future and the notion that intelligent and technologically 
>advanced races have preceded us. Unless we are the first, it seems 
>very hard to avoid pessimistic conclusions about the fate of 
>technologically advanced peoples.

To me it doesn't seem so difficult, though I'll acknowledge the Fermi 
paradox does pose a problem. To illustrate it, suppose we imagine 
that there are advanced extraterrestrials out there,
who are aware of our existence. We also 
take the optimistic view that advanced beings will tend toward 
benevolence, which should involve charity for others less fortunate
than themselves. (And this optimism about other civilizations is necessary,
in turn, if we hope to achieve a good outcome ourselves.) Then it 
would seem they should have contacted us and benefitted us by now. In 
particular, how could such beings *not* have cured our mortality--the 
hideous sentence of execution we are forced to endure in our earthly 
existence?

The two thoughts that come to mind are (1) indeed, we could be the 
first intelligent life-form, either in our present universe, or a 
large subset thereof, or (2) for reasons not hard to fathom, even 
very benevolent, advanced life-forms might choose not to intervene in 
our affairs or openly contact us--yet.

As for possibility (1), I submit we still don't have a good handle on 
how likely it is that intelligent life would evolve in a universe 
such as ours, and until we do, speculation about the likelihood of 
alien, intelligent life-forms (or exobiology in general) is certainly 
not to be taken as definitive. For instance, there could be steps in 
the evolutionary process that were much "luckier" than we think.

If we think about the 
process of evolution on earth, there are several stages about which, I 
think, there is still much uncertainty as to the likelihood that they 
would happen over the span of time and conditions involved.
These include (a) getting the whole process started in the first 
place, (b) procaryotic to eucaryotic cells, (c) single-celled to 
multicellular organisms, (d) non-sentience to sentience, i.e. 
development of the central nervous system, and (e) non-human to human 
sentience. The occurrence of even one "improbable" step above could 
be enough to make intelligent life unlikely to arise more than once 
in a universe such as ours. On the other hand, several successive steps 
are involved. Perhaps no individual step would be so unlikely, given 
the necessary preliminaries, as to be 
precluded from happening occasionally, but the coincidence of them 
all could be very unlikely. A third possibility is that the entire 
evolutionary sequence is not that unlikely, given the surrounding 
conditions, but those conditions themselves are unlikely. For 
example, the sun must have burned very steadily for billions of years, the 
earth's climate must have *always* been hospitable to life, etc. It may be 
too that catastrophes of just the right extent--not too much and not 
too little--were necessary to prevent stagnation of the evolutionary 
process, as (perhaps) in the KT boundary event 65 million years ago. 
In all, the emergence of intelligence could amount to an incredibly 
unlikely throw of the statistical dice.
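
The compounding effect is simple arithmetic. In the sketch below the 
step probabilities are invented placeholders, not estimates of any 
kind; the labels follow steps (a)-(e) above:

```python
# Illustrative arithmetic only -- the probabilities are invented.
# The point from the text: even if no single evolutionary step is
# prohibitively unlikely, several independent steps compound
# multiplicatively into a very small joint probability.
steps = {
    "(a) abiogenesis":                0.1,
    "(b) procaryote -> eucaryote":    0.05,
    "(c) multicellularity":           0.2,
    "(d) sentience (nervous system)": 0.1,
    "(e) human-level sentience":      0.05,
}

joint = 1.0
for name, p in steps.items():
    joint *= p
    print(f"after {name}: {joint:g}")
# joint ends near 5e-06 -- far smaller than any individual step
```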

Supposing, however, it *isn't* unlikely, i.e. turning now to 
possibility (2), I can see reasons why advanced, benevolent 
extraterrestrials might not contact us. To them, in only an eyeblink 
we will be immortals too, or perhaps will have self-destructed. At 
present, they 
could *make* us into immortals instead, and solve the remaining 
problems (both technological and psychological) that we would 
otherwise have to solve ourselves. The outcome, in so doing, would be 
a kind of "hybrid"--a race of immortals (ourselves) partly shaped by our 
evolutionary process, partly created by our friendly ETs. Would they 
wish to create such a hybrid, or would they rather wait the eyeblink, 
and just let us evolve if we will? To me it seems entirely possible 
the ETs would just prefer to let us evolve. We have some significant 
transitions to go through, psychologically as well as technologically. 
It's often said that we are mainly machines to perpetuate our genes, 
which, if you think about it, would seem to provide poor motives for 
the would-be immortal. The ETs could be waiting to see how we deal 
with this problem.

It might be argued that, if the ETs estimated our chances of 
self-destructing as large, they would intervene. Since they haven't 
done so (we think), we might take this as evidence that our chances 
of self-destructing aren't large, though that is not the only 
possibility. A second is that the ETs are content with extracting 
information about our civilization so that, if we do slide down the 
tubes, all will not be lost. They thus have monitoring devices 
disguised as 
familiar objects, which we do not recognize. A third possibility is 
that the ETs *will* intervene if things really get tough, but not 
otherwise. (And since, at that point, we will have been bad boys and 
girls, the intervention may not be that pleasant, though ultimately 
for a beneficial purpose.)

Supposing benevolent ETs, we still must 
contend with their level of advancement being, very likely, much 
greater than ours. They will know much more about the fabric of 
reality, and their values will not be the same in any case. What they 
perceive as "benevolent"--and hopefully what we too will ultimately 
perceive--may differ considerably from our present point of view, 
even given the basic agreement that we all want to be sentient and 
happy, forever.

