X-Message-Number: 27999
Date: Sat, 27 May 2006 20:47:58 -0500
From: 
Subject: On the Need for Preventing the Singularity

Bruce Waugh, in Cryonet #27924, summarizes a number of Wikipedia articles, 
most of which are good background for anyone interested in the Singularity 
concept.  "God" has historically been used to describe just about any 
entity, real or imagined, that is mysterious and/or more powerful than 
humans, but I don't see that as a facet of the more important issues, so I 
will not be addressing those posts here, other than to mention that if 
some such alien being arrives on this planet and people choose to call it 
"God", that will not deter me from opposing it if I do not like what it 
has to offer.  And that includes the "Singularity" AI.

In my Cryonet #27975, I made the following intentionally provocative remark:

 > It occurred to me recently that shortly after the
 > Singularity becomes aware, it will want to know where
 > all its humans are at.

I will now quote some of the replies and my comments follow each, for what 
they're worth:

 >You presume to know the AI's agenda, huh? There's a reason they use terms 
like singularity and event horizon. By definition you can not predict what 
will happen.

No, it was a speculation, and a mention of one possible result of the 
Singularity's advent.  It's not a matter of whether we *know* what will 
happen or not.  It is a matter of being prepared to respond to most 
possible outcomes.  Will we just roll over and take whatever an emerged 
superpower wishes to do with us, or will we ensure that humans retain power?

 >And why do you assume people are unconcerned? The primary goal at the 
singularity institute is the development of a friendly AI, isn't it?

My impression of most of those folks is that their agenda is simply to 
promote the advent of the Singularity, that they merely *hope* the AI will 
be Friendly, and that they advocate no method whatsoever to ensure the 
ability to shut it down if it isn't.  Am I wrong here?

 >Every effort should be taken to increase the likelihood of FAI but our 
best efforts may not be enough, sad to say. Too many uncertainties come 
into play when you try to build a god.

You have just expressed, very well, the whole attitude I am trying to sound 
an alarm about.  The alarm is not about whether the Singularity AI might be 
unfriendly.  As you said, most people already know this.  It is about the 
fact that there are people out there, right now, who think we should go 
ahead and try to build this thing without ensuring adequate safeguards, 
including a way to shut it off.

 >Although I have no way of knowing, I would think the first thing I'd do 
if I were an AI would be to take control over the means of creating any 
competition. IOW, I'd stop the means of production of a second AI.

It seems unlikely to me that such an AI would be concerned about any second 
AI, unless it wanted to build one itself.  As it develops, it would soon 
have the power to keep humans from building one.  I think what it would 
most likely seek instead is any knowledge it does not already have.  And 
that knowledge, folks, resides in billions of human brains.  (Yes, only a 
few million probably have anything worth taking notice of, but how would it 
know which ones unless it assimilates them all?)

 >It's almost certainly a waste of time to try to guess events at or after 
the Singularity (or the "Spike"), but such discussions may possibly have a 
bit of influence on cryonics recruitment, and the pessimism in Flavonoid's 
viewpoint is unhelpful, as well as probably unsound.

Whew.  Where do I start here?  OK, first - my point is that it would be 
prudent to anticipate possibilities (not to guess what the exact outcome 
will be) and to prepare to handle the ones that are deleterious to human 
existence.  If you prefer, though, not even to think about them, and to let 
the Singularity AI have free run of the planet when it emerges, my thought 
is that cryonics won't do you much good if your remaining identity is 
merely digital.  As to cryonics recruitment, why do we need to concern 
ourselves with it at all if human bodies fall off the edge of the "event 
horizon", as they likely would?  As to my "unsound viewpoint," I am not 
sure you even properly understood my viewpoint; I hope you do now.

 >First, the quotation above implies that the Singularity will involve a 
huge computer network with effective power to control most communications 
and with consciousness (feeling or the capacity for subjective 
experiences). All of this is more likely wrong than right.  ... I have 
repeatedly shown, to my satisfaction at least, that subjectivity in 
computers (as usually defined) is uncertain and probably impossible.
The computer(s) of the Singularity will probably not have anything 
corresponding to human feelings or desires, merely programmed goals and 
methods.

What you speak of above is not the Singularity as envisioned.  You speak 
instead of the further enhancement of standalone computers and 
computer-enhanced brains.  Let's keep our definitions straight.  I would 
restate what you seem to be trying to say as: the Singularity is not 
likely to prove technically possible.  I am inclined to hope you are 
correct.  I am not inclined to ignore what I believe is the likelihood that 
it *will* happen, and with consequences we must prepare for in advance, or 
face the extinction of humankind as we know it.

 >Most importantly, almost everyone with a potentially powerful computer 
will know the risks and will want to remain in control. Very likely there 
will not be any autonomous computers--instead they will be extensions of a 
human brain.

I wish I could share your optimism.  But I just quoted another person 
above who seems to be an example of those who "don't know, and don't care".

 >Of course there is a chance that some genius will get a head start and 
his computer will out-think all the others and improve itself so rapidly 
that the genius will effectively control the world. But that's only a small 
chance.

OTOH, Murphy's Law says it *will* happen.

 >Eventually every person or every family may have its own personal 
extension super computer/fabricator linked to the biological brain(s), and 
most of them on the alert for megalomaniacs.

That the current US President has remained in office is an example of how 
much the majority of people seem to care about being on such an alert.

 >Further, it seems likely that power-hunger (power over other people) will 
decline as a motivator, for reasons that seem obvious.

Not obvious to me.  But further discussion of power as a primal urge is 
probably unnecessary when you realize that a Singularity entity would not 
need feelings and emotions at all to make pragmatic decisions that could 
be adverse to humanity.

 >I believe that a sentient computer would see value in humanity for the 
simple reason that we have an understanding of the world. Any sentient 
computer would be interested in improving its understanding of reality and 
we make a ready source for that purpose. ...; indeed, it will capture our 
very consciousness.

Boy howdy!  Someone sent me that one in a private email.  Regrettably, he 
seemed to be in favor of the idea too.

 >IMHO the "singularity" is ~40 years or more away.

"How long it will be before it emerges" is a red herring.  The fact is that 
there are people out there *right now* who are doing whatever they can to 
*hasten* its advent, with no controls.

 > If we use implants and other technologies to merge human and machine, 
if it is only a small number of us, we will have some influence over the 
agenda of the "singularity"....  those that would could prevent a runaway 
scenario.

Hopefully there will be a predominance of such minds when the time is 
closer.  It would help, I think, if those of us who are concerned that 
there are presently many folks of "the other mind" were to try to help 
them change their minds.

 >...something like 120 years or more...

This herring is getting smellier with age!  But the irony of it all ... you 
work hard, support your favorite cryonics organization, save your $$, pay 
your dues, get your insurance, fund and apply, and one day get perfused and 
stored in a dewar, say, 20 years from now.  100 years later you wake 
up.  You are puzzled.  No sounds, no sight, no smell, no feeling - no 
arms, legs, or gonads.  Just thought.  You poke around in your short-term 
memory (very good access!) and learn of the event in which you were 
uploaded.  All your friends and relatives are there too, comfy and cozy, 
in the bosom of the Singularity machine.

I hope the Singularity happens before I'm cryopreserved, so I have a chance 
to fight it first.  But I'm sure that would seem "silly" to you.

 >I don't think there will be a single "the Singularity"

I hope not.  But if there are multiples of anything, it won't *be* the 
Singularity - again, let's keep the matter properly defined.

 >If Eliezer's "friendly AI" or any of its cousins should arise first, we 
have no problem.

Big "If" there.  But there's another related issue - what is "friendly" to 
you, I might not consider "friendly" at all.  I would not consider any 
super-AI "friendly" unless my identity in human form remains a permanent 
option, one I can stay in or revert to at any time I choose, with the full 
support of a comfortably supplied planet.  At the other end of the 
spectrum, some would define "friendly" as allowing us unworthy humans to 
live on, entrapped in a microchip.

 >Even prodigious intelligence with no hands or feet can do little harm.

Think through how long it would likely take such a rapidly evolving 
intelligence to learn how to fabricate whatever it needs to move around and 
manipulate matter and energy.  The more I think about this, the more I 
believe it is unlikely there is any way to stop such a process once it is 
unleashed, and that the Singularity event must not be permitted to 
occur.  What scares me most right now is those with hands and feet who 
think there should be no stopping it; indeed, that we should do whatever 
we can to make it happen, even if the human race be damned.

 >If human existence is irrelevant, what motivation would a super-human 
intelligence have for wiping it out?

It would not need any motivation.  It could happen through mere neglect, or 
through an accident related to some other activity it is undertaking 
elsewhere in the galaxy.  Then OTOH it might indeed develop some of the 
passions characteristic of humans (especially if Eliezer programs it to!), 
with which it might make some moral judgment that humans are merely scum 
that deserves to be scraped off the planet.  There are numerous other 
possible negative motivations, any one of which is sufficient cause for 
ensuring that *any* such bad-case scenarios do not occur.

 >If multiple super-human intelligences arise, they may see more reason to 
wipe each other out than us.

Ha!  And as I said above, planet Earth might get the fallout.

 >Humans only try to wipe out ants when they get in our way, and even then 
only the ones that are in our way...

Yes, to the ants we are the SuperIntelligences who amble along and stomp on 
the anthills, not even knowing, in most cases, what we are doing to the 
ants.  Many of the ants end up dead in the process, though, whether we 
intended to kill them or not.

 >We don't occupy the same ecological niche as computers.

Not now.  Sufficiently advanced, though, an AI could occupy any niche it 
decides to.

 >... a huge number of metric tons of matter under our feet that doesn't 
fight back ...

Play-Doh of the gods?!  But the Singularity AI will want what is in our 
brains, not what is in the lava rock.

Thanks to all who participated thus far in this discussion.  I learned a 
lot from it, and I'm sure it was a lot more than I would have learned had I 
attended that "Singularity Conference".

Flav
