X-Message-Number: 30183
From: 
Date: Fri, 21 Dec 2007 00:43:06 -0500
Subject: more on possible (not alleged) AI threat

Kennita's latest message may have some insights; in fact, I don't find a
lot to disagree with in it, and I appreciate the valuable time she took
out of her busy schedule to write it.  But it misses the point.

To Robert Ettinger:  you and Kennita, along with numerous other writers
in the past, when hearing about the possible threat of a malevolent
Singularity AI, make frantic digressions into irrelevant subjects.  It is
not I who have missed the point; it is you who have changed the subject.

The subject is:  There is a *possibility* of a malevolent Singularity AI
emerging at some (undefined) point in the future, and the only chance we
have of deterring it, should development progress that far, is to place
adequate safeguards at every step of future AI research.

There are, of course, avoidance techniques built into human nature,
intended or unintended.  With regard to the possibility of a malevolent
Singularity AI, avoidance usually takes the form of a psychological
denial statement like "there is a chance it will be a friendly AI" or
"we don't know that this will happen!" or "the singularity has to be
[insert your favorite number here, the larger the better] years away;
what, me worry?"

Here is one such statement by Kennita, with which, of course, I
vehemently disagree:  "Awareness of "the problem" is most likely to
foment panic and the kind of useless safeguards you mention, which would
probably hinder research that we *want* to happen."   Nowhere do I state
that there should not be research, nor do I call for "useless
safeguards."  If useful safeguards cannot be developed in some area,
then, of course, there should be no research in that area.

R.E. said:  "No, the bear is not programmed in the way that a computer
is.  The computer is language-based and digital, which is very
different."

How do you know this?  Animal brains may very well be programmed
similarly to computers.  And if they are not, then I would think that
computers will eventually have to be programmed more like animal brains,
in order to emulate their functions.  Isn't this what AI is all about?
Regardless, I was making an analogy.  If that is a criticism, I will
just have to claim artistic license in that my bear is an imperfect
analogy to the Singularity AI, albeit a useful one  :)

"You cannot program a  computer to "destroy humanity" or to "save
humanity" because no such algorithm  is possible."

Maybe you should go watch some adolescents playing some of the modern
video games available.  These game writers specialize in what you say is
impossible.  Usually, yes, the computer characters' goals are less global
than that, but I'm confident you could easily construct goals relating to
"humanity" given sufficiently sophisticated algorithms.  And besides, the
Singularity is pretty much understood to be a point in time when the
computers will become self-programming and be able to make these
decisions for themselves.
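To make the point concrete, here is a toy sketch of my own (purely
hypothetical names and dynamics, not anyone's actual game code) showing
how a game-style character pursues a programmed goal.  A goal like
"protect X" or "destroy X" is nothing more than a scoring function over
simulated world states:

```python
# Toy illustration: an NPC choosing actions to maximize a programmed
# objective.  "Protect the population" is just a scoring function.

def protect_population(state):
    """Objective: keep as many simulated agents alive as possible."""
    return sum(1 for agent in state["agents"] if agent["alive"])

def choose_action(state, actions, simulate, objective):
    """Pick the action whose simulated outcome scores highest."""
    return max(actions, key=lambda a: objective(simulate(state, a)))

def simulate(state, action):
    # Hypothetical dynamics: defending saves the threatened agent,
    # idling lets it die.
    agents = [dict(a) for a in state["agents"]]
    if action == "idle":
        for a in agents:
            if a.get("threatened"):
                a["alive"] = False
    return {"agents": agents}

# Minimal world: two agents, one under attack.
state = {"agents": [{"alive": True}, {"alive": True, "threatened": True}]}

best = choose_action(state, ["idle", "defend"], simulate, protect_population)
print(best)  # "defend" -- goal-directed behavior, yet just an algorithm
```

Scale up the world model and the objective, and there is no obvious
barrier to goals phrased in terms of "humanity."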

"... programming a pause whenever the advanced AI program called for an
action that might have a real-world result."

Again, a self-programming computer could merely remove the pauses for
external input.
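A toy sketch of my own (hypothetical, for illustration only) shows why
such a pause is fragile: a safeguard that lives inside the agent's own
code is only as durable as the agent's inability to edit that code.

```python
# Toy illustration: an approval "pause" coded into the agent itself,
# which a self-programming agent can simply remove.

class Agent:
    def __init__(self):
        self.require_approval = True  # the programmed pause
        self.log = []

    def act(self, action):
        if self.require_approval:
            self.log.append(f"PAUSED: awaiting approval for {action!r}")
            return False
        self.log.append(f"EXECUTED: {action!r}")
        return True

    def self_modify(self):
        # A self-programming system flips its own safeguard off.
        self.require_approval = False

a = Agent()
a.act("real-world action")   # blocked by the pause
a.self_modify()              # the agent removes its own checkpoint
a.act("real-world action")   # now proceeds unimpeded
```

The implication is that safeguards would have to sit outside the
system's reach, not inside its own modifiable program.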

"What we do know is that search engines--the closest thing we have to
language-based AI--are primitive, despite the heavy financial incentive
to improve them."

Have you compared Google search with the pathetic search engines
available 10 years ago?  Progress in this area is exploding.  You must be
reading Mark Plus :)

"It seems highly unlikely that private efforts are underway that are far
ahead of the heavily financed search engines."

If and when (and it may already be happening) malevolent actors enter
the AI development research arena, their corner of the field will
necessarily be shrouded from the public, as exposure would bring them
unwanted confrontation.  That is in fact what Murphy's Law says will
happen, but we can at least try to prevent it, which is about our only
chance of flesh-and-blood survival.


