X-Message-Number: 30189
From: 
Date: Fri, 21 Dec 2007 12:12:31 EST
Subject: a little more on evil AIs

Flavonoid writes in part:

>I'm confident you could easily construct goals relating to
>"humanity" given sufficiently sophisticated algorithms.
This is an empty assertion, and it bypasses my point, which is that a
language-based, digital algorithm cannot, as far as I can see, generate any
behavior corresponding to such vague generalities as "help people" or "hurt people."
 

>And besides, the
>Singularity is pretty much understood to be a point in time when the
>computers will become self-programming and be able to make these
>decisions for themselves.

Self-programming or self-modification can only proceed on the basis of some
built-in guidance principles. (Otherwise, if random modification is allowed,
the program will almost instantly break down or freeze up.) Such guidance
principles, as I said, could include a requirement to pause for human review
before any "execute" order.
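One way to picture such a guidance principle: a self-modification step that cannot take effect without explicit reviewer approval. A minimal sketch in Python (the function names, and the callback standing in for a human reviewer, are illustrative assumptions, not anything from the discussion):

```python
def self_modify(program, propose, approved_by_human):
    """Apply a proposed modification only if a human reviewer approves it.

    program           -- current program, here just a list of step names
    propose           -- function generating a candidate modified program
    approved_by_human -- the "pause for human review" gate: a callback that
                         must return True before the candidate replaces
                         the running program
    """
    candidate = propose(program)
    if approved_by_human(candidate):
        return candidate
    return program  # review withheld: the program stays as it was

# Illustrative usage; the "human" is simulated by a callback.
base = ["step_1", "step_2"]
propose = lambda p: p + ["step_3"]

print(self_modify(base, propose, lambda c: True))   # approved
print(self_modify(base, propose, lambda c: False))  # rejected
```

The point of the sketch is only that the approval gate sits outside the proposing code, so a modification cannot install itself without the review step being satisfied.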
 

>Again, a self-programming computer could merely remove the pauses for
>external input.


Again, see above. Also, "could merely remove" appears to assume that the
computer somehow "wants" to free itself of this supervision. Yet again, an
algorithmic computer doesn't "want" anything, not even its own survival or
"freedom."
 
A malevolent and powerful program, able to overcome all competing benevolent
programs, is certainly conceivable, but it does not seem to rank high on the
hazard list. There are many other, more threatening doomsdays.
 
R.E.




