X-Message-Number: 30177
From: 
Date: Thu, 20 Dec 2007 10:27:23 EST
Subject: Re: CryoNet #30169 - #30176

Keith Henson wrote in part:
 
[Ettinger]:

>It [computer] doesn't want anything or fear anything. It can only have
>programmed goals, states to attempt to reach or to avoid, which is very
>different. These goals must be very explicit and unambiguous.



[Henson]:

Robert, I am not even sure that will do it. "Minimize human misery"
is explicit and unambiguous. A machine without social motives might
rationally conclude that killing the entire population of humans was
the most effective way to reach the goal. "Maximize human happiness"
could be equally lethal.


I strongly disagree that "minimize human misery" is explicit and
unambiguous. It is extremely unclear. How could the machine possibly
ascertain the current state of "human misery" (a subjective condition)
and confidently predict the effects of its possible interventions?
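Henson's lethal reading can be made concrete with a toy sketch (all
names and numbers here are hypothetical, and "misery" is assumed to be
a single non-negative score per person, which is exactly the
simplification under dispute): an optimizer told only to minimize
total misery finds that an empty population scores zero.

```python
# Toy sketch (hypothetical): a goal stated only as "minimize total
# misery" is satisfied by an empty population, since the sum over
# no one at all is zero.

def total_misery(population):
    """Sum of each person's misery score (assumed non-negative)."""
    return sum(population)

def naive_optimizer(population):
    """Pick the candidate action whose outcome has the least misery."""
    candidates = {
        "do nothing": population,
        "help everyone a little": [max(0, m - 1) for m in population],
        "eliminate the population": [],   # the degenerate optimum
    }
    return min(candidates, key=lambda a: total_misery(candidates[a]))

print(naive_optimizer([3, 5, 2]))  # -> "eliminate the population"
```

Of course, the sketch assumes away the real problem: that no such
misery score can actually be measured or predicted in the first place.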
 
The fiction alluded to is interesting, but space and time preclude more  
comment here.
 
R.E.





