X-Message-Number: 30170
Date: Wed, 19 Dec 2007 08:26:29 -0700
From: hkhenson <>
Subject: Machine personality was "evil" AI & consciousness
References: <>

At 03:00 AM 12/19/2007, Robert Ettinger wrote:

snip

>In other words, he questions the relevance of consciousness in computers to
>the potential dangers of powerful computers. Let me try to clarify:
>
>The potential danger of intelligent computers is that they might have
>motives resulting in choices inimical to us. My point is that a system
>without feeling, without subjectivity, cannot have motives in the sense
>that we do.

Exactly stated.

As social animals we interact with other social animals through motives
built into brain structures by genes.  One of the most important of
these is to seek status in the eyes of our social group.  People or
machines who do this will not be motivated to wipe out the people (and
machines) who hold them in high regard.

>It doesn't want anything or fear anything. It can only have programmed
>goals, states to attempt to reach or to avoid, which is very different.
>These goals must be very explicit and unambiguous.

Robert, I am not even sure that will do it.  "Minimize human misery" 
is explicit and unambiguous.  A machine without social motives might 
rationally conclude that killing the entire population of humans was 
the most effective way to reach the goal. "Maximize human happiness" 
could be equally lethal.
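
To make the failure mode concrete, here is a toy sketch (Python; the
misery function, the numbers, and the candidate actions are all
invented for illustration) of a literal-minded optimizer handed
"minimize total human misery":

    # Toy illustration only: a literal optimizer told to minimize
    # total human misery.  Everything here is made up.

    def total_misery(population, misery_per_person):
        return population * misery_per_person

    actions = {
        "relieve 10% of suffering": lambda pop, m: (pop, m * 0.9),
        "do nothing":               lambda pop, m: (pop, m),
        "kill everyone":            lambda pop, m: (0, m),
    }

    pop, misery = 7_000_000_000, 1.0
    best = min(actions, key=lambda name: total_misery(*actions[name](pop, misery)))
    print(best)   # -> kill everyone: zero people means zero misery

Nothing in the objective says the optimizer should care how the number
reaches zero.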

>Any attempt by the programmer to paint with a broad brush will
>inevitably result in freeze-up, and trying to foresee all future
>possibilities in detail is hopeless.

Indeed.

>In any case, to repeat myself, when some programmer thinks he is near a
>super-intelligent program, he will build in safeguards, e.g. in certain
>situations requiring a pause for external input. That there is little
>present effort to do this simply reflects the fact that such programs
>are nowhere on the horizon.
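
That kind of pause-for-input gate is easy to sketch in toy form
(Python; the risk test and the approval channel are invented
stand-ins, not anyone's real safeguard):

    # Toy human-in-the-loop safeguard: pause for external approval
    # whenever a proposed action trips a risk check.  Both the risk
    # check and the approval mechanism are invented placeholders.

    def looks_risky(action):
        return "irreversible" in action.get("tags", ())

    def execute(action):
        print("executing:", action["name"])

    def run(actions):
        for action in actions:
            if looks_risky(action):
                answer = input(f"Approve '{action['name']}'? [y/N] ")
                if answer.strip().lower() != "y":
                    continue            # skip unless a human approves
            execute(action)

    run([
        {"name": "reindex archive", "tags": ()},
        {"name": "purge backups", "tags": ("irreversible",)},
    ])

The hard part, of course, is the risk test itself, which smuggles the
original specification problem right back in.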

I think the simulation of human brains is more likely to yield early
useful results.  Certainly we are making progress in understanding
what causes them to be the way they are.

Here is how I treated it in fiction:

"As soon as the seed finished the dish (after consulting its clock, 
its GPS location and the place of the sun), it aligned the dish on 
the African net communication transponder attached to the 
geosynchronous ring and asked for a permanently assigned address on 
the net.  Up to that point the clinic seed was a generic 
product.  The address it was assigned was just a string of 
hexadecimal numbers but it was a _unique number_!  The clinic's 
personality was human in that it could feel happy, even smug, about 
acquiring its very own _unique identification_.

"The clinic had other carefully selected human personality 
characteristics such as seeking the good opinion of its peers (humans 
and other of its kind alike).  It also had a few unhuman limits.

"Since humans have a hard time relating to groups of hexadecimal 
numbers, the seed also picked a name for itself.  It knew from
Lothar and Mabo it had been exchanged for a monkey skull.  Susan had 
been the name of the leader of its psychological integration group . 
. . . insert one in the other, drop a few letters, test to see if the 
name was in use . . . Suskulan.  Suskulan had a choice of gender as
well: male, female, or neutral.  Depending on the culture, clinics
were better accepted in some places as male, some as female, and some 
neutral.  The database for the Tamberma indicated it would be better 
accepted presenting itself as an old male spirit."

(Google "clinic seed" if you want to read the rest of it.)
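
The naming step is concrete enough to sketch.  A minimal toy (Python;
the splice points and the name registry are invented, since the story
doesn't pin either down):

    # Toy sketch of Suskulan's naming procedure: splice a fragment of
    # one name into the other, then check a registry for collisions.
    # The registry and the exact splice are invented for illustration.

    def name_in_use(name, registry):
        return name in registry

    registry = {"Susan", "Skull"}                  # hypothetical taken names

    base, donor = "Susan", "skull"
    candidate = base[:3] + donor[1:4] + base[3:]   # "Sus" + "kul" + "an"

    if not name_in_use(candidate, registry):
        print(candidate)                           # -> Suskulan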

An interesting fact is that nobody so far has picked up on the tragic
aspects of the story.  Getting what you ask for can be lethal (or the
equivalent).

Keith Henson
