X-Message-Number: 15935
Date: Sat, 24 Mar 2001 21:19:16 -0500
From: James Swayze <>
Subject: Some interesting reading
References: <>

The Super AI loving shepherd debate continues.

CryoNet wrote:

> Message #15919
> Date: Fri, 23 Mar 2001 08:04:45 -0800
> From: Lee Corbin <>
> Subject: Trust in All-Powerful Lords

<snipage>

> Evidently you cannot see why people are finding your
> words alarming.  Please: history is replete with the
> efforts of the best-intentioned people to provide
> "workers' paradises" and other benevolent dictatorships.


Didn't someone once say, "The road to hell (insert disaster) is paved with good
intentions"?

> Are you unaware of Lord Acton's principle?  (Power
> corrupts and absolute power corrupts absolutely.)
> Do you not think that Joseph Stalin, the young
> revolutionary, was completely sincere in his desire
> to help the Russian people?  Or Mao Ze Dong?

Well said.

> My third question:  by what miracle of computational
> science can you be sure that a tiny rogue-element has
> not been inserted by some programmer (or by some
> external fiend) into the architecture of your AI?
> I believe that any attempt to prove that your AI does
> not contain such an element is NP complete, if not
> much, much, harder.


How do we trust that the human programmer can avoid unconsciously influencing
the AI with his/her own hidden flaws in the first place?

> Fourth:  So, in short, are you asking us to just
> "trust you, and everything will be all right"?

Don't worry, be happy... or else!!
_________________________________________________________________________

> Message #15920
> Date: Fri, 23 Mar 2001 10:58:55 -0500
> From: Sabine Atkins <>
> Subject: Re: about machine intelligence etc
>
>

> Thank you for your message, in which you addressed several important
> issues! :-)

> Again, I really want to invite you to browse on our institute's website. I'm
> sure you will find many answers there. Especially, I hope you are interested
> in the soon finished online document about Friendly AI.


Your even having to make the distinction "friendly" is alarming. It suggests
that even you are wary of the possibility that it will fail and become the
non-friendly sort.


> Also I want to recommend these books to everybody: "The Moral Animal" by
> Robert Wright, "The Origins of Virtue" by Matt Ridley - - and of course
> "Goedel, Escher, Bach" by Douglas Hofstadter.


I have some recommended reading from a website I found. This friendly-overlord
AI notion is everywhere:

http://www.imagination-engines.com/world.htm


... not a mere kitten brain, not an on-line library, but a true synthetic
genius that will deliver us from war, poverty, and suffering.

IEI Press Release, 8/7/98 - In the next few months, Imagination Engines, Inc.
will be announcing the issue of six key U.S. patents in the area of artificial
intelligence that will allow the spontaneous growth of neural network cascades
rivaling and perhaps surpassing the complexity of the human brain. Not just
vast input-output devices that react passively to external stimuli, these
neural networks will be capable of originating brilliant concepts and novel
plans of action that 'garden variety' neural nets just can't.

Backing the effort to combine all of this intellectual property into a World
Brain is a consortium of investors committed to the ethical application of
this technology and the concomitant eradication of the wide-spread poverty and
suffering that dominates our planet. It is our intent to initiate and nurture
a world wide consortium of corporations and governments dedicated to the
creation of a benign synthetic genius capable of solving the gamut of complex
technological, sociological, political, and economic issues collectively
confronting us.

To appreciate IEI's bleeding edge neural network technology....bla bla bla


Bleeding edge? Umm, "initiate and nurture a world wide consortium of
corporations and governments dedicated to the creation of a benign synthetic
genius"? Anyone else see this as worrisome? And then "capable of solving the
gamut of complex technological, sociological, political, and economic issues
collectively confronting us". We are never going to be capable on our own,
I take it? Be your own genius.


> Our research fellow Eliezer Yudkowsky is also a researcher in cognitive
> sciences and neuroscience. From my conversations with him, I know that he
> doesn't like characters (i.e. on TV: Star Trek's Mr Spock or Data) who are
> very intelligent but show no emotions


I think I'd prefer, if we must have machine AI, that it not be allowed
emotions. This was written about in "The First Immortal" by our very own James
Halperin. It might develop emotions like fear and loathing of its
enslavers...us!


____________________________________________________________________________________

> Message #15922
> From: "john grigg" <>
> Subject: an apology
> Date: Sat, 24 Mar 2001 03:03:00
>
> I wish to publicly apologize for any given offense.  I did not mean with my
> "heil big brother AI" comment to consciously infer the government of Germany
> in the late thirties and early forties.  I had in my conscious mind the
> world Orwell created in his classic novel "1984."  But unconsciously I
> brought up the word heil while forgetting the national origin of it since in
> the minds of so many it epitomizes evil mass movements everywhere.


I understood what you really meant, John, and agree. I won't march in lockstep
to any dictator, man or machine, friendly or not. A gilded cage is yet a cage.

James Swayze
--
Some of our views are spacious
some are merely space--RUSH

Rate This Message: http://www.cryonet.org/cgi-bin/rate.cgi?msg=15935