X-Message-Number: 30159
From: Kennita Watson <>
Subject: The dangers of AI
Date: Mon, 17 Dec 2007 11:24:31 -0800

In recent discussion of the dangers (or lack of
dangers) of AI, there seems to be a lot of talk
of "it", as though there would be a *single*
artificial intelligence, created like Colossus
(of Colossus: The Forbin Project, in case you're
unfamiliar with it) and unleashed wholesale on a
completely unsuspecting and unsophisticated
populace.

I think, rather, that there will be thousands,
even millions, of AIs, developed in parallel in
offices, home offices, gaming rooms, etc.
worldwide.  Since they are self-modifying, they
will have widely divergent motivations, and like
the Transformers (I've only seen the movie; try
Netflix or Wikipedia if the pop-culture reference
completely loses you), not all of them will be
inimical to human interests.  Likewise (leaving
the Transformers behind), not all of them will
be equally easily duped by any given AI's
persuasive powers (note that they will have
human-*level* intelligence, not necessarily
human-*like* intelligence).

Proto-AIs will probably be programmed to perform
tasks for humans, and for any given PAI, one of
those tasks may well be some version of "warn me
if I'm about to do something bad for me, or if
bad things are happening/about to happen" (if you
had one, you'd want it to do that, right?).  Myriad
definitions of "bad", myriad varieties of warning,
myriad capabilities, and so on will make it all but
impossible for any single uber-AI to take over
everything and everyone.
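
To make that "warn me" task a bit more concrete,
here is a minimal sketch (in Python; the names,
rules, and thresholds are all hypothetical, just
my own illustration, not anyone's real system) of
what one owner's proto-AI warning loop might look
like, and why two owners' notions of "bad" would
diverge from the start:

    # A toy "warn me" watcher -- purely hypothetical.
    # Each proto-AI would carry its own rule list, its
    # own notion of "bad", and its own style of warning.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Rule:
        description: str                # what counts as "bad" here
        is_bad: Callable[[dict], bool]  # test applied to an observed event
        warning: str                    # how this PAI phrases the alert

    class ProtoAIWatcher:
        def __init__(self, rules: List[Rule]):
            self.rules = rules

        def check(self, event: dict) -> List[str]:
            """Return a warning for every rule this event trips."""
            return [r.warning for r in self.rules if r.is_bad(event)]

    # One owner's idiosyncratic definition of "bad":
    rules = [
        Rule("big unplanned purchase",
             lambda e: e.get("type") == "purchase" and e.get("amount", 0) > 500,
             "Warning: that's a large purchase. Are you sure?"),
        Rule("message sent while angry",
             lambda e: e.get("type") == "email" and e.get("mood") == "angry",
             "Warning: maybe sleep on this one before sending."),
    ]

    watcher = ProtoAIWatcher(rules)
    print(watcher.check({"type": "purchase", "amount": 900}))

Swap in a different rule list and you get a
different PAI with a different idea of "bad",
which is exactly the point: there is no single
definition for an uber-AI to subvert.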

The problems of AI will be approached from many
directions, because there will be so much to be
gained, even from AIs with far less than human
intelligence.  I say:  open-source AI research!

Live long and prosper,
Kennita
--
Vote Ron Paul for President in 2008 -- Save Our Constitution!
Go to RonPaul2008.com, and search "Ron Paul" on YouTube
