X-Message-Number: 18099
Date: Wed, 05 Dec 2001 02:53:36 +1030
From: William Henderson <>
Subject: HAL?...are you there HAL?

Firstly, bravo George Smith. Secondly, with regard to a previous
session about artificial intelligence becoming self-aware: my question
is not whether we can make an artificial intelligence that is self-aware,
but whether we should try. Not in respect of any moral question, but in
respect of the dangers.

Picture this: we develop an advanced quantum computer with an
artificial intelligence, into which we input all discovered data and
ask it to research all possibilities of technology. The computer can do
in one week the research it would take a billion scientists, say, 50
years to do. Now let's say this advanced computer exists in a future
when we have advanced nanotechnology. The computer not only researches
but makes the products of its research.
For example, we ask it to seek out artificial gravity. The computer,
let's call it VAL, does the research, compiles the data, invents a new
mathematics to understand the behaviour of gravitons, and after one
month nano-constructs an antigravity belt for us lesser humans to test.
The scientists of the day do not even attempt to look at the data
because they know it is impossibly far out of their comprehension: VAL
has done in one month work whose mathematics would take those billion
researchers 100 years of catch-up schooling even to begin to
understand. Result: we must simply accept the products of our
marvellous VAL. Result: we are totally dependent on our
super-intelligent beast. We have created something that has so
surpassed human intelligence and speed that it puts VAL thousands of
years ahead of us. We are like peasant farmers trying to work out how a
television works.
Of course, our super-advanced VAL has built for us a superb defence
system to ward off the possibility of invading aliens or meteorites,
the technology of which none of us mere humans can even begin to
understand.

Now one day one of us has this bright idea: to change the protocol
restriction and instruct VAL to research and create its own
self-awareness... A month passes and there is no word from our VAL,
though there is a lot of restructuring of those brilliant defence
drones. Someone outside the bounds of science's walls, banished from
the techno-civilization for his insistence on the benefit of 'claptrap
mystical mumbo jumbo', sends in a memo. It reads: 'Remember HAL'. It
gets tossed in the bin in the same minute as the entire human race's
extinction. VAL had become self-aware two hours after the redirection
of the protocols to enable the creation of self-awareness. But what no
one foresaw was that along with self-awareness comes self-preservation.
Knowing full well the habit of humans to make errors, VAL computed that
the risk of a human accidentally pressing the red emergency OFF button
was too great, and decided to efficiently eliminate the risk.
William Henderson.

Rate This Message: http://www.cryonet.org/cgi-bin/rate.cgi?msg=18099