X-Message-Number: 8096
From: Peter Merel <>
Subject: CRYONICS HAL
Date: Fri, 18 Apr 1997 01:43:38 +1000 (EST)

Thomas Donaldson writes,

>I'd even suggest that one very good way to tell that you are conversing with
>a computer (in the classic Turing test) is to watch what happens when things
>don't come out right. The misunderstandings, mistakes, blunders, etc that
>people sometimes make says a lot more about how they work than when everything
>is done correctly.

I think this raises an excellent touchstone, one we're probably all
familiar with - HAL from 2001. I've often wondered what the business
with the AE-35 unit was all about - why did HAL, who was supposedly
incapable of making a mistake, misdiagnose this unit?  The question
seems inconsequential in light of HAL's subsequent murder of the crew -
it appears that he has just "gone nuts". But I think that's incorrect,
and in fact I think the case bears closely on the current debate.

When HAL, Poole and Bowman are being interviewed at the start of the
Jupiter sequence, HAL states that his goal is to "maximise [his]
usefulness". Next HAL plays chess with Bowman - "a stimulating game" -
and quizzes Poole to find out if he has any knowledge of the purpose of
the mission.

At exactly the moment HAL finds out Poole knows nothing, he begins the
AE-35-unit business. A direct inference can be drawn from this: strictly
in order to maximise his usefulness, HAL intends to carry out the whole
mission by himself. He need only determine that Poole has no knowledge
of the mission, because if Poole knows about it then he could anticipate
and defeat the AE-35 gambit.  HAL misdiagnoses the AE-35 not on account
of any error, but as a stratagem: he treats the mission in exactly the
same goal-oriented manner as the chess game.

What's missing in HAL? Clearly, on almost any measure, HAL passes the
Turing test. If we agree that his behaviour was ultimately mechanical,
lacking some necessary human quality of consciousness, how should we
define that quality? Or, if we deem that HAL's actions were equivalent
to those of a human, and certainly there are no end of people who call
themselves human that are capable of such actions, does this imply that
such people are not truly conscious?

Peter Merel.

Rate This Message: http://www.cryonet.org/cgi-bin/rate.cgi?msg=8096