X-Message-Number: 3652
Date: Mon, 9 Jan 95 16:53:46 PST
From:  (Hal Finney)
Subject: CRYONICS: Turing Tests

Bob Ettinger suggests that the Turing test is neither necessary nor
sufficient to establish "selfhood" for computers.

I think most people would agree that it is not necessary.  A severely
injured human being may be unable to speak or communicate well enough
to pass the test, yet be fully conscious and aware.  Look at physicist
Stephen Hawking for a tragic example of a mind increasingly isolated by
a failing body.

The interesting question is whether it is sufficient.  Could we really
refuse to ascribe selfhood, consciousness, to a program which speaks
so eloquently that we think it is human?

Bob points out that some programs may already fool some people.  This
could be true.  A naive person, faced for a short time with PARRY or
ELIZA or one of their offspring might indeed think them human.  Yet
the inner workings of these programs make it unlikely that they are any
more conscious than, say, an automobile.

On comp.ai.philosophy there was some discussion of this point
recently.  There it was pointed out that when scientists tested
self-proclaimed psychics and were fooled by simple trickery, the
researchers were not doing their job, or perhaps they
were not the right people for the job.  When magicians helped create
and supervise the tests, the results were very different.

Similarly, asking naive people to judge Turing tests is not going to
be the most productive approach.  Rather, we should pit the computers
against the most sophisticated human judges, those who will know best
what avenues to explore, what weaknesses to look for.  Probably the
best judges will be AI designers themselves.  Give them a long time
with the machines, hours of discussion.  Any AI which can pass that
test is going to communicate at least as well as you or me.

This idea is touched on in Greg Bear's science fiction novel, Moving
Mars, recently out in paperback.  He discusses the "history" of AI
development in the early to mid 21st century.  As the machines become
smarter, more and more people are unable to distinguish them from
humans in Turing test situations.  But one man is the best at drawing
these distinctions.  Despite the best efforts of the programmers,
given enough time he can always distinguish the AI programs from real
people.

Finally a program is designed which even this "Turing test master"
cannot distinguish from a human.  This is the first of the true
thinkers, accorded the rights of human beings.

One of the ideas which philosophers play with in connection with the
TT is the "zombie".  This is a being which acts exactly like a normal,
conscious, intelligent person, except for the fact that it is actually
not conscious - there is no "self" there.  It claims to have a self,
but it does not.  Imagine what a tragedy it would be if the whole
human race were suddenly turned into zombies.  The world would go on
exactly as before, with politics and romance, but there would no
longer be any actual selves experiencing life.  I find this idea so
bizarre as to be almost inconceivable, but I suppose some concerns
about cryonics might stem from fear of ending up in just such a world.

Hal Finney
