X-Message-Number: 8540
Date: Thu, 04 Sep 1997 23:39:34 -0700
From: Peter Merel <>
Subject: Turing, or not Turing, that's the question.

Misc. replies:

John de Rivaz writes,

[speed should confer an unbeatable evolutionary advantage in humans]

A hummingbird has an enormously fast metabolism - 1,200 heartbeats
a minute - but to keep this up it consumes 50 meals a day. I suspect
a hummingman is precluded by economies of scale. But cf. L. Sprague de Camp's
"Elephas Frumenti".

--

Thomas Donaldson writes,

>Nonetheless it was a nice article, and gave an overview of the system and 
>the program it ran. For those who want to find out the nitty-gritty here,
>rather than just spin dreamy theory.

Um, I make my living designing nitty-gritty software on the largest scales.
Realtime distributed systems, SCADA and manufacturing lines, terabyte
data-warehouses, international telecommunications networks and so on. 
Imho dreamy theory is where nitty-gritty engineering begins. 

But my conjecture about Deep Blue and Kasparov had nothing to do 
with their entirely different implementations. Of course a brain does not
resemble one of IBM's fancy paperweights; neither does a Turing equivalence
require such resemblance. It requires only that the behaviour of the two
systems be equivalent - and equivalence is apparently Kasparov's impression 
of Deep Blue's chess behaviour.

>There are no daemons willing to provide
>arbitrarily large amounts of tape to our Turing machine.

Before you discount Tipler's God you'll need to do a little explaining.

--

John Clark writes,

>Genetic computer programs would have a huge advantage over evolution
>because they could have the ability to inherit acquired
>characteristics, something nature never figured out how to do.

Um, didn't nature figure out how to make John Clark? And aren't John
Clark's thoughts characteristics that his progeny (poor blighters! :-) 
might one day find themselves inheriting?

--

John Pietrzak writes,

>you'll find yourself restricted in
>the same way that a classic Turing Machine is restricted: your machine
>may be faster, but it _can't do anything new_.

Let's have a TM of a speed and program adequate to simulate in realtime the 
quantum function of our universe; nothing in computability precludes 
this, though I admit I don't have the beast handy right now. Let's
start the thing off a Planck instant before the big bang, imagining for 
a moment that the notion of "before the big bang" makes no less physical 
sense than an infinite paper tape. 

Right-o, now let's see you come up with something new when your 2x TM 
equivalent beats you to every punch. Thomas is right about computability 
being irrelevant here ...

>Absolutely, it seems crazy because it's just what the Turing Test
>espouses.  The Turing Test says basically, "if it talks like a human,
>it's intelligent."  Therefore:

No, that's quite incorrect. The way Turing put it was:

"The``imitation game'' is played with three people, a man (A), a woman (B), 
and an interrogator (C) who may be of either sex. The object of the game for
the interrogator is to determine which of the other two is the man and which 
is the woman. The interrogator is allowed to put questions to A and B. The
ideal arrangement is to have a teleprinter communicating between the two 
rooms." 

"We now ask the question ``What will happen when a machine takes the part of 
A in this game?'' Will the interrogator decide wrongly as often as when the
game is played between a man and a woman? These questions replace our 
original, ``Can machines think?''"

There are three salient differences between Turing's definition and your 
interpretation:

+ a teleprinter is only "ideal" - the test can be conducted by examining
any behaviour, not just talking.

+ the test does not suggest that imitative behaviour qualifies as 
intelligence; it proposes instead that the ascription of intelligence 
can be nothing more than a value judgement.

+ the test never ascribes intelligence with any certainty - it only 
addresses the frequency with which the computer, in the judgement of its
interrogator, can pass as human. This is a probabilistic test. As such,
given an adequate sample size of subjects and interrogators, your objections
about language, human speciesism (a word?) and willingness are moot; a
sketch of the frequency estimate follows below.
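
To put the arithmetic in the open, here's a toy sketch in Python. The
judge_decides_wrongly() oracle and its error rates are pure stand-ins for
real interrogation sessions - the point is only that Turing's question is a
comparison of frequencies, never a certificate of intelligence:

# A sketch of the imitation game as a frequency estimate rather than a
# verdict. judge_decides_wrongly() is a hypothetical stand-in for a real
# interrogation session; the error rates are assumed, purely to illustrate
# the sense of "decide wrongly as often as".
import math
import random

ASSUMED_ERROR_RATE = {"machine as A": 0.30, "man as A": 0.30}  # made-up numbers

def judge_decides_wrongly(condition):
    """One session: True if the interrogator misidentifies player A."""
    return random.random() < ASSUMED_ERROR_RATE[condition]

def estimate(condition, n_sessions=1000):
    """Misidentification frequency with a rough 95% interval."""
    wrong = sum(judge_decides_wrongly(condition) for _ in range(n_sessions))
    p = wrong / n_sessions
    half_width = 1.96 * math.sqrt(p * (1 - p) / n_sessions)
    return p, half_width

for condition in ASSUMED_ERROR_RATE:
    p, w = estimate(condition)
    print("%-12s wrong %.2f +/- %.2f" % (condition, p, w))
# Turing asks whether the first frequency is about as large as the second.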

>Hey man, what do you think a tape is?

The Tao.

>The TM places it's
>head over a discrete section of the tape, performs a discrete action,
>and the tape retains the state specified by that action from then on
>(or until it is changed by another action).  All that the tape is,
>then, is an infinite, ordered series of *switches*.

Infinities are nothing to sneeze at; account for every quantum correlation
and transaction in the universe on that tape and you'll still have no end 
of room for backups.
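
Since the tape keeps coming up, here's a toy TM in Python. The dict stands
in for the unbounded tape, growing on demand in either direction, and each
step flips one "switch" under the head; the little increment program, states
and all, is just my own illustration, nothing canonical:

# A toy Turing machine. The example program increments a binary number
# written most-significant-bit first.
from collections import defaultdict

def run_tm(rules, tape_input, start, halt):
    """rules maps (state, symbol) -> (new_symbol, move, new_state); move is -1, 0 or +1."""
    tape = defaultdict(lambda: '_', enumerate(tape_input))  # '_' is the blank symbol
    head, state = 0, start
    while state != halt:
        new_symbol, move, state = rules[(state, tape[head])]
        tape[head] = new_symbol   # flip the switch under the head
        head += move
    lo, hi = min(tape), max(tape)
    return ''.join(tape[i] for i in range(lo, hi + 1)).strip('_')

INCREMENT = {
    ('right', '0'): ('0', +1, 'right'),  # scan out to the rightmost digit
    ('right', '1'): ('1', +1, 'right'),
    ('right', '_'): ('_', -1, 'carry'),
    ('carry', '1'): ('0', -1, 'carry'),  # 1 + carry = 0, carry the 1 leftward
    ('carry', '0'): ('1',  0, 'halt'),   # absorb the carry and stop
    ('carry', '_'): ('1',  0, 'halt'),   # overflow: the tape just grows leftward
}

print(run_tm(INCREMENT, '1011', 'right', 'halt'))   # prints 1100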

>[the TT] was never designed to show equivalence between two 
>intelligent entities. 

Indeed it was not, and this is a fair criticism. I believe Thomas makes
this point too. 

Very well. I hereby coin a new test, to be known as the Merel Criterion, 
by which the adequacy of an upload can be readily determined:

The original person is the interrogator. They require that the upload
reproduce, without peeking, their response to any challenges they happen to
find significant. They take as long as they like. If the upload satisfies 
them that its responses match their own, according to whatever measure 
they value, then it is appropriate for society to accept that the upload 
is indeed identical to the original, and entitled to share equally in their 
legal, financial and social standing.
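
Spelled out as a bare procedure, it looks like this - the names and types
are only my framing of the paragraph above, and the matching measure is
deliberately left to the original, which no code can supply for them:

# The Merel Criterion as a procedure, not a specification.
from typing import Callable, Iterable

def merel_criterion(challenges: Iterable[str],
                    original_response: Callable[[str], str],
                    upload_response: Callable[[str], str],
                    original_judges_match: Callable[[str, str, str], bool]) -> bool:
    """True iff the original, by their own measure, judges that the upload's
    response matches their own on every challenge they found significant."""
    for challenge in challenges:
        mine = original_response(challenge)   # produced without the upload peeking
        theirs = upload_response(challenge)
        if not original_judges_match(challenge, mine, theirs):
            return False
    return True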

For myself, challenges I'd employ might run something like:

+ compose a long poem for my wife.
+ paint a picture of this sunset.
+ raise a clone of my child.

For Kasparov I can't help but think a significant challenge would be

+ beat me at chess ...

All this said, I should add that I don't expect uploading to become 
practicable much before reanimation, and certainly wouldn't risk my
life on it.

Peter Merel.
