X-Message-Number: 4032
From: 
Date: Fri, 17 Mar 1995 18:41:57 -0500
Subject: SCI. CRYONICS another load

Perry Metzger doubtless "knows his stuff," as John Clark says; but knowing
your stuff doesn't help--if you miss the point.

What I have been saying, along with many other upload skeptics, is just that
not everything essential to thinking and consciousness (including feeling) is
necessarily computing. (I assume we can agree that "computing" is anything a
Turing machine can do, and that if a Turing machine can't do it, it isn't
computing.) One possible example arises if feeling requires multiple
simultaneous real-time actions or reactions of an appropriate kind.
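To make the distinction concrete, here is a toy sketch (in Python, and purely
illustrative; nothing in it is anyone's actual proposal). Two "neurons" update
each other at the same instant; a Turing-style machine, doing one thing at a
time, must serialize the updates. The computed result comes out identical,
which is all the Church-Turing thesis promises; the simultaneity itself is
exactly what the serialization gives up:

    # Toy model, assumptions only: two "neurons" whose next values
    # each depend on the other's current value.

    def parallel_step(a, b):
        # Truly simultaneous update: both reads happen at one instant.
        return b + 1, a + 1

    def serialized_step(a, b):
        # A one-step-at-a-time machine buffers the old values and
        # applies the updates in sequence.
        old_a, old_b = a, b   # a snapshot stands in for simultaneity
        a = old_b + 1         # first this update...
        b = old_a + 1         # ...then the other
        return a, b

    # Same computed result either way; that is the Church-Turing point.
    assert parallel_step(3, 7) == serialized_step(3, 7)
    # But at no instant in the serialized version do both updates occur
    # together. IF feeling depended on that physical simultaneity rather
    # than on the computed result, the computation would survive
    # serialization and the feeling might not.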

Mr. Metzger purported to counter this by saying that no machine is more
powerful than a Turing machine. He left out one little word: no COMPUTING
machine is more powerful than a Turing machine. 

Many uploaders seem to have this blind spot: they repeat their thesis (the
information paradigm) over and over, and then seem to think they have reached
a conclusion. Stating or restating your premise is not the same as reaching
a conclusion... If this seems condescending, it really isn't; I know very
well that a great many uploaders are much smarter than I am--but that
doesn't make them right.

Thomas Donaldson--with whom I mostly agree--weakens the general case (from a
debating standpoint) because he focuses on what really happens in real
brains. This is extremely important, but does not bear directly on the
information paradigm. The uploaders will merely retort that, no matter what
happens in a real brain, a silicon brain could do the equivalent, and could
do it better just by shuffling symbols faster and routing them more
efficiently. And this is true--IF we initially accept the premise that
information processing, in the sense of a Turing machine, is everything. But
that is the QUESTION, not the answer.

John Clark in his latest (#4021) continues to baffle me in that, despite his
intelligence and insights, he seems to miss points and respond with
irrelevancies and non sequiturs. 

I asked him to explain his implied calculation of probability--the
probability is virtually zero, he says, that feeling might require
simultaneous real-time conditions or actions impossible for a Turing machine.
His reply, essentially, was that I am a solipsist if I doubt the
consciousness of a black box that behaves like a person, and that few people
would seriously entertain the possibility that the black box is not
conscious. 

Once again--WHY can't he see this?--his answer is not an answer, but only a
reiteration of his thesis. I pointed out a specific way in which
consciousness might require more than Turing capabilities; he said the chance
of this is virtually zero, and "proved" it by the tired assertion that if it
acts human it must be human. 

What he means, I suppose, is that if my example (or any similar one) were to
hold, then the Turing Test would fail, and he can't admit this--although I
seem to recall that he and/or other uploaders have conceded the Turing Test
isn't infallible. By extension, perhaps, he thinks that a special
non-computing character for feeling might imperil the whole information
paradigm, and that would never do. And that brings us back to the starting
point: you don't PROVE a hypothesis just by CALLING it a law.

Robert Ettinger

