X-Message-Number: 11800
From: 
Date: Sat, 22 May 1999 14:10:28 EDT
Subject: duckness & quackery, being & seeming

Continuing the Zombie discussion, Daniel Crevier writes (in part):

> You must have decided that consciousness in computers and consciousness
> in humans are completely different matters. I am saying that they are
> fundamentally the same, because there cannot be one kind of mechanisms
> (the A's) that make physical systems just behave as if they were
> conscious, and another kind (the B's) that make them really conscious.
> If there were, the A's, which would do less, would presumably be more
> simple. If this were the case, as nature prefers simple solutions, we'd
> be A's. But we're not, since we really are conscious.

No, I must insist that I am respecting the facts and leaving open questions 
that are not yet answerable, while leaning in one direction; whereas Mr. 
Crevier (among others) takes an arbitrary stance and fails to label his 
postulates as such. Please look again at two very simple and unarguable 
propositions:

1. We do not yet understand the physical basis of consciousness.

2. A sufficiently fast sequential (Turing) computer, supplied with enough 
information (both about the laws of nature and about the system being 
studied, in particular a human brain), could in principle predict or describe 
the behavior or states of that system in all circumstances, even if not in 
real time.

Prop (1) by itself ought to be enough to warn against any assumption that any 
inorganic system, let alone a computer, could be conscious. How can you 
possibly claim, with confidence, that a system has property A if you don't 
even know what property A is? The various discussions, such as Dennett's, do 
not constitute proof of anything. Claiming that consciousness is 
"computational" is, again, stating your premise as a conclusion; we do not 
know, and cannot assume, that consciousness is "computational" in your sense. 

Again I suggest that consciousness may reside in some kind of standing wave 
which binds space and time--i.e., which includes a non-zero region of space 
and time. If this or something like it is correct, then a computer could be 
conscious ONLY if we swallow the "isomorphism is everything" postulate, which 
once again would lead to the conclusion that a book could be conscious, since 
nothing matters except relationships between symbols. That last notion seems 
extremely far-fetched to me, although we don't know enough yet to rule it out 
entirely.

(Mike Perry suggests the isomorphism postulate should be modified to forbid 
replacing time with symbols, while still allowing replacement of space and 
matter by symbols. But even if such an ad hoc adjustment were arbitrarily 
accepted, a Turing computer would still reflect only a small part of real-time 
relationships. At least Dr. Perry understands my arguments, and agrees that 
his isomorphism postulate is only that--it does not claim certainty but only 
expresses leanings, as do I.)

Prop (2) tells us what we already know--that a computer could fool at least 
some of the people some of the time, and programs with that capability 
already exist. Further--forgive the redundancy, which seems to be 
necessary--even perfect prediction or description would still only be 
prediction or description, not the things or events themselves. 

Now let's look again at parts of Mr. Crevier's post:

> there cannot be one kind of mechanisms (the A's) that make physical
> systems just behave as if they were conscious, and another kind (the
> B's) that make them really conscious.

Yes, there can. The computer predicts or describes behavior; a book written 
by a computer could do the same; and a robot controlled by either the 
computer or the book would behave as if conscious. 

> If there were, the A's, which would do less, would presumably be more
> simple. If this were the case, as nature prefers simple solutions,
> we'd be A's.

No. The system (the computer) that describes another system (the brain) is 
not necessarily simpler; in fact, the contrary is true. The putative computer 
would have to be capable of modeling all of the brain's structure and 
function, and then of using this capability to control a robot. 

For those impatient with tortured terminology, it may be enough to think 
again of Being vs. Seeming. What looks, walks, and quacks like a duck may 
still be only a decoy. If two things differ in any way at all, they are not 
the same. If you choose to define "sameness" by arbitrarily restricted 
criteria, you risk abusing language and misleading yourself and others about 
important facts or facets of nature.

Robert Ettinger
Cryonics Institute
Immortalist Society
http://www.cryonics.org
