X-Message-Number: 5457
Date: Wed, 20 Dec 1995 21:53:55 -0800
From: John K Clark <>
Subject: SCI.CRYONICS Feeling and Evolution


In #5448, on Tue, 19 Dec 1995, it was written:
         


                >We can (in principle, and increasingly in practice) look
                >inside at the anatomical and physiological details; if these
                >are not similar to our own, we have good reason to withhold
                >judgment or demand a higher standard of evidence.

The key word is "similar". Most would say that similar skin
color is not important but that a similar brain is. Yet how
similar does the brain have to be, and similar in what way?
How about an object made of elements similar to a brain, like
a rock? What about an object made of chemicals similar to a
brain, like a piece of wood? I think what matters is that an
object have a logical structure similar to a brain's. On the
other hand, perhaps only an object IDENTICAL to my brain can
be conscious, but I doubt it.
                  


                >Eventually we will understand the anatomical/physiological
                >basis of feeling and consciousness in mammals

Eventually we will have some good practical theories, but some
will always think they are wrong, because unlike intelligence
there is no way to DIRECTLY test for consciousness, at least
not consciousness other than our own. We must use theories,
theories that say this change in brain structure will lead to
that change in conscious awareness. Unfortunately you can't
test that: to test anything you need a control group, and in
this case there is only one object in the universe you can be
absolutely certain is conscious, regardless of what theory of
consciousness is true. One example is not enough for science.
                 


                >by some criteria, some existing programs are much more
                >intelligent even than people. Surely it requires no great
                >leap of imagination to think we will fairly soon have
                >programs much superior, but still without feeling or
                >consciousness.

I can tell you're assuming that the primitive AI programs we
have now are not conscious, and I agree, but exactly what is
it that makes us think these programs are not conscious? There
is only one reason: after a while we notice that the answers
these sorts of programs give are simple, repetitive, and not
at all like the sort of answers that you or I or other people
would give. It's easy to fool somebody for a short time, but
not for long. In other words, we don't think these simple
programs are conscious for the same reason we don't think
rocks are conscious: they fail the Turing Test.

                 

                >I come to the opposite conclusion from John's; I think I can
                >make a good case that feeling does have evolutionary utility,
                >and therefore "genetic drift" would not eliminate a strain of
                >conscious beings.
                   
But that's not opposite to my view. I think feeling is very
important; that's why it's still around. My point is that if
evolution can detect consciousness (indirectly), then so can
The Turing Test, because they both use the same thing:
behavior.
                


                >feeling (the "self circuit" or "subjective circuit," the SC)
                >may improve the efficiency of the organism by reducing the
                >necessary brain weight or the power consumption. [...]
                >Without this kind of system it might be much more difficult
                >and laborious to match inputs with appropriate reactions.
                
If true, that would mean a non-conscious intelligence would be
HARDER to build than a conscious one, so our first intelligent
computers would almost certainly be conscious.
                    

                >How does it do this? By CATEGORIZING inputs and outputs.

This sounds a little like Marvin Minsky and his "Society of
Mind" idea. He wrote a great book on the subject, by the way.
                    


                >it seems very plausible to me that the SC may be
                >evolutionarily favored.
                
The sense of self must be favored by evolution or we wouldn't
have it. Evolution is interested in our internal states ONLY
if they affect behavior. Behavior can be tested for, so the
sense of self must be testable, and that test is The Turing
Test.
    

In #5449, Thomas Donaldson wrote on Tue, 19 Dec 1995:



                >It [The Turing Test] only uses one subclass of actions,
                >verbal responses (or written responses, as over the net) to
                >conversation. [...] This is the main reason why many people
                >object to the Turing test as a test of either intelligence
                >or whether or not there is a human being on the other end.

I've been using a somewhat liberal definition of The Turing
Test that includes all behavior, not just speech as in the
classical Turing Test, but I don't see how that changes things
in any fundamental way. The format of the output may be
different, but it's basically all the same thing, actions, and
it could all be duplicated in a machine.
                    


                >Since the Turing test never leaves the verbal arena, and
                >(as you yourself mention) words can only be defined verbally
                >in terms of other words,

As I said, the test is not perfect; it could be wrong, and you
could be the only conscious object in the universe. In this,
as in most parts of life, we must learn to live with
uncertainty and just play the odds.
                


                >you don't even know whether or not your counterpart, if it
                >is a robot, can do something so simple as to get up and walk
                >to a window without running into a wall or otherwise getting
                >into trouble. You just know that it can convincingly play
                >with words.

Stephen Hawking can hardly move at all; he can't get to a
window without getting into trouble. Yet because of his
convincing play with words, everybody considers him
intelligent, and most think he's conscious as well. Muscular
coordination is a wonderful thing, but I don't think it has
much to do with consciousness.


In case I don't get another chance, I want to wish everybody on Cryonet 
a happy Isaac Newton's birthday on December 25.


                                           John K Clark       


