X-Message-Number: 5469
From:  (Thomas Donaldson)
Subject: Re: CryoNet #5463 - #5465
Date: Sat, 23 Dec 1995 11:01:43 -0800 (PST)

Hi again!

To Mr. Clark, re consciousness once more:

1. As for you, I don't KNOW that you are not a machine. I will say, though,
   that as someone who has tried to keep in touch with the current state of
   computer technology, if you are a machine then someone has done something
   that deserves several Nobel prizes at once. If you are a machine that can
   also travel in time (yet more improbable) then perhaps no prizes are due,
   but you become even less likely. If you are a machine that was magically
   assembled by chance, well, I suppose that could happen, though I would put
   it as even less likely than one travelling in time ....

   Of course, what so many people (too many!) on Cryonet may not understand
   is that they are already machines. The parts of these machines use
   biochemistry to operate; they work very differently from any machine we
   can now build, yet even in thinking they remain ahead of those machines in
   several critical ways. Enzymes and chaperones, of course, are
   nanomachines, and it is because of them that we can operate with a
   chemistry still unequalled by other means: they can cause one among
   several sites to receive a new chemical attachment, operate only on
   compounds with the right parity, and do many other things which we either
   cannot do at all or can do only with a lot of wasted energy and a long
   sequence of reactions, with many undesired side-products.

   As for Hawking, what I said still stands. It doesn't matter just how he 
   manages to communicate: the point is that he can actually look at the world
   outside his window and tell us what he sees, something which a Turing 
   machine in the strict sense cannot do.

2. Here is why using the notion of "Turing machine" so broadly will get you
   into trouble. A Turing machine, strictly defined, is an abstract device
   that reads and writes symbols on a tape according to a fixed table of
   rules; and in Turing's test the machine's whole interaction with its
   questioner consists of answering questions given to it on a terminal, with
   words also provided on the screen (or on punched tape, in the old days
   when Turing first put the test forward). Both definitions are explicit
   about the abilities of the machine and about what its questioner would do
   and see.
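   To make the strict definition concrete: the following small sketch (my own
   illustration; the rule table and its encoding are made up for the example,
   and are not from Turing's paper) shows a Turing machine as nothing more
   than a rule table acting on a tape — here, one that appends a "1" to a
   unary number.

   ```python
   # A minimal Turing machine simulator: a finite rule table acting on a tape.
   # Rules map (state, symbol) -> (symbol to write, head move, next state).
   # The example machine appends one '1' to a unary number ("111" -> "1111").

   def run_turing_machine(rules, tape, state="start", halt="halt", blank="_"):
       tape = dict(enumerate(tape))     # sparse tape: position -> symbol
       head = 0
       while state != halt:
           symbol = tape.get(head, blank)
           write, move, state = rules[(state, symbol)]
           tape[head] = write
           head += {"R": 1, "L": -1}[move]
       # Read the tape back, left to right, dropping blanks at the ends.
       return "".join(tape[i] for i in sorted(tape)).strip(blank)

   # Rule table: scan right over the 1s; on the first blank, write a 1 and halt.
   rules = {
       ("start", "1"): ("1", "R", "start"),
       ("start", "_"): ("1", "R", "halt"),
   }

   print(run_turing_machine(rules, "111"))   # -> 1111
   ```

   Note how impoverished this device is compared with Hawking: everything it
   can ever do is fixed in advance by that table; it has no window to look
   out of.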

   But just what is "behavior"? It is part of my behavior that I eat food,
   shit and pee, and breathe out carbon dioxide. If that were counted as 
   "behavior" then you have a big restriction just there. These are of course
   all fully verifiable objective facts. We can go on from there: in the
   past I grew from a single cell, into a child and now into a man. Is that
   "behavior"? One more restriction. And if these things are not "behaviors"
   PRECISELY what do you mean by "behavior"? Mental behavior? Well, even
   without saying a word I can solve puzzles, fit the pieces of a jigsaw
   together into a picture, follow mazes on paper with a pencil, and do many
   other such things.
   And when we examine each one, we notice that the behavior involves not
   just ABSTRACT actions but physical activities too. Are they required?

   When I think about maths (programming is a bit different, but I still do
   some planning beforehand) I don't just sit and think. I write things down.
   To others what I write may seem meaningless: a garbled collection of 
   equations, short verbal notes, pictures drawn and then scratched out, etc.
   I am writing notes to myself (and incidentally using the paper and pencil
   as an adjunct to my brain, because I would find it impossible or very 
   difficult to do this thinking without them). Even in the most abstract
   cases it would be difficult to separate out my thinking from what I am
   visibly doing. When I write I try out various ways of saying things, and
   (on a computer) erase the ones I don't like, so my reader never knows the
   difference. (Before computers I scratched them out and then if needed
   rewrote a clean copy --- an advantage of computers even in writing). Once
   more we see the abstract mixed in with very concrete behaviors.

   If you don't make any careful distinctions here your version of the Turing
   test comes down to saying that you must have a human being on the other
   side of the terminal, since only a human being does all the activities
   involved. And it was just because of this problem that Turing made his
   Turing test: to PRECISELY distinguish just the behaviors which would be
   important in judging intelligence from those which are not. Yes, I think
   his test failed, but some distinctions are needed. If you don't make them,
   then you are saying no more than that we can recognize human beings by
   their behavior... not a surprising or even interesting statement at all,
   given the wide range of behaviors human beings show.

3. Consciousness. It is likely that consciousness does help and did not just
   arise by chance. However if we look at current theories in neurophysiology
   on the subject of just what brain regions are responsible, we find something
   interesting. They are in the thalamus, which ordinarily deals with our 
   emotions and not just with our thinking. As animals we did not evolve to
   solve abstract problems; we evolved to do such things as find or make
   food, keep ourselves warm, find mates, etc. And that remains true, though
   all these activities have undergone a great deal of elaboration because of
   our brain cortex. The way we are put together mentally is that desires of
   all these kinds connect to our cortex, thus providing it with problems it
   must solve,
   and our understanding of the problems of course goes back to our desires
   and (often) tells them to restate themselves. (We wake up at night feeling
   hunger, wanting to eat a pear. We go down to the kitchen and find that 
   there are no pears but there are apples. Shall we eat an apple and go 
   back to bed, or shall we get dressed and find a 24-hour supermarket that
   sells pears, buy one, eat it, and then go back to bed? --- yes, we are all
   fortunate that our desires don't usually present themselves in stark terms,
   that this is what we do rather than choose between starvation and some kind
   of food). 


   Given this, I see no reason why a problem-solving computer need be
   conscious, no matter how intricate, involved, large, or elaborate the
   problems it solves may be. If we are to consider such a computer
   "intelligent" (whatever that really means!) then intelligence does not
   require consciousness. Even one capable of learning need not be conscious,
   though (as in the book I mentioned by Kanerva) it might benefit from
   special circuits playing a directive role: a step towards consciousness,
   certainly, but not all of it.
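   As a rough sketch of the kind of learning machine I mean — my own much
   simplified rendering of Kanerva's sparse distributed memory, with made-up
   parameters, not his exact construction — consider the following. It
   stores and recalls patterns, and even cleans up noisy input, with nothing
   in it that anyone would call consciousness.

   ```python
   import numpy as np

   # A simplified sparse distributed memory in the spirit of Kanerva's book
   # (parameters here are illustrative): a fixed set of random "hard"
   # addresses; writing a pattern updates counters at every hard address
   # within a Hamming radius of it, and reading sums and thresholds the
   # counters of the addresses near the query.

   rng = np.random.default_rng(0)

   class SparseDistributedMemory:
       def __init__(self, n_bits=256, n_hard=2000, radius=120):
           self.addresses = rng.integers(0, 2, size=(n_hard, n_bits))
           self.counters = np.zeros((n_hard, n_bits))
           self.radius = radius

       def _near(self, address):
           # Hard addresses within the Hamming radius of the given address.
           return (self.addresses != address).sum(axis=1) <= self.radius

       def write(self, address, data):
           # Add the data (recoded as +1/-1) into all nearby counters.
           self.counters[self._near(address)] += 2 * data - 1

       def read(self, address):
           # Sum nearby counters and threshold back to bits.
           return (self.counters[self._near(address)].sum(axis=0) > 0).astype(int)

   mem = SparseDistributedMemory()
   pattern = rng.integers(0, 2, size=256)
   mem.write(pattern, pattern)        # auto-associative storage
   noisy = pattern.copy()
   noisy[:20] ^= 1                    # corrupt 20 of the 256 bits
   recalled = mem.read(noisy)
   print((recalled == pattern).mean())  # fraction of bits recovered
   ```

   The directive circuits I spoke of would sit on top of something like
   this, deciding what gets written and what gets read back — a step
   towards consciousness, but clearly not consciousness itself.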

   You have stated that "intelligence" must imply consciousness. I assume
   that you don't believe the converse, that consciousness requires
   intelligence. But even so, just what do you mean PRECISELY by intelligence
   here? If it is an ability to solve problems, then we see that we need to
   specify the problems more exactly, and even then (unless the "being" that
   is to be "intelligent" sets its own problems) there is no clear
   requirement for consciousness. Yes, psychologists generally (not all!)
   believe that there is a general factor which lies behind the performance
   of human beings on those tests called "intelligence tests". But if we want
   to think more broadly, such ideas fall apart completely: are computers
   intelligent because they can solve problems which we cannot solve without
   help? Some animals even have highly specialized abilities IN THEIR BRAINS
   which we lack: birds which store their food in winter can remember
   thousands of locations, while we cannot (unless of course we use physical
   aids, like paper and pen).

   If you want to talk about "intelligence" more broadly than in comparisons
   among human beings, then you need a much better definition of what it is, 
   applicable to ANYTHING. 

And incidentally, for the others on Cryonet, I would say that this issue of 
defining "intelligence" also bears a lot on ideas of how we might improve 
ourselves, especially beyond the human. It's easy to toss off the notion that
we might make ourselves "more intelligent", but the definition of intelligence
used by psychologists now is restricted by its nature to performance of human
beings on special tests called "intelligence tests". (Nor does it capture 
everything about a human being that leads to insight into problems). The notion
of intelligence is not so clear as many think. And (as speculation) we may 
find that a creature with "intelligence" beyond the human may appear to us
as an idiot on our poor tests. 

			Best and long long life,

				Thomas Donaldson

