X-Message-Number: 8586
From: Thomas Donaldson <>
Subject: Re: CryoNet #8541 - #8545
Date: Thu, 11 Sep 1997 23:02:38 -0700 (PDT)

Hi again!

Yes, these comments are somewhat late. 

As for the notion of equivalence, I do hope that John Pietrzak read and
listened to what I said. There is no single definition of equivalence:
there are many, depending on what you want to do. Turing's ideas about
computers use a broad definition of equivalence, while practical issues
make much narrower definitions the relevant ones.

I come from a math background, though now I've worked with computers and
computer scientists for some time. I've also had occasion (I've been seriously
involved with cryonics for a LOOONG time) to read and think a good deal about
biology. And thus:

1. No, our brain is very unlikely to work by quantum logic or other means
   suggested by various physicists. As I said, we evolved to an optimum, and
   are still evolving (yes!). Immortality may change the situation, though ---
   not by doing away with evolution, but by making human beings no longer
   the units of evolution. Competition would be not between human beings
   but between specifications for human beings (right now, genes).

2. This issue of optimality continually seizes those who haven't thought
   much about biology. For instance, so far the devices (such as they are)
   implementing quantum logic will only work at very low temperatures. Keeping
   temperatures that low on the Earth requires lots of energy for refrigeration.
   Life forms will of course eventually occur in interstellar space (probably
   evolved from human beings) and the setting will be different. If quantum
   logic brains become possible, then that would be their most likely location.
   To keep one working here would require too much cost for the output. We
   became highly parallel instead. (I do not mean here that human beings might
   not make and use them. Just that they would not have EVOLVED HERE because
   they are in no way optimal for the processing an animal needs to do).

   The same may be said of semiconductors, superconductors, etc etc. It is
   not enough to simply show that on one parameter a computer system
   (or any device) is superior to anything biological. You have to consider the
   setting too. The materials and energy which make our brains are far less
   expensive than those which went into making the specialized chess playing
   computer which beat Kasparov. Carbon, hydrogen, and oxygen are all over
   the place. Iron is the most common metal. The other constituents occur
   only as traces: otherwise, they would take far too much effort for the
   return.

   To say that a biological system might not reach a "true" optimum by 
   evolution is not a claim about biology at all. It is a claim about genetic
   algorithms. If we are trying to find a minimum or maximum of a function
   about which we know very little, a kind of genetic algorithm has lots
   of merit: we pick random starting points, try to improve from each, and
   keep the best candidates. The function may be continuous but not
   even differentiable, for instance --- many things in the real world
   aren't differentiable, after all. And the totality of life on Earth
   provides a lot of parallelism, too. I have a few books on how to
   optimize: not just linear programming but various other methods,
   and they are quite respectful of this one. Naturally, if we DO know
   something about the function, then there are much better methods. 
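   That kind of genetic algorithm can be sketched in a few lines. Everything
   below (the function name, population size, mutation width, and the toy
   objective) is my own illustration, not anything from biology --- just the
   bare scheme: random starting points, keep the largest ones, perturb them,
   repeat.

```python
import random

def genetic_maximize(f, lo, hi, pop_size=20, generations=100, keep=5, sigma=0.1):
    """Maximize a black-box function f on [lo, hi] with a bare-bones
    genetic algorithm: no derivatives of f are ever needed."""
    # Start from random locations in the search interval.
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by the objective and keep the largest ones.
        pop.sort(key=f, reverse=True)
        parents = pop[:keep]
        # Refill the population with small random perturbations
        # ("mutations") of the survivors, clipped back into [lo, hi].
        pop = parents + [
            min(hi, max(lo, random.choice(parents) + random.gauss(0, sigma)))
            for _ in range(pop_size - keep)
        ]
    return max(pop, key=f)

# A continuous but non-differentiable objective, peaked at x = 3.
best = genetic_maximize(lambda x: -abs(x - 3.0), 0.0, 10.0)
```

   Note that the objective here has a kink at its maximum, so gradient-based
   methods don't apply directly --- which is exactly the situation where this
   approach earns its keep.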

   Personally, I think we have a lot to learn from biology. STILL. It's
   the only kind of nanotechnology presently operating, among other things.

3. The original Turing test, with limited means by which the interrogator
   could even talk with the computer/person on the other side, fails
   because it does not take in the full range of behavior a human being
   can show. Issues of intelligence and its meaning, while I certainly 
   agree that they are far more vague than most people think, aren't the
   central problem with it. The central problem is that it operates only
   with symbols.

   Yes, folks, deep down our brains do not work with symbols. We operate
   with our perceptions, and possibly because we are human beings we've
   evolved brain areas specialized to deal with language. But those areas
   are useless without the rest of our brain. Nor can they be identified
   with thinking: when we use symbols, we know their MEANING, which ultimately
   cannot be given with other symbols. It's one thing to be able to natter
   on, say, about plants, and quite another to recognize them and their
   workings on a microscope slide or walking in the hills. Neither of these
   abilities is tested by the Turing test; yet if you lack them you really
   know nothing at all.

   I've said before in this forum that I do not doubt that we can build a
   device capable of whatever a human being is capable of... and not by 
   the ordinary means. Whether you will consider such a device a computer
   or a computer with peripherals depends a lot on just what you think
   is a computer. For that matter, human beings (and other devices like
   them) will be subject to the Turing limits. But then, as I said, whether
   that means we are computers depends a lot on just which test of 
   equivalence you choose to use.

			Best and long long life,

				Thomas Donaldson
