X-Message-Number: 31
From arpa!Xerox.COM!merkle.pa Mon Oct 24 11:03:29 PDT 1988
Received: from Cabernet.ms by ArpaGateway.ms ; 24 OCT 88 11:07:52 PDT
Date: Mon, 24 Oct 88 11:03:29 PDT
From: 
Subject: Thought you'd be interested
To: 
Message-ID: <>
Status: RO

[ Hans Moravec recently published a book titled "Mind Children", which concerns
  downloaders (uploaders), among other topics.  This message has a review of
  the book with a reply from Moravec. - Kevin Q. Brown ]

FYI

--------------------------------------
Date: Sat, 22 Oct 88 15:50 PDT
From: Shrager.pa
Subject: [: Re: Mind Children]
To: ComputerResearch^.PA
Included-msgs: <593514908/>,
               The message of 22 Oct 88 02:15 PDT from

Message-ID: <>
Line-fold: no

Date: Sat, 22 Oct 88 02:15 PDT
From: 
To: Shrager.PA
cc:
Subject: Re: Mind Children

Here's my response to Duane Williams's review:

  >Date: 8 Oct 1988 23:06-EST
  >From: 
  >Subject: Mind Children
  >To: 
  >Cc: 
  >
  >Friends,
  >
  >Hans Moravec's book, Mind Children (Harvard University Press, 1988),
  >has just been published, and I've just finished reading it.  I've been
  >telling him for a long time that I thought this book was crazy; so I
  >decided I owed it to him to read it, once it was actually published.
  >(I'd seen portions of earlier drafts online, as well as his opinion
  >bboard discussions of some of the ideas.)  Well, I still think it is
  >crazy, but it is interesting nonetheless.  Just how original it is I
  >cannot say, although I know that some of the ideas are not entirely his
  >own.  It also largely ignores opposing points of view and occasionally
  >attacks straw men.  One of the major problems I see with this book is
  >that it is not as scholarly as I'd like it to be.  I think this is a
  >justified criticism, given that it was published by Harvard.  (Many of
  >the ideas could have been conveyed just as well by a collection of
  >science fiction stories, but he didn't choose that medium.) So, what is
  >it about?  The development of superintelligent robots and their
  >conquest of the universe!


	It is true that scholarship is not my strongest feature -- I've
always found it easier to make things up than to look them up.  Earlier
drafts of the book were criticized by the publisher's referees for factual
errors in evolutionary and neural biology, physics and even computer
science. I belatedly did more homework to correct the worst of those.
Duane's problems are with my philosophical positions on the nature of mental
experience. While it's true that much of the world still subscribes
(explicitly or under the surface) to Aristotelian mind/body dualism,
cognitive psychology, neuroscience and other scientific investigations of
the mind are constructing an increasingly convincing case (as the new PBS
series "The Mind" asserted) that "Mind is what the brain does".  I use that
position as a starting point for my speculations. Why add another round of
rhetoric to a centuries-old philosophical debate that is in the process of
being settled scientifically? (As I said, I'm not a scholar.)

	As for the ideas being science fiction, let me (humbly) outline the
history of the ideas of Konstantin Tsiolkovsky, the Russian geometry teacher
who, between 1870 and 1920, was the first person to synthesize the modern
theory of spaceflight.  Astronomy was well advanced in the late 19th
century, and others had published descriptions of how things would look from
the surface of the moon and other heavenly bodies.  These were presented as
interesting pedagogical aids, to help students understand the celestial
light show in terms of bodies as real and solid as the earth. But no one
believed that anyone would ever actually experience such views.  As a
17-year-old, in 1873, Tsiolkovsky encountered these ideas, and took them more
literally than was intended.  He imagined himself standing on the moon,
experiencing the reduced gravity, able to leap tall buildings in a single
bound. Though he had no idea how he could possibly get there, or how he
would survive the airlessness or the temperature extremes, he decided he
wanted to go. He still hadn't figured it out by 1893, but he did know enough
Newtonian mechanics to figure accelerations and orbits, and enough
thermodynamics to calculate the temperatures of bodies in airless space.  He
used these in a science fiction story in which two scientifically inclined
characters wake up to find themselves, in their house, "On the Moon".
Somehow the lack of air doesn't bother them, but everything else they
experience is physically correct. Eventually they wake up to discover it was
all a dream.  Tsiolkovsky kept trying to think of a solution to the problem
of actually travelling into space, considering cannons (way too much
acceleration) and unbalanced flywheel reactionless drives that silently lift
(which don't work, if you carefully work out the physics), before finally
realizing, in 1900, that rockets (existing then only as small
gunpowder-filled cardboard tubes---toys) could, in principle, do the job.
He then developed rocket theory, and invented most of the paraphernalia of
modern space travel.  In 1918 Tsiolkovsky wrote a book, "Outside the Earth",
that had a multi-stage liquid-fuelled rocket trip from earth to earth orbit,
space suits with reaction jets for maneuvering, CO2 scrubbers,
self-sustaining space colonies, sunshade temperature control, a lunar
landing in a lunar module, and a lot of other correct details.  Until after
WW II most of the world, including especially the respectable scientific
community, continued to hold the idea of space travel as pure nonsense.
(I'd say they still subscribed, explicitly or under the surface, to
Aristotelian celestial/mundane dualism.)  Tsiolkovsky died in 1935.  A monument
was dedicated to him in Moscow in 1957.  Shortly thereafter the first
Sputnik was launched.
[ref: "The Science Fiction of Konstantin Tsiolkovsky", Adam Starchild, ed.,
University Press of the Pacific, Seattle, 1979]
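
[ For reference: the central result of the rocket theory mentioned above is
  the equation now named for Tsiolkovsky, delta-v = v_e * ln(m_full / m_empty),
  relating a rocket's attainable change in velocity to its exhaust velocity
  v_e and its ratio of fueled to empty mass.  Because the logarithm grows so
  slowly, large velocity changes demand staging, which is why the rocket in
  the 1918 book is multi-stage. ]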

  >
  >The book contains six chapters and three appendices.  Chapter 1, "Mind
  >in Motion," is about the development of computers and mobile,
  >intelligent robots.  It is a quick sketch of some of the important
  >systems, highlighting some of the work ongoing in Moravec's lab here at
  >CMU.  All is well until page 45, where "conditioning software," which
  >allows a robot to profit from past experience, is described.  I find
  >the idea of "conditioning software," per se, non-controversial, but
  >Moravec writes "I'm going to call the success messages 'pleasure' and
  >the danger messages 'pain.'"  There's no argument to show that these
  >words are appropriate.  He could have picked entirely new words:
  >"robot-pleasure" and "robot-pain," for example, but he didn't.  He
  >picked words that we commonly use to describe experiences of living
  >biological organisms and applied them to machines.  He gives no
  >justification for this extension of their meaning.  He is just as
  >cavalier in talking about robots as having "feelings."
  >

	If mind is what the brain does, then emotions, which are part of our
mental experience, are also something the brain does.  In the chapter I give
examples to support my contention that robot mentality will develop to meet
the same evolutionary pressures that shaped animal minds.  Consequently I
expect successful robot brains to do about the same things as successful
animal brains. So why not give corresponding functions the same names?  Not
being an Aristotelian dualist, I see no fundamental difference between
"living biological organisms" and "machines" of similar complexity doing
similar functions.
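
[ To make the mechanism under discussion concrete, here is a minimal sketch
  (in Python) of "conditioning software" of this general kind: success
  messages reinforce whatever action was just taken, and danger messages
  suppress it.  It is an illustration only; the names (ConditionedRobot,
  recharge, bump_wall) are hypothetical, and nothing here is from the book. ]

import random

class ConditionedRobot:
    """Toy conditioning software: one weight per action, adjusted by
    'pleasure' (success) and 'pain' (danger) messages."""

    def __init__(self, actions):
        # Start with no preference among the available actions.
        self.weights = {a: 0.0 for a in actions}
        self.last_action = None

    def choose(self):
        # Usually take the best-reinforced action; explore occasionally
        # so new behaviors can still be discovered.
        if random.random() < 0.1:
            self.last_action = random.choice(list(self.weights))
        else:
            self.last_action = max(self.weights, key=self.weights.get)
        return self.last_action

    def pleasure(self, amount=1.0):
        # "Success message": strengthen the action just taken.
        self.weights[self.last_action] += amount

    def pain(self, amount=1.0):
        # "Danger message": weaken the action just taken.
        self.weights[self.last_action] -= amount

# After a few trials the robot settles on the rewarded action.
robot = ConditionedRobot(["recharge", "bump_wall"])
for _ in range(100):
    if robot.choose() == "recharge":
        robot.pleasure()
    else:
        robot.pain()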

  >Chapter 2, "Powering Up," is an interesting discussion of the growth of
  >computer power in this century and Moravec's expectations for the
  >future.  He thinks we can achieve "human [computational] equivalence"
  >in 40 years.  There's a neat graph comparing power and capacity in
  >various natural organisms and machines.  I especially like the picture
  >of the mouse sitting next to the Cray 2.
  >
  >Chapter 3, "Symbiosis," describes the integration of robotic and human
  >systems.  The simulated visual and tactile feedback and control systems
  >described are, I think, possible with current technology.
  >
  >Chapter 4, "Grandfather Clause," is where the really outrageous ideas
  >appear.  The motivating question in this chapter is what we humans
  >should do about the rapid development of intelligent machines that (if
  >Moravec's vision is right), within a few decades, we will not be able to
  >compete with.  
  >
  >Moravec rejects the option of eschewing progress, claiming that this
  >would result in our almost certain extinction.  "Sooner or later an
  >unstoppable virus deadly to humans will evolve, or a major asteroid will
  >collide with the earth, or the sun will expand, or we will be invaded
  >from the stars, or a black hole will swallow the galaxy."  
  >
  >Who does he expect to take this seriously?  Most humans have yet to
  >recognize the much more realistic threat of pollution to our
  >environment.  Sure, the sun will expand, but not in our lifetimes.  The
  >farfetched possibility of extinction isn't likely to strike most people
  >as a good reason to develop machines that are very likely to displace
  >us in the near future.
  >

Actually, I argue that economic and military competition will, day by day,
force the construction of ever better and smarter machines (as American
industry noticed during the last decade).  The long-term scenarios are merely
my way of indicating that such inevitable progress is a good idea anyway.

  >
  >Moravec places a high value on the survival of intelligence, in
  >whatever form we can manage, but nowhere in this book does he say why
  >that is important.  Why should any of us care whether there are
  >intelligent beings around a billion years from now?  They won't be us.
  >

OK, you make your descendants without intelligence, and I'll make mine
with.  There's no accounting for taste!  But mine will be able to
build on my dreams; yours won't.

  >
  >Moravec thinks that by growing fast enough a culture may be able to
  >survive forever, but the culture whose survival he speculates about is
  >very unlike the culture we know.  It is a culture of superintelligent
  >robots that have no more need of their makers, and that expand into the
  >universe on their own.  I can't help but wonder whether they would or
  >would not take a benign attitude towards their creators.  Why shouldn't
  >they treat us as we treat other species?
  >
  >Most of this chapter is devoted to the idea that we might be able to
  >keep up with our robot creations by transforming ourselves into robots!
  >I consider this the most absurd idea in the book.  It depends on the
  >assumption that a person, or human mind (the book does not clearly
  >distinguish these), is an abstract entity, a computer program (or class
  >of equivalent programs), rather than a particular physical object.  I
  >say this is an assumption because there is no argument for this view.
  >Why anyone should believe it I have no idea.
  >

Mind is what the brain does.  The brain is a physical system.  Computers
can simulate physical systems.  So a sufficiently powerful computer should
be able to simulate a brain, thus doing what the brain does, thus being Mind.
(See, Aristotle is good for something after all.)
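
[ A toy illustration of the first premise only (computers simulating
  physical systems), not of whole-brain simulation: a leaky
  integrate-and-fire neuron, a standard simplified physical model of a
  single brain component, integrated numerically.  In Python, with
  conventional textbook parameter values; nothing here is from the book. ]

def simulate_neuron(input_current, dt=0.001, tau=0.02,
                    v_rest=-65.0, v_threshold=-50.0, v_reset=-65.0):
    """Euler-integrate dV/dt = (v_rest - V + I) / tau, recording a
    spike time whenever the membrane voltage crosses threshold."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        v += dt * (v_rest - v + current) / tau
        if v >= v_threshold:
            spike_times.append(step * dt)  # spike time in seconds
            v = v_reset                    # membrane resets after a spike
    return spike_times

# One second of constant drive, strong enough to cross threshold:
# the simulated neuron fires at a steady rate.
print(simulate_neuron([20.0] * 1000))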

  >
  >On pp. 116-117, Moravec contrasts his "pattern-identity" thesis with
  >what he calls the "body-identity position."  According to him, "The
  >body-identity position, I think, is based on a mistaken intuition about
  >the nature of living things.  In a subtle way, the preservation of
  >pattern and loss of substance is a normal part of everyday life."  He
  >then goes on to explain how the cells in our bodies are periodically
  >replaced by functionally equivalent new ones.  This argument is a gross
  >insult to every contemporary philosopher of mind who has dealt with the
  >problem of personal identity.  The replacement of cells in our bodies
  >is common knowledge among educated people.  No major philosopher in our
  >time is confused about this.  Moravec's characterization of the
  >"body-identity position" is a straw man if there ever was one.  Yet it
  >is the only alternative he presents to his own view.  Moravec acts, in
  >this book, as though no ideas other than those he agrees with are even worth
  >discussing.  This makes it very hard to take him seriously on this
  >topic.

	Oh, heck, the cell replacement example was just an afterthought that
my editor suggested I put in.  It's a fact that many people who consider
themselves scientific rationalists and who vigorously reject dualism take
the opposite extreme position, namely that the functions of the brain are
inextricably and forever tied to the meat of the brain.  At least one of the
manuscript's referees, a Harvard neurobiologist, was of that opinion.  I
think his reasoning starts from correct premises, but stops too soon,
exhibiting what Arthur Clarke calls a failure of nerve.  As for other
philosophical positions, those that I've encountered seem to me to fit into
two categories. The first is dualist, in which identity is the result of
some kind of non-material spiritual spark that mere matter could never
capture. These I reject out of hand, in the expectation that science will
continue to whittle away at the unknown territory where such a spark could
be hiding out. The other category contains detailed theories about the
nature of awareness and consciousness, assuming an underlying material
mechanism.  Some of these may be more nearly correct than others, but
they're all basically consistent with my speculations, and don't require
an answer from me.  I give a hint of my own variant of this kind of
theory in chapter 1, in which robots achieve a kind of consciousness
when a simulator of self and surroundings gets installed in their
problem solver.

Aside: The reason I think there are such wildly different explanations
about what makes us what we are is that most of what happens in our
brain is not accessible to the little simulation that constitutes
our consciousness. When we introspect, that part attempts to make a
theory about what happens in the inaccessible part -- essentially most
of our brain is just as much a black box to our consciousness as an
external system like the sky is.  Dualism is a naive theory of what our
mind is about in the same sense that crystal spheres are a naive theory
about what the planets and stars are about.  Better theories become
more likely as instruments reveal more details inside the box.  Then
philosophy becomes science.

  >
  >Chapter 5, "Wildlife," is about computer viruses, and chapter 6,
  >"Breakout," explores the idea that intelliget programs might be able to
  >survive forever, despite the ravages of the second law of thermodynamics.
  >
  >Although it is often aggravating, I did enjoy reading Mind Children, and
  >I recommend it to anyone with the spare time for a little science fiction
  >masquerading as fact.
  >
  >Duane


Thanks  --  Hans


--------------------------------------
