X-Message-Number: 12585
From: 
Date: Sun, 17 Oct 1999 13:57:03 EDT
Subject: Misc.

Brief comments on some recent posts:

Thomas Donaldson (#12577) said that I "seem to believe that intelligence and
values cannot be separated." I thought I said the opposite--that intelligence 
and values are quite distinct. "Intelligence" is mainly goal-directed, 
adaptive processing of information. "Values" are basically wants or needs, 
rooted in feelings or subjective experience, and "machines" do not have them. 
(At the level of action, of course, people's "values" (or decision criteria) 
in some cases reflect mere habits or conditioning, rather than true basic 
wants or needs.) (And machines can lack feelings and still have criteria for 
choices, which some might want to call "values.")

George Smith mentioned John Clark's statement that "The bottom line is we 
don't have thoughts and emotions, we are thoughts and emotions, and the idea 
that the particular hardware that is rendering them changes their meaning is 
as crazy as my computer making the meaning of your post different from what 
it was on yours." Mr. Smith goes on to review some of the puzzles regarding 
criteria of identity or survival. Pierre Le Bert, in another post, reviews 
some of the old thought experiments that reveal the difficulties of adopting 
particular criteria.

Mr. Clark's position seems similar to that of Hans Moravec, who essentially 
appears to think we are just abstractions--patterns of information and its 
processing. This MAY turn out to be correct in some sense, but it is 
grotesquely premature to assume it. It may also turn out that there is no 
correct answer acceptable to us, but it is MUCH too early to worry seriously 
about that. 

The only sensible conclusion I can see is that we agree--admit--that we just 
don't know the answers yet, and push ahead on both the theoretical and 
experimental fronts. We are scarcely better equipped to make final judgments 
than were the ancient philosophers, whose reach pathetically exceeded their 
grasp. As Mr. Smith says, we just try to survive; we place our bets and take 
our chances.

Jeff Davis (#12583) and Mr. Smith both commented on previous remarks by 
Eugene Leitl concerning density of information storage on neurons and the 
implications--pessimistic ones according to Mr. Leitl, optimistic according to 
Mr. Davis and Mr. Smith. 

It will not surprise anyone that I agree with Mr. Davis' optimism, but I 
would not like to be pigeon-holed as an automatic optimist. I think of myself 
as a realist who will never disregard the evidence, and I have changed my 
mind on many things in the past, even at emotional cost. But just a couple of 
remarks here:

First, if I remember correctly, Mr. Leitl said the information resolution on 
the neurons is of the order of micrometers. (Awkward word, since it can mean 
either a kind of caliper or a unit of measurement. Also, lacking a "mu" 
symbol, Mr. Leitl's abbreviation was mm, which actually means millimeter.) 
But a micrometer is a thousand nanometers, hence on the scale of 
nanotechnology these are huge structures--a billion cubic nanometers in a 
cubic micrometer. Needless to say, this has optimistic implications.
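The scale claim above is simple arithmetic and easy to check. A minimal sketch (the variable names are mine, for illustration only):

```python
# 1 micrometer = 1,000 nanometers, so a cubic micrometer contains
# 1,000^3 cubic nanometers -- the "billion" figure in the text.
NM_PER_UM = 1_000  # nanometers per micrometer

cubic_nm_per_cubic_um = NM_PER_UM ** 3
print(cubic_nm_per_cubic_um)  # 1000000000
```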

Second, I emphasize that the recommendation for further study and improvement 
in cryopreservation procedures is not limited to the pessimists. Not even the 
sunniest optimist denies that we must keep trying to improve our 
chances. But I reiterate that it is neither scientifically accurate, nor 
psychologically useful, to assert that present procedures leave no realistic 
hope.

Third, I repeat my own increasingly strong conviction that information is 
conserved in the universe, both in classical and in quantum physics. This 
does not obviate the need to maximize our chances and reduce the burden on 
the future, but it helps buttress optimism.

Robert Ettinger
Cryonics Institute
Immortalist Society
http://www.cryonics.org
