X-Message-Number: 8038
From: 
Date: Thu, 10 Apr 1997 17:02:33 -0400 (EDT)
Subject: more miscellany

More quick (?) comments:

Olaf Henny (#8018) says conscious, intelligent computers would mean war. Ways
around this long-heralded danger include (a) making the computer an extension
of a person's mind, rather than a stand-alone individual; or (b) using advanced
knowledge of psychology/computers to implant appropriate compulsions.

Mike Perry (#8020) conjectures (if I understand him correctly) that the brain
may contain two kinds of conscious regions: kind A can communicate with the
outside world and can receive information (but not feelings) from B; B is
conscious, but cannot communicate feelings or consciousness either to A or to
the outside world. Therefore, he concludes, consciousness may be, but is not
necessarily, associated with a "seat of consciousness."

I see serious flaws in this idea. First of all, feeling and consciousness are
not just a matter of locale or region (whether localized or distributed); they
are also defined by specific physiology (e.g., something like a standing wave
or reverberating feedback--the self circuit). Once we understand the
physiology of feeling in A, we can then examine the physiology of B to look
for similarities. If we find none, then there is serious doubt that B has its
own subjectivity. Likewise, if an artifact lacks the physiology of feeling,
we are entitled to doubt that it has any.

To Jan Coetzee: Best wishes for continued good recovery and long life.

John Clark (#8024) says "Consciousness must be a subset of information
processing..." Again, the desired conclusion is used as a premise or axiom. 

He also says, "I can never experience your consciousness directly." With some
kind of (electromagnetic?) telepathy, maybe you could indeed "share" my
consciousness.

He also asks, in effect, whether I question my own survival from hour to hour
in the ordinary course of events. Yes, I question it. It is easy to be
deceived. We certainly survive long enough to have an experience--a
subjective "moment"--but beyond that it is much too soon to say with any
confidence. Obviously, we must meanwhile act as though we do survive over
relatively long intervals, as though we and at least our nearby predecessors
and continuers are the "same" people. 

John Roscoe (#8026) joins the info group, saying in effect that consciousness
is inherent in any information processing--or perhaps just in any processing
that involves goal-seeking or responsive behavior, even a thermostat's. Once more,
this is just dodging the problem by saying there is no problem. Just because
you SAY that the behavior of a thermostat is a reflection of consciousness,
or constitutes consciousness, does not tell us anything about the
connection--if any--between the "consciousness" of a thermostat and of an
animal. If mere complexity were the criterion, then almost any very large
mass of matter would be more "conscious" than a mouse. 
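(As an aside, it may help to see just how little a thermostat's "goal-seeking"
amounts to. Here is a minimal sketch in Python--purely illustrative, not from
Roscoe's post, and all names are hypothetical--of the classic bang-bang
control loop that exhausts a thermostat's responsive behavior:)

# The entire "responsive behavior" of a thermostat: one comparison
# against a setpoint, with a small dead band (bang-bang control).
# All names here are illustrative, not drawn from any real system.

class Thermostat:
    def __init__(self, setpoint, hysteresis=0.5):
        self.setpoint = setpoint      # target temperature
        self.hysteresis = hysteresis  # dead band to prevent rapid cycling
        self.heating = False

    def step(self, temperature):
        """Take one temperature reading; decide whether to heat."""
        if temperature < self.setpoint - self.hysteresis:
            self.heating = True
        elif temperature > self.setpoint + self.hysteresis:
            self.heating = False
        return self.heating

# The device's complete repertoire of "behavior":
t = Thermostat(setpoint=20.0)
for reading in [18.0, 19.4, 20.7, 20.2]:
    print(reading, t.step(reading))

The hysteresis band is the only design choice in the whole device; whatever
one chooses to call this, it gives us no purchase on the physiology of
feeling discussed above.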

John also suggests there is no danger from intelligent computers, because (at
a certain level of development) they would progress so fast that our affairs
would be meaningless to them and there would be no reason for interaction. I
won't address this in detail, for lack of time, but there are many possible
interaction scenarios, with computers of various kinds and at various degrees
of development.

Steve Harris (#8029) had some discussion worth more comment, if time allowed.
But here I'll just say that not all consciousness is articulable, as indeed
he himself at least suggests at another point. (Mice can't articulate much of
anything--although they may have a limited squeak language, as well as body
language--but surely have a lot of consciousness.)

One more thing: There is no "paradox" in determinism. We have free will on
the conscious level; that is all that is possible, and all that is necessary.

Robert Ettinger
