X-Message-Number: 7912
From: 
Date: Fri, 21 Mar 1997 12:00:36 -0500 (EST)
Subject: cud

Again, most of this is cud (if not crud) well chewed, but perhaps a bit
useful at least for new readers. Maybe some of it will even help more
experienced readers.

1. According to Joe Strout's explanation (Cryonet # 7900), Chalmers' work on
"functionally equivalent" systems misses the point. If the artificial visual
cortex uses the same inputs to send the same outputs to the "rest of the
brain" as would the natural visual cortex, then certainly the subject, wired
to the artificial cortex, will experience normal qualia, just as he would if
his eyeball were replaced by an artificial eye. The POINT is what would
happen if the "rest of the brain"--in particular, the self circuit or
subjective circuit--were replaced with some intended artificial
substitute, emulative but physically very different. In this case, calling
the substitute "functionally equivalent" would be begging the question or
assuming the conclusion.

Joe says he is NOT assuming what he is trying to prove, on the grounds that
"functional equivalence" is well defined as "same inputs, same outputs." But
the POINT is that a quale, or a particular state or sub-state of the self
circuit, is in its central feature neither an input nor an output; it is a
CONDITION (or perhaps an event) which DEFINES or CONSTITUTES the quale.
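
To make that narrow point concrete, here is a toy sketch (Python; the names
and the example are mine, not anything from Joe's post) of two systems that
are "functionally equivalent" in the bare input/output sense, yet realize
their answers through entirely different internal conditions:

    # Purely illustrative: identical input/output behavior,
    # very different internal states.

    class LookupAdder:
        """Answers by consulting a precomputed table held inside it."""
        def __init__(self, limit=10):
            self.table = {(a, b): a + b
                          for a in range(limit) for b in range(limit)}
        def respond(self, a, b):
            return self.table[(a, b)]

    class ArithmeticAdder:
        """Answers by direct computation; no table exists internally."""
        def respond(self, a, b):
            return a + b

    # Same inputs, same outputs...
    assert LookupAdder().respond(3, 4) == ArithmeticAdder().respond(3, 4)
    # ...yet the internal condition producing the answer differs entirely.

Input/output equivalence, by itself, says nothing about which internal
condition is present--and the internal condition is exactly what is at issue.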

2. Mike Perry (# 7905) repeats that "To me an ongoing computation that fully
*describes* an ongoing process [such as a person] is "as good" in some
reasonable sense as the actual process." He also says that he could converse
with an emulation and would accept it as a person.

Well, an emulation of your deceased dog might be "as good as" the original,
and for that matter a similar puppy might be almost as good--maybe better. So
what? That says nothing whatever about the question of survival.

Again: the criterion is not what your reaction or intuition is, but what it
OUGHT to be, and we do not yet have an adequate basis for reaching a
conclusion. 

3. John Clark again essentially says that, if you question whether a "robot" has
subjectivity, you are a solipsist. And again, I point out that there is a
profound difference between assuming life in another person and assuming it
in a robot that behaves like a person. The difference is simply that other
people (and animals) are made very much as you are, hence it is perfectly
reasonable to attribute to them feelings similar to yours. With putative
robots it is a different story.

We KNOW that robots (computer programs) ALREADY exist that, to a limited
extent, can converse like people, and sometimes fool people; and we also know
they have not the slightest consciousness. It is OBVIOUS that, a little further
down the road, there will be programs, even if only similar but larger and
faster ones, that could fool most of the people most of the time. What more
need be said?
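
For readers who have not met such programs: the classic example is ELIZA-style
keyword matching. The sketch below (Python; a toy of my own construction, not
any particular program referred to above) shows how little machinery is needed
to produce superficially human replies:

    import re

    # A minimal ELIZA-style responder: keyword rules and canned reflections.
    # It "converses" by pure string manipulation; there is nothing inside it
    # that could plausibly count as a quale or a self circuit.
    RULES = [
        (re.compile(r"\bI feel (.+)", re.I),  "Why do you feel {0}?"),
        (re.compile(r"\bI am (.+)", re.I),    "How long have you been {0}?"),
        (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
    ]

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Tell me more."

    print(respond("I feel uneasy about uploading."))
    # -> Why do you feel uneasy about uploading?

A few dozen rules of this kind were already enough to fool some users of the
original ELIZA in the 1960s; scale and speed change the persuasiveness, not
the underlying emptiness.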

Robert Ettinger
