X-Message-Number: 8249
Date:  Tue, 27 May 97 14:08:01 
From: Mike Perry <>
Subject: Artificial Consciousness

The discussion continues on consciousness, and in particular, whether 
it could reside in an artificial device or needs an organic 
substrate. Those who doubt that anything we have made so far, such as 
a programmed robot or a computer program, exhibits the slightest 
degree of consciousness or feeling seem once again to be missing the 
forest for the trees. That a machine crunches bits at the lowest 
level in no way (in my view) precludes its having feeling and 
consciousness, or more properly, precludes the system it is running 
from having feeling and consciousness. There is an amusing discussion 
along these lines in *Gödel, Escher, Bach* by Hofstadter, pp. 
314-333, about Aunt Hillary, an ant colony that is intelligent and 
can understand things at the human level even though the ants that 
make her up cannot. This is very similar to Searle's Chinese room, if 
I understand it right. The system as a whole can have properties not 
possessed by its parts.

The idea of trying to build "feeling" into a computer system would 
make a great topic for research, in my estimation (speaking, that is, 
from the standpoint of having a PhD in computer science, though I 
haven't been too active in the field in recent years), provided, of 
course, we aren't worried too much about the "civil rights" issue. 
(This is a non-trivial issue, though, if we are going to take 
seriously the prospect that our computer programs might be able to 
experience pleasure or pain.) I remember that, some years ago, Terry 
Winograd did a dissertation on a program that conversed in English. 
With great difficulty, it was possible to design a program that could 
converse reasonably well about a "blocks world" type of toy universe. 
But this creation could not be extended in any easy way--we do not 
today (a quarter-century later) see programs that converse in fluent 
English about the real world, and seem just like normal humans (i.e. 
pass the Turing Test). As far as I can tell, Winograd approached his 
project as one involving *language* directly, not as one involving a 
system that *uses* language because of wants or needs it has. The 
latter approach would be quite fruitful, I think, and may have been 
attempted a few times already, but we could probably do better if we 
tried harder, without having to wait for more advanced technology.
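
To make the distinction concrete, here is a rough sketch in Python of 
the sort of thing I have in mind: a system whose utterances exist 
only to serve internal wants, not to demonstrate language handling 
for its own sake. It is purely illustrative; the class, the needs, 
and the numbers are made-up assumptions of mine, not anything 
Winograd or anyone else actually built.

# A toy "need-driven speaker": it talks only because something inside
# it wants attention.  The class name, the needs, and the numbers are
# illustrative assumptions, not a real system.

class NeedDrivenSpeaker:
    def __init__(self):
        # Internal wants, each with an intensity between 0 and 1.
        self.needs = {"energy": 0.2, "information": 0.5}
        self.threshold = 0.8  # a need this intense prompts speech

    def tick(self):
        # Needs intensify as time passes.
        for name in self.needs:
            self.needs[name] = min(1.0, self.needs[name] + 0.1)

    def most_pressing(self):
        # The most intense current need, or None if nothing is urgent.
        name, level = max(self.needs.items(), key=lambda item: item[1])
        return name if level >= self.threshold else None

    def speak(self):
        # An utterance exists only to serve the most pressing need.
        need = self.most_pressing()
        return None if need is None else "Please give me some %s." % need

    def receive(self, gift):
        # The world can satisfy a need, which silences the request.
        if gift in self.needs:
            self.needs[gift] = 0.0

agent = NeedDrivenSpeaker()
for step in range(12):
    agent.tick()
    request = agent.most_pressing()
    if request is not None:
        print(agent.speak())
        agent.receive(request)  # the "world" grants what was asked for

Even this toy speaks only when something inside it "matters" to it, 
which is the reversal of emphasis I am suggesting.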

As a start, I would advocate thinking about goal-seeking (or 
avoiding) systems. These can be created artificially, and they
certainly bear a resemblance to systems in the natural world
(biological organisms) that we credit with consciousness. We
might try to create a reasonably general theory
of goal-seeking systems. Some questions then 
come to mind. What are the kinds of "goals" we would study? For 
instance, we might focus on the outside world, but that is a big 
place and, I think, the wrong place to start looking if we want to 
study consciousness. We should instead look inside our prospective 
conscious device to see what is happening there. What is 
"goal-seeking" from an internal standpoint? Is it reducible to 
maximizing a number stored in a certain, designated place, for 
instance? If not, what additional features should we consider if we 
used that idea as a starting point? And so on.
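
To pin down the "maximizing a number" idea, here is a minimal sketch 
in Python; the register, the adjustable setting, and the toy 
environment function are assumptions of mine for illustration, not an 
established design. The system's whole "goal" is to drive up the 
value held in one designated internal location, by trying small 
changes and keeping those that help.

import random

# A minimal goal-seeking system: its entire "goal" is to increase the
# number held in one designated internal register.  The environment
# function is an arbitrary stand-in for whatever its actions affect.

def environment(setting):
    # Hypothetical payoff landscape, highest when setting is near 3.0.
    return -(setting - 3.0) ** 2

class RegisterMaximizer:
    def __init__(self):
        self.register = float("-inf")  # the designated internal number
        self.setting = 0.0             # the one thing it can adjust

    def step(self):
        # Try a small random change; keep it only if the register rises.
        trial = self.setting + random.uniform(-0.5, 0.5)
        value = environment(trial)
        if value > self.register:
            self.register = value
            self.setting = trial

agent = RegisterMaximizer()
for _ in range(1000):
    agent.step()
print("setting %.2f, register value %.2f" % (agent.setting, agent.register))

Of course, nothing in this little loop feels anything; the question I 
am raising is what further internal structure, if any, would have to 
be present before we were tempted to say otherwise.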

Again, I'm sure these thoughts have occurred to other people, and 
I'd guess there is already a substantial literature. In fact, I'd be 
interested in references anybody may have.

Mike Perry 
