X-Message-Number: 12686
From: Thomas Donaldson <>
Subject: more on feelings, goals, and computers
Date: Mon, 1 Nov 1999 23:27:00 +1100 (EST)

Yet more on emotions and computers:

My very first point in this message is that I am NOT claiming that it is
at all impossible to create machines with feelings and goals. I do not
think it can be done simply by writing a program, but that hardly means it
cannot be done. (To repeat a little, my problem with a program is that it
is essentially a collection of symbols in a particular order. Symbolic
feelings do not become any more real simply because a computer can be
programmed to express them.)

Second, Scott gives a very good set of definitions for feelings. Moreover,
we will necessarily want to know more than just behavior: even with
humans, an apparent display of feelings may reach you because you have
stumbled in on two people rehearsing a play, whereas if it is not a play
but a real event, no one is rehearsing and the emotions are real. (This
can be taken to mean either that we want to know what is happening inside
each of the two people, or that we want to know more than just the
incidents we see over a short space of time: clearly, if we had first
seen these people busily learning their lines, our reading of the
incidents we see would be quite different.)

Third, I specifically do NOT agree that intelligence (at least in the
sense that computers have intelligence, and will no doubt acquire even
more) requires feelings or goals. This is also a different issue from
whether or not we want to make machines which can recognize OUR OWN
feelings (and to some extent, even our goals). Given that we can also
give definitions for feelings, a computer which is sensitive to such
events may very well be able to recognize our feelings, even though it
still has no feelings of its own. (This is simply a more evolved version
of what computers can already do: understand what we say.)

Finally, while I would certainly agree that we won't want to remain in
the same form indefinitely, computers, even at their very best, provide a
rather faulty version of such a form. They have no ability to repair
themselves, no ability to make themselves new parts or to form new
connections between their chips. If nothing else, they remain much more
delicate than we are. Yes, we want even more ability for self-repair
(what else is the total abolition of aging?), but that hardly means we
have none even now, even in our brains. I would say that when we come to
change our forms, we will change into something which does not fit any
definition around today: not computers, perhaps not simply "living
things", but something else. And just what that may be remains one of
the interesting things about the future.

			Best to all, and long long life to all,

				Thomas Donaldson
