X-Message-Number: 17941
Date: Thu, 15 Nov 2001 12:46:37 -0500 (EST)
From: Charles Platt <>
Subject: machine pain

This is an off-topic post.

David Shipman makes a really interesting point: If machines reach a level
of complexity where they feel "pain," is it unethical to allow them to
remain in that state?

Obviously it will be necessary to include an "aversion response" in any
robot that is freely mobile, to discourage it from running into things and
damaging itself. Since the pain experienced by a biological organism is
really just a set of chemical/electrical responses, I see no easy way to
distinguish between this and a similar set of programmed responses in an
artificial intelligence.
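To make the parallel concrete, here is a minimal sketch of what such an
aversion response amounts to -- in Python, with invented sensor and motor
functions standing in for real hardware (none of the names here belong to
any actual robot API). A stimulus crosses a threshold and is mapped to an
avoidance behavior, nothing more:

    import random
    import time

    DANGER_CM = 20.0  # threshold below which a stimulus counts as "painful"

    def read_proximity() -> float:
        # Stand-in for a real range sensor: simulated distance in cm.
        return random.uniform(0.0, 100.0)

    def set_motor_speed(left: float, right: float) -> None:
        # Stand-in for a real motor driver: just report what we would do.
        print(f"motors: left={left:+.1f} right={right:+.1f}")

    def control_loop(steps: int = 10) -> None:
        for _ in range(steps):
            if read_proximity() < DANGER_CM:
                # The "aversion response": back away and turn, the way a
                # pain reflex withdraws a limb.
                set_motor_speed(-0.5, -0.2)
            else:
                set_motor_speed(0.5, 0.5)  # no threat; cruise forward
            time.sleep(0.05)

    if __name__ == "__main__":
        control_loop()

Nothing in that loop distinguishes the programmed response from the
chemical/electrical one, which is exactly the difficulty.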

But this leads me back a step, to question the empathy that causes
distress when someone else is suffering. At the most primal level, we have very
good reason to respond when a human baby is crying. But from a coldly
rational point of view, how do we benefit from an empathic response when
an animal is suffering?

Of course, some people lack such an empathic response. As kids, they may
burn live ants with a magnifying glass; as adults, they may enjoy
tormenting cats or attending bullfights.

Personally, I went through a very interesting experience during a
resuscitation experiment, in which a dog that had been successfully
revived was taken off pain-management medication. For a few minutes, the
dog yelped in a way that I found so distressing that I had to leave the building.
(And I am not a dog lover.) As I sat outside, I tried to analyze my
feelings. Rapid pulse, fast breathing, slight trembling--I realized, with
great surprise, that I was afraid!

I concluded that most (perhaps all) extreme reactions to the suffering of
other creatures are based on fear, because the observer identifies with
the creature and is reminded of times when she or he went through painful
experiences. From this I would guess that people who have suffered most in
childhood are most likely to be moved by empathic responses later in life.
I leave it to the reader to consider whether this explains the extreme
actions of some animal-rights activists.

So, to get back to the initial question, I'm not sure that it is really
"wrong" to allow an entity to suffer--robotic or otherwise. It just
_feels_ wrong (to some people, anyway).

PS.

Long ago, when microcomputers were such a novelty that many people felt
intimidated by them, a friend of mine visited me and looked warily at the
computer on my desk, which was running a short program that I had
written. "What's it doing right now?" he asked.

"Well, it's waiting for someone to press a key," I said. "It's saying to
itself, 'Has a key been pressed? Has a key been pressed?' about 500,000
times a second."

My friend looked shocked. "That's awful!" he said, only half joking.
"Can't you put it out of its misery?"
