X-Message-Number: 10186
Date: Thu, 06 Aug 1998 00:28:38 -0700
From: Brian Manning Delaney <>
Subject: Re: validation of values
References: <>

I enjoyed Peter Merel's reflections; they highlight a big
part of what I take to be the problem here.

Robert Ettinger has made a nice summary of some of his
views. I disagree with just about everything he says :) --
but a lengthy continuation of this discussion, as Brook
pointed out, may not belong here on Cryonet (or in
Sci.life-extension) (I tend to jump into these discussions
once someone else has started them, hoping to help resolve
things -- however much I fail. Hmm... well, in truth,
resolution is probably not what I'm really after -- though
trying to help is certainly what I'm after).

This will be my last message on the topic (most likely...)

>1. Demanding a definition of "happiness" really
>misses the point. Happiness (satisfaction, feel-good)
>is not to be defined, but studied. It is not a
>matter of language, but of biology. We talk
>about it initially in necessarily vague terms,
>to be sharpened as we learn more.

There are two ways you could mean this. In general, it's
quite right to say that a poorly understood (or even nearly
but not quite perfectly understood) phenomenon shouldn't be
expected to be defined perfectly at the outset. After all,
something we're studying is something we don't understand,
and therefore can't yet define: we need to learn more about
it first. So you have a very important point, if that's what
you mean.

But that can't be, or shouldn't be, what your point is in
its entirety. That's because our primary goal isn't to
understand happiness; rather, it's to figure out what we (or
what one or what you or what I) should do. Happiness comes in
as part of the definition of what our goal should be. You
can't answer a "What is it?" question about one thing by
appealing to another thing for which the "What is it?"
question can't be (or won't be) answered.

Or, perhaps more importantly: you do, it seems, have to
define what you mean by happiness, even if part of what you
think about happiness is that we have more to learn about
it. Here I just mean "define" in the sense of pointing to, as
in: "this thing here, with what looks like aspects X and Y,
is what I want to understand more."

You seem not to view these objections as serious problems.
That's understandable. I think, basically, what you mean is
this: we already know, biologically, enough about what
happiness is that what we need to do is more biology (and
more of related sciences) to learn more about it. But then,
rather than demand a definition of happiness, I'll demand
justification for the claim that the question is a matter of
biology. You haven't given one yet. I believe you can't give
such a justification, for the simple reason that the claim
isn't justifiable.

>2. Nevertheless, the essence of feel-good is
>intuitively obvious, if we don't let our
>sophistication get in the way.

The phrase "let our sophistication get in the way" --
something I hear often -- rings alarm bells for me. A higher
degree of sophistication is precisely what we need, I
contend.

>We are talking about
>subjective conditions (qualia), caused by (rather,
>equivalent to) objective states or events in the
>brain.

"Caused by" and "equivalent to" are two radically different
things, a difference which matters here. I don't see how you
can equate them, or imply that either formulation will work
in your argument. Let's leave that aside. More importantly:
try proving that subjective conditions are caused by (or
that they're equivalent to) states or events in the brain.
Plenty of smart people have tried, and they always fail. One
problem is giving a rigorous definition of "inside" the
brain. Cranium? Nervous system? Is my computer part of my
brain, in a relevant way? My friends? "No, of course not!"
one wants to say. Yet the physical correlate of psychic
activity is very difficult to locate. This is precisely
because the psychic isn't a matter of "physics".

The biggest problem, though, is that the science on which
claims about the relation between brain states (or anything
physical) and qualia rest is not supported by philosophy.
You say in #5 that all questions are scientific (our most
fundamental difference, perhaps). This means that the
justification for using the scientific method for
establishing the relation between brain states and qualia is
itself scientific. But, of course, one can't use science to
justify the use of science. It's like: "Why do you believe
in astrology?" "Because astrology says I should believe in
astrology." Such an attitude is neither philosophical nor
scientific, but religious.

>We all know there are "good" qualia and "bad"
>qualia. The most basic value or goal is to
>create or increase a preponderance of good
>qualia or feel-good or satisfaction.

You've said this repeatedly. I still see no justification
for it (unless "basic goal" is just "what we do," in which
case there's the circularity problem with respect to means,
as I pointed out -- and of course, again, the use of science
in elucidating our goal needs to be justified). This is a
more important problem than the one about showing that
happiness is a matter of biology, which, with a certain
(perhaps not very useful) definition of happiness, might be
possible.

>3. Philosophers often claim it is impossible to
>derive an "ought" from an "is." (I think Brian
>agrees with that.)

No, I think we probably _can_ derive an Ought from an Is,
but not from a scientific Is.

I thought we were talking about Oughts at the level of basic
values, not means. If we're talking about means, and not
basic values, then _of course_ an Ought can be derived from
an Is. Your example here doesn't address basic values --

>I can't quickly prove my
>claim that we can always derive "ought" from "is,"
>but I can quickly disprove the philosophers'
>claim that we never can--because I only need one
>counterexample.

>Consider an ordinary person in ordinary
>circumstances. He wants to maintain good health
>for an extended period. To do so he needs to eat
>a reasonably well balanced diet. Hence he "ought"
>to do so, and want to do so.

The Ought in question is wanting good health for an extended
period, not the eating of the balanced diet. The diet is
just a means, and that will depend, obviously, upon the
physical world (this scientific Is). Wanting good health for
an extended period isn't derived from an Is -- _unless_, of
course, it's _not_ a basic value, in which case it likely is
derived from an Is (the Is of the fallibility of one's body,
which needs to be taken into account to achieve one's true
goal of writing 5 novels, for example, itself a goal not
derivable from a scientific Is, unless, of course...).

You're thus not speaking to the issue, unless I'm missing
something.

>At the base of the pyramid (or inverted pyramid),
>the most basic values must stand on their own. I
>have already said the most fundamental value(s)
>can be found in feel-good. It is irrelevant that
>we do not yet know anything about the anatomy/physiology
>of qualia; we have every reason to presume we
>will learn. And once again, if anyone questions
>this position, his challenge is to offer
>something different.

I don't think that's THE challenge at all, though it's _a_
challenge. The challenge for someone who disagrees is simply
to show that you have no warrant for your claim. I've done
this. (Or at least shown that you haven't given a warrant
yet.) But it certainly would be helpful to offer an
alternative, I agree! I don't have a good one, yet. For now,
mine is simply: Our basic value OUGHT to be to figure out
the best basic value. (I don't have a great justification
for this yet, but what I do have would take us too far
afield.)

>(To my knowledge, no one has ever offered a
>genuine alternative to determinism either, but
>that is another long story.)

(Personally, I don't think anyone has ever offered a genuine
philosophical determinism -- though in the sphere of
physics, I'm actually more deterministic than most:
dice-throws in the world science studies have never made
sense to me.)

>4. Part of Brian's problem seems to be the
>concept of maximization in the context of a
>limitless future etc. Again, this is just a
>matter of common-sense manipulation of
>probability calculations. We weight more heavily
>the consequences that are closer in time and
>space and more amenable to estimation. The best
>you can do is the best you can do. It makes no
>sense to fail to try, just because you know your
>competence is limited.

You seem to have missed my point entirely, though it's
likely my fault: the point was a complicated one, and
expressed in a somewhat compressed way. (By the way,
"common-sense" is another alarm-ringer for me.) My first,
not so important point was that the best we can do in a
limitless universe can be shown to be not just possibly
sub-optimal, that is, less than what an omniscient being
would achieve, etc., but actually WORTHLESS. The point that
followed from this wasn't at all that it makes no sense to
try. Rather, the point that followed is that the employment
of the notion of outcome calculation may not work in an
attempt to ground a notion of an Ought. No matter, we can
drop this one. The second, more important (and difficult)
point was that, even if we can differentiate between better
and worse estimations of infinite consequences (on our
happiness or whatever), the relation between an estimation
and the actuality of how things turn out is such that we're
led into a contradiction. (This was my point about the
difference between (S1) "The right thing to do is to make
choices that maximize happiness," and (S2) "The right thing
to do is to make choices that we assess as having the best
chance of maximizing happiness.") I'll not restate, unless
it wasn't clear.
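(If it helps, here is one way the "WORTHLESS" point can be
sketched -- not necessarily the form of the argument I gave
earlier, and it assumes an undiscounted, unbounded horizon
with a total of well-being that grows without bound:

  \[ \lim_{T \to \infty} \frac{g}{\int_0^T u(t)\,dt} = 0
     \quad \text{for any finite gain } g,
     \text{ whenever } \int_0^\infty u(t)\,dt \text{ diverges.} \]

That is, on those assumptions any finite contribution we make
is, proportionally, nothing. But again, we can drop this one;
the S1/S2 problem is the one that matters.)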

(Actually, near the end of yesterday's post, putting a
'that' after the 'yet' in the paragraph beginning "With #S2,
you..." might make the point slightly clearer.)

>5. Brian says the definition of happiness is not
>a scientific question, and mentions mind vs.
>brain. My position (and in this, for a change, I
>am far from alone) is that mind is just an
>aspect of the brain or its functions, and that
>ALL questions are scientific questions, if we
>define "science" in an appropriately broad way.

It seems to me that we would have to define "science" so
broadly that it would no longer correlate at all with how we
normally use the word. That is, it would be more like
philosophy. But you're not willing to take the definition to
that level of generality, it seems, since you don't see the
need to justify the use of science (or, less generally, of
cognitive science or neurology) in answering the question
about our basic values. Philosophy (unless you're redefining
that, too) would demand precisely such a justification. I
don't see what intermediate level of generality you mean,
nor what level of generality is workable in helping you make
your point.


It seems our worldviews are radically different. Yours is,
in certain respects, much more contemporary than mine. In
other ways, though, it goes back to the beginning of the
Enlightenment, when it was thought, at first by a few, but
then by many, that science would be able to solve
_everything_ (I'm not saying this is precisely your view).
My own view of things includes a counter-Enlightenment
aspect (though it also adheres to the spirit of the
Enlightenment): science can do a lot, but can't answer
everything. Above all, it can't answer the fundamental
question of our times: why science? (This point actually
goes way, way back to Plato.)

As Nietzsche says: our highest values (like Enlightenment
values) are de-valuing themselves, though it will take a
while for people to see this. Through no special talent, but
mere chance, it seems, I've come to see the importance of
Nietzsche: I fell in love with a brilliant poet/theologian
who eventually convinced me that my "scientism" (which was
_extreme_) was unfounded. One gets used to answering the
question Why science? with a "What else is there?" or "It
works" or "Give me a better alternative," and imagines that
those answers suffice, that they're "good enough" (after
all, what other answer is there?...). It's hard to see just
how meaningless these answers are. Another example I've
given before: you ask pragmatists why they believe in
pragmatism, and they answer: "It works." This is not an
answer.

Authentic Socratic Eros certainly helped me. I highly
recommend it.

We all have blind spots, though, of course! I'll gladly
accept recommendations myself!


>What do you know--I have provided cosmic
>enlightenment in less than two pages!

They say a high estimation of one's worth correlates with a
long life. I think you will live a long life, and that you
_ought to_ live a long life!

Best wishes,
Brian.


P.S., in brief:

Brook Norton writes:

> So long as I'm happy, give me the drug.
> I'd rather be dumb-happy than smart-sad.

This highlights that one's own goal is what needs to be
determined, not The Goal, for I can't imagine _ever_ wanting
such a drug.

However, there almost certainly are some universals among
us, as far as goals go. I suspect it could be shown that you
_shouldn't_ want such a drug, because the state it induces
produces conditions which contradict the conditions for the
possibility of determining that you should want the drug in
the first place. But explaining that would take pages.

--
Brian Manning Delaney
<>
