X-Message-Number: 7959
Date: Fri, 28 Mar 97 04:10:08 UT
From: "Robert Ettinger" <>
Subject: More clarifications

1. Peter Merel asked the difference between "raw sensory data" and "qualia." I said a raw sensory datum is the signal, for example, from the retina to the (first stage of the) visual cortex, while the quale is the eventually resulting modification or modulation of the self circuit. Peter then (#7945) said qualia "may indeed be implemented by your self-circuit. But my question concerns the definition of qualia, not their implementation."

Once more: A quale is a modification or modulation of the self circuit. The modulation is not the cause of the quale; it IS the quale. By analogy, a flame is not "caused" by a column of hot, radiating gas; the flame IS the column of hot, radiating gas.

And perhaps we could say that YOU are (in the narrowest sense) your self circuit; in a broader sense, you are your self circuit including the modulations thereof, your qualia; in a still broader sense, you are your qualia, cum modulations, plus the associated computing parts of your brain, especially your memories and habits; and in a still broader sense you are your whole person. (But even if we agree on this, it does not solve the puzzle of survival criteria.)

Peter also asks what reason we have to recognize qualia as qualitatively distinctive from other phenomena, and in particular from sensory data. The fact that feeling is a special and mysterious kind of phenomenon needs no argument; distinguishing feeling from raw sensory data I have already addressed. The earlier events in the sensory train carry the message or stimulus; the quale (a modulation of the self circuit) is the interpretation of the message, or a part thereof; it extracts or derives meaning from the stimulus.

2. While the self circuit certainly could be distributed, I think Joe Strout (#7947) is a bit off the mark in citing evidence for this--such things as disturbances to consciousness by lesions in the cortex. Joe agrees that the cortex MIGHT be merely feeding inputs to the self circuit, located elsewhere [and disturbances in intermediary signals could certainly cause differences in consciousness, just as differences in the outside environment will result in differences in perception], but he says it is simpler to assume the cortex is either the seat of consciousness or a significant part of it.

My main reason (besides certain experiments involving other parts of the brain) for thinking the cortex is not the (main) seat of consciousness is simply its relatively late appearance in evolution. I suspect feeling is older than the cortex. This doesn't prove that, in modern animals, feeling couldn't have migrated from, or branched out from, the older regions of the brain, of course, but often older structures or their vestiges remain after new ones evolve. But regardless of any of this, the value (if any) of the self-circuit concept is just its usefulness in directing attention to that necessarily existing portion or aspect of the brain or its functions.

3. Joe also says, "Perhaps simple information processing produces simple consciousness and complex info-processing produces complex consciousness." This is what I strenuously object to. As frequently noted, by this reasoning a thermostat has a little bit of consciousness. ("It's too warm in here," or "It's too cool in here.") Also, by this reasoning, since modern computers in SOME WAYS do much more complex info processing than humans do, they must in some ways have more consciousness than we do. Among other problems, I fail to see any testable hypothesis here.

4. Joe also asks what knowledge of biology and physics will help answer questions regarding survival and personal identity.
He notes correctly that we need to understand the basis of consciousness, but then--oddly--says "I suspect that biology and physics are not up to this task. They can tell us what properties it has...but not why it arises."

First off, science in the broadest sense, including physics, biology, and logic, is essentially the one and only key we have to every new door of knowledge and power. If they can't do the job, then it can't be done. But I know of no reason to believe consciousness is beyond ordinary investigation and understanding. Also, we need to know more about the nature of time, or spacetime, since the tricky question of continuity enters into the puzzle.

5. Asking "why [consciousness] arises" can have at least two meanings--the nature of the anatomy/physiology of feeling, or the evolutionary basis for it. On the latter question, first, not all traits have been selected for by evolution--except in the sense that, if ancient, they weren't substantially counter-survival or counter-procreation. Many traits are just accidents, sometimes long-persisting, neutral in survival or procreative value. Conceivably, then, feeling could have arisen by accident and persisted by being harmless and accidentally associated with other, survival-enhancing traits. After all, as far as I can judge, feeling uses only a small portion of our brain volume and energy.

But I think there is a positive possibility also. Feeling could have survival value by making certain "computations" or decisions or reactions more efficient, as follows. A "robot" might have a hard time classifying sensory inputs so as to produce the appropriate responses. It might, for example, have to build a huge mental library of possibilities and associated strategies, and, for every complex input, run down an enormous checklist of possible dangers or opportunities and available responses. That needs a lot of volume and a lot of time. Feeling, on the other hand, may offer a neat method of categorizing inputs.
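The contrast between exhaustive checklist matching and feeling-based categorization can be sketched in a few lines of Python. Everything here--the function names, the toy feature lists, and the "valence" scores--is an illustrative assumption of mine, not anything from the original argument; the point is only that collapsing a complex input to a single signed quantity lets one graded rule stand in for a large lookup table.

```python
# Hedged sketch: two toy strategies for turning a sensory input into a
# response. All names and data are illustrative assumptions, not a real model.

# Strategy 1: the "robot" checklist -- one stored rule per recognized
# situation. Must grow without bound as the environment gets more varied.
CHECKLIST = {
    ("smoke", "heat"): "flee",
    ("food", "fresh"): "approach",
    ("food", "rotten"): "avoid",
    # ...thousands more entries would be needed for a real environment
}

def checklist_response(features):
    """Look the exact situation up; fail if it was never catalogued."""
    return CHECKLIST.get(tuple(features), "no stored response")

# Strategy 2: feeling as categorization -- collapse the input to one
# signed "valence" score, then apply a single graded rule.
VALENCE = {"smoke": -5, "heat": -3, "rotten": -6, "food": +4, "fresh": +1}

def felt_response(features):
    """One threshold rule replaces the whole checklist."""
    score = sum(VALENCE.get(f, 0) for f in features)
    if score <= -6:
        return "flee"      # smells REALLY bad
    if score < 0:
        return "avoid"     # smells bad
    return "approach"

# A situation never catalogued still gets a sensible response:
print(checklist_response(["smoke", "rotten"]))  # -> no stored response
print(felt_response(["smoke", "rotten"]))       # score -11 -> flee
```

The checklist needs storage and search time proportional to the number of anticipated situations, while the valence rule handles novel combinations of known features for free--a crude rendering of "smell bad, me no hang around."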
"Smell bad, me no hang around." Or, "Smell REALLY bad, me long gone." This, in the form given here, is certainly an extremely bare-bones suggestion, but I don't think it is vacuous.

6. And again on the possibility of thinking without feeling: we need to remind ourselves that MOST of the information processing in our own brains seems to have nothing to do with feeling. If I had time, I could also cite fascinating experiments showing that a person can DO or SAY something and be (apparently) entirely unaware of it! On a more mundane level, we walk (say) with minimal awareness; we do not consciously control the detailed muscle movements. Therefore, in a sense, we are to a considerable extent robots ourselves....

What I am getting at once more, of course, is the possibility of thinking without feeling (and therefore without being, without life as we know it). If feeling helps a system compete but is not necessarily essential to a reasonable degree of function, then we can indeed imagine this dichotomy--feeling systems (people) and non-feeling systems (including intelligent robots).

7. And yet again on criteria of survival. Various people may feel comfortable with various combinations of continuity of matter, identity of matter, continuity of information or persona, and identity of information or persona. But for every person who is comfortable with one of these, or some combination, there are also people who are entirely dissatisfied, and with good reason in the form of thought experiments. None of this proves anything, one way or the other. If you have a preference or a hunch, it's all right to say so, but it's not all right to try to convey the impression that you have arrived at a rigorously derived conclusion. We just don't know yet, and we don't have the information we need.

Robert Ettinger