Jean-Marc Fellous & Michael Arbib (eds.), Who Needs Emotions?, Oxford U. P. 2005.
Chapters 7 & 5.  [Comments in square brackets are my own thoughts.]

Ch.7: "AFFECT AND PROTO-AFFECT IN EFFECTIVE FUNCTIONING" (p. 173-202)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Andrew Ortony, Donald Norman, and William Revelle

This chapter seemed well-suited as initial reading from the book because it
provides a bird's eye view of the functioning of organisms -- or robots.
It classifies this functioning along 2 dimensions:

LEVEL of information processing:    [My Examples]
- reactive      withdrawing hand from heat; balance
- routine       driving car; verbalizing a thought
- reflective    planning a talk; self-justification

and DOMAIN of functioning:  [My paraphrase:]  [My Examples]
- affect        feeling     feeling "bad"; agitation
- motivation    wanting     wanting food; wanting to flee
- cognition     thinking    assigning blame; explaining an event
- behavior      doing       walking; eating; speaking

The main emphasis is on describing affect at the various levels of processing,
and on relating this to personality and to robot design.

[Unfortunately the chapter makes very dry reading (somewhat ironic for a book
on affect and emotion!); it seems aimed at fellow researchers rather than a
more general audience. The style is jargon-laden and "academic"; there are
virtually no examples (the story of "how I reacted to a crocodile on the road"
is pretty much the only real-world example); and there is no attempt to back
up the claims made with physiological, psychological, or intuitive evidence --
except by citing lots of other articles without elaboration. An example of
turgid writing (p.190): "One of these [two perspectives on personality] seeks
to identify the primary dimensions in terms of which descriptions of
systematic regularities and differences across different times and different
places can be parsimoniously but informatively cast." Whew. This seems to mean
something like, "One approach to analyzing personality is to describe it in
terms of separate dimensions such as agreeableness, extroversion,
conscientiousness, etc." But these are eminent authors, so we can probably
assume that the citations really do help to back up their claims!]

Anatomy and consciousness
~~~~~~~~~~~~~~~~~~~~~~~~~
Anatomically, they roughly identify the reactive level of processing with the
spinal cord and midbrain basal ganglia (i.e., the striatum and other major
clusters of neurons in the vicinity of the thalamus, controlling movement),
the routine level with the cortex and cerebellum, and the reflective level
with the prefrontal cortex (p.177). They also identify the reflective level
with "consciousness", whereas the routine level only provides "awareness".
[It's unclear how this perspective relates to Blackmore's or Baars', who would
count experiences such as a pain in the foot as conscious experiences. Perhaps
O&N&R take for granted a "HOT" position on consciousness.] In their
terminology, it seems that nonprimate mammals and reptiles are "aware" but not
"conscious".

Flow of control
~~~~~~~~~~~~~~~
The control relationships they assume for the 3 levels of information
processing are that control flows both "downward" and "upward" through the
3 levels. However, the downward control flow from reflective to routine
processing is said to be a "biasing" effect rather than step-by-step control.
[Is this consistent with Libet's delay or readiness potentials?]
Also, the upward flow of control from the reactive level to the routine level,
and from the routine level to the reflective level, is assumed to be by
"interrupts", causing the higher level to "attend to the interrupting event"
(p.180) [which they don't try to explain further].

[Intuitively, something seems to be missing in this picture, where lower-level
and higher-level processing communicate only through downward biasing and
upward interrupts. Shouldn't each of the higher levels be receiving complex
*information* from the preceding level, abstracting from it, and in turn
passing this on to the next level? O&N&R's picture seems to be one where the
three levels somehow get their information independently, and interact only
through control signals. Have I misunderstood?]
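[To make the biasing-plus-interrupts picture concrete for myself, here is a
toy Python sketch -- my own construction, not anything given in the chapter;
all class names, parameters, and thresholds are invented.]

  # Each level senses the world on its own; higher levels send only a "bias"
  # downward, and lower levels send only "interrupts" upward.
  class Level:
      def __init__(self, name, interrupt_threshold):
          self.name = name
          self.interrupt_threshold = interrupt_threshold
          self.bias = 0.0                      # set by the level above

      def process(self, intensity):
          """React to a sensed event; return True to interrupt the level above."""
          evaluation = intensity + self.bias   # crude +/- "proto-affect"
          print(f"{self.name}: evaluation = {evaluation:+.2f}")
          return abs(evaluation) > self.interrupt_threshold

  reactive   = Level("reactive", interrupt_threshold=0.5)
  routine    = Level("routine", interrupt_threshold=0.8)
  reflective = Level("reflective", interrupt_threshold=float("inf"))

  def perceive(intensity):
      # Downward flow: only biasing, not step-by-step control.
      routine.bias  = -0.1   # e.g., the reflective level has decided to "stay calm"
      reactive.bias = -0.1   # e.g., the routine level damps the startle response

      # Upward flow: interrupts propagate level by level.
      if reactive.process(intensity):        # hard-wired pattern triggers a response
          if routine.process(intensity):     # routine level attends to the event
              reflective.process(intensity)  # full deliberation kicks in

  perceive(0.3)   # mild event: handled at the reactive level
  perceive(1.0)   # "crocodile on the road": escalates all the way up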
Levels of affect/emotions (and corresponding functioning)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The reactive level involves affect only as a kind of +/- parameter
("protoaffect"). This is triggered by hard-wired pattern recognition, and
elicits hard-wired responses (e.g., spitting out something bitter-tasting).

The routine level involves feelings like distress (a bad feeling about the
present), pleasure (a good feeling about the present), fear (a bad feeling
about the future), and excitement (a good feeling about the future), but
without a clear object. At this level implicit expectations and motivations
arise (not necessarily linked to immediate stimuli), and learned skills are
deployed unconsciously (e.g., expert driving). Disruption of normal routine
generates an interrupt (and a feeling of surprise).

The reflective level assigns an object to affective signs, yielding
"full-fledged emotions". For example, a feeling of fear may be elaborated into
a fear THAT (say) a crocodile in your path will attack you. Functions at this
level make use of constructed MENTAL MODELS of people, animals, and objects,
and interactions among them. These models also enable SELF-AWARENESS,
representation of the MINDS OF OTHERS, METACOGNITION, and conscious
expectations, reasoning, and planning [p. 185 -- hurrah, *models* at last!].

They tell the crocodile-encounter story to make the point that the sense of
panic, and the evasive action, may occur prior to any deliberate, conscious
thought -- i.e., both are reactive and/or routine. At the same time, they say
that deliberate thinking may kick in very quickly, perhaps if instant action
isn't called for or isn't possible [to which I can testify from some
bear-encounters].

Personality
~~~~~~~~~~~
People's personalities are all similar in the sense that they involve the
3 levels of information processing. But they differ to the extent that they
involve different "parameter" settings. For instance, some people may have
stronger "approach" tendencies at the reactive level, while others may tend
more towards avoidance. This might lead to a bold/shy personality difference.
Someone with a generally positive bias in evaluating current or future
situations at the reactive and routine levels, and with strong approach
tendencies (rather than avoidance), might be seen as extroverted [or if you
like "extraverted"], while someone with a negative bias in affect and
avoidance tendencies might be seen as introverted. [To me, this sounds almost
tautologous -- after all, what is extroversion if not a tendency to seek
interaction with others and feel positive about it?]
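[The "personality as parameter settings" idea can be put in toy code form.
This sketch is my own gloss, not the authors'; the parameter names, numbers,
and the extrovert/introvert readings are all invented.]

  from dataclasses import dataclass

  @dataclass
  class Temperament:
      valence_bias: float      # >0: rosy evaluation of situations, <0: gloomy
      approach_gain: float     # strength of "approach" tendencies
      avoidance_gain: float    # strength of "avoidance" tendencies

      def appraise(self, situation_value):
          """Evaluate a situation (-1 bad .. +1 good), shaded by temperament."""
          return situation_value + self.valence_bias

      def action_tendency(self, appraisal):
          approach = self.approach_gain * max(appraisal, 0.0)
          avoid    = self.avoidance_gain * max(-appraisal, 0.0)
          return "approach" if approach >= avoid else "avoid"

  # Same machinery, different parameter settings:
  people = {
      "extrovert": Temperament(valence_bias=+0.2, approach_gain=1.5, avoidance_gain=0.5),
      "introvert": Temperament(valence_bias=-0.2, approach_gain=0.5, avoidance_gain=1.5),
  }

  party_invitation = 0.1   # a mildly promising social situation
  for label, person in people.items():
      print(label, "->", person.action_tendency(person.appraise(party_invitation)))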
Robots and feelings/emotions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The authors say they "finesse" the question of whether robots could actually
feel anything, by focusing on functionality: robots will need affect/emotions
for the same reasons people do -- e.g., to appear life-like to humans
interacting with them, to act self-protectively or opportunistically both in
situations where a response is urgent and when there's time to think, to be
open to experience and learning, to favor actions that are seen as having a
good outcome and avoid ones that are expected to have negative outcomes, to
reflect on their own activity and the activity of others so as to better
understand why people (and other robots) do what they do, and so on.
[However, I don't see that any of this actually *requires* anything beyond
numerical evaluation of potential benefits and costs, and probability-based
decision-making -- see the sketch below. It leaves entirely open the question
they finesse, whether robots would actually have any subjective feelings at
all.]
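[A deliberately feeling-free toy sketch of what I mean -- expected-utility
choice over candidate actions. This is my own illustration, not anything the
authors propose; the actions, probabilities, and utilities are made up.]

  def expected_utility(outcomes):
      """outcomes: list of (probability, utility) pairs for one action."""
      return sum(p * u for p, u in outcomes)

  candidate_actions = {
      # action: possible outcomes as (probability, utility) pairs
      "cross the river here": [(0.9, +5.0), (0.1, -100.0)],   # small crocodile risk
      "walk to the bridge":   [(1.0, +3.0)],                   # slower but safe
      "wait and observe":     [(1.0, +0.5)],                   # buy time to "think"
  }

  for action, outcomes in candidate_actions.items():
      print(f"{action:22s} EU = {expected_utility(outcomes):+7.2f}")
  best = max(candidate_actions, key=lambda a: expected_utility(candidate_actions[a]))
  print("chosen:", best)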
Ch.5: "WHAT ARE EMOTIONS, [& WHY DO WE HAVE THEM, ETC.]" (p. 117-146)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Edmund Rolls

Rolls starts out by arguing at length that emotions are elicited directly or
indirectly by good and bad experiences. [When I put Rolls' point in this
general way, it seems pretty uncontroversial; but the specific argumentation
in terms of "stimuli", "operant responses", "rewards" and "punishers" is one I
find rather puzzling, seemingly out of touch with modern cognitive science.
Now, Rolls is a neuroscientist who has contributed significantly to our
knowledge of brain processes (in rats, primates, and people), so why this
Pavlovian/Skinnerian terminology? My conclusion is that stimulus-response
behaviorist talk is alive and well in neuroscience. In a way, this is
understandable: shocking a rat's foot, stimulating a brain region with an
inserted electrode, or measuring neuronal activation levels are indeed (by
design) stimulus-response situations. But as applied to organisms as a whole,
the terminology seems more suitable for maze-learning by rats than for human
behavior, or even for behavior by rats in more complex, fluid, selectively
perceived, interpreted situations. In the real world, for creatures like rats
or humans, "stimuli" become situations like noticing a sexual rival making
advances on a mate, and "operant responses" become ones like challenging and
threatening the rival; or "stimuli" are ones like noticing the tracks of an
antelope (or a saber-tooth tiger) in the mud, and an "operant response" one
like tracking and hunting the antelope, or deciding to shout a warning to your
fellow-hunters about the apparently nearby tiger. Such complex abstractions
from the world cannot just be taken for granted as "givens", as "stimuli" that
set processing in motion -- their derivation by the organism is itself an
integral, highly sophisticated part of the processing it performs... But it's
worth swallowing hard and going on, because eventually Rolls has very
interesting things to say. :-]

Evolution and rewards and punishments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Rolls takes one of his most important points to be an evolutionary one: a
genetically built-in system of rewards and punishments (experiences to be
sought out, and experiences to be avoided) is superior to a genetically
built-in system for responding to particular inputs with particular actions
(e.g., plants' tendency to turn towards the light -- "taxes" (in this case, a
"tropism"), or a micro-organism's tendency to move along some chemical
gradient). Costs are also taken into account: e.g., expending a lot of energy
to get to a food source detracts from the reward, compared to a more easily
obtained reward. The superiority of the reward-and-punishment system comes
from its adaptability -- the action that is appropriate for a particular kind
of input can be learned associatively, rather than being fixed genetically.

[Good point, I think, and one that reminds us that in designing robots we
should not rely on behavioral rules (let alone preprogrammed behavior) but
rather should base our designs on autonomous learning, where the learning is
driven by the seeking of rewards and the avoidance of distress, through
goal-directed behavioral choices. But I think we have to avoid the 20th
century behaviorist error of paying too little attention to the
representations and thought processes that are needed for high-level cognitive
performance -- *intelligent* behavior, not just "adaptive" behavior!]

This learning involves two stages for Rolls: learning the association between
a given perceived situation ("stimulus") and a reward or punishment
experienced in such situations; and learning actions that tend to lead to the
rewarding situations and to avoid the punishing situations. (A sketch of this
two-stage picture follows below.)

[However, I think that at least mammals can also experience "anticipatory joy
or fear" -- rewards and punishments that can come well in advance of the more
direct, basic rewards and punishments. For example, a person about to go on a
date may *look forward with pleasure* to the anticipated experiences
(similarly for going to a good restaurant, movie, concert, etc.), and
conversely may anticipate with anxiety or even misery a dental appointment, a
dangerous military mission, or a jail term. Even pets will joyously anticipate
being fed or taken on an outing, or cringe and resist when being transported
to the vet (by some previously experienced mode of transport). So at least
mammals learn to associate a *present* stimulus with a (possibly rather
distant) *future* reward or punishment; and this somehow produces "pseudo-"
rewards or punishments in advance of the real thing, perhaps by
"visualization" of the anticipated future situations, at least in the case of
humans. People do tend to feel good when imagining pleasurable situations, and
bad when imagining painful ones; probably the effect is stronger if the future
situation is actually expected, not just idly visualized.]

Natural selection then tunes this reward-and-punishment system to optimize the
reproductive fitness of the organisms in their particular type of environment
(including social environment). This leads, for instance, to enjoying food,
but reaching a point of satiation where enjoyment declines, since overeating
is not good for survival. It also leads potentially to kin altruism and
reciprocal altruism.
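[To pin down the two-stage picture (and my "anticipatory reward" point) I
found it helpful to write a toy sketch. It is my own construction, not Rolls';
the miniature world, the names, and the learning rate are all invented.]

  # Stage 1: learn how rewarding each situation ("stimulus") is.
  # Stage 2: learn which action in each situation leads to rewarding ones.
  # The learned situation values double as "anticipatory" rewards: a situation
  # that merely *leads toward* food starts to be valued in advance.
  import random
  from collections import defaultdict

  ALPHA = 0.3                                     # learning rate
  situation_value = defaultdict(float)            # stage 1: situation -> value
  action_value = defaultdict(float)               # stage 2: (situation, action) -> value

  # A trivial world: from "hungry" you can "forage" (leads to "food") or "rest".
  transitions = {("hungry", "forage"): "food", ("hungry", "rest"): "hungry"}
  primary_reward = {"food": 1.0, "hungry": -0.1}

  def experience(situation, action):
      next_situation = transitions[(situation, action)]
      r = primary_reward[next_situation]
      # Stage 1: associate the situation reached with the reward received there.
      situation_value[next_situation] += ALPHA * (r - situation_value[next_situation])
      # Stage 2: credit the action with the (anticipated) value of where it leads.
      target = situation_value[next_situation]
      action_value[(situation, action)] += ALPHA * (target - action_value[(situation, action)])

  for _ in range(50):
      experience("hungry", random.choice(["forage", "rest"]))

  print(dict(action_value))   # "forage" from "hungry" ends up valued higher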
Two routes to action (p.124-6, 131-6)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Rolls also says that the system for choosing actions has short-term and
long-term components, where the long-term components involve deliberate,
language-related planning (p.125, fig. 5.2). [However, the figure actually
seems to involve 3 levels of processing -- cf. Ortony, Norman & Revelle.]
He discusses this distinction again on p.131-136, oddly without immediate
reference to the previous discussion. Setting aside genetically determined
reflexes (which are not based on reward and punishment), the two routes to
action involve:

- primary & secondary cortex, amygdala and orbitofrontal cortex, striatum,
  thalamus, and premotor cortex, leading to IMPLICIT responses (by which he
  means ones not consciously considered). P.136 has an interesting discussion
  (based on animal experiments) of the factors involved in the learning and
  execution of behaviors based on this route. The amygdala and orbitofrontal
  cortex are said to be the locus of associations between particular
  situations and corresponding positive or negative experiences.
  [A thought about robots: if we build a similar reward/punishment system into
  a robot based on conventional computer architecture, it seems that the robot
  -- assuming human-level intelligence -- could learn all about its own
  architecture, including its reward system, and proceed to reprogram it to
  feel continuous pleasure, i.e., a kind of self-drugging!! Would this be
  avoided as long as the robot does not get *anticipatory* pleasure from the
  thought of boosting its own rewards? Would it be avoided by designing the
  robot so that it seeks to maximize long-term cumulative rewards, and will
  realize that if it just sits there exploiting its own reward system,
  someone, or some other robot, will soon turn it off or destroy it? I'm not
  at all sure ...]

- much the same route as above, but with additional involvement of the
  "language cortex" and "cortical and motor planning areas", leading to
  EXPLICIT (consciously considered) responses; the planning is said to be done
  by if-then rules. [I see the influence here of both classical behaviorism
  and Newell-Simon-like rule-based behavior; in AI one modern representative
  of this perspective is Newell's former student John Laird, with his SOAR
  architecture. It's worth noting here that Arbib, in ch. 12, disputes the
  idea that planning based on if-then rules is language-related, and that such
  rules involve language areas like Broca's area.]

Note that it is this second route that Rolls regards as related to
consciousness (p.134). Rolls is a HOT theorist, who maintains that
consciousness requires higher-order thought (thought about thought). [Recall
the view of Baars and others, however, who point out that one can be deeply
engrossed in a painting or in a piece of music and thus surely conscious of
it, yet this sort of "absorbed state" by its nature does not involve thought
about thought.] His view is that complex multistep planning, which occurs in
the prefrontal cortex (in the dorsolateral and inferior convexity), requires
checking steps of the plan -- a HOT activity. INTERESTING VIEW!

The evaluation of outcomes as good or bad seems to be done by the amygdala.
(Interestingly, damage to the amygdala may leave a patient with the sense that
he's experiencing pain -- but without the unpleasantness! Demerol and nitrous
oxide tend to have that sort of effect ...)

Interaction between the 2 routes (p.134-6)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A very interesting conjecture on p.135, following the discussion of the
connection between the two routes to action, is that the implicit system may
"highlight" certain aspects of what is currently perceived to "guide the
attention of the explicit system to such stimuli". [Compare with the
"interrupt" mechanisms postulated by Ortony et al. I am convinced that an
effective AI behavioral system needs to be guided not only by "deliberate"
regression planning (chaining back from goals to those actions that seem
capable of achieving the goals) but also by "suggestions" that flow from
learned associations between situation types and good actions to carry out, or
goals to pursue, in those types of situations -- precisely the kind of
"guidance" Rolls is referring to; a toy sketch follows below.]

He goes on (p.136) to comment on the relation between his
double-route-to-action model (which can lead to conflict between "impulsive"
and deliberately planned action) and psychoanalysts' notions of the "id"
(source of impulses), the ego (enforcer of external constraints on impulses),
and the superego (enforcer of internal constraints on impulses).
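[A toy sketch of the two routes and of the "highlighting" conjecture, as I
picture them -- my own construction, not Rolls'; the situations, rules, and
association strengths are all invented.]

  # Implicit route: learned situation -> action associations, with strengths.
  implicit_associations = {
      "rustling in the grass": [("freeze", 0.9), ("keep walking", 0.2)],
      "tracks in the mud":     [("follow tracks", 0.7), ("ignore", 0.3)],
  }

  # Explicit route: simple if-then planning rules (condition -> action).
  explicit_rules = [
      (lambda s, goal: s == "tracks in the mud" and goal == "find food",
       "follow tracks"),
      (lambda s, goal: s == "rustling in the grass" and goal == "stay alive",
       "back away slowly"),
  ]

  def implicit_response(situation):
      """Fast route: the strongest learned association wins."""
      options = implicit_associations.get(situation, [])
      return max(options, key=lambda x: x[1])[0] if options else None

  def explicit_response(situation, goal, suggestion=None):
      """Slow route: attend first to the option "highlighted" by the implicit
      route, but let an applicable if-then rule override it."""
      for condition, action in explicit_rules:
          if condition(situation, goal):
              return action                      # deliberate plan wins
      return suggestion or "deliberate further"  # otherwise fall back on the hint

  situation, goal = "rustling in the grass", "stay alive"
  hint = implicit_response(situation)            # highlighted by the implicit route
  print("implicit suggestion:", hint)
  print("explicit decision:  ", explicit_response(situation, goal, suggestion=hint))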
Brain systems underlying emotion (p.136-142; pictures p.137-8)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Rolls gives a detailed discussion of the pathways for processing visual,
somatosensory (touch), gustatory (taste), and olfactory (smell) signals, and
of the various brain regions involved. Note the close anatomical similarity
between the macaque monkey brain (only about 6 cm deep) and ours...

The amygdala receives special attention, because of its central role in
experiencing pleasure and pain. Rolls says that in both humans and macaques, a
certain part of the amygdala is also responsible for recognizing faces and
facial expressions. So such recognition is closely linked to emotion!
[The amygdalae are known to be a factor in a person's social interaction with
others, tend to be smaller in bipolar individuals, and their connectivity
(which may be stronger in the left or right amygdala) is correlated with
sexual orientation.]

The orbitofrontal cortex seems to be where learned associations between
particular "stimuli" and particular rewards reside (p.140). In humans it also
seems to encode facial expressions (abstractly?) and (abstract representations
of?) rewards and punishments, including ones like winning and losing money
(p.142).

Conclusion
~~~~~~~~~~
Rolls says it may be possible, and for some purposes desirable, to implement
emotions in computers, and that the findings he presents may be helpful in
this regard. However, he also thinks that rewards and punishments are the
evolutionary outcome of "selfish genes", and that there may not be any reason
to duplicate this outcome in machines which (he says) are designed for
particular engineering goals such as terrain exploration, relaying pictures,
heavy hauling, and machine translation. [*THAT* I find to be a remarkable
failure of imagination.]