Jean-Marc Fellous & Michael Arbib (eds.), Who Needs Emotions?, Oxford U. P. 2005.
Chapters 2 & 8. [Comments in square brackets are my own thoughts.]

Ch. 2: "COULD A ROBOT HAVE EMOTIONS?" (p. 9-25)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Ralph Adolphs

Though Adolphs is primarily a psychologist & neuroscientist, he seriously addresses the question of whether robots could actually *have* emotions, rather than just appearing to have them. He rejects purely behavioral criteria for having emotions, citing the example of a life-like -- but remote-controlled -- robot (p.11). [His skeptical remarks about the Turing test aren't quite fair in this connection, because Turing's test was aimed at an operational definition of human-level intelligence, not human-like emotion.]

Functional architecture, not behavior
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Adolphs thinks robots could indeed *have* emotions if their functional architecture is sufficiently like that of humans. The main requirement, he concludes eventually, is that there be a continuously updated SELF-MODEL which somehow "signals emotional value [of the currently attended-to situation, or of recalled or anticipated situations??] for the organism". These emotional values are tied to survival and "maintenance of homeostasis" [homeostasis refers to a stable, sustainable bodily state]. He identifies this signalling of emotional values with "feelings" [something that raises the usual "explanatory gap" questions, but which Adolphs doesn't address]. Unfortunately he has nothing specific to say about the nature of the self-model, or the value-signalling, or just what sort of causal role this value-signalling would have to play in order for a robot to have genuine emotions. [He wants the effect to be global, much as in Baars' global workspace idea.]

Most of the chapter is devoted to general remarks about the nature and role of emotions in humans (or other animals). He starts off by saying that we need to distinguish emotions from behavior in general, given that ALL behavior is ultimately adaptive and homeostatic (p.10). The way to do that is to look at functional architecture (p.12). (We don't need to make the robot out of "squishy cells", p.13.) We also need to distinguish emotions from reflexes, since emotions connect situations with responses much more flexibly (p.12).

Nature of emotions
~~~~~~~~~~~~~~~~~~
He says eventually (p.17) that 3 serial processing steps are involved in emotion [cf. Damasio, who was Adolphs' advisor]:

1. Perception (or recollection) of a situation; this involves higher-order sensory cortices, and specialized systems, e.g., for recognizing faces (& expressions) and biological motion;

2. Association of the perceptual representations with emotional response and motivation; this evaluates "the significance that the situation has for the organism, both in terms of its immediate impact and in terms of the organism's plans and goals" (p.14); this involves the amygdala, the ventral striatum [see p. 362?], and regions in the prefrontal cortex [esp. orbitofrontal cortex, I suppose]; the responses include effects on heart rate, the viscera, hormone release (endocrine system), and other autonomic systems;

3. Sensorimotor representation of that response; i.e., the responses in (2) are felt and represented (in structures such as the brain stem, and thalamic and cortical somatic and sensory regions, p.14), as is their relation to the perception of the situation and its social implications.
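[To pin down the three-stage flow for myself, here is a minimal schematic sketch -- my own, not Adolphs' -- of the serial pipeline; all class and function names (Percept, evaluate, etc.) are hypothetical placeholders:]

    # A schematic rendering (mine) of Adolphs' three serial processing steps.
    # All names are hypothetical placeholders, not from the chapter.

    from dataclasses import dataclass

    @dataclass
    class Percept:                 # step 1: perceived (or recalled) situation
        features: dict             # e.g., faces, expressions, biological motion

    @dataclass
    class EmotionalResponse:       # step 2: evaluated significance + bodily response
        significance: float        # impact relative to the organism's plans/goals
        autonomic: dict            # heart-rate, visceral, endocrine changes

    @dataclass
    class Feeling:                 # step 3: sensorimotor representation of (2)
        felt_response: EmotionalResponse
        relation_to_situation: Percept

    def perceive(situation) -> Percept:
        return Percept(features={"raw": situation})

    def evaluate(p: Percept, goals: dict) -> EmotionalResponse:
        sig = 1.0 if p.features.get("raw") in goals.get("threats", []) else 0.0
        return EmotionalResponse(significance=sig,
                                 autonomic={"heart_rate_delta": 20 * sig})

    def represent(p: Percept, r: EmotionalResponse) -> Feeling:
        return Feeling(felt_response=r, relation_to_situation=p)

    # The three steps chained serially, as in the chapter's account:
    p = perceive("snake")
    r = evaluate(p, {"threats": ["snake"]})
    feeling = represent(p, r)

[The point of the sketch is only the serial structure; everything Adolphs leaves unspecified (the self-model, the global effect of the value signal) is of course still unspecified here.]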
He says emotion processing is domain-specific, but it's reasonable to study basic states such as thirst, hunger, pain, and sex together with emotions that involve more subtle, more plastic evaluation (allowing, e.g., for action preparation and internal mapping), because they are neurologically related (p.15).

He also surveys various ways of characterizing emotions (p.16), e.g., Rolls' scheme that relates emotions to rewards and punishments:
    present reward         => pleasure
    present punishment     => distress
    anticipated reward     => excitement
    anticipated punishment => fear
    withheld reward        => anger, frustration, sadness
    withheld punishment    => relief
[I've modified these a bit to accord with my own recollection of Rolls.]

Emotions and social communication
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Again he argues against "faking it" as the best approach for developing emotional robots. His emphasis here is on the "simulation system" people seem to have for anticipating emotions in others, and for interpreting expressions of emotion. For example, he cites work by Buccino (2001) showing that "primates construct motor representations suited to performing the same action that they visually perceive someone else perform" (p.19-20). He also mentions emotions that can only occur in a social context (such as embarrassment, jealousy, shame, and pride), but reminds us that some emotions are nonsocial (e.g., fear of falling off a cliff).

[Question: to what extent do various animals display emotion -- think of a gnu being taken down by a lion; a calf or fledgling bird calling to its mother; a bull in the bullring; a pig shrieking on its way into the slaughterhouse. What are the evolutionary advantages, or lack thereof, of communicating emotions, for various animals? Is the fact that we communicate in language important?]

His conclusions about the nature of emotions and about emotional robots (p.21-23) were already summarized at the beginning of these notes.

Ch. 8: "THE ARCHITECTURAL BASIS OF AFFECTIVE STATES AND PROCESSES" (p. 203-244)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Aaron Sloman

I feel somewhat ambivalent about this chapter. Sloman is a very good philosopher, and his enthusiasm for AI (and his insistence that philosophers ought to look to computational modelling for insights) has made him something of a "Philosopher Royal" of AI. I also like that he endeavors to explore the structure of minds and emotions IN GENERAL, not just in humans or other animals. But on the other hand, his particular architectural suggestions are not very clear or convincing or helpful to me (as someone trying to actually construct self-aware, and perhaps conscious, AI systems). He severely chastises other writers for confusions and ambiguities in their writing on emotions, but I'm not convinced that he has cleared things up. Still, he sets up some worthy goals! The main themes are the following.

1. Functional components, needs and representational states
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
His goal is to give a definition and categorization of emotions in terms of "functional architecture". By functional components he seems to mean something like the components of an information-processing system, in the sense in which computer scientists would understand that term. Such components may or may not be identifiable with distinguished physical parts. In general they include perception and motor systems in addition to "central" processing components.
The term "functional", for Sloman, implies serving some "need" of the system as a whole, under certain conditions. Needs are certain kinds of states or activities of the system that can be viewed as desirable for that system, such as "continuing to survive", or "growing, reproducing, avoiding damage, or repairing damage" (p. 204). He also posits "intermediate states" that *represent* needs, and sensed facts, and thereby goals, desires, and beliefs. He doesn't really define these states (p.207-8), except to say that they have causal connections to action selection by the agent. In this way he thinks he can avoid much talk about the representational properties of such "states". [But for me, serious talk about internal representations, and what they mean and how they are processed, is precisely what is needed, and is lacking from the essay. He says on p.214 that belief-like states should have a truth-conditional semantics, and I agree, but think this needs technical underpinnings; Stan Rosenschein, long ago (mid-to-late 80s, see refs below) had a serious formal theory of such states. Sloman seems unaware of that work. Also the entire area of KR&R (knowledge representation and reasoning) in AI is concerned with beliefs, goals, desires, plans, etc., and I don't know why Sloman skirts around this; perhaps he doesn't want to get embroiled in (Jerry) Fodor-ese arguments about "mentalese" vs. "subsymbolic" representations.] 2. Affective states ~~~~~~~~~~~~~~~~ [IMPORTANT PRELIMINARY: Sloman's "states" should not be understood in the computer science sense of "a unique internal configuation of the system", where the system can be in one and only one such specific state at a time (setting aside quantum computers!) Rather, Sloman's "states" are ones like a state of anxiety, a state of being hungry, a state of having a certain belief, etc. -- ones that a particular agent may be in, SIMULTANEOUSLY. So in computer science terms, Sloman's states are SETS of specific states that exhibit some property in common; e.g., there are many *specific* states an agent as a whole can be in when it is in a state of anxiety. This observation has repercussions for the coherence (or otherwise) of the definition of affective states that follows.] After much talk about the muddled literature on emotion and affect (p. 208-212), he defines *affective* (desire-like) states as "states that have the function of detecting needs", for the purpose of triggering action that serves the need. [This seems to leave out affective states brought about by thinking, e.g., recalling memories. Also, from this it seems that animals are permanently in some affective state, since all their activities are presumably based on detection of some need, such as to eat, or find a safe shelter, or to sleep, or to evade predators or other dangers, or to reproduce, etc. Even a thermostat seems to be in an affective state, as it's constantly alert to changes in temperature, which trigger it into action. Probably Sloman would assent to all that, since he also concludes at the end that operating systems are emotional.] He terms "belief-like" states as non-affective. Recall, a system can be in an affective state, and at the same time be in a non-affective state. But presumably as long as it's in ANY affective state, it'll tend to be triggered into action. He further defines POSITIVE affective states as ones the system is disposed to strive toward or sustain, and NEGATIVE affective states as ones the system is disposed to avoid or terminate (p.215-216). 
[He puts this more verbosely.]

[I don't think this works too well. Since a system can be in many different states at once, how do we define, objectively, which ones it is striving toward or avoiding? To me it makes more sense to talk of rewards and punishments, and to view an agent as striving to optimize an overall utility metric. One could then view those aspects of a state that contribute positive terms to the overall utility metric as the ones with "positive affect", and analogously for negative affect. So this would be more in line with Rolls and Adolphs -- they are pretty vague as well, but more persuasive to me. Of course, discovering what the utility metric is (from the behavioral tendencies of an agent), and which aspects of a state are contributing positively or negatively to it, is a tough problem; but at least one can see (vaguely at least) how a conjecture about the metric could be evaluated objectively on the basis of observed behavior. I don't see how that's possible for Sloman's independently defined positive and negative affective "states". I think the problem is that Sloman is trying to put too direct a "utilitarian" spin on the behavior of animals, i.e., directly connecting emotion with "what's good for the organism". But I think the connection is indirect: animals (including us) *evolved* ways of attaching positive and negative values to perceived situations (external and internal), and acting so as to optimize these values *generally* has advantages for survival and reproduction, individually and as groups; but the emotions associated with these positive and negative values can't be understood as *directly* serving various needs.]

Sloman does detect a problem for affective states like depression (p.218), and the "damping" theory he proposes has some intuitive plausibility [but if such action damping is possible, then it seems to undercut his definition, whether he admits it or not; anyway, as I indicated, I think the difficulty is much broader]. He objects to affect-summing [but that seems to me precisely what's required; e.g., in trading off the exhilaration of skydiving against the scariness of it in deciding whether to do it or not; living involves constant trade-offs!]. He gives a large number of factors (p. 220) that he says should be used to subclassify affective states; e.g., whether they are general or aimed at specific goals, how long-lasting they are, whether they "include" (in what sense?) beliefs, etc.

3. Functional architecture and affect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
He superimposes his notion of affect on a 3-by-3 subclassification of the components of a cognitive system into {perception, central processing, action} components crossed with {reactive, deliberative, meta-management} layers (p.220-7). [Where would Ortony et al.'s routine level and cognitive level fit in?] Deliberative layers aren't present in all systems -- some systems may be purely reactive. He hints (p. 223) that a deliberative level requires abstraction of symbolic representations from the input, so that the input can be reasoned about; he also mentions modelling of other agents.

On p.224, he seems to take the view that emotions involve alarm systems. Though 2 of his figures show alarms arising at the reactive level, here he says that primary (basic) emotions correspond to alarms generated at the reactive level, and secondary emotions correspond to alarms generated at the deliberative level. Nonetheless, on p.228 he says the notion of basic emotions involves deep muddles. (A small sketch of the grid-plus-alarms terminology follows.)
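[A rough sketch -- my own reconstruction, not Sloman's code -- of the 3-by-3 grid and the alarm idea, just to keep the terminology straight. The enum names and the classify_alarm rule are my hypothetical rendering of "primary = reactive-level alarm, secondary = deliberative-level alarm":]

    # My own schematic rendering of the 3-by-3 grid and the alarm idea;
    # the names and the primary/secondary rule are my reconstruction.

    from enum import Enum
    from itertools import product

    class Column(Enum):
        PERCEPTION = "perception"
        CENTRAL = "central processing"
        ACTION = "action"

    class Layer(Enum):
        REACTIVE = "reactive"
        DELIBERATIVE = "deliberative"
        META_MANAGEMENT = "meta-management"

    # The nine architectural "boxes" of the grid:
    grid = {(layer, col): f"{layer.value} {col.value}"
            for layer, col in product(Layer, Column)}
    print(len(grid))  # -> 9

    def classify_alarm(origin_layer: Layer) -> str:
        """Alarm at the reactive level ~ primary (basic) emotion;
        alarm at the deliberative level ~ secondary emotion (p.224)."""
        if origin_layer is Layer.REACTIVE:
            return "primary emotion"
        if origin_layer is Layer.DELIBERATIVE:
            return "secondary emotion"
        return "meta-management-level alarm (not covered by the primary/secondary distinction above)"

    print(classify_alarm(Layer.REACTIVE))      # -> primary emotion
    print(classify_alarm(Layer.DELIBERATIVE))  # -> secondary emotion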
In Fig. 8.3, p.227, alarms are shown in a cluster associated with reactive processes. [In general, it seems that his "alarms" correspond roughly to the "interrupts" of Ortony, Norman, & Revelle.]

4. Defining emotions (p.229-233)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On p.230 he finally defines an emotional state as: a state an agent is in when it has detected some condition that is apt to interfere with its current or potential processing activity. [His phrasing is more complex.] It turns out that processing activities can be such things as setting the heart rate or pupil size, not just conscious activities.

[But is the notion of interfering with (interrupting, preventing, disturbing, or modulating) an activity or potential activity objectively distinguishable from activity that is simply responsive to inputs, but not in a way that disturbs or modulates it? I doubt it.]

[This is a bit startling and seems virtually vacuous. If emotions don't even need to be conscious, then they're nothing like the ones we informally talk about! Besides, as in the case of "affective states", it seems that every organism and every mechanism capable of any kind of ongoing information processing is then in a permanent emotional state when not asleep or turned off. After all, if a system is AT ALL responsive to inputs and to internal events, then its activities are affected by those events; ergo, it is in an emotional state! Even noticing a small rock in your path and adjusting your step to avoid it means you are in an emotional state. Even hearing a truck coming down the street as you're waiting for the bus, and glancing at it, puts you in an emotional state (because you wouldn't have glanced if you hadn't noticed it). And in fact, Sloman concludes (as mentioned) that operating systems are emotional. Aren't most Turing machines emotional on this analysis as well, since they are constantly detecting symbols on their tape that cause them to do something (change state, write a symbol, move left or right) they would not have done had the symbol not been there...?]

He talks about different kinds of fear (p. 231-2), and he's certainly right that there can be both very simple and direct triggers of emotions, and ones arising from thought and recall. [But it's unclear how such emotions fit with his action-triggering notions, since memories may evoke emotions without triggering action.]

Conclusions
~~~~~~~~~~~
Even though he says on p.239 that "operating systems running on standard computers have several emotions in our technical sense", he says prior to that (p. 234) that most so-called emotional agents developed so far are not emotional, because one could build an equivalent finite-state machine. But aren't all computers finite-state? And in any case, it seems that even finite-state machines, given that they can "change their course" in response to their inputs, are emotional on Sloman's definition. Anyway, Sloman says that implementing emotions is not a matter of implementing a separate capability, but something emergent from the architecture of a system (alarms and such). Like Adolphs, he expresses dissatisfaction with merely creating the illusion of emotional response, and says that emotion should be a natural consequence of building machines that function adaptively and intelligently in their environment. [Perhaps so, but it seems we're already there, on his definition.]
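[To make the finite-state-machine reductio concrete: a tiny sketch -- mine, not from the chapter -- of a two-state machine that "detects a condition" on its input and "changes its course" accordingly. On a literal reading of the p.230 definition, this machine is in an emotional state whenever it reads a 1:]

    # A two-state finite-state machine (my own toy) whose "course" is altered
    # by the symbols it detects on its input -- which, on a literal reading of
    # Sloman's p.230 definition, would seem to make it "emotional".

    def run_fsm(tape):
        state = "q0"
        trace = []
        for symbol in tape:
            if symbol == 1:
                # Detected condition interferes with the default course of processing:
                state = "q1" if state == "q0" else "q0"
            trace.append((symbol, state))
        return trace

    print(run_fsm([0, 1, 1, 0, 1]))
    # -> [(0, 'q0'), (1, 'q1'), (1, 'q0'), (0, 'q0'), (1, 'q1')]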
[One can take the weaknesses of Sloman's definitions as a challenge to proceed more formally (e.g., in the style of Stan Rosenschein), and to ensure that the resulting formal concepts at least resemble the intuitive ones we are trying to explicate. However, I doubt that this is the fastest way to the goal of understanding intelligence, consciousness, and emotion. I think we have to try to build conscious, intelligent agents (while also keeping our eyes on discoveries in neuroscience and cognitive science) as an integral part of our methodology in trying to gain a general understanding of consciousness and emotion. Informal analysis and schematic diagrams alone won't suffice.

The chief merit of the chapter, for me, is that the notion of "functional architecture" it appeals to -- essentially that of computer science -- offers greater hope of leading to a GENERAL conception of consciousness and emotion than a notion like Adolphs', which is much more closely tied to our specific brain architecture. (On the other hand, Adolphs is careful NOT to lump emotion together with adaptive processes in general, or with reflexes.)]

Refs:

Stanley J. Rosenschein, "Formal Theories of Knowledge in AI and Robotics", New Generation Computing 3(4): 345-357, 1985.

Stanley J. Rosenschein & Leslie Pack Kaelbling, "The Synthesis of Digital Machines with Provable Epistemic Properties", TARK 1986.