Jean-Marc Fellous & Michael Arbib (eds.), Who Needs Emotions?,
Oxford U. P. 2005.  Chapters 9 & 10.

[Comments in square brackets are my own thoughts.]

Ch. 9: "MOVING UP THE FOOD CHAIN: Motivation & Emotion
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        in Behavior-Based Robots" (245-269)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        Ronald Arkin (Georgia Tech.)

Both this chapter and the next are concerned with "behavior-based
robotics", which has flourished for many years now at MIT (under the
leadership of Rodney Brooks) as well as Georgia Tech and many other
places. The idea is to view robot behavior not in terms of reasoning
and higher-level cognition, but in terms of coordinated "behaviors"
of various types, such as foraging, feeding, predator avoidance,
etc., as in 20th-century studies of animal behavior (ethology). A
given set of behaviors is governed by a behavior coordination module
that activates and terminates behaviors based on perceptual and
internal parameters. The behaviors can be hierarchically organized;
for example, while at an intermediate level a coordination module
might control foraging, feeding, evasion, sleeping, etc., any one of
these, such as feeding, might in turn consist of more specific
behaviors, controlled by a feeding module, such as biting off a chunk
of food, chewing, and swallowing.

This chapter starts with insect-like robots and moves on to dog-like
and humanoid robots -- but in all cases, the level of cognition (at
least the types of cognition that are regarded as affecting
emotion/motivation) remains at a very simple, insect-like level. This
tends to be generally true in behavior-based robotics, and Brooks has
said that "I always thought AI stood for Artificial Insects".
(However, younger researchers such as Deb Roy at MIT and Matthias
Scheutz at Tufts are very much concerned with linguistic interaction
and language learning in behavior-based robots.)

What's interesting, though, is that the encoding of affect that is
used consists of giving certain parameters positive or negative
values, as a function of perceptions and other aspects of the current
state, and using these values to influence the current behavioral
dispositions of the robot. Isn't that pretty much what several of the
theories of affect (at least the basic sorts of affect) we have
looked at call for? For example, Ortony et al. (ch. 7) had +/- valued
(proto)affect at the reactive level, and present- or future-directed
good or bad feelings at the routine level. Rolls' "first route to
action" was based on associating situations with good and bad
feelings, and learning actions for attaining or averting such
situations. Adolphs (ch. 2) also associates perceived situations with
an emotional response or motivation (which then leads to a bodily
emotional response and a representation of that response). Sloman
(ch. 8) also posits negative and positive affective states, and
various pathways by which these send "alarms" to various subsystems,
including the motor system.

So, even though Arkin is careful to disclaim cognitive validity for
his robotic mechanisms, those mechanisms nonetheless seem quite
compatible with what the theorists and neuroscientists have written.
The implication, then, in chapters like Adolphs' or Sloman's that the
roboticists are faking it isn't really accurate: the robots really
*have* (primitive sorts of) emotion, in the very sense these writers
advocate.
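
[To make the "signed affect parameters" idea concrete, here is a
minimal Python sketch of how a behavior coordination module of the
kind described above might use such values. It is my own
illustration, not Arkin's or anyone's actual code; all names and
numbers are invented.]

    # A hypothetical sketch (not Arkin's actual code): signed affect
    # parameters biasing which behavior a coordination module activates.

    class Behavior:
        def __init__(self, name, base_gain):
            self.name = name
            self.base_gain = base_gain   # intrinsic priority of the behavior

    def coordinate(behaviors, percepts, affect_bias):
        """Select the behavior with the highest affect-modulated activation.

        percepts:    perceptual evidence for each behavior, in [0, 1]
        affect_bias: signed affective parameters, keyed by behavior name
                     (positive = drawn toward it, negative = suppressed)
        """
        def activation(b):
            return (b.base_gain * percepts.get(b.name, 0.0)
                    + affect_bias.get(b.name, 0.0))
        return max(behaviors, key=activation)

    behaviors = [Behavior("forage", 1.0), Behavior("evade", 1.5),
                 Behavior("sleep", 0.5)]
    # A looming "predator" percept plus a positive fear-like bias toward
    # evasion (and a negative bias against sleeping) selects evasion.
    chosen = coordinate(behaviors, {"forage": 0.4, "evade": 0.6},
                        {"evade": 0.9, "sleep": -0.5})
    print(chosen.name)   # -> evade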
Of course, what's lacking is the higher levels of cognition that many
of the theorists talk about -- e.g., the cognitive level of Ortony et
al., Rolls' "second route to action" (involving the language cortex),
Fellous & LeDoux's working memory and memory-based processing in
"contextual fear" and "emotional memories", Adolphs' self-model and
internal representations, and Sloman's deliberative and
meta-management layers. That is why "behavior-based robotics" needs
to get together with "cognitive robotics"!

Further, from a broader, Blackmore-like perspective, the fundamental
issue that remains open even at the level of very simple affective
states is WHETHER THE ROBOTS ARE EXPERIENCING ANYTHING AT ALL!

[I doubt we can say they do... Just because the artificial mantis has
a "fear" parameter induced by presumed "predator" objects in its
vicinity, and this causes it to avoid those objects, doesn't mean
it's *actually* afraid of those objects! In fact, ANY neural-net-like
device (by definition) operates on the basis of positive (excitatory)
and negative (inhibitory) parameters, but this surely doesn't imply
that it is emotional or even proto-affective. This line of thought
rather strongly suggests that "If it doesn't think, it doesn't
feel!"]

Motivations/emotions
~~~~~~~~~~~~~~~~~~~~
Arkin wrestles with the emotion/motivation distinction, suggesting
that "emotion" is restricted to modulation of current behavior, while
"motivation" is the broader term, allowing as well for the setting of
goals. He is generally more comfortable with the term "motivation".
[I would say, because it has less of an implication of consciousness
than "emotion" -- though he doesn't put it that way.] His goal is to
use emotional/motivational parameters to boost a robot's
"survivability" and effectiveness in social interactions (p.247).

Tolman's sowbug (woodlouse)
~~~~~~~~~~~~~~~~~~~~~~~~~~~
This was an early cybernetic model (1920's & 30's). [I have heard of
an even earlier "light-seeking dog" built by Hammond around 1920.] It
was intended to have photosensors and to move among obstacles in
search of food. It had motivation parameters for going after food
(hunger), for changing direction, and for moving forward. In
principle Tolman also wanted to allow for developmental parameters,
rest periods, and such. Arkin and collaborators implemented this
model in the late 1990's, apparently around the same time (or only
slightly before) the Mantid and Aibo work (below). Arkin says little
about experimental results; the simple robot did indeed tend to move
toward preferred types of objects (side-by-side objects of the same
color, vs. side-by-side objects of different color). The main reason
for his mentioning the sowbug model at all seems to be as a way of
acknowledging historical antecedents, and his commitment to an
ethologically inspired approach to robotics (thus the behaviorist
tone).

Mantid (p.251-4)
~~~~~~~~~~~~~~~~
This 6-legged robot has a broader range of behavioral modes (and
associated motivations): prey acquisition, predator avoidance,
mating, and searching for a hiding place/hunting ground. The success
criterion in all cases is just based on getting next to, or away
from, some object -- a food object, predator, mate, or hiding place
(just various types of stationary blocks). Behaviors are selected
based on the level of hunger, fear, or sex drive, and proximity to
food, predator, mate, or hiding place. Hunger and sex drive increase
linearly with time and drop to 0 upon satisfaction. Obstacle
avoidance interrupts what is otherwise forward motion as determined
by the current behavior. The overall behavior is as you might expect
(though Arkin really has nothing to say about it) ...
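
[Here is a minimal Python sketch of how the Mantid's drive dynamics
and behavior arbitration might be realized, as far as I can
reconstruct them from the above; the growth rates, weights, and
winner-take-all rule are my guesses, not Arkin's implementation.
Obstacle avoidance would override the forward motion of whichever
mode wins.]

    # Hypothetical reconstruction of Mantid-style behavior selection;
    # the linear growth rates, weights, and reset rule are guesses.

    DRIVE_RATES = {"hunger": 0.01, "sex_drive": 0.005}   # per time step

    def update_drives(drives, satisfied, dt=1.0):
        """Hunger and sex drive grow linearly with time; a drive drops to
        0 once the corresponding behavior succeeds (eating, mating)."""
        for d, rate in DRIVE_RATES.items():
            drives[d] = 0.0 if d in satisfied else drives[d] + rate * dt
        return drives

    def select_behavior(drives, fear, proximity):
        """Pick the mode whose drive-times-proximity product is largest.
        proximity: closeness in [0,1] to food, predator, mate, hide site."""
        scores = {
            "predator_avoidance": fear * proximity["predator"],
            "prey_acquisition":   drives["hunger"] * proximity["food"],
            "mating":             drives["sex_drive"] * proximity["mate"],
            "find_hiding_place":  0.1 * proximity["hide_site"],  # weak default
        }
        return max(scores, key=scores.get)

    drives = update_drives({"hunger": 0.6, "sex_drive": 0.2}, satisfied=set())
    prox = {"predator": 0.0, "food": 0.8, "mate": 0.3, "hide_site": 0.5}
    print(select_behavior(drives, fear=0.1, proximity=prox))
    # -> prey_acquisition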
Attachment theory and robotic pets (p.255-261)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Arkin used psychological theories of attachment (typically of an
offspring to the mother) to guide work on robotic dogs like Aibo.
Basically, the strength of the attacher's desire to be closer to the
attachee [if you'll excuse the word] is the product of 3 terms (apart
from a constant multiplier), respectively linear in (a) the strength
of the bond, (b) the distance from the attachee (up to some max
value, and with value 0 if "close enough"), and (c) the comfort level
(this has a negative coefficient within a range of sufficient
comfort, i.e., the agent is less drawn to the attachee when
comfortable; the linear dependence levels off at the low and high
ends of the range of sufficient comfort). [A sketch of this formula,
as I read it, appears at the end of this section.] The intention is
that the comfort level is dependent on the familiarity of nearby
objects (and other factors -- maybe perceived threats?). However, the
experimental results shown on p.259 (on the amount of time spent by
Aibo close to the attachee, a fixed point) were apparently obtained
by hand-setting various comfort levels. As you might expect, the
lower the comfort level, the more time is spent near the attachee.
[But I presume that in "free" mode the comfort levels are derived
from other parameters?]

Arkin says they also use "pleasantness", "arousal", and "confidence"
parameters (p.260), in terms of which they approximate emotions of
happiness, anger, sadness, fear, surprise, and disgust (the sort of
classification Ortony et al. referred to, and which Sloman seemed to
regard as poorly founded). [I would guess, for instance, that anger
would be modelled as low pleasantness, high arousal, and high
confidence; that fear would be modelled as low pleasantness, high
arousal, and low confidence; etc.]

[Unfortunately I could not get a real sense of how this robot works,
or what varieties of behavior and behavior changes it is capable of;
not a well-written article, in my opinion. Arkin just says (p.261)
that "a behavior is selected based on inputs derived from external
stimuli and the robot's current motivational state variables."
Humph, I could have guessed that much.]

There's vague talk about "symbol grounding" on p.261, which [as far
as I can make out] amounts to being able to give brief voice commands
when the robot is attending to some object, causing the robot to
change its behavior with respect to that object [to my considerable
consternation, no specific examples are given, almost as if this
paper had been post-edited by the Sony intellectual property
lawyers].

The Sony Dream Robot (p.262), later called QRIO (and terminated in
2006), was intended to be capable of "emotional expression". [I
thought this was true of Aibo too; but then "emotional expression" is
so vague as to be nearly vacuous.] It also was intended to have
planning capabilities, not described here for proprietary reasons!
However, one learns from the web that QRIO was able to walk, run, and
dance (in a kind of low-key but amusing fashion), and had some
capacity for voice and face recognition, and for remembering people's
likes and dislikes.
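
[As promised above, here is my guess at the attachment-strength
computation, in Python. Only the general shape -- a constant times
three roughly linear terms in bond strength, distance, and
(negatively) comfort -- comes from the chapter; the constant, the
"close enough" radius, the distance cap, and the comfort range are
invented numbers.]

    # A guess at the attachment-strength formula as I read it (p.255ff);
    # K, D_CLOSE, D_MAX, C_LO, and C_HI are made-up values.

    K = 1.0                       # constant multiplier
    D_CLOSE, D_MAX = 0.5, 5.0     # "close enough" radius, distance cap
    C_LO, C_HI = 0.2, 0.8         # range of "sufficient comfort"

    def attachment_strength(bond, distance, comfort):
        """Product of three terms: bond strength, distance to the
        attachee, and (negatively weighted) comfort level."""
        # (a) linear in the strength of the bond
        bond_term = bond
        # (b) 0 if already close enough; otherwise linear, capped at D_MAX
        dist_term = 0.0 if distance <= D_CLOSE else min(distance, D_MAX)
        # (c) negative slope within the comfort range, leveling off at
        #     both ends: the more comfortable, the weaker the pull
        c = min(max(comfort, C_LO), C_HI)
        comfort_term = (C_HI - c) / (C_HI - C_LO)
        return K * bond_term * dist_term * comfort_term

    # Lower comfort -> stronger pull toward the attachee, which at least
    # matches the hand-set comfort-level results reported on p.259.
    print(attachment_strength(bond=1.0, distance=3.0, comfort=0.3))  # ~2.5
    print(attachment_strength(bond=1.0, distance=3.0, comfort=0.7))  # ~0.5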
Subsequently a new "Asimo" robot was developed (see
http://en.wikipedia.org/wiki/ASIMO), twice as tall (4') as QRIO, and
with impressive running (6 km/hr, with a fairly natural gait),
stair-climbing, and grasping skills (have a look on YouTube!). It can
greet, learn to recognize about 10 different faces, recognize some
objects/obstacles, movements, and gestures, shake hands, distinguish
voices from other sounds, respond to some questions, and pick up and
deliver (specially designed) trays of food. It's hard to know whether
Asimo's designers tried to incorporate emotions into Asimo in any
real sense; but since there is no real higher-level cognition, any
emotions built into it are bound to be simple dispositional
parameters, as in all its predecessors.

Current work in progress at Georgia Tech (p.263-5)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
They want to add
@ *traits* (based on the OCEAN model: "Openness",
  "Conscientiousness", "Extroversion", "Agreeableness", and
  "Neuroticism" dimensions); [they cite McCrae & Costa 1996 for this
  5-factor model]
@ *moods* (based on two "affect categories" that are not described,
  with influences from environmental conditions, and time of
  day/week...)
@ *attitudes* (preferences for favorable or unfavorable responses to
  particular objects [likes and dislikes, in other words]).
The figure on p.264 indicates that (as you would expect) attitudes
and traits influence emotions and moods, and vice versa, and the
emotion and mood parameters together with perceptions determine
choices of behaviors. Multiple behaviors can be active, and are
coordinated by a behavior coordination module.
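
[Since the chapter leaves the details open, the following Python
fragment is pure guesswork at how the traits/moods/attitudes/emotions
layering of the figure on p.264 might be wired together; every name,
number, and update rule is my own assumption.]

    # Guesswork at the p.264 scheme: slow-changing traits and attitudes
    # modulate faster-changing emotions and moods, which (with percepts)
    # bias behavior selection.

    traits = {"openness": 0.5, "conscientiousness": 0.5,
              "extroversion": 0.7, "agreeableness": 0.6,
              "neuroticism": 0.3}                    # OCEAN dimensions
    attitudes = {"red_ball": +0.8, "vacuum": -0.4}   # likes/dislikes
    moods = {"positive": 0.0, "negative": 0.0}       # two affect categories
    emotions = {"joy": 0.0, "fear": 0.0}

    def update_affect(percept, env_valence):
        """Attitudes and traits push emotions and moods up or down."""
        emotions["joy"] += (attitudes.get(percept, 0.0)
                            * (1 + traits["extroversion"]))
        emotions["fear"] += (max(0.0, -env_valence)
                             * (1 + traits["neuroticism"]))
        moods["positive"] += 0.1 * env_valence   # slow drift with conditions
        moods["negative"] -= 0.1 * env_valence

    def behavior_bias(behavior):
        """Emotions and moods jointly bias behavior selection."""
        if behavior == "approach":
            return emotions["joy"] + moods["positive"] - emotions["fear"]
        if behavior == "withdraw":
            return emotions["fear"] + moods["negative"]
        return 0.0

    update_affect("red_ball", env_valence=0.5)
    print(max(["approach", "withdraw"], key=behavior_bias))   # -> approach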
In the conclusion Arkin says that "the diversity of emotional models
is something to celebrate and not lament", apparently because the
more theories there are for roboticists to try out, the better off
the world is ;-).

Ch. 10: ROBOT EMOTION: A Functional Perspective (p. 271-310)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        Cynthia Breazeal and Rodney Brooks (MIT)

This chapter is devoted to the Kismet robot of Cynthia Breazeal at
the MIT AI lab. (For some nice pictures of Breazeal with Kismet, see
http://books.nap.edu/openbook.php?isbn=0309095565.) The first 10
pages, on the role of emotion in animals and robots, and the
arguments for endowing robots with emotions and emotional expression,
are very well written. (A delight after the preceding chapter.)
However, the rest of the chapter, on Kismet, is not so good. It
shares the defect of Arkin's chapter of saying very little about what
the robot is actually capable of.

Motivating the development of emotional robots
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
They predict (p.272) that robots will surely become commonplace in
the home and everywhere else, even though they may now seem "esoteric
and unnecessary" -- just as electricity, refrigerators, running hot
water, telephones, movie projectors, and (most recently) broadband
internet connections did at one time. They point out that current
vacuuming robots would be more acceptable if they showed people
recognition, and were polite and obedient and got out of people's
way. They also point to studies in the psychological literature
showing the reciprocal relation between cognition and emotion, each
helping the other to function more effectively (and decision-making
functioning poorly without emotion). Their interest is in making
robots benefit from having an emotional system, and also making them
more socially useful by expressing emotion.

So they're interested in functionality, not in "faking it" (p.276).
In this respect they are much like Arkin. They enumerate the many
potential applications of (emotionally competent) robots as tools,
cyborg extensions, avatars (telepresence devices), and partners for
people (e.g., eldercare). They argue vigorously for "emotional
intelligence" in robots -- but for them, this seems to mean primarily
*expressing* emotions, as a way of communicating understanding,
success, difficulty, etc., while they largely neglect the usual
meaning of that term -- namely, intuiting or inferring how *others*
feel and what they want [which would require mental modelling,
something totally outside the range of nonthinking robots like
Kismet! As mentioned below, however, Kismet can to some extent
interpret the interlocutor's tone of voice/pitch contour.]

Kismet's capabilities
~~~~~~~~~~~~~~~~~~~~~
Coming back to Kismet, why are they so tight-lipped about what it
actually does? Do they fear that saying this in plain words will
sound as if they are doing something trivial, or developing toys
rather than engaging in "serious research"? [I suspect so.] So let's
try to lift the veil and glean from incidental comments here and
there what Kismet is all about. It
- has some basic visual recognition abilities (it doesn't see with
  its "eyes", but rather has 3 cameras positioned near the eyes and
  another in the "nose"); can tell humans from toys;
- has various facial expressions using eyebrows, eyelids, lips, and
  mobile ears;
- looks at people and at toys displayed to it, with more or less
  "interest" or "pleasure" (there are references to "playing with
  toys", but this seems to be wishful thinking, as Kismet is just a
  head, with big blue swivelling eyeballs, big yellow eyebrows, and
  big orange lips. A more up-to-date robot, Leonardo (see
  http://www.youtube.com/watch?v=ilmDN2e_Flc,
  http://www.pbs.org/saf/1303/segments/1303-4.htm,
  http://robotic.media.mit.edu/projects/robots/leonardo/overview/overview.html)
  is a lot cuter and seems more ambitious, intended as a complete
  Gremlin- or Ewok-like robot; it has better face and object
  recognition, can reach for toys as if to grab them, and has been
  used for "mental modeling" experiments in a table-top blocks world.
  Those experiments involved motion planning and visibility modeling
  (who can see which blocks unoccluded). However, its general
  behavioral range still seems quite limited.);
- turns away in "fear" from fast-moving or looming objects;
- makes some sounds (including greeting), especially to get someone's
  attention;
- turns toward people and wiggles its ears to get attention;
- gradually becomes "sad" if not attended to;
- tries to detect approval, disapproval, attention-soliciting, and
  consolation in the tone of voice when spoken to (it has no language
  or thinking or learning capabilities, as far as I can tell);
- can go into sleep mode.

Kismet's architecture
~~~~~~~~~~~~~~~~~~~~~
Like Arkin's robots, Kismet is inspired by animal behavior theories
(ethology), elaborated in robotics by Rod Brooks in his "subsumption
architectures". I think the best understanding of its operation is
provided in a rather late section, on p.290, Fig. 10.4, showing the
behavioral system (which is interconnected with the
perceptual/emotional systems). Essentially, this shows that the
behavioral system is structured as a decision tree, where each node
consists of several competing if-then rules. The strengths of the
"if"s determine the choice of "then", and the "then"s correspond to
subordinate nodes (which are again sets of if-then rules). Thus each
set of if-then rules implements a behavior coordination module. At
the root node, the 3 "if"s are drives (viz., social, fatigue, and
stimulation) and the "then"s are 3 corresponding nodes (sets of
rules) for achieving social interaction, sleep, or toy-viewing. As an
example from the second level of the tree, the "if"s for socializing
ask whether there are good people at hand (if so, go to rules for
engaging them), whether there are over-active or bad people at hand
(if so, avoid/withdraw), or whether there are no people at hand (if
so, visually seek and try to attract people). At the bottom are
things like greeting someone, turning towards a toy, going to sleep,
etc. The strengths of the "if"-parts depend on the perceptual inputs
and emotional state.
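
[A rough Python reconstruction of this arbitration scheme -- mine,
not Breazeal & Brooks' code. The node structure and drive names
follow the description of Fig. 10.4 above; the winner-take-all rule
and the numbers are assumptions.]

    # Hypothetical sketch of the Fig. 10.4 decision tree: each node holds
    # competing condition/successor pairs, the strongest "if" wins, and
    # the leaves are concrete behaviors.

    def winner_take_all(rules, state):
        """rules: list of (strength_fn, successor); pick the strongest."""
        return max(rules, key=lambda r: r[0](state))[1]

    def socialize_node(state):
        return winner_take_all([
            (lambda s: s["good_person_nearby"],  "engage person"),
            (lambda s: s["bad_person_nearby"],   "avoid/withdraw"),
            (lambda s: 1.0 - s["person_nearby"], "seek/attract people"),
        ], state)

    def root_node(state):
        # The three root "if"s are the drives: social, fatigue, stimulation.
        successor = winner_take_all([
            (lambda s: s["social_drive"],      socialize_node),
            (lambda s: s["fatigue_drive"],     lambda s: "go to sleep"),
            (lambda s: s["stimulation_drive"], lambda s: "turn toward toy"),
        ], state)
        return successor(state)

    state = {"social_drive": 0.8, "fatigue_drive": 0.2,
             "stimulation_drive": 0.4, "good_person_nearby": 0.9,
             "bad_person_nearby": 0.1, "person_nearby": 0.9}
    print(root_node(state))   # -> engage person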
Perception starts with feature extraction and attention control (see
p.285, Fig. 10.3). The features are fed into various recognition
systems, confusingly [for me] called "releasers". E.g., there are
recognition systems for detecting toys and faces (these are
emotion-neutral, and as such classified as "cognitive"), for
recognizing scolding or praising speech, or a threatening (fast
moving or looming) object (these are value-laden, and as such
classified as "affective"), etc. The results affect behavior through
a number of routes:
- determining the "drives" at the top level of the behavioral system;
- acting on other "if" parts of the behavioral system;
- computing the emotions of anger, disgust, fear, joy, sorrow, and
  surprise, which again feed into "if" parts of the behavioral
  system.
As well, the computed emotions are used to control the expressions of
the face -- raised eyebrows for surprise and fear, upturned lips for
pleasure, wide-open eyes and perked-up ears for interest, etc. On
p.294 there is a table giving the conditions under which the various
emotions are generated, and the corresponding behaviors (display an
emotion, engage a person, attend to a new object, etc.).

The influences mentioned above are all transmitted as levels of
activation or inhibition, and an "if" is triggered if its summed
inputs exceed a threshold (p.286). So the "common currency" of the
system consists of these numerical activation values. [What is
strikingly missing is any REPRESENTATIONAL common currency. That's
why the system cannot rise to the cognitive level, notwithstanding
the description of some of the recognition systems (for toys, people,
etc.) as "cognitive".]

pp. 294-6 give some details of the emotions: anger, disgust, fear,
joy, sorrow, surprise, interest, and boredom.
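
[To make the "common currency" point concrete: a hedged Python
sketch in which releasers emit numerical activations, emotions are
summed from them and compared against a threshold, and whatever fires
is routed to a facial expression. The specific releasers, weights,
threshold, and expression table are my inventions, loosely echoing
pp. 286 and 294.]

    # Hypothetical activation routing; weights and threshold are invented.

    THRESHOLD = 0.5

    # affective releaser -> (emotion it feeds, connection weight)
    RELEASER_WEIGHTS = {
        "scolding_speech":  ("sorrow", 0.7),
        "praising_speech":  ("joy",    0.7),
        "looming_object":   ("fear",   0.9),
        "no_one_attending": ("sorrow", 0.4),
    }

    # emotion -> facial display (cf. the expressions mentioned above)
    EXPRESSIONS = {
        "fear":     "raised eyebrows",
        "surprise": "raised eyebrows",
        "joy":      "upturned lips",
        "interest": "wide-open eyes, perked-up ears",
        "sorrow":   "drooping features (my guess)",
    }

    def evaluate(active_releasers):
        """Sum releaser activations per emotion; fire those over threshold."""
        totals = {}
        for releaser, level in active_releasers.items():
            emotion, weight = RELEASER_WEIGHTS[releaser]
            totals[emotion] = totals.get(emotion, 0.0) + weight * level
        return {e: EXPRESSIONS[e] for e, a in totals.items() if a > THRESHOLD}

    print(evaluate({"looming_object": 0.8, "no_one_attending": 0.3}))
    # -> {'fear': 'raised eyebrows'}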
Results
~~~~~~~
The tone-of-voice classification is about 80% accurate. People find
Kismet engaging, and take pleasure in "cheering it up". From its
gaze, expression, and reactions they quickly figure out what Kismet
likes and doesn't like. Kismet shows the expected sorts of behavior
as a function of circumstance; e.g., it intensifies its focus on
satisfying a drive that hasn't been satisfied for a while, and also
gets more "finicky" (p.305) and prone to disgust when encountering
stimuli that don't satisfy the urgent drive. Their conclusions
reiterate the important role of emotions in robots, but
disappointingly, there's no hint of interest in adding genuine
thinking capabilities.

-=-=-=-=-=-=-

Final notes:
~~~~~~~~~~~~
We've already commented on the Sony Dream Robot (QRIO), terminated in
2006, and the larger (4'), more capable Asimo robot. Asimo proved to
be too costly to be marketable. Japan is banking on solving the
problem of its aging population with humanoid robots as assisted
living companions, in hospitals, as receptionists, etc.

Another humanoid robot project is KASPAR at the U. of Hertfordshire
in the UK. It's a stationary, sitting, child-like doll with 8 degrees
of freedom in moving the head and neck and 6 in moving the arms and
hands. It uses gestures and facial expressions, and is intended to
help autistic children become better at solitary and social
cooperative games.

At MIT, Aaron Edsinger is continuing some aspects of Breazeal's work,
using an unexpressive 2-armed robot, DOMO, that can grab a target
object (see http://www.youtube.com/user/edsinger/feed). Breazeal has
moved from the AI Lab to head the robotics group in the Media Lab,
and has built a mobile robot, "Nexi", with a doll-like, mobile head
that expresses emotions much like Kismet did (this is also the type
of robot used by Scheutz at Tufts); see
http://www.youtube.com/watch?v=XrmrU7P-ysA,
http://www.technovelgy.com/ct/Science-Fiction-News.asp?NewsNum=1562 .
There's still no attempt to bring in higher cognitive functions, but
at least DOMO can grab a ball and the like, and Nexi, besides being
mobile, also has arms that it can gesture with and use to pick up
objects.

APPENDIX -- RECENT HUMANOID ROBOTS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Perhaps the most interesting from an "emotional robot" perspective
is KOBIAN from Waseda Univ. in Japan. It is said to be able to walk
around, perceive, and perform tasks. See
http://www.youtube.com/watch?v=2FBUt336wBI and
http://www.dailymail.co.uk/sciencetech/article-1195112/Meet-Kobian--emotional-humanoid-robot-joy-disgust.html

A favorite mentioned by a 2012 class member is Keepon, e.g., see
http://beatbots.net/research/ (and many YouTube videos). This simple
"little yellow snowman-on-a-pedestal" robot, designed for interaction
with children, flexibly bobs around in place to music and "seeks out
eye contact using facial recognition software".

Wikipedia describes and lists many robots at
http://en.wikipedia.org/wiki/Humanoid_robot

Another site with many robots is
http://www.ilookforwardto.com/2010/11/my-entry.html, e.g.,
  Actroid-F (AIST)
    (http://www.youtube.com/watch?feature=fvwp&NR=1&v=zIuF5DcsbKU)
    nods, blinks, smiles, moves mouth, bows;
  HRP-4C (sings, gestures);
  Jules (canned conversation);
  Repliee Q1
    (http://www.youtube.com/watch?v=biBHJvGx3s8&feature=player_embedded);
  Geminoid (very stiff);
  fake robots ("Donna robot"), ...
The life-like look and walk of HRP-4C (the fashion model robot) is
impressive, but there's no brain to speak of ...

A seemingly similar effort from perfect-woman.com, but clearly a
hoax, is "The Perfect Woman -- robot woman Lisa", supposedly from a
company in Kobe, Japan (though the hoaxer seems to be named Etienne
Fresse). See http://www.museumofhoaxes.com/hoax/weblog/comments/5203/
and the ad at the following site -- but the face is far too good to
be true, and the claims about doing housework are ridiculous:
http://www.youtube.com/watch?v=i2wYWAlg8Do&feature=channel

The following is the most recent part of the Wikipedia Humanoid Robot
timeline (the citations can be found in Wikipedia):

2004  KHR-1, a programmable bipedal humanoid robot introduced in June
      2004 by the Japanese company Kondo Kagaku.
2005  The PKD Android, a conversational humanoid robot made in the
      likeness of science fiction novelist Philip K. Dick, was
      developed as a collaboration between Hanson Robotics, the FedEx
      Institute of Technology, and the University of Memphis. [20]

2005  Wakamaru, a Japanese domestic robot made by Mitsubishi Heavy
      Industries, primarily intended to provide companionship to
      elderly and disabled people. [21]

2007  TOPIO, a ping pong playing robot developed by TOSY Robotics
      JSC. [22]  [Stationary, doesn't do anything else. The newer
      "Tosy Topio" version looks better but seems not to play yet. It
      just sort of does Tai Chi.]

2008  KT-X, the first international humanoid robot, developed as a
      collaboration between the five-time consecutive RoboCup
      champions, Team Osaka, and KumoTek Robotics. [23]
      [Clunky-looking small robot for robot enthusiasts, ca. $1000.]

2009  *HRP-4C, a Japanese domestic robot made by the National
      Institute of Advanced Industrial Science and Technology, shows
      human characteristics in addition to bipedal walking. [Also
      billed as a "fashion model robot". 5' tall, 95 lbs,
      $200,000-$3,000,000. No smarts, some voice interaction. See,
      e.g., http://www.youtube.com/watch?v=brD5D0ytD04.]

2009  Turkey's first dynamically walking humanoid robot, SURALP, is
      developed by Sabanci University in conjunction with
      Tubitak. [24]

2010  NASA and General Motors revealed Robonaut2, a very advanced
      humanoid robot. It's intended to do spacewalks for NASA. [9]

2011  "In November Honda unveiled its second generation Honda Asimo
      Robot. The all new Asimo is the first version of the robot with
      semi-autonomous capabilities."

2012  "In April, the Advanced Robotics Department in Italian
      Institute of Technology released its first version of the
      Compliant huManoid robot CoMan which is designed for robust
      dynamic walking and balancing in rough terrain."

The IEEE Spectrum has a collection of recent articles on humanoid
robots at http://spectrum.ieee.org/tag/humanoid+robots
Featured robots are Diego-San (UCSD), Kenshiro (being built in
Japan), Robonaut, and others.

SEE ALSO "MOST FAMOUS ROBOTS",
http://sites.google.com/site/luisbeck007/
This includes links to the TOPIO robot mentioned above.

Apparently Japan is planning a robot mission to the Moon for 2020 (a
bipedal humanoid robot).

South Korea's "Albert Einstein HUBO" robot manages facial expressions
and arm/finger motion. No smarts, though, despite being billed as
able to speak (cf. the Disneyworld animatronic figures, which have
been "speaking" for many decades). The same goes for robots built by
Hanson Robotics -- the faces and facial expressions are impressive,
but despite the "dialogues" in their demos (canned language), there's
no brain to speak of.