MARVIN MINSKY, THE EMOTION MACHINE, chapter 4
(Simon & Schuster, Nov. 2006)
http://web.media.mit.edu/~minsky/

[Comments in square brackets are my own thoughts.]

4. Consciousness
~~~~~~~~~~~~~~~~

I had the impression in this chapter that Minsky more or less identifies "consciousness" with self-awareness -- which according to most philosophers, neuroscientists and cognitive scientists is only one aspect of consciousness -- the "higher-level", or reflective, aspect. Note this apparent identification right at the beginning: "What kinds of creatures have consciousness? ... Are crocodiles, frogs or fish AWARE OF THEMSELVES to any extent?" [(my emphasis). Of course, McDermott, Arbib, and many other "HOT theorists" suppose that self-(model) access is essential even for phenomenal consciousness; so this is not an unusual position to take. But it seems to be taken here as if it were self-evident.]

His general contention is that "consciousness" is a "suitcase word" which we need to unpack and analyze functional-part-by-functional-part to reach an understanding of it. He lists many activities as playing a role in consciousness: reacting, recognizing, classifying, wondering, envisioning, choosing, reconsidering, retrieving, describing, arranging, reasoning, being apprehensive, thinking, reflecting, evaluating, modelling, etc.

[... At first thought, this seems to me more like an inventory of mental activities required for intelligent behavior generally, rather than a list of hallmarks of consciousness (except for "being apprehensive" and "reflecting", and some related terms he also employs). In particular, AI programmers have implemented versions of many of these activities, but I wouldn't call their creations "conscious". On second thought, however, I decided that Minsky had in mind that "Joan" was *conscious* of all these mental events and activities -- she consciously recognized the sound, classified it as coming from a truck, wondered whether or not to make a dash for it, and so on. In that case, I have a different objection to the list as evidence that "consciousness" is a "suitcase concept". For, in this case, Joan was *conscious of* the sound, *conscious of* classifying it as a truck sound, *conscious of* trying to decide what to do, etc. In other words, in this case "consciousness" is being treated as a binary relation, and the listing is simply a listing of some of the values the *second argument* of this binary relation can have, rather than a listing of different senses of "consciousness". Minsky's argument, in this case, could be transformed into an argument that *any* binary relation is a very complex concept. For example, we could argue in this case that the verb "touch" (in the physical sense) is a very complex concept because there are so many different kinds of things you can touch -- hands, pencils, cats, computers, etc.; or, that "being in a suitcase" is a very complex concept because so many different kinds of things can be in a suitcase -- shirts, pants, toothbrushes, books, etc.! Just because the "contents of consciousness" can be many different sorts of perceptions and thoughts doesn't mean that consciousness is a complex concept!]

A-brains and B-brains, evolution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

He argues that a brain process can't literally reflect on itself, as it would interfere with itself -- at best one part can reflect on another. This leads to a conception of successively "higher" layers, each reflecting on the one beneath.
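[To make the layering idea concrete, here is a minimal sketch in Python. All of the names and the "trace" mechanism are my own illustrative inventions, not Minsky's; the point is just that each layer can inspect a record of the layer beneath it, but never its own ongoing activity.

    # A toy rendering of Minsky's A-brain/B-brain layering. Each layer
    # logs its own activity; a layer can reflect only on the layer below,
    # by reading that layer's trace -- so reflection never interferes
    # with the process being observed.

    class Layer:
        def __init__(self, name, below=None):
            self.name = name
            self.below = below    # the layer this one reflects on
            self.trace = []       # record of this layer's own activity

        def step(self, content):
            """Do one unit of work and log it for a higher layer to inspect."""
            result = f"{self.name} processed {content!r}"
            self.trace.append(result)
            return result

        def reflect(self):
            """Reflect on the layer below -- never on this layer itself."""
            if self.below is None or not self.below.trace:
                return None
            return f"{self.name} noticed: {self.below.trace[-1]}"

    a_brain = Layer("A-brain")                  # deals with the world
    b_brain = Layer("B-brain", below=a_brain)   # reflects on the A-brain
    c_brain = Layer("C-brain", below=b_brain)   # reflects on the B-brain

    a_brain.step("sound of a truck")
    b_brain.step(b_brain.reflect())
    print(c_brain.reflect())

Note that nothing in the sketch lets a layer call reflect() on itself; the hierarchy is the only route to self-knowledge, which is exactly the claim.]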
Driving at more or less the same point from another angle, he argues that efficient evolution requires a kind of divide-and-conquer approach -- the formation of mental organs (just as in the case of physical organs), which are fairly stable over time in themselves and have relatively simple interconnections, but which are "patched" to adapt them to new purposes, and which become layered over with additional organs. Because of the "patches", we shouldn't expect to find uniformly true principles governing the "organs", but only more-or-less true ones, allowing for miscellaneous exceptions. [Good points, I think.]

Looking inside one's own mind
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

He points out that full self-observation is impossible, and would be self-defeating. He also considers the philosophical claim that you can't scientifically observe subjective experiences. As usual in the book, he basically answers: oh yes, subjective experiences *are* within the reach of scientific explanation, if you just unpack the suitcase word "subjective" and investigate each aspect separately. [But he is thus proposing an *objective* analysis of processes that lead to the conscious agent's sense of having subjective experiences, and is talking past the issue of subjectivity!] He says that "we must DESIGN -- as opposed to DEFINE -- machines that can do what human minds do" [again, an enterprise that sets aside subjectivity].

Over-rating consciousness
~~~~~~~~~~~~~~~~~~~~~~~~~

Here he just points out the advantages of having only a very high-level, summary view of what's going on in our minds, as opposed to being able to introspect on all the myriad details.

Suitcase words in psychology
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

He says that just as ordinary words like "large" or "small" shift their meaning as a function of context, so do "attention", "emotion", "perception", "consciousness", "thinking", "self", "pain", etc. He then more or less acknowledges that many (perhaps all?) ordinary words (e.g., "furniture") show the same vagueness and shiftiness in their meaning. [This is a familiar theme in the philosophical literature on vagueness. Friedrich Waismann, a member of the "Vienna Circle", argued that even a word like "cat" suffers from (what he called) "open-textured vagueness". But then it seems that all terms are "suitcase words", so the qualifier seems not to make a very useful distinction.]

How do we initiate consciousness
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

He suggests that self-modelling, serial, symbolic thinking, and retrieval of recent memories are heavily involved in consciousness and are initiated by "critics" (once again) when mental processing encounters difficulties. [Personally, I think that (a) we use self-modelling much more routinely than that -- e.g., based on our beliefs about the completeness of our own knowledge in certain respects, we know that pigs don't have wings, or that the phone didn't ring; and (b) *detecting* trouble might be relatively easy, but without saying what sorts of trouble are detected, how trouble detectors work, and more importantly, how corrective actions work, we've done little more than speculate idly. Some details, mostly concerning his own past work and work with PhD students, do come subsequently in the book...]
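[To make this less hand-wavy on at least the detection side, here is a minimal sketch of how "critics" might act as trouble detectors that switch processing from routine to reflective. The particular critics, trouble conditions, and reflective responses are all invented for illustration -- which is precisely the detail Minsky leaves open.

    # Critics as trouble detectors: each critic recognizes one kind of
    # trouble in the recent history of attempts; when one fires, it
    # initiates a reflective (serial, symbolic) response. The examples
    # are mine, not Minsky's.

    def repeated_failure(history):
        """Fires if the last three attempts all failed."""
        return len(history) >= 3 and all(h == "failed" for h in history[-3:])

    def no_progress(history):
        """Fires if nothing has been tried although the goal is still open."""
        return len(history) == 0

    CRITICS = {
        repeated_failure: "recall how similar problems were solved before",
        no_progress: "make a plan, consulting the self-model",
    }

    def process(goal, history):
        # Routine processing continues unless some critic detects trouble.
        for critic, reflective_response in CRITICS.items():
            if critic(history):
                return f"reflecting on {goal!r}: {reflective_response}"
        return f"handling {goal!r} routinely, no reflection needed"

    print(process("cross the street", ["failed", "failed", "failed"]))
    print(process("cross the street", ["succeeded"]))

Writing even this much makes objection (b) vivid: the critics are the easy part; saying what the "reflective responses" actually do is the hard part.]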
Self-models
~~~~~~~~~~~

Minsky thinks we construct many self-models for different purposes. [But the examples he gives just seem to concern different *aspects* of a self-model, and I don't see why they can't all be tied to the same concept (or symbol) of the self. On his account, we all suffer from dissociative disorder.]

Symbolic descriptions
~~~~~~~~~~~~~~~~~~~~~

Finally he mentions this important topic (central to McDermott's account), and gives the example of an "arch" in a blocks world (as described in the PhD thesis of Patrick Winston, his doctoral student and eventual head of the MIT AI Lab). He says [rightly, I think] that OUR CAPACITY FOR SYMBOLIC REPRESENTATION GIVES US OUR EDGE OVER ANIMALS IN COMPLEX PROBLEM-SOLVING.
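[Winston's structural descriptions are concrete enough to sketch. Below is a toy rendering of the "arch" concept as a symbolic description -- parts plus required and forbidden relations. The notation is my own paraphrase, not Winston's.

    # An "arch" as a symbolic structural description: what parts it has,
    # which relations must hold, and which must not (the two posts may
    # not touch). A scene is just a set of observed relations.

    ARCH = {
        "parts": {"left-post": "block", "right-post": "block", "lintel": "block"},
        "must-hold": [
            ("lintel", "supported-by", "left-post"),
            ("lintel", "supported-by", "right-post"),
        ],
        "must-not-hold": [
            ("left-post", "touches", "right-post"),
        ],
    }

    def matches(description, scene_relations):
        """Does an observed scene satisfy the symbolic description?"""
        required_ok = all(r in scene_relations for r in description["must-hold"])
        forbidden = any(r in scene_relations for r in description["must-not-hold"])
        return required_ok and not forbidden

    scene = {
        ("lintel", "supported-by", "left-post"),
        ("lintel", "supported-by", "right-post"),
    }
    print(matches(ARCH, scene))   # True: this scene counts as an arch

The payoff of such representations -- Minsky's "edge over animals" -- is that they can be compared, generalized, and debugged symbolically, by adding or relaxing one relation at a time, which is how Winston's program learned the concept from examples and near-misses.]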
Recognizing consciousness
~~~~~~~~~~~~~~~~~~~~~~~~~

It seems that Minsky uses the term "critic" for any kind of recognizer. [Breazeal & Brooks used another peculiar term for this, "releasers".] Here he suggests that just as "trouble-detectors" initiate what we perceive as conscious reflective processes, "consciousness detectors" detect the activity thus initiated, and signal this to language processes, etc. [But it's really no explanation of either consciousness or language to say that consciousness detectors detect reflective processes and send signals to the right words...]

He also comments that the "distancing" (through layers of abstraction) from immediate perceptions that occurs in consciousness is the price we pay for the breadth of awareness that we enjoy. And he makes much of the time delay between perception and conscious awareness of what is being perceived or thought about. The feeling that our consciousness of events is concurrent with the events is the "Immanence Illusion" [this was called "subjective antedating" in Blackmore, ch. 3]. He also mentions briefly that our fast orientation to a particular setting we might walk into (like a classroom) depends on our already having an extensive prior model of that setting [his "frame system" idea, published in 1975 and elaborated later in this book; a toy sketch of a frame appears at the end of these notes].

Self-models and self-awareness
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Here is where he goes into detail about "multiple" self-models [when really this might as well be about different facets of the unique self-model]. The picture he draws of the self-model divided into a body-model and a mind-model seems perfectly reasonable. I like the observation that our lack of introspective access to the physical *location* of our thoughts (in the brain) can cause us to model ourselves as having no connection between mind and body [or to suppose that the core of our mental being lies in the heart, or elsewhere].

The Cartesian Theater
~~~~~~~~~~~~~~~~~~~~~

He cites William James to indicate that this idea (as a metaphor) came from him [without the "Cartesian" qualifier, which is Daniel Dennett's]. He first comments on the intuitive plausibility of the idea, and partially seems to go along with Bernard Baars, but goes on to find considerable fault with it. He is certainly right that the model/metaphor leaves much to be fleshed out. [But I think some of the confusion he finds in the idea is the result of his importing his "multiple selves" notion into Baars' model, which Baars wouldn't necessarily accept. Thus he finds a model of "Joan" on stage, another in the audience, another behind the scenes, etc., but I don't think Baars had in mind various self-models as actors or audience members, etc.]

The serial stream of consciousness
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once again, this is said to be a construct, which no doubt it is, in some sense [but I suspect it models at a high level of abstraction what's really going on, with reasonable accuracy; the slight time delays mostly don't matter]. He attributes to his "Citizen" the contention that there's something mysterious about self-awareness. [But insofar as access consciousness is more easily understood than phenomenal consciousness, I don't think self-awareness is the main mystery.]
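[Finally, returning to the "frame system" idea flagged under "Recognizing consciousness" above, here is the promised toy sketch: a frame as a bundle of slots with default values that observation overrides. The particular slots and defaults are my own illustrative choices; only the slots-with-defaults mechanism is from the 1975 paper.

    # A toy "classroom" frame: walking into a room, we assume the
    # defaults and need only register what differs from them -- which
    # is what makes orientation to a familiar kind of setting so fast.

    CLASSROOM_FRAME = {
        "seats": "rows of chairs",
        "front": "whiteboard",
        "lighting": "fluorescent",
        "occupants": "students",
    }

    def instantiate(frame, observations):
        """Fill a frame: observed values override the defaults."""
        return {**frame, **observations}

    this_room = instantiate(CLASSROOM_FRAME, {"front": "projector screen"})
    print(this_room)   # defaults assumed, except where observation differs

The speed of orientation comes from how little needs to be computed: one merge of observations into defaults, rather than a scene analysis from scratch.]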