Drew McDermott, Mind and Mechanism, MIT Press, 2001. Chapters 3 & 4.
[Comments in square brackets are my own thoughts.]

CHAPTER 3: A COMPUTATIONAL THEORY OF CONSCIOUSNESS (p.93-136)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

p.94: "What I would like to argue in this chapter is that, if there is ever such a thing as an intelligent robot, then it will have to exhibit something very much like consciousness." [This seems somewhat stronger than an anti-zombie claim: consciousness can't be faked.]

He warns (p.64-5) that "a computationalist theory of consciousness [even the correct one] will inevitably sound like 'explaining away' rather than true explanation... In the end, the correct theory will win the argument only if the evidence in its favor outweighs intuition."

My interpretation of what he means by a self-model is that the model represents what the agent is experiencing and believes and knows and expects, etc. (What is being experienced is perhaps the aspect of the self-model that comprises working memory, though McDermott doesn't venture such hypotheses.) So for instance if the agent is seeing and sniffing a rose, its self-model might include representations equivalent to something like

                 type
              .------>(Rose)
        see  /        color
   (I) ----->(Rose34)-------->(Red)
             \    odor
              `------>(Rose-fragrance)

The key point, for McDermott, is that being able to form such a representation is certainly not *necessary* for a creature that has perceptions and acts accordingly. For example, a bee might detect the fragrance and color of a flower, and hence seek out its nectar, without being able to form such a representation. By assuming that consciousness depends on such a representation of the fact that certain perceptions are currently being experienced, he is committing to a "second-order" theory of consciousness (recall "HOT" theorists). However, he doesn't mean to IDENTIFY consciousness with awareness of perceptual "facts" like those above. He just claims that consciousness of the perceptions depends on HAVING such representations. (So, creatures without a self-model are not conscious.)

Further, since these representations have to "bottom out" somewhere -- e.g., in the example above, at the "primitive" notions of "Red" and "Rose-fragrance" (whereas "Rose" might be further analyzable in terms of shape and parts and other properties) -- a robot would behave just like people in trying to explain what it meant by saying "I experience the rose as being red and fragrant". It would not be able to do any more than compare the color or fragrance to that of other things; it could not explain the "essence" of redness, because "Red" is a primitive for it. So it would believe that there are qualities in its experience that are indescribable: qualia. [However, it's not entirely clear to me why having redness and rose-fragrance as primitives should imply *experiencing* the qualities. Recall the "vision zombie" that could perfectly well identify a rose as red (thanks to the vision prosthetic whispering into his ears), without *experiencing* the color.]

He doesn't talk here about the utility of self-modelling to an agent [though we could speculate about that, e.g., that functioning in a social environment depends on modelling oneself as well as others and one's relationship to others, and that a self-model is helpful for *learning* by allowing for thoughts such as "I made a mistake, and next time I'll do things differently"].
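[To make the sort of self-model representation in the rose example above a bit more concrete, here is a minimal toy version in Python -- my own illustration, not anything from McDermott; the relation names and the token "Rose34" are invented for the example. The point to notice is that "Red" and "Rose-fragrance" are bare tokens the program can compare but not decompose, which is the sense in which they are "primitive".]

    # Toy self-model as a set of (subject, relation, object) triples.
    # All symbols here ("I", "Rose34", "Red", ...) are arbitrary tokens
    # chosen for illustration.
    self_model = {
        ("I", "see", "Rose34"),                  # the agent represents itself as seeing something
        ("Rose34", "type", "Rose"),              # ...which it classifies as a rose
        ("Rose34", "color", "Red"),              # ...with an unanalyzed color label
        ("Rose34", "odor", "Rose-fragrance"),    # ...and an unanalyzed odor label
    }

    def currently_perceived(model):
        """Objects the agent represents itself as currently seeing."""
        return [obj for (subj, rel, obj) in model if subj == "I" and rel == "see"]

    def describe(model, obj):
        """Properties the self-model attaches to a perceived object."""
        return {rel: val for (subj, rel, val) in model if subj == obj}

    print(currently_perceived(self_model))   # ['Rose34']
    print(describe(self_model, "Rose34"))    # {'type': 'Rose', 'color': 'Red', 'odor': 'Rose-fragrance'} (key order may vary)
    # Note that describe() bottoms out at tokens like "Red": nothing in the
    # model analyzes "Red" any further.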
He makes the case at length, however, that once we assume such a self-model, then we can begin to provide a computational account of "free will", emotions, and our ability to either take perceptions at face-value, or to view them *as* perceptions, and in particular as perceptions that are at odds with reality (the seemingly bent stick in the water, etc.). This is what the sections that follow are all about.

Pains and their locations
~~~~~~~~~~~~~~~~~~~~~~~~~

McDermott starts, as an exercise in computational modelling (prior to bringing self-models into the discussion), by addressing the problem of how it is possible for a brain state to be experienced as a pain IN THE FOOT (for example). He says the explanation is that the nerve signals transmitted to the brain carry the information that the pain stimulus is in the foot.

[Would Jackendoff & others who have raised this issue be satisfied with McDermott's answer? I doubt it. Jackendoff is not ignorant of computer science, and knows that the pain in the foot leads to neural activity associated with the foot portion of the "body image" (somatosensory region) in the brain. He would probably say that McDermott misses the point: the puzzling point is not that the person with the foot pain KNOWS that the pain is in the foot, but that the pain is EXPERIENCED as actually BEING in the foot. McDermott does concede that "none of this explains what subjective experience actually is".]

Free will
~~~~~~~~~

"So today we're going to ask---and hopefully answer---this question of whether there's free will or not. If you want to know where I stand, I'll tell you: I believe in free will. Why? Well, the neurons in my brain just fire in such a way that my mouth opens and I say I have free will. What choice do I have?"
    Scott Aaronson, PHYS771 Lecture 18: Free Will
    http://www.scottaaronson.com/democritus/lec18.html

McDermott's first example of a computational explanation that depends on the assumption of a self-model is the explanation of FREE WILL. His explanation (using the vivid example of a robot deciding to exit from a room containing a bomb with a lit fuse) is the following: a sufficiently sophisticated decision-making agent has a model of itself, but it cannot comprehend, in causal terms, its own action choices [at the time of making a choice?!], because that would require predicting the outcome of its own reasoning before the reasoning is finished [my paraphrase]. So it must treat itself as if it were exempt from causality, i.e., had free will.

[I feel McDermott is onto something here, though it isn't quite precise enough to be fully persuasive. McDermott seems to have in mind the following argument schema:

1. An agent has free will iff its choices are deliberate (& thus conscious) and purposeful, yet can't be predicted based on a causal model of the agent.
2. An agent that makes deliberate, purposeful decisions can't predict its own decisions, because it would also have to predict its own deliberations, as well as the prediction of those deliberations, etc., leading to an infinite regress.
3. Therefore, such an agent cannot view its decisions as causally determined, and so it concludes it has free will.

One hole in such an argument is that in general, conformity of a complex system with causal laws isn't taken to mean that the detailed behavior of the system can actually be predicted *before* that behavior occurs.
Rather, it means that the behavior is predictable (or explainable) *in principle*, given unlimited time for working out causal consequences, and given all the details of the state that the system starts in, and of the environmental events that affect the changes of state. Even the motion of a few gravitationally interacting bodies in physics is very hard to work out, let alone the behavior of biological molecules or of an ant colony -- or of a mammalian brain. So we should more carefully phrase (1) as

1'. An agent has free will iff its choices are deliberate and purposeful, yet can't be predicted (or explained) based on a causal model of the agent, even in principle, given unlimited time and perfect knowledge of all relevant aspects of the initial state and external influences.

Now, part (2) of the argument remains true -- the agent still can't predict its own decisions, no matter how much time is allowed, because the longer it thinks, the more of its thinking needs to be predicted -- the infinite regress remains. However, the conclusion (3) no longer follows. A rational agent might perfectly well consider its own decisions as determined (in principle) by causal laws, while recognizing (in view of the infinite regress in 2) that it cannot expect to predict its own mental actions in detail. (And the agent may find it reassuring that even for others, as in the case of many-body problems or the ant colony, prediction might be infeasible.)

Note also that inability to predict one's own mental actions in detail still allows one to think of oneself as acting, at a general level of description, in accord with causal laws. For example, suppose the robot in the doomed room does consider itself fully subject to causation, but it believes that its construction is such that it "cannot help" but perform evaluations of the type, "If I take such-and-such actions, the consequences will be so-and-so". Furthermore, it believes that whatever actions evaluated in that way seem to offer the highest expected utility are the actions it will inevitably take (just thanks to its construction). Sure enough, it finds itself doing the evaluations, and bolting from the room turns out to have the highest expected utility, and sure enough, having drawn that conclusion, it finds itself bolting out of the room. So this robot seems to have an entirely causal view of itself, yet "does the right thing".

The robot could further confirm this deterministic view of itself IN DETAIL IN RETROSPECT. For instance, suppose the robot maintains a detailed "trace" of the operation of its programs over the last 20 minutes of operation. If asked shortly after the emergency "Why did you bolt from the room?", it would consult that trace and explain at any desired level of detail exactly how one execution step led deterministically to the next, and thus to its exit from the room.

But I think what's interesting about McDermott's argument is just the idea that the *feeling* of free will comes from the fact that we can't predict our own decisions (no matter how hard we might try!), and so they seem to us "free", i.e., not the result of causal laws.]

He also suggests that people lose their freedom when they do view themselves causally, giving up on deliberating their own choices. [Again, I don't think this quite stands up to close scrutiny, but I won't attempt a detailed commentary.]

P.100, a nice comment: "A statement of the form, 'Because we can't make decisions, we should ...'
is silly, because any statement about what we 'should' do presupposes that we can make decisions. We're stuck with free will." [Question: is this still true if we substitute 'we can' or 'we might as well' for 'we should'?]

Pleasantness and unpleasantness
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

He goes on (p.101-3) to suggest that something like a quale of "unpleasantness" when sensing extreme heat would be the consequence of endowing a robot with an intrinsic goal of avoiding extreme heat (& thus damage). This is getting to the crux of his theory that qualia are the byproduct of being a reflective agent with primitive, irreducible goals, or primitive, irreducible perceptions. Primitive evaluations are ultimately necessary because "a creature that could really question the value of everything would never act" (p.103). [But again, why should an evaluation be *experienced*, just because it is viewed in the self-model as not being further analyzable?]

Robots and emotions
~~~~~~~~~~~~~~~~~~~

Amusing comments on p.103 about "Star Trek"-like robots/androids that have preferences but lack "emotion chips". On p.104 he asks what the difference is between preferences (such as sci-fi robots typically have) and emotions (such as they typically are assumed to lack). He suggests that emotions "seem to have three components: a belief, a preference, and a quale peculiar to each emotion. Fear is a belief that something is likely to happen, a preference that it not, and a set of sensations peculiar to fear." [cf. Fellous & LeDoux, Adolphs, Sloman, Arkin, Breazeal, Arbib].

Then he asks why the quale couldn't be missing, leaving only the belief and the preference, as is often supposed in the case of sci-fi robots. He doesn't answer the question at that point, but instead launches into a lengthy discussion of perception and introspective access to perceptions, and his explanation of qualia (esp. color qualia) based on that distinction. Eventually, on pp.117-118, he comes back briefly to the original question, suggesting that the "fear quale" would in effect be present in the robot as well, because otherwise it couldn't distinguish between, say, fear and pain in its introspective account of its own perceptions. [That's unclear to me -- couldn't it distinguish on the basis of the different *beliefs* that go with a fearful situation and a painful one? But perhaps McDermott's point could be made with, say, a stinging pain and an itch on the leg -- the distinctive qualia lead to different immediate reactions and different *conjectures* as to what may be happening.]

Perception and (introspective) judgement
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On p.105-8, he makes the point that we don't always accept what we perceive at face-value: we are able to introspect on our perceptions, and in some cases judge them to be mistaken (based on other evidence). The example he belabors is that of a stick thrust into water, which appears bent but we believe it stays straight. Similarly for a stereo viewer, where we perceive shapes at various depths but believe that we are confronted with two flat, slightly different images. [Looking in a mirror, or seeing a puddle-mirage on a hot highway, might be other examples.]
He therefore distinguishes between "normal access" and "introspective access" to a perceptual module; the former involves accepting the representations provided by the module, without scrutiny, as information about the world, to be used directly for action, while the second involves regarding these representations as providing information about the module itself, without necessarily interpreting them in the normal way.

He notes on p.109 that "The fact that what we are conscious of and what we can talk about are so close to being identical is in urgent need of explanation"; but he doesn't reveal why he is making this point at this juncture ... Later (p.113), it turns out that he wants to employ linguistic access "as an accurate measure of cognizance [access]".

He also comments on our "artistic" ability to learn to conceive of what we see in terms of a flat image (where, e.g., objects appear smaller when they are further away). His intent is presumably to illustrate in yet another way that we can reflect on our perceptions *as* perceptions.

Qualia
~~~~~~

He then (p.112-117) closes in on the notion of qualia, arguing that if we built a robot with the same color-discrimination abilities as a human (no more, no less), and the ability to verbally describe and compare these colors, it would easily describe this or that patch as yellow or green, and this patch as being very similar in color to some other patch, but would be stuck when asked to clarify its *reasons* for making these judgements -- the yellowness or greenness of yellow or green patches is just a primitive property that it perceives and can state, but not something it can further analyze. Such a robot, he suggests, would believe that it experienced "qualia". The same would be true for touch sensations and emotions, as long as different primitive labels are associated with, say, "fearlike unpleasantness" and "painlike unpleasantness", etc.

[As I've stated, I remain unconvinced. What would impel the robot to declare that it is *experiencing a particular sensation* in any of these cases? After all, it will surely also have nonperceptual primitives such as physical distance (between objects), temporal duration, truth (of a claim), cardinality (of a set of distinct objects), zero, ordering (of objects in space or events in time), the notion of a location, of a point, of a straight line, and many other basic abstract notions. If we expect the robot to be convinced that it experiences colors, unpleasantness, etc., do we also expect it to be convinced that it experiences distance, truth, cardinality, etc. as particular sensations? If not, why not?]

The self-model (p.118-130)
~~~~~~~~~~~~~~~~~~~~~~~~~~

[I found this section interesting in parts, but rather rambling, and prone to object-language/metalanguage confusion. See below.]

He says on p.118 that modeling something means using a representation for it that resembles it in some way. [Here I think he's struggling to avoid mention of denotational semantics, having foresworn it earlier. Something like the symbols '(Saw I Bladerunner)' might represent (in some robot) that the robot has seen the movie "Bladerunner". Can we say this representation "resembles" and therefore "models" the latter fact? Well, only if we can regard the symbol "Saw" as corresponding to the relationship of seeing (in the past), the symbol "I" as corresponding to the robot itself, and the symbol "Bladerunner" as corresponding to the movie with that name.
Even then, "resemblance" is too strong a word -- about the only resemblance to the things represented is that the 3 symbols in '(Saw I Bladerunner)' are in close proximity, just as the robot and (a showing of) the movie must have been in close proximity, with a "seeing" going on at the same time. But that's a mere accident, and the whole point of symbolic (as opposed to analogue) representations is that they can represent arbitrary properties and relationships WITHOUT using tokens or structures that *resemble* what they represent. But the role of the correspondence between symbol and object or symbol and property is crucial.]

There seems to be an equivocation in McDermott's discussion of the self-model. Sometimes he seems to view the "I" of the self-model as simply a symbol standing for the agent, and at other times as some sort of active "module" (p.123) that does computational work -- something perilously close to Blackmore's "inner self" or "true self". For example, consider the statements on p.121: "'I' is the creature who makes ...decisions", and "'I' is an object of the self-model". How are we to read the two quoted "I"s?

Normally, when we quote a symbol, we mean the symbol itself, not what it stands for. For example, when we say "'Love' is a 4-letter word", we are referring to the word "love", not what it stands for; by contrast, when we say "Love makes the world go round", we are referring to the phenomenon that the word "love" stands for -- it is not the *word* that makes the world go round. By that criterion, McDermott's first sentence above makes no sense -- the *symbol* "I" is not a creature! But perhaps he intended the quotes as "scare quotes" (as I'm doing here!) to indicate that we should not take the term too literally. In that case, the first sentence does make sense, and is a truism (*of course* I am the one who makes my decisions!). But if the quotes in the 2nd sentence are also scare quotes, then the 2nd sentence makes no sense, because the creature itself is certainly not an object of the self-model; it is the entity that *contains* within it the self-model. So it seems that in the second sentence we should regard the quotes as real quotes, i.e., the self-model contains a symbol that denotes the creature itself; but McDermott wants to *avoid* any talk of symbols as having denotations!

The same point comes up again on p.123 when McDermott says, "Robot 1 believes Robot 2 to have (or be) a self like its own 'I' ...". Robot 1 might well think that Robot 2 has a *symbol* for itself, much as Robot 1 has the symbol "I" for itself. That much makes sense, if we take the quotes on "I" to be genuine. But the sentence also refers to this symbol -- if that's what it is -- as "a self", and surely a symbol is not a self! On the other hand, if we take Robot 1's "I", as used in that sentence, to stand for Robot 1 (*not* its self-denoting symbol), then the phrase "have ... a self like its own 'I'" makes no sense, since this then means "have ... a self like (the self that is) Robot 1". Finally, if we opt for the bracketed "(or be)" in our reading of the sentence, we get that "Robot 1 believes Robot 2 to be a self like (the self that is) Robot 1", which amounts to "Robot 1 believes Robot 2 to be like itself", and surely McDermott intended to say more than that.
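[For what it's worth, the use/mention point can be made concrete in a few lines of code. This is my own toy illustration, not McDermott's: the *token* "I" is just a string inside the robot's self-model, while the robot itself is the thing that token denotes under an interpretation mapping that we, as external theorists, supply.]

    # Toy illustration of symbol vs. denotation (use vs. mention).
    class Robot:
        def __init__(self, name):
            self.name = name
            # The self-model is a set of symbolic sentences; "I" is just a token here.
            self.self_model = {("Saw", "I", "Bladerunner")}

    robot1 = Robot("Robot 1")

    # The interpretation (denotation) mapping is our external account of what
    # the tokens stand for; it is not itself one of the robot's symbols.
    denotation = {
        "I": robot1,                                   # the token "I" denotes the robot containing it
        "Saw": "the relation of having seen (in the past)",
        "Bladerunner": "the movie titled 'Bladerunner'",
    }

    sentence = next(iter(robot1.self_model))
    print(type(sentence[1]))             # <class 'str'>  -- mentioning the symbol "I"
    print(denotation["I"] is robot1)     # True           -- using it: its denotation is the robot
    # The symbol "I" is not a creature; robot1 is not an element of the self-model.

[On this way of putting it, the first of McDermott's two quoted sentences is about denotation["I"], while the second is about the token "I" itself -- which is just the equivocation complained about above.]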
This shiftiness about what "I" refers to may seem like a minor flaw, but it is an instance of blurring the distinction between *use* and *mention* of theoretical terms (equivalently, between object language and metalanguage), one of the most insidious sources of confusion when one is trying to analyze meaning. I had the feeling throughout that McDermott does not fully appreciate the importance of this distinction, and as a result misunderstands why many of those who postulate general symbolic meaning representations "in the head" believe that these meaning representations should be formally interpretable (in a model-theoretic sense). On p.130 he even talks about the "true selves" of certain creatures he is considering in his thought experiment about a language faculty separate from consciousness, and this indicates a shift from a symbolic conception of "I" all the way over to a Blackmore-like "inner self".

One point that was made earlier in the book is elaborated here, namely that a creature has properties in virtue of the self-model saying that it does. One example he mentions is that a creature may have intentions in virtue of its self-model saying it has those intentions. I imagine he means that a self-model may contain a representation expressing "I intend to do such-and-such", and in virtue of containing this representation, the robot may actually do it. [That seems fairly plausible.]

On p.123 McDermott has some comments about the illusion of having a "Cartesian theater" [which I don't understand]. He also speculates (based on work by Gazzaniga) that the self-model is located in the left hemisphere, associated with the speech centers, while he denies that it is a "localized black box".

Subsequently McDermott considers episodic memory (our autobiographical memory, rather well-studied in cognitive science), and concedes that like language, this seems to call for a general internal representation -- which he then promptly rejects again as "far-fetched", staying with his view that the idea that we have a general internal representation is an illusion created by our self-modelling. [How unfortunate.]

He summarizes his position on p.126, saying that "to be cognizant of a state of affairs is for some representation of it to be accessible to the self model". [Well, he defined "cognizant of" as "having access to" some representation, so this seems like a tautology, except for the phrasing "accessible TO THE SELF-MODEL". This sounds as if the self-model is some sort of agent-within-the-agent, rather than just a dynamically updated representation of the agent within its world, accessible to various inference processes and to linguistic processes. Is that the intention? I suppose his drift towards an agent-like conception of the self-model ties in with the idea (above) that the self-model will contain representations of intentions -- and these cause the agent to act; but I would caution that it isn't the self-model that acts; rather, in Baars' terminology, it is the unconscious processes comprising the "audience" that get the job done!]

His thought experiments on p.126-130 didn't really clarify things for me. He tries to imagine language being separate from consciousness, but I'm not sure what he concludes.
CHAPTER 4: OBJECTIONS AND REPLIES (p.137-165)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

His main goal is to defend his second-order theory of consciousness against the view that there are "raw feels" without introspection -- phenomenal experiences without access to a representation of them (contra Ned Block, Michael Tye, Larry Shapiro, and perhaps most other philosophers). Larry Shapiro perhaps makes the argument for raw feels most clearly (p.139), where he is quoted as writing that "it's the tickling [that a person being tickled] feels, not her feeling of being tickled. It's the dog's barking that I hear, not my awareness of the dog's barking." [Others have pointed out that when I sit musically entranced at a concert, or when I stand visually absorbed in front of a Monet painting, I am not generally aware of my awareness of these qualities -- I am enjoying the perceptual qualities as they flood into me, without cogitating about my experience *as* an experience.]

McDermott replies that consciousness does not lie in feeling that I'm feeling a tickling sensation, or perceiving that I'm hearing a barking noise; it lies in feeling the tickling, or hearing the barking -- but the fact that the representations of these experiences are accessible to introspective processes is necessary for those feelings to be conscious. [One may still wonder how this mere accessibility to reflective processing "turns on" consciousness, i.e., turns a mere flood of perceptual data into a conscious experience. Also, on p.149, McDermott says, "I have argued that a quale is nothing but the brain's way of thinking about its own sensory comparison system". That sounds suspiciously like an equation between qualia and second-order thought.]

Why aren't computers already conscious? (p.142-144)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It's not hard to arrange for introspective access to perceptual representations, and some computer vision systems do just that. So are robots already conscious? McDermott says no, because their mental world isn't rich enough -- they would have to be more like us in the perceptions they have, the decisions they make, and (p.143) "being a self, and such" [whatever THAT means; but I agree that human-like consciousness requires human-like knowledge of and interaction with the world, including linguistic interaction].

Absent-qualia argument (p.144-150)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

McDermott gives a neat refutation of Chalmers' zombies: he says that in the zombie-universe Chalmers considers to be logically possible, Chalmers himself would be claiming to have genuine conscious experiences, & denying that his zombie-double has them! But he would be wrong in claiming to have genuine experiences, since he is himself a zombie in a zombie-universe; so Chalmers' arguments are self-defeating.

The next 3 subheadings are all about qualia comparison, in one way or another, and McDermott's position is that such comparisons, like qualia themselves, are meaningful only relative to an individual experiencing the qualia.

Mary, the color-deprived color theorist (p.150-154)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

He grants Frank Jackson's point that Mary does learn something new -- but this something is meaningful only from her subjective perspective.
If she is running a full simulation of her own brain, she can see that her brain will arrive at the conclusion "Now I know that red looks like *THIS*", but *THIS* will just be gibberish from an external perspective -- it is an arbitrary symbol for an external viewer of the simulation, but is subjectively meaningful because it differs from other visual sensations, and is similar to visual sensations for tomatoes, roses, cherries, blood, lipstick, etc. It is a quale only from a subjective perspective [cf. the artificial gravity analogy]. So where exactly does Jackson's argument that materialism is false break down? Here I don't think McDermott is completely clear.

[I would say that Mary learns nothing new as far as any 3rd-person perspective is concerned; she does learn something new from her own subjective perspective -- and it's something her 3rd-person knowledge of color theory *couldn't* have predicted, because that theory is not formulated in subjective terms! So what happens if we completely reformulate Jackson's argument in terms of subjective experience? In that case the premise that she already knew everything about color was simply false, because she had no subjective experience of color yet. So is subjective experience something extra? No, it's just an alternative perspective on the same reality, as can be seen from the completeness of the predictions of the materialist theory, implemented on the Mary-simulator -- it predicted all her brain processes, her verbal and endocrine reactions, etc.]

Inverted spectra (p.154-159)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The argument under consideration is that one person's color qualia might be the inverse of those of his identical twin, and computational modelling doesn't allow for this. McDermott replies that inter-personal qualia-comparisons are meaningless. It is even meaningless to assume that one person's red-quale, say, is the SAME as another person's (even if it's an identical twin). Qualia are just arbitrary symbols from an external perspective. He considers various ways one might try to actually compare one individual's quale with another's, but concludes that this would never demonstrate anything but the arbitrariness of qualia, or the fact that comparison is only meaningful intra-subjectively, never intersubjectively.

What it's like to be a bat (p.159-161)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

He pokes fun at the entire "what it's like" line of consciousness philosophy instigated by Thomas Nagel with his "What is it like to be a bat?". He says that "being like" is a comparison, and you can't compare one individual's subjective experience (whether human or bat or robot) with another's. [I sympathize with his line of argument, and my reaction is (again) that I think it would be better to just talk about the nature of the feelings, sensations, or emotions that an individual experiences, rather than using the phrasing "what it's like". Still it remains rather mind-bending, and counterintuitive, to suppose that it's simply *meaningless* to say "my experience of red (or pain, etc.) is pretty much the same as yours, under the same conditions". I guess one just has to interpret such statements not in terms of specific qualia, but in terms of the "significance" that experiencing the qualia has for us -- e.g., we may go into similar mental states, make similar associations, show similar physical reactions, etc. to similar experiences, and it's only in THAT sense that the qualia are "the same".]
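[Again purely as my own toy illustration (nothing like this appears in the book): two agents whose internal color tokens are permuted relative to one another still agree on every externally checkable discrimination, which is roughly the sense in which McDermott takes cross-person quale comparison to be meaningless and intra-personal comparison to be the only meaningful kind.]

    # Two agents assign *arbitrary* internal tokens to color classes; only the
    # pattern of same/different judgements is externally visible.
    import itertools

    STIMULI = ["tomato", "rose", "grass", "sky"]

    def make_agent(tokens):
        # tokens: one internal label per color class (red, green, blue);
        # tomato and rose fall in the same class.
        code = {"tomato": tokens[0], "rose": tokens[0],
                "grass": tokens[1], "sky": tokens[2]}
        same_color = lambda a, b: code[a] == code[b]
        return code, same_color

    code_a, same_a = make_agent(["Q17", "Q52", "Q99"])   # agent A's arbitrary labels
    code_b, same_b = make_agent(["Q99", "Q17", "Q52"])   # agent B's "inverted" labels

    # Every externally observable judgement agrees:
    assert all(same_a(x, y) == same_b(x, y)
               for x, y in itertools.combinations(STIMULI, 2))

    # But comparing tokens *across* agents settles nothing about experience:
    print(code_a["tomato"], code_b["tomato"])   # Q17 Q99 -- arbitrary either way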
Searle's Chinese room (p.162-3)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

McDermott says that in Searle's Chinese room scenario, intelligence and consciousness are implemented in an "unexpected medium" (i.e., pieces of paper, and a rule book, all manipulated by a person who knows no Chinese but holds a "conversation" using the rule book). Searle argues that the person using the rule book is like a computer following a program, and just as neither the person in the room nor the rule book has any understanding or awareness of what's going on, neither does any computer. But McDermott counters that the system consisting of the person in the room *together with* the rule book does understand and is aware of what's going on.

[I recall seeing replies by Searle to this "system argument" as he calls it. I think he takes it as obvious that pieces of paper and the rule book are not part of any larger intelligent system. The strength of that intuition arises, I think, from not taking matters of speed and storage seriously enough. (On the same basis we might say chess requires no intelligence, because it can be played perfectly by a person who just consults a rule-book that supplies the optimal move for every board position.) The system envisioned by Searle would be tens of millions of times slower than a person, so it would take at least several years to process a single sentence (see the back-of-the-envelope check at the end of these notes). This makes it less plausible that the system really *does* behave like a person fluent in Chinese. If you fix that problem by equipping the person with some mechanized ways of determining the correct responses in a more reasonable time, you can see where that leads...]

He also mentions Block's variant of having a billion people working together to simulate a brain. Block argues that surely these people wouldn't have any "collective consciousness", just their own individual consciousness. But McDermott disagrees -- they would indeed be a collective consciousness. Our intuitions to the contrary just have to yield to reason. The intuitions arise from our instincts for classifying certain kinds of cohesive organic things as sentient, but these instincts cannot be applied to entities that don't occur naturally.

At the end of the chapter he expresses some qualms about the fact that his brain might not really harbor a single conscious self, but many, in some sort of confederation (just as we could network computers, each with some consciousness). [I think this again edges over towards confusion between a self-model and "the self", which in my view is most reasonably thought of as the person as a whole!]
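[The back-of-the-envelope check promised above, for the speed point in the Chinese-room comment: taking "tens of millions of times slower" at face value (the specific numbers below are my own illustrative guesses, not figures from the book), a single sentence does indeed come out at a few years.]

    # Rough arithmetic for the Chinese-room speed point; both inputs are
    # illustrative assumptions.
    seconds_per_sentence_for_a_person = 3      # a fluent speaker, roughly
    slowdown_factor = 3e7                      # "tens of millions of times slower"

    seconds = seconds_per_sentence_for_a_person * slowdown_factor
    years = seconds / (3600 * 24 * 365)
    print(f"about {years:.1f} years per sentence")   # about 2.9 years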