Drew McDermott, Mind and Mechanism, MIT Press, 2001. Preface and chapters 1 & 2.

[Comments in square brackets are my own thoughts.]

This seems to be the only book on consciousness that has been written by a mainstream AI researcher. The emphasis is on the possibility of machine consciousness -- though as a mechanist, McDermott emphasizes that whether a sentient being is made out of organic or inorganic matter is quite irrelevant to whether it is conscious or not. Roughly speaking, McDermott's central thesis is that consciousness depends on introspective access to a self-model. This model contains information about what the agent is currently perceiving, among many other things. The representation of these perceptions ultimately needs to be framed in terms of primitives, such as some symbol for "blue", that cannot be further analyzed. If we design a robot to have this kind of introspective capability, allowing it to determine, for example, that "according to my self-model, I am currently perceiving the blue of the sky", and if we also ensure that the symbols employed are linked in the right sort of way to other cognitive contents, then the robot, too, will regard its perceptions as involving "qualia" that it is conscious of and that it cannot further analyze or describe. We will have no reason to doubt, either based on its behavior or its construction, that it is genuinely conscious.

PREFACE (p. xi-xiii)
^^^^^^^^^^^^^^^^^^^^

Apparently McDermott's reason for being in AI (which he would like to rename "cognitive informatics", p.30) is his fascination with the mind-body problem [rather than, say, the desire to build very smart mechanisms, I suppose]. His emphasis is on computational ideas, and he sets forth, right away, some common myths he intends to dispel:

- all intelligent reasoning is deductive
- we need to choose between two disparate paradigms: symbolic and neural-net approaches
- serial programs can model only conscious thought, while unconscious thought requires massive parallelism
- whether something is a computer depends on whether it is used as such
- only manipulation of formally interpretable symbols yields interesting behavior
- symbols in a computer are meaningful only relative to someone's interpretation of them
- a computer could be made to behave just like a person, but without experiencing anything (zombies)

Denial of the last point of course relates directly to McDermott's central thesis concerning consciousness.

CHAPTER 1 (p.1-27)
^^^^^^^^^^^^^^^^^^

He begins with the view that cognition is a computational phenomenon, i.e., people are computers (in a general sense), and so the problem is to explain how computers produce mind. He makes the analogy (p.3) that certain data structures in a laptop (say) that are not themselves image-like can produce images of "windows" and various contents on the screen (though of course this also involves execution of instructions). Then he gets to a very important point: that all self-attributions of a person depend on having a SELF-MODEL (p.4). This self-model needn't be correct to be useful. He also has a nice quote by Thomas Metzinger on this topic (p.4). [I find it refreshing that he gets straight to this important -- and from an AI perspective rather obvious -- point. In this respect he is in accord with authors such as Marvin Minsky, Ned Block, and Ralph Adolphs; whereas it's a point completely missed by Blackmore, for example, and many other authors we have read or read about, with resultant confusions concerning an "inner self", etc.
Unfortunately, in chapter 4 he falls at times into Blackmore-like confusions because of insufficient attention to the distinction between *use* and *mention* of "I".]

He makes the claim (p.5) that a self-model is different from models of external entities such as trees, in that "the self doesn't exist prior to being modeled", and that beliefs about the self are self-fulfilling. [Well, don't *I* exist, as much as trees do, independently of my self-model? It's true that to some extent, we can change our own properties through an act of will, e.g., a decision to become an environmental activist. We can of course delude ourselves, for example regarding ourselves as virtuous while actually being mean-spirited or dissolute; and at times an unrealistically positive self-model may also be genuinely useful to us, allowing us to function effectively under stress and hardship and in the face of our mortality. But delusional thinking isn't confined to self-modeling -- for example, many people take astrological advice columns seriously, which may or may not be beneficial for them. But in any case, it seems to me that McDermott at times conflates the popular notion of an "inner self" with that of a self-model, and thereby invites confusion. I would say that some people's self-model *includes* the notion of an "inner self" that is somehow the locus of their essential personal characteristics and of their private thoughts and feelings. But to suppose that a person's notion of his or her inner self is somehow that person's *entire* self-model leads to a conceptual muddle, I think.]

He then discusses phenomenal experience and qualia, and how these appear to support dualism. Further apparent support for dualism lies in the way we perceive ourselves as functioning -- we do things for various *reasons*, whereas objects do what they do as a result of causal forces (p.7). Why would we have the feeling that we freely choose our actions, if they are the result of causal forces (p.8)? He then argues at length against dualism, showing that a supposed nonphysical mind, playing some role in the loop from perception to action (as assumed by John Eccles), would in fact have no effect at all that wasn't already predicted on a purely physical basis (p.5-14). This then leaves puzzles such as why hearing a bell involves any experience at all, considering that one can imagine building an artificial ear that can identify a bell sound as such, without experiencing anything (p.15). A similar example is that of wine-tasting (p.17). A dualistic position then leads to epiphenomenalism, which is absurd. He also comments (p.19) on the divide between scientists' general rejection of dualism and the public's (& religious and political leaders') unquestioning acceptance of it. He says scientists might feel superior as a result, but haven't managed to close the explanatory gap either. He agrees with Chalmers (p.20) about what the hard problem is.

He refers to a distinction made by O'Brien & Opie between VEHICLE and PROCESS theories of consciousness. Vehicle theories are ones that regard the particular properties of the physical medium in which consciousness arises as crucial. Examples of vehicle theories are ones that identify consciousness with certain patterns of neural activation [Crick?], or with quantum mechanical effects (Hameroff, Penrose). Process theories regard consciousness as a feature of what the brain does, especially things it computes.
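[To make the "process" view, and the book's central thesis about introspective access to a self-model, a bit more concrete, here is a minimal toy sketch in Python. All of the names (Percept, SelfModel, etc.) and the design are my own illustrative inventions, not anything from the book; the point is just that a "second-order" setup -- percepts becoming reportable only once they are themselves recorded in a self-model, with reports bottoming out in unanalyzable primitives -- is an ordinary computational arrangement.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Percept:
        quale: str       # an unanalyzable primitive, e.g. "blue"
        source: str      # e.g. "sky region of the visual field"

    @dataclass
    class SelfModel:
        current_percepts: List[Percept] = field(default_factory=list)

        def notice(self, percept: Percept) -> None:
            # The second-order step: the percept itself becomes an object of
            # (inner) perception by being recorded in the self-model.
            self.current_percepts.append(percept)

        def report(self) -> List[str]:
            # Introspective reports are framed in terms of primitives the
            # agent cannot analyze any further.
            return [f"According to my self-model, I am currently perceiving "
                    f"{p.quale} ({p.source})" for p in self.current_percepts]

    robot = SelfModel()
    robot.notice(Percept("blue", "sky region of the visual field"))
    print("\n".join(robot.report()))

Whether something along these lines, suitably elaborated and hooked up in the right causal way to perception, action, and the rest of cognition, is all that consciousness requires is of course exactly what is at issue.]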
He argues (p.22) that there is nothing a vehicle theory could account for that couldn't also be accounted for by a process theory. He says that process theories should be able to predict any behavior that a vehicle theory predicts, thus showing the dependence on the physical medium to be a misconception. He points out that explaining behavior associated with conscious experiences is a necessity for any adequate theory, even though it may not be sufficient. He then distinguishes between first-order and second-order process theories of consciousness. First-order theories posit that some perceptions are intrinsically conscious perceptions, while second-order theories say that perceptions must themselves be objects of perception to be conscious perceptions. In other words, they carry a "label", allowing them to be reported on (p.23).

On p.24 he says that he refrains from getting deeply involved in neuroscience, because from a process perspective it is irrelevant exactly how computations are implemented -- and computers are, after all, the "Swiss army knife" of information processing. Indeed, almost all neural net studies have been performed on ordinary computers. But the neural perspective is popular because of the interesting observations about brain functioning that have emerged in recent years -- and because people have the intuition that "the brain feels ... because it is made of living, feeling parts" (p.24). He also sets aside philosophical notions such as functionalism (p.25), because this involves an inappropriate (and unsuccessful) attempt at theoretical clarity. Roughly, a functional theory is one that frames its explanations in terms of causal interactions of mental states (regarded as physically based) such as beliefs, desires, or pain. [Hmm, does this imply that Sloman is a functionalist? I'm not sure.] But this isn't very clear, and trying to make it perfectly clear is like trying to make the concept of "life" perfectly clear. We don't need to decide, e.g., whether viruses or prions are alive -- what matters is whether we know how they *work*. The same should be the goal of a theory of consciousness.

CHAPTER 2: AI (p.29-92)
^^^^^^^^^^^^^^^^^^^^^^^

This is a selective survey of some basic AI techniques, with philosophical asides. The three main areas of focus are game playing using game tree search, computer vision/robotics, and NLP, with several pages devoted to mathematical theorem proving as well. His conclusions are about "the architecture of the mind", where he argues for the "MIT view" -- that the mind is a collection of a large number of complex "specialist" modules, whose interactions somehow give an illusion of unity. He denies (provisionally, at least) that there is a language of mind that provides a unifying mental medium for thought. [This negative view of mental knowledge representation and reasoning is currently quite common and, in my view, disastrous for AI; it has handicapped progress towards human-level AI for over 20 years. Amazingly, John McCarthy, co-founder of AI (along with Marvin Minsky, Allen Newell, and Herbert Simon) and most famous exponent of the use of logical knowledge representations in AI, is mentioned neither here nor elsewhere in the book!]

Game playing (p.30-46)
~~~~~~~~~~~~~~~~~~~~~~

He discusses computer chess and standard game-tree search, pointing out that successful programs like Deep Blue depend on massive (but not exhaustive) search.
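[To make "massive (but not exhaustive) search" concrete, here is a minimal sketch of depth-limited minimax with alpha-beta pruning, the textbook core of game-tree search. The Game interface (moves/apply/evaluate) is my own abstraction for illustration; Deep Blue's actual implementation adds special-purpose hardware, selective search extensions, and an elaborate hand-tuned evaluation function.

    import math

    def alphabeta(game, state, depth, alpha=-math.inf, beta=math.inf,
                  maximizing=True):
        """Minimax value of `state`, searched to `depth` plies with pruning."""
        moves = game.moves(state)
        if depth == 0 or not moves:
            return game.evaluate(state)      # static evaluation at the search frontier
        if maximizing:
            value = -math.inf
            for m in moves:
                value = max(value, alphabeta(game, game.apply(state, m),
                                             depth - 1, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:            # prune: the opponent will avoid this line
                    break
            return value
        else:
            value = math.inf
            for m in moves:
                value = min(value, alphabeta(game, game.apply(state, m),
                                             depth - 1, alpha, beta, True))
                beta = min(beta, value)
                if beta <= alpha:
                    break
            return value

Note that the evaluation at the frontier is just a number; nothing in this skeleton reasons from general principles about classes of positions.]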
Such programs don't reason based on general principles [in my opinion, this does not show that they couldn't or shouldn't]. He expresses skepticism about studies of human chess playing that seem to indicate that expert players search in a much more focused manner than computer programs do, because for all we know the expert's 50 billion neurons may be doing a lot of simple-minded stuff. [However, books on chess contain lots of advice of the sort, "in such-and-such a situation, you should do so-and-so, because of this-and-that". Typically, the situations referred to are NOT particular positions, but CLASSES of positions with some crucial things in common; and the justification is not a subtree of moves and replies based on *particular* board positions, but rather a tactical and strategic analysis that again applies to whole classes of board positions and the positions they might lead to. In other words, experts seem to think in terms of PREDICATES (applicable to board configurations) and ACTIONS that might subsume multiple moves and achieve certain GOALS -- which is not unlike our general way of thinking about the world.]

He next discusses the neural-net-based backgammon program TD-Gammon, and the claim by some that the neural-net paradigm is superior to "good old-fashioned AI" (GOFAI). He lampoons the supposed distinction between symbolic and nonsymbolic (or subsymbolic) computation (p.42), pointing out that any actual, supposedly "symbolic" computer program manipulates lots of internal symbols that have no well-defined meaning in relation to the application domain; conversely, he claims that neural nets can be viewed as manipulating externally meaningful symbols -- e.g., certain states of a frog's visual system (when a fly is spotted) that launch the frog's motor system into action can be interpreted as expressing the symbolic notion "attack in direction D". As another example, one that involves remembering a fact, he argues that an animal that has momentarily seen a predator which then vanished behind a bush will for a time remember the symbolic fact that the predator is behind the bush, and this is true regardless of the neural encoding of the information. [I think that's a bit of a stretch; the notion of having an internal symbolic representation of the current situation seems to require more than just having a subsystem whose current state correlates reliably with the current type of situation. This is again a point where Stan Rosenschein's work from the 80's is relevant: Rosenschein made a clear distinction between being able to DESCRIBE the relationship between the state of a machine and its environment in a logical notation, and the machine itself HAVING such a logical description within it. However, McDermott's first point, that actual "GOFAI" programs do lots of uninterpreted processing of "symbols", seems sufficient to demythologize neural nets.]

On p.46 he says "Computers don't deduce conclusions about things; they perform computations about them... If symbols denote anything, it's because they are connected by the right kind of causal chains to the things they denote." [That, for me, is going way too far, promulgating a notion that is harmful to progress in AI.
The "meaning of meaning" is too complicated a business to discuss seriously here, but one needs at least to distinguish between (i) the "formal interpretability" of a representational system -- the ways (in general infinitely many) in which expressions CAN be brought into systematic correspondence with a domain of interpretation (these ways are called models); and (ii) the ACTUAL correspondences between certain symbols or expressions of an agent USING the symbolism, and entities or states of affairs in the world. The latter involve the causal connections McDermott talks about, but (i) is at least equally important -- it is the basis on which we can make judgements about the correctness (in all instances, or in sufficiently many instances to be useful) of the inference methods used by an agent that reasons. Sure, the agent gets by perfectly well without understanding "meaning" in the sense of (i). But we won't be able to *design* intelligent agents, nor *understand* human cognition and why it works as well as it does, without paying attention to meaning in the sense of (i), and its significance for inference.]

Vision (p.47-64)
~~~~~~~~~~~~~~~~

He gives a nice introduction to this subject, with enough detail (e.g., about optical flow) to make it real, without going beyond high school-level math. In robotics he discusses depth perception, tool manipulation, and map learning (including his fairly recent work with Tagare on locale recognition, with some mention of Sebastian Thrun's work as well). He even manages to concisely present scene interpretation based on Bayes' theorem. His goal here is really just to make clear how different the techniques used in vision/robotics are from those used in game playing (and in NLP, to follow).

Language (p.64-80)
~~~~~~~~~~~~~~~~~~

[Here is where his prejudice against internal representations really comes into the open. In my experience, only AI researchers and cognitive scientists who have never seriously tackled language understanding as a research subject hold anti-representationalist views. He talks about internal representations as something that appeared attractive "at one time", quite forgetting the fact that the only systems that ever actually understood language well enough (in some limited domain) to make INFERENCES from linguistic input were based on internal meaning representations (this includes Winograd's 1972 SHRDLU system, all of Schank's systems and those of his students in the 70's and 80's, and Allen & Ferguson's TRIPS system). Furthermore, the biggest successes among practical applications of AI -- in diagnosis, credit card transaction monitoring, mineral prospecting, equipment troubleshooting, etc. -- all depend on rule-based systems or description logics, i.e., on symbolic internal representations (with "symbolic" taken in the sense of being interpretable, at least informally, in terms of facts and actions in the domain of application)!

For some idea of what natural language understanding involves, here's a quick overview. First, if we start with spoken language, the acoustic signal needs to be mapped to a sequence of words (or a probabilistic lattice of possible word sequences). This involves analyzing the frequency spectrum, and mapping this in stages to phones, phonemes, and then words. Phones are the distinct speech sounds that occur in human languages.
Phonemes are groups of phones that are equivalent in a given language, in terms of what they contribute to the structure of a (spoken) word; e.g., in English "depict" might have two short "i" sounds, both perceived as such but actually corresponding to different phones. Without going into details, the various processing stages typically use *generative* models, involving state-to-state transitions where each state probabilistically generates an observable element. If the current state of the model is influenced only by one or two earlier states -- there is no "memory" of what happened earlier in the sequence -- we have a "Markov" model (in fact a "hidden Markov model", or HMM, assuming that the states themselves are not observable). For example, the phones computed by signal processing methods might be viewed as the observable elements generated by a Markov model whose states correspond to ("hidden") phonemes, to be inferred from the observed phones. Once a likely phoneme sequence has been inferred (using, e.g., the Viterbi algorithm, a fast dynamic programming method), a similar kind of HMM can be used to infer a likely word sequence, where the words are thought of as generating several phonemes in a row.

So let's assume we have a word sequence like "Romeo loves Juliet". How do we get to an internal representation that can be used for inference? (E.g., if we also have a general fact that if x loves y, then x wants to be near y, we should be able to infer that Romeo wants to be near Juliet; or, "Romeo loves Juliet" might itself be an inference, if all we know is that "Everyone (i.e., every person) loves Juliet" and "Romeo is a person".) Very sketchily, what we do is to *parse* the sentence, assigning it a hierarchical (tree-like) structure, and then use rules that tell us how to map that structure to a logical representation. Here's a parse tree for our sentence:

            S                Phrasal rules:
           / \                 VP --> V NP
         NP   VP               S  --> NP VP
         |    | \            Lexicon:
      Romeo   V  NP            Romeo:  NP
              |   \            Juliet: NP
           loves   Juliet      loves:  V

Here we've recognized "Romeo" as a complete noun phrase (NP), "loves" as a verb, and "Juliet" as another complete noun phrase. (We get these facts from a lexicon for English.) We've also recognized the "V NP" combination as a verb phrase (often called the *predicate* of a sentence); and finally we've recognized that the NP "Romeo" can now combine as *subject* of the sentence with the predicate VP, "loves Juliet", forming a sentence, S. In general there will be ambiguities in this recognition process, but we set this aside for now.

Next we perform "bottom-up" semantic interpretation, i.e., mapping the parse tree to a logical formula. Here we use a semantic lexicon and rules for interpreting each phrase type. The semantic lexicon tells us that the logical name (constant) corresponding to "Romeo" is (say) Romeo1, and the one for "Juliet" is Juliet1. For "loves", the semantic entry is a bit trickier: (lambda y (lambda x love(x,y))). As in Lisp, the lambdas just specify the order of argument binding; i.e., y (the outermost lambda-variable) is to be bound first when we apply the lambda expression to an argument, and x is bound after that. We'll see this in action in a moment. So first, we label the nodes NP, V, NP to which "Romeo", "loves" and "Juliet" are attached with their semantic (logical) counterparts Romeo1, (lambda y (lambda x love(x,y))), and Juliet1 respectively. We now compute the semantic value of the VP using the rule VP' = V'(NP'), where we use primes to refer to the semantic values of parse tree nodes.
The result is VP' = (lambda y (lambda x love(x,y)))(Juliet1) = (lambda x love(x,Juliet1)). Finally we apply the rule S' = VP'(NP') at the top level of the parse tree, obtaining S' = (lambda x love(x,Juliet1))(Romeo1) = love(Romeo1,Juliet1). So that's the resulting "logical" formula. As you see, that's a lot like the original English, but it has been made clear what the logical relation is ('love') and what its first and second arguments are, thanks to the bracketing and ordering of these arguments.

A slightly more complicated case is one where instead of "Romeo" in the original sentence we have "everyone" (= "every person"). Assume that the semantic lexicon supplies the value <every person> for this, a so-called *unscoped quantifier*. Then you can see that the resulting logical form for the altered parse tree will be S' = love(<every person>, Juliet1). Now, "every" is the same as the universal quantifier (as used, e.g., in mathematics or formal predicate logic -- the upside-down A), and as such should appear *outside* the sentence; for example, in number theory we might have the claim (every x (exists y (y > x))) (for every number x there is a larger one y). So we still need to move the unscoped quantifier to the "outside" of our formula. When we do this, we also introduce a variable (bound by the quantifier), and we use the 'person' part of the unscoped quantifier to *restrict* the quantification to persons: S' = (every x: person(x) love(x,Juliet1)). Note that the colon after the x indicates that the next subformula, here 'person(x)', restricts the values of x that we are considering. In other words, we're only considering *persons* x when making the claim 'love(x,Juliet1)'. In standard predicate logic notation, this is equivalent to S' = (every x [person(x) => love(x,Juliet1)]), i.e., for every x, if x is a person then x loves Juliet. (However, not all natural-language quantifiers, such as "most", can be represented in standard predicate logic, whereas with the quantifier restriction notation, we can easily say (most x: person(x) love(x,Juliet1)).)

So now we can also see how inference would work. For example, given the premises (every x [person(x) => love(x,Juliet1)]) and person(Romeo1), it's intuitively obvious that we can conclude love(Romeo1,Juliet1). The formal rule allowing this inference is "UI+MP", i.e., universal instantiation plus modus ponens. The name doesn't matter, but what does matter is that we can *justify* this rule on perfectly general principles -- the conclusion is guaranteed to be true if the premises are true. Showing this requires a theory of truth, which in turn requires denotational semantics -- the very thing McDermott rejects. Denotational semantics permits certain correspondences between symbols and entities in the domain we're considering. For example, 'Romeo1' could correspond to a particular boy, 'Juliet1' could correspond to a particular girl, 'person' could correspond to a set of persons, and 'love' could correspond to all pairs of entities where the first loves the second. If (every x [person(x) => love(x,Juliet1)]) is true, then the set of pairs denoted by 'love' must include *all* pairs where the first element of the pair is in the set denoted by 'person', and the second element is the girl denoted by 'Juliet1'. Thus, if the boy denoted by 'Romeo1' is in the set of persons, the pair consisting of that boy and the girl denoted by 'Juliet1' must be in the set denoted by 'love' -- and that's why the conclusion love(Romeo1,Juliet1) above is true!
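Here, for concreteness, is a small Python sketch of the composition and the model-theoretic check just described, with the lambda expressions rendered as Python closures and a toy model standing in for the domain. The tuple encoding of formulas and the particular model are my own illustrative choices, not notation from the book.

    # Semantic lexicon: 'loves' = (lambda y (lambda x love(x,y))), written as
    # nested Python lambdas that build a formula tuple ('love', x, y).
    ROMEO1, JULIET1 = "Romeo1", "Juliet1"
    loves = lambda y: (lambda x: ("love", x, y))

    # Bottom-up interpretation: VP' = V'(NP'), then S' = VP'(NP').
    vp = loves(JULIET1)       # plays the role of (lambda x love(x,Juliet1))
    s = vp(ROMEO1)            # love(Romeo1,Juliet1)
    print(s)                  # -> ('love', 'Romeo1', 'Juliet1')

    # Denotational semantics: a model assigns a set to 'person' and a set of
    # pairs to 'love'.  In this toy model, everyone loves Juliet.
    model = {
        "person": {ROMEO1, JULIET1},
        "love":   {(x, JULIET1) for x in {ROMEO1, JULIET1}},
    }

    def holds(formula):
        """True iff the atomic formula is satisfied in the model."""
        pred, *args = formula
        ext = model[pred]
        return args[0] in ext if len(args) == 1 else tuple(args) in ext

    # Premises: (every x: person(x) love(x,Juliet1)) and person(Romeo1).
    premise1 = all(holds(("love", x, JULIET1)) for x in model["person"])
    premise2 = holds(("person", ROMEO1))

    # UI+MP: in any model where the premises hold, the conclusion holds too.
    assert premise1 and premise2
    assert holds(("love", ROMEO1, JULIET1))   # love(Romeo1,Juliet1) is true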
The key point is that the argument will go through even if we are talking about numbers, planets, or whatever. For example, imagine that we are talking about celestial objects in our solar system, and 'person' refers to the planets, 'Juliet1' denotes the Sun, 'Romeo1' denotes the Earth, and 'love' denotes the relation of one thing orbiting around another. Then the conclusion love(Romeo1, Juliet1) still follows from the premises, but now it "means" that Earth orbits the Sun! Such inferential power is not to be dismissed lightly ...]

McDermott does competently explain some of the difficulties involved in mapping language to an internal representation, but the reasons he gives for pessimism are entirely on the wrong track. One reason he gives is that understanding a sentence may require finding an interpretation that minimizes contradictions, which is very hard. [But that's just not the way language works. For example, we tend to interpret "He saw a monkey with yellow tail feathers" as if the monkey sported a bird-like feathery tail, even though we know perfectly well that monkeys are furry, not feathery -- and even though a consistent interpretation is available in which the monkey is HOLDING the detached tail feathers of a bird. Most of the time, interpretation seems based more on familiar patterns of language and predication than on any consistency or plausibility checks.]

Another reason he gives (p.67) is that a symbolic representation is unusable if you can't guarantee that every problem that can be expressed in it can be efficiently solved in it (and of course even Boolean satisfiability is intractable unless P=NP...). [This is a stunningly perverse claim, one that has hamstrung much AI research in the last 20 years. Obviously, PEOPLE can comprehend (hence presumably represent to themselves, somehow) problems that they find extremely difficult or even impossible to solve -- Fermat's last theorem, Goldbach's conjecture, the continuum hypothesis, P=NP?, etc. So why should AI systems be prevented from even being able to REPRESENT such problems? The arguments that are made (e.g., by Brachman and Levesque) to support such a position are very much like saying: beware of using programming languages that allow recursion or looping, because then you can't guarantee fast termination of all problem-solving algorithms that you can implement in that language!]

On p.68, he argues that (1) people don't do a significant amount of deduction [which I believe is a serious mistake, as I argue in my reply to his "Critique of pure reason", in the same issue of Computational Intelligence as that paper]; (2) computers do a lot of nondeductive reasoning such as planning [true]; (3) both types of reasoning often lead to mistaken conclusions [true, in part because the premises FROM which we, or our computers, reason are often unreliable, being based on unreliable sources]; THEREFORE "computers don't deduce, they calculate". [This is a complete non sequitur.]

On p.70 he then says the way out may be to use the kinds of domain-specific knowledge extraction techniques that have been the focus of much research in the last 15-20 years, such as extracting a date, location, perpetrator, victims, etc., from a terrorist incident report. He also points to the statistical machine translation (MT) techniques that have burgeoned in recent years. He recommends that we continue in that direction, and that we'll gradually figure out what *really* happens in language understanding.
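[For concreteness, here is a deliberately crude sketch of the slot-filling style of information extraction just mentioned. The sample report, the slot names, and the regular expressions are my own toy inventions; real systems use trained extractors and far richer patterns, but the flavor -- surface patterns filling a fixed template, with no general meaning representation -- is the same.

    import re

    # Toy template for an incident report: fill each slot from simple
    # surface patterns, ignoring everything else in the text.
    PATTERNS = {
        "date":        r"\bon (\w+ \d{1,2}, \d{4})",
        "location":    r"\bin ([A-Z][a-z]+(?: [A-Z][a-z]+)*)",
        "perpetrator": r"\bby (the [A-Za-z ]+?)(?:,|\.| on )",
    }

    def extract(report):
        """Return whatever slots the surface patterns happen to match."""
        return {slot: (m.group(1) if (m := re.search(rx, report)) else None)
                for slot, rx in PATTERNS.items()}

    sample = ("A bomb was detonated on March 3, 1999, in Springfield "
              "by the Shining Example Front.")
    print(extract(sample))
    # {'date': 'March 3, 1999', 'location': 'Springfield',
    #  'perpetrator': 'the Shining Example Front'}

The template encodes no general knowledge of events, agents, or causes -- one reason, to my mind, that this is much weaker than a genuine meaning representation.]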
[What we'll find, I expect, is that we *really* construct subtle internal representations, inferentially disambiguate and "canonicalize" them, fill in missing information, and make inferences, where all of this depends on having TENS OR HUNDREDS OF MILLIONS of stored items of symbolic knowledge, in addition to imagistic, taxonomic, and a few other kinds of knowledge. It's depressing that so little research has been done on the requisite representations and inference methods and on the "knowledge acquisition bottleneck" in the last 20 years, thanks to negative attitudes like McDermott's, and to the AI founders' lack of interest in language.]

He continues with a good discussion of speech recognition methods, and then comes back to information extraction. He again suggests that human language understanding might be something like that, trying to make this plausible by suggesting that there's a limit to how novel a sentence can be, and still be understood by people the first time around. [Well sure, if we have no knowledge about the domain under discussion, we won't understand.]

Mathematical reasoning (p.80-86)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

First, he counters impossibility arguments about machine intelligence based on Goedel's theorem (the incompleteness of any consistent logical system that includes arithmetic). The main counterargument is that an intelligent system that can also cope with real-world situations would do a lot of things that aren't theorem-proving (and can even believe contradictions without believing everything), so Goedel's theorem simply doesn't apply. Putting it more abstractly (p.83), he points to the "formalism fallacy". This is the faulty reasoning (of those employing the Goedel argument) from the fact that the *operation* of a computer can be formalized in logic to the conclusion that the *outputs* of a computer are formal logical statements in some logic. The way he puts it is that while *some* algorithms are algorithms *for* domains where the desired results are formally interpretable statements (e.g., that the greatest common divisor of x and y is z), many other algorithms aren't algorithms *for* anything, or at least not for anything that can be viewed as logical statements. For example, a chess-playing program tries to make good moves, rather than outputting formally interpretable claims. Also, even a program that outputs formal claims may sometimes be wrong, or be lying, for whatever reason.

He goes over William McCune's computer-generated (though human-guided) proof that Robbins algebras have a right identity element (and are thus Boolean algebras). This is very clearly and simply presented -- in itself making it worth reading the chapter! [It's interesting that he isn't struck by the disconnect between this AI success -- which of course depends entirely on formal reasoning in a formal representation -- and his carping about the irrelevance of formal logical representations and reasoning to AI.]

Architecture of mind (p.86-91)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

He remarks correctly that in AI "early hopes that a small number of good ideas would explain all of thinking have not panned out. ... We're not going to conquer the heartland before scaling the coastal ranges." [However, I don't agree that the essential heights to be scaled necessarily include speech recognition, motor activity, spatial localization, and map learning! I think an intelligent system with no more than keyboard input, plus perhaps a GUI and input from a video camera, is entirely conceivable.]
Commenting on efforts to build large knowledge bases over expressive representations, such as Cyc, he reiterates his skepticism about any general representational and reasoning techniques. He thinks we need to just keep compiling an ever-larger collection of specialized methods. He declares that the intuition people have that we have a comprehensive internal knowledge representation (in particular, for the content of language) is an "illusion" based on introspection -- we are misled by our self-model.

[I would counter that we have plentiful *external* manifestations of such an internal representation, namely human languages themselves! I cannot comprehend why someone who takes a serious look at the structure of human languages and the way they convey meaning could *fail* to be struck by the following expressive devices shared by ALL human languages:

- predication, as in "Plato is a man", or "The cat is on the mat";
- connectives "and", "or", "not";
- generalized quantifiers ("all humans", "most Western democracies");
- equality ("Lincoln's assassin was John Wilkes Booth");
- predicate modification ("very smart", "dances divinely");
- sentence modification ("Perhaps there is life on Europa");
- predicate reification ("happiness", "humankind");
- sentence reification ("The fact that Brutus stabbed Caesar");
- event reference and modification ("Molly barked last night. This went on for an hour, and woke all the neighbors.")

(Two or three further items could be added; a few illustrative renderings in the restricted-quantifier notation used earlier appear at the end of these notes.) Surely this has some significance -- and the most straightforward assumption we could make is that these features are a reflection of our internal representational capabilities -- which suggests that our "mentalese" is a kind of enriched logic-like representation, the very hypothesis McDermott scoffs at.]

Concerning creativity (such as Mozart's or Richard Feynman's), he discounts the "unconscious random search" idea put forward by Poincare and others, pointing out that in a large search space, random search can easily go on for eons without ever hitting on anything useful. Instead, the search must be guided by a lot of effective specialized techniques. However, we may never build AI systems of that order of creativity, because genius seems to be a rare chance occurrence, and there may be no mechanisms *other* than chance to duplicate the feat. [I am much more sanguine about the possibility of genius-level AI, and beyond. If we were to build a machine that is "only" as smart as, say, McDermott, and then speed it up a million-fold (remember Moore's law), I'd wager it would appear to have the capabilities of a human genius, or more. Note that electronic and photonic switches are already a billion times faster than neurons.]
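[The illustrative renderings promised above: here, very roughly, is how a few of the listed devices might come out in the restricted-quantifier notation introduced in the language section. These are my own quick sketches with invented predicate names, not McDermott's, and not a worked-out proposal:

    "Most Western democracies have elected parliaments":
        (most x: western-democracy(x) has-elected-parliament(x))

    "Perhaps there is life on Europa":
        perhaps((exists x: life-form(x) on(x, Europa1)))

    "Molly barked last night. This went on for an hour.":
        (exists e: barking-by(e, Molly1)
            (during(e, last-night1) and duration(e, one-hour)))

The point is just that each of the shared devices has a natural logic-like counterpart.]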