Drew McDermott, Mind and Mechanism, MIT Press, 2001.  Chapter 5.

[Comments in square brackets are my own thoughts.]

CHAPTER 5: SYMBOLS AND SEMANTICS (p.167-214)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The main goal here is to make plausible that it is possible to
objectively define "computation", "symbols in the head", and the
relation of some of these symbols to reality; and hence to show that
the question of whether or not an agent possesses a self-model (and,
under suitable further assumptions, is conscious) is also something to
be ultimately determined objectively.

Themes:

Computation is not simply a matter of perspective on a system, contra
philosophers like Fodor, Putnam, and Searle.  McDermott argues that a
system S1 functions (at least in part) as a computational system for
another system S2 if the following sort of schematic view can be taken
of their relationship (my diagram):

     ________S1     S2________
    |        |\     /         |
    |        | \___/          |
    |        |P1_|_           |
    |        | /   \          |
    |________|/     \_________|

(He intends to apply this schema to the part of the visual & motor
system enabling continuous object fixation.  He wants to slice that
subsystem into a computation part P1 and an execution part S2.)

(The shapes here don't have any significance -- just a visualization
device.)

System S1 should have a part P1 whose states are determined by S1, and
which causally influences S2.  Furthermore, it should be possible in
principle that all of S1 *except* the part P1 could be replaced by a
system that causes the same behavior of P1 as when all of S1 is in
place, so that the causal effect exerted by P1 on S2 is unchanged.

In this way McDermott seeks to avoid the claims of the above
philosophers that, e.g., the vestibulo-ocular reflex (VOR, which
enables us to keep our eyes on an object even while our head is
turning/moving) is no more computational than our digestive tract.
The point is that a certain part of our visual processing pathway
would correspond to S1, our motor control system controlling the
muscles of the eye corresponds to S2, and we can regard the part of
the visual pathway that determines what eye movements are required to
keep an object fixated as corresponding to P1.  By separating the VOR
as a whole into a causal part (determined by perceptual inputs) and an
affected part (the motor system for the eyes), we can sensibly talk
about one providing a computational input to the other (assuming that
we can also imagine replacing the perceptual system, prior to the
calculation of motor signals required for fixation, by some other
device, e.g., an artificial eye).  [At least, that's my imperfect
understanding of what McDermott is saying!]

But while this may be a plausible schema, the question remains on what
grounds we might consider a system component like P1 a *computational*
component.  He now tries to address this ...

Computers (for a particular function f), p.170:
~~~~~~~~~

We assume a fixed way C_I of decoding certain states of the entity in
question (the putative computer) as inputs in the domain of f, and a
fixed way C_O of decoding certain later states (when we detect that
the output is "available") as outputs in the range of f.  'A' is the
output availability signal, which is some (observable?) condition of
the entity.  The claim is that the entity computes f iff the following
holds: whenever a state S is decodable as some x in domain(f) (i.e.,
C_I(S) = x), then the entity is caused subsequently to enter a state
S' where the availability signal is on, and that state decodes as
f(x), i.e., C_O(S') = f(x).
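To make the definition concrete, here is a minimal Python sketch of the
condition as I read it (the formalization and names are my own, not
McDermott's; note too that it only checks recorded runs of states,
whereas McDermott's "is caused to" is counterfactual):

    def computes(runs, C_I, C_O, is_available, f):
        """Check the McDermott-style condition: whenever a state decodes
        (via C_I) to some x in domain(f), some later state in the same
        run has the availability signal on and decodes (via C_O) to f(x).
        Here f is a dict, so domain(f) = f.keys(); C_I and C_O return
        None on states that don't decode to anything."""
        for run in runs:                      # each run: a sequence of states
            for i, s in enumerate(run):
                x = C_I(s)
                if x is None or x not in f:
                    continue
                if not any(is_available(s2) and C_O(s2) == f[x]
                           for s2 in run[i + 1:]):
                    return False
        return True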
[Fallacy: one can hide computation in the decodings, as we'll show.]

First we'll note that under this definition, just about any object is
a computer for some trivial function.  We'll then try to see whether
the definition can identify computers for nontrivial functions.  For
this, keep in mind that a "function" f (of one variable -- we could
generalize this) is any subset of the set of pairs D x R
(i.e., {<d,r> | d in D, r in R}), where

  - D and R are sets (the domain and range of f), where if D is
    nonempty, so is R;
  - for every d in D, there is a *unique* r in R such that <d,r> is
    in f; and
  - nothing else is in f.

       |
       |                                <5,25>    E.g., f(d) = d^2
     20|                                          (d = 1, 2, 3, ...)
       |
       |                        <4,16>
     10|
       |                <3,9>
       |        <2,4>
      0|___<1,1>______________________________________
        0       1       2       3       4       5

We now note, and try to repair, a series of flaws in McDermott's
definition:

1. McDermott doesn't say D should be nonempty -- but then it follows
   that every object computes a function, namely the empty one.
   That's trivial, so let's focus on functions for which D is nonempty
   (and reasonably large).

2. McDermott also doesn't rule out *constant* functions -- but then
   every object for which we can somehow distinguish different inputs
   at different times (e.g., the temperature of the object) is a
   computer for every constant function on those input values (proof
   sketch: reading the temperature d provides an input value; assume
   that the output availability state always holds, so we can "read"
   the output at once; let C_O(S) = c for some constant c, and all
   states S; so f(d) = c for all d).  So let's assume that f should be
   nonconstant, in order for an object to be a computer in a
   nontrivial sense.

3. McDermott doesn't rule out having distinct input and output codes.
   But then any object for which we can somehow distinguish different
   inputs at different times (e.g., as above, the temperature of the
   object) is a computer for any function f we care to define on
   those inputs: assume that the output is always available; whenever
   the measured input is C_I(object) = d, let C_O(object) = f(d).
   Voila!  The object computes f(d) (by leaving the "work" to the
   output decoder).  So let's try to make this sort of "cheating"
   (i.e., pushing the computation of f into the decoder) harder by
   requiring a single mapping from states of the object to values
   (e.g., numerical ones) to serve as both input and output decoding.

4. Unfortunately, this added constraint still lets us define objects
   as computers of any function f, when intuitively they don't
   actually compute those functions.  A simple example is provided by
   a clock; we can claim the clock computes squares of natural numbers
   from 1 to 12, based on the following mapping from states of the
   clock to numbers (where we assume that the minute hand advances in
   1-minute jumps):

     - when the minute hand is BETWEEN the hour markers 12 and 1 on
       the clock dial, the decoded value is 1;
     - for i = 2, 3, ..., 12, when the minute hand is BETWEEN markers
       i-1 and i on the dial, the decoded value is i;
     - when the minute hand is ON marker i (i = 1, 2, ..., 12), the
       decoded value is i^2.

   We also assume that when the minute hand is between the hour
   markers on the dial, the clock is in input mode, and when the hand
   is on 1, 2, ..., or 12, it is in "output available" mode.  Clearly
   this meets McDermott's definition of a computer for the function
   f = {<1,1>, <2,4>, ..., <12,144>}.  Moreover, the choice of f here
   was arbitrary, so the clock is a computer for ANY desired function
   on domain {1, ..., 12}.
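To see the trick in flaw 4 explicitly, here is a small Python sketch of
the clock decoding (the encoding of clock states as minute positions
0..59 is my own; any f on {1,...,12} can be plugged in):

    def make_clock_decoder(f):
        """Single decoding, serving as both C_I and C_O, that makes an
        ordinary clock a 'computer' for f: between hour markers the
        state decodes to the input i; exactly on a marker (the 'output
        available' condition) it decodes to f(i)."""
        def is_available(minute):          # hand exactly on an hour marker
            return minute % 5 == 0
        def decode(minute):
            if is_available(minute):
                marker = 12 if minute == 0 else minute // 5
                return f[marker]           # "output" reading: f at that marker
            return minute // 5 + 1         # "input" reading: the sector the hand is in
        return decode, is_available

    # E.g., "computing" squares -- but any function on {1,...,12} works:
    squares = {i: i * i for i in range(1, 13)}
    decode, available = make_clock_decoder(squares)

Whenever the hand sits in sector i (input reading i), it is caused to
reach marker i a few minutes later, at which point the availability
condition holds and the very same decoding reads off f(i) -- yet the
clock itself has done nothing resembling squaring.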
We can also easily envisage simple devices that, on our intuitive
understanding, don't compute arbitrary functions, yet can be construed
as computers in McDermott's sense for functions with infinite domains
-- even functions that are not computable in Turing's sense (and hence
not computable in the standard sense on any digital computer, assuming
the truth of the Church-Turing thesis)!  Here is an example, similar
to the clock example:

We rig a bathroom scale so that a small "output available" light comes
on 10 seconds after any weight between, say, 10 and 100 lbs is placed
on it.  (The light goes out when the weight is removed.)  For any
function f on domain [10,100] (allowing non-integer weights, of
course), we define the mapping from the state of the scale (and light)
to the number it encodes as follows:

  - when a weight is on the scale and the light is "off", the encoded
    value is the weight indicated by the scale;
  - when a weight is on the scale and the light is "on", the encoded
    value is f applied to the weight indicated on the scale.

Imagine placing a series of arbitrary weights on the scale; then the
weights indicated might look like this, as a function of time (where
the asterisks indicate when the light is on):

          ____
         |    |      ____                  ____
         |    |     |    |       ____     |    |       ____
     ____|  **|_____|  **|______|  **|____|  **|______|  **|__

Of course the weight doesn't change while we're waiting for the light
to come on, so the value "decoded" when the light comes on will be
whatever we assume is the value of f applied to the weight indicated
by the scale.  So the scale plus light is a computer in McDermott's
sense for ANY f on domain [10,100].  There are precision limitations
in practice, but we can make the precision as high as we want (for any
practical purpose) by careful engineering of the scale.  Note the lack
of any constraint on the Turing-computability of f!

We surely don't want to allow such examples of "computers for function
f".  But how do we get around the fact that encodings can "hide"
functions?  You might want to say, "let's disallow encoding/decoding
schemes that require computation!", but that gets us into an infinite
regress: how do you tell whether a decoder is itself a computer for
some particular function or not?  A better answer is that we should
disallow encoding/decoding functions whose computational complexity
(in the formal sense used in CS complexity theory) is as great as, or
greater than, that of the function supposedly being computed by the
object in question.  A student (Vincent Yu) pointed out to me that
Scott Aaronson makes an argument of this type in an article about the
relevance of computational complexity to philosophy -- see section 6
of http://www.scottaaronson.com/papers/#surveys (there the question is
addressed whether a waterfall can be viewed as computing chess
moves!).

Mind you, we can't really get around the fact that every object is a
computer for some (possibly very complex) function, since objects
respond in complex ways to the conditions to which they are subjected
-- and we can view the responses as functions of those conditions.
For example, if you subject a rock to pressure, say by putting it in a
vise, it will distort in subtle, complex ways depending on its shape
and internal structure.  It would require a powerful computer
simulation to predict that distortion accurately -- but the rock does
this complex "computation" almost instantaneously.  But to follow
through on McDermott's attempt, we would want at least to make the
decodings of low complexity.
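To make the "hiding" explicit, here is a small Python sketch of the
rigged-scale decoding from above (the ScaleState encoding and names are
my own).  All the work of computing f lives in the decoder -- which is
exactly what a low-complexity requirement on decodings is meant to rule
out:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ScaleState:
        weight: float        # indicated weight, 10..100 lbs
        light_on: bool       # comes on 10 s after the weight is placed

    def make_scale_decoder(f):
        """Single decoding serving as both C_I and C_O: it reads the
        weight when the light is off, and 'reads off' f(weight) when the
        light is on.  The scale itself never transforms the weight."""
        def decode(s: ScaleState):
            return f(s.weight) if s.light_on else s.weight
        return decode

    # Any f at all can be plugged in, including (in principle) functions
    # no Turing machine computes; as a computable stand-in, e.g.,
    # f = lambda w: w ** 0.5 makes the scale a "square-root computer"
    # without the scale doing anything.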
Requiring low-complexity decodings would prevent the rock, or the
bathroom scale, from being viewed as computers for arbitrary
functions.  Could we then say that the part of the visual and nervous
system that determines motor signals in the vestibulo-ocular reflex is
a computer for those signals?  I fear not, because the result is in
fact determined very quickly -- indeed, it seems, in sublinear time
(via parallel processing).  So a digital "readout" device that hides
the equivalent of the VOR computation could be used to make a wide
variety of innocuous objects appear to compute the VOR signals...

I do think, by the way, that it's possible to define objectively
whether or not an object is a UNIVERSAL computer (in Turing's sense),
but that is another matter.

Causal influence of a computational system on another system
~~~~~~~~~~~~~~~~
(p.175-180).  Notion of "largely computational".  McDermott takes
causality for granted, i.e., it's an objective fact whether some state
S of a system causes some state S' of the same (or another) system.
[Hmm...]

Symbols:
~~~~~~~~

One reason why McDermott wants to formalize the concept of a "symbol"
is that he wants to examine claims that computers use symbols while
neural nets don't.  Another reason is that once we understand symbols
in a machine (or person), we are in a better position to discuss the
*meaning* those symbols may (or may not) acquire, in virtue of an
agent's connection to the world.  [Symbols, or "signs", and their
meaning are studied in the area of *semiotics*, associated with the
name of Saussure.]

We have to distinguish between symbol *types* and *tokens*.  For
example, the English alphabet consists of 26 symbol types (letters),
but the number of tokens in books, etc., is huge.  So a token is a
physical thing; it seems to be a property of some physical system or
substrate, holding for some period of time.  So McDermott tries to
define symbol tokens as causally connected sequences of such physical
properties.  He begins with the notion of "symbol sites".

Symbol sites and symbols: symbol sites are sets of mutually exclusive
states of some system at some particular time.

[NB: To make sense of this, as in the case of Aaron Sloman, we have to
understand "states" here as *sets* of basic states of the system (at a
particular time), where basic states are understood as in computer
science (e.g., in finite-state machine theory); i.e., a system's
behavior at a given time is completely determined by its (basic) state
at that time along with external inputs (influences).  If we took
McDermott's "states" to be basic states (at particular times), then a
system could only have one "symbol site" at a given time, whereas it's
clear from his further discussion that he wants to allow for multiple
symbol sites in a system at a given time.  BTW, the "at a particular
time" part of the definition of symbol site seems pointless and
confusing: if part of the specification of a symbol site is a
particular time, then symbol sites only exist for an instant, and each
symbol site immediately gives way to a brand new one.  What's the
point of *that*?  Don't we want symbol sites to be something we can
specify once and for all, and that can contain different symbols at
different times??]

Since the states are mutually exclusive, the system can be in only
*one* of the states of a given symbol site at any specified time; so
the remaining states are just hypothetical alternatives -- the system
*could* be in one of those other states [whatever that means!], but it
isn't.
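To pin down the reading proposed in the NB above, here is a minimal
Python sketch (entirely my own notation; McDermott gives no such
formalization): a site is a collection of mutually exclusive "states",
each of which is a *set* of basic states, so mutual exclusivity becomes
pairwise disjointness.

    from itertools import combinations
    from typing import FrozenSet, Iterable

    BasicState = str   # a complete system state, in the finite-state-machine sense

    class SymbolSite:
        """A symbol site, on the above reading: mutually exclusive
        alternatives, each a set of basic states."""
        def __init__(self, alternatives: Iterable[FrozenSet[BasicState]]):
            self.alternatives = list(alternatives)
            for a, b in combinations(self.alternatives, 2):
                assert not (a & b), "alternatives at a site must be mutually exclusive"

        def realized(self, current: BasicState):
            """The one alternative (if any) the system actually occupies,
            i.e., the symbol actually present at this site; the rest
            remain merely hypothetical."""
            for alt in self.alternatives:
                if current in alt:
                    return alt
            return None

On this reading, nothing stops a system from having many symbol sites
at the same time, which seems to be what McDermott needs.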
Any of the states comprising a symbol site can be regarded as encoding
a symbol "at that site" (though only one symbol is ever actually
realized at a given symbol site).  "Symbol tokens" correspond to a
causal chain of states, each state being decodable as a symbol (the
*same* symbol??!!), each being caused by its predecessor, and each
being a member of a symbol site (thus, by definition, at a particular
time).  He tries to formalize this causal-chain relation as a
"precursor relation" among symbol sites, but since symbol sites have
hypothetical states as members, it's not at all clear what this means.
I think what he is really trying to get at is that besides the actual
causal chain that connects temporal instances of a symbol, there are
also other hypothetical chains that could have occurred over the same
time span, instantiating a *different* symbol.

[But I find the details here incomprehensible.  For instance, at the
bottom of p.184 he talks about "a token at a site" -- when he has just
defined a token as a *set* of sites (ordered in time).  He also talks
about "the occupier of a site" even though his definition of a site
says nothing of occupiers -- a site is just a set of alternative
states at a particular time.]

Analog symbol sites
~~~~~~~~~~~~~~~~~~~

He makes the reasonable point that analog symbolism can be
approximated by discrete symbolism.  If some continuous quantity in
some analog device required arbitrarily high precision in order for
the correct computational result to come out, then the tiniest
disturbances would throw off the computations of the device, and such
a "computer" would be biologically and practically useless.

Natural causation in neural nets "vs" rule-governed behavior
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(criticism of Churchland).  Algorithms, like neural nets, may not be
interpretable in any clear way (Love-match program: you can't formally
specify the criterion according to which some person y is a love-match
for some person x).  Neural nets are just as "symbolic" as digital
computers.

Syntax & semantics
~~~~~~~~~~~~~~~~~~

Structure, compositional semantics; e.g., OJ, JO might each denote one
person, or might each denote 2 persons (Jack & Oprah), one positioned
to the left of the other (or in love with the other, etc.).  The
latter requires a *compositional* semantics.  (Each piece means
something, and the arrangement of pieces adds further meaning,
yielding an overall meaning for the entire symbolic expression.)

How symbols get meanings
~~~~~~~~~~~~~~~~~~~~~~~~

He goes for a theory according to which the symbols that do the
representing are "appropriately linked", under a compositional
semantics, to objects and relationships in the world via the agent's
perceptual system.  [I disagree with the last part: it's a matter of
there existing a denotational semantics under which the
representations *provide information* about the world -- they are a
"good fit".]

How we could find out what symbolic representations mean, empirically.
"Harmonious" semantics.

Qualia -- primitives in the above symbol-world linkage
~~~~~~

Falsehoods -- e.g., Santa Claus ("almost" mappable to reality).
~~~~~~~~~~

He again denies the relevance of what symbols *could* mean (6-day war
vs weather report); [an unfortunate position!]

Appearance and reality
~~~~~~~~~~~~~~~~~~~~~~

Even introspections can be mistaken -- they too are physical
processes, and our conclusions about what they yield may not be fully
"harmonious" with what actually transpired.
He speculates about how we might reject solipsism in the future, based on empirical accounts of how subjective experience works.