Drew McDermott, Mind and Mechanism, MIT Press, 2001. Chapter 6.

[Comments in square brackets are my own thoughts.]

CHAPTER 6: PHILOSOPHICAL CONSEQUENCES (p.215-241)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

McDermott begins with a summary of the arguments in the book. This is worth repeating here:

"The brain appears to be a biological computer. Many mental skills can be analyzed in terms of computation... The most likely candidate for a noncomputational property is phenomenal consciousness, but careful thought shows that in fact we can expect consciousness to be a necessary component of a computational intelligence, not an inexplicable accident. In particular it seems as if any computational entity that dealt with a physical environment that included its own body would have to have a model of itself as a perceiver and decision maker; and in that model the entity and events involving it would have to be labeled as having the features of phenomenal consciousness. An entity with a model of this kind would exhibit what I called virtual consciousness; and then all we have to do is appeal to parsimony to identify virtual consciousness with the real thing."

He then asks whether we should worry that viewing ourselves as biological computers would devalue human life; and whether robots, if they do reach a sufficiently high level of intelligence, would be a threat to us, for lack of a conscience.

The souls of robots (217-224)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Addressing the latter question first, he suggests that it is hard to imagine robots, however smart, being genuinely loving, moral, aesthetically sensitive, capable of being amused, or spiritual. The reason, in his opinion, is that given any robot with these putative properties, we can imagine tweaking a few parameters in its programs and radically altering its character -- e.g., turn down a parameter for valuing human life, and it would suddenly condone homicide; change a parameter in the perception of what is ridiculous, and it suddenly finds the fact that people have legs [my example] hilarious. Tweak a parameter in its aesthetic evaluations, and it switches from high praise for Andy Warhol to disgust with all post-Impressionist art; etc.

On the other hand, he mentions that there seems to be a place in the brain whose stimulation causes a person to giggle for no apparent reason. So in certain limited ways people's parameters can be tweaked as well. [And, as many have noted, brain stimulation, lesions, or drugs can quite radically alter human character too.]

[I think he's a bit too hung up here on the notion of a robot as having a transparently hand-programmed mentality. I think the *really* smart robots of the future will have acquired not only much of their knowledge but also much of their "judgement" (on matters like ethics, aesthetics, and humor) through experience, not through hand-programming; so it won't be at all apparent what parameters to "tune" in order to change one judgement or another.]

He also argues that it's hard to imagine what it would mean for a robot to know that it *should* do one thing, while it does another, and then feels guilty. After all, the robot is simply weighing alternatives and making decisions accordingly.
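[To make the "tweaking a few parameters" / "weighing alternatives" picture concrete, here is a purely illustrative toy sketch of my own (in Python) -- nothing from the book; the parameter names, options, and numbers are all hypothetical:

    # Toy sketch: a robot whose "character" is a handful of numeric weights
    # used to score its alternatives. Purely illustrative; the names and
    # numbers are hypothetical, not anything from McDermott's book.

    CHARACTER = {
        "value_of_human_life": 10.0,  # weight on avoiding harm to people
        "self_interest": 1.0,         # weight on the robot's own payoff
    }

    def score(option, character):
        """Weighted sum of an option's features under the current character."""
        return (character["self_interest"] * option["own_payoff"]
                - character["value_of_human_life"] * option["harm_to_people"])

    def choose(options, character):
        """Weigh the alternatives and pick the highest-scoring one."""
        return max(options, key=lambda o: score(o, character))

    options = [
        {"name": "help",    "own_payoff": 1.0, "harm_to_people": 0.0},
        {"name": "exploit", "own_payoff": 5.0, "harm_to_people": 1.0},
    ]

    print(choose(options, CHARACTER)["name"])    # -> help

    # McDermott's worry: turn down a single parameter and the "character" flips.
    CHARACTER["value_of_human_life"] = 0.1
    print(choose(options, CHARACTER)["name"])    # -> exploit

On such a toy picture, "should" would correspond to the option favored by the normative weights, and guilt to noticing that that option lost out in the actual weighing -- which connects to the comment that follows.]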
[But I don't find this particularly problematic. I think we often find different *kinds* of criteria in conflict when we make decisions. The notion that we *should* do something is typically derived either from social norms -- we have an obligation towards other people to act in a certain way -- or from deliberation about our own long-range well-being. The countervailing impulse is typically something more basic and immediate -- e.g., enjoying a tiramisu or a movie, rather than maintaining a healthy diet, working on a difficult task, or sending the money to starving children somewhere; or, for example, staying at a safe distance rather than trying to help a woman under attack by a ruffian on the street. Short-term gratification or self-interest often wins over the sense of social obligation or an enlightened life-strategy, and guilt derives from knowing that the social obligation or long-range considerations have lost out in one's decision-making. But this doesn't mean that we didn't *prefer* the decision we actually made (though our desire for a flattering self-image may distort this reality)! It just means that we were more *conscious* of the normative principles that were at odds with the decision than of the less conscious but compelling preferences that won out... Actually, McDermott speculates in this direction anyway on p.222. He also implies such a view in saying later, on p.232, "When [people] do wrong, it's not because they don't know what's right. They usually know what's right but they do what they WANT anyway" (my emphasis).]

McDermott makes a couple of nice points about Asimov's 3 laws of robotics:

- Would people actually try to build robots to obey these sorts of constraints (rather than building them to kill people designated as enemies)?

- Do we really *want* them to behave that way? E.g., the household robot might stop polishing the silver and go off to help starving children in Africa (for us humans, physical and especially psychological distance matters!).

However, his conclusion (p.224) seems more optimistic than the commentary that precedes it -- we'll figure out how to patch up something like Asimov's laws and instill them in computers, by some sort of committee decision. Robots will be ethical, and will sometimes struggle with moral dilemmas.

People as ethical robots [but no word on moral qualia!]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Turning to the question of whether the materialistic/computational view of ourselves might devalue us, he first reiterates that any argument of the form "We shouldn't do X because people don't have free will" is incoherent, because the notion of "should" is meaningless without free will. He then spends some time underscoring that humans seem instinctively disposed to *care* whether actions are in accord with moral principles or not (whatever the principles might be). Our preference for fairness -- equal rewards or treatment for everyone, regardless of social status -- may be the result of realizing that this is a precept everyone can agree to (whereas arguing that some should be favored is much harder).

I think he's on target with the comment (p.226) that "The other main intrinsic component of our moral instinct is our tendency to love some people and hate others [and be indifferent to the rest]." To some extent, people also include animals in these classifications (because they anthropomorphize). He predicts that people will feel likewise towards certain types of robots [which is probably true already -- Furby & such].
He then expresses worries that in the future, given present trends and future technological developments (including genetic engineering, cyborg-like enhancements, and robots), we may end up without a broadly shared philosophical and moral framework, and may even decide that our existence as a species is of no particular importance, and head happily towards extinction. [This I don't quite see -- why should we change our own most basic makeup in so drastic a way? We wouldn't do it precisely *because* the instinct for survival is so powerful. I'm more concerned about people who are so deluded as to think that by hastening our earthly demise, they can hasten their own ascent to an eternal and blissful life.]

McDermott then beats the bushes for candidate philosophical/moral frameworks that might be broadly acceptable and defensible. He rejects utilitarianism, because "the greatest good of the greatest number" begs the question of what "good" is. He is a little more sanguine about Kant's "categorical imperative" (living by rules that we would endorse even if we had to trade places arbitrarily with others), but points out that rules regularly come into conflict, thus requiring higher-level rules to resolve the conflict -- but what should the top-level rule be?

He next excoriates Paul Churchland for his "nihilism" -- viz., for suggesting that morals are just part of our evolved and learned makeup, aimed towards social success. McDermott points out that Elvis Presley, Bill Gates, and Genghis Khan were socially successful on a grand scale, but not necessarily moral role models.

[But it seems to me that here McDermott is turning a blind eye to an obvious corollary of his own theory of qualia, namely that our moral "primitives", just like our visual ones, are not further analyzable or derivable from something more fundamental. In other words, for better or worse, we have "moral qualia" that ground our more complex moral judgements. To ask for a foundation from which they follow is like asking for a foundation from which the qualities of our sensory perceptions follow. Granted, moral qualia are more malleable than perceptual qualia, but even perceptual qualia are not immutable -- for instance some foods or beverages considered delectable in some cultures are judged to be nauseating in others. Both kinds of qualia evolved to allow our species to thrive.

It's true that there's a special problem with morals: our society has changed so much since our hunter-gatherer days that our moral instincts, and the ways we modify them in our children as they grow up, may be far from optimal for our global society and for our future as a species. So then the question to ask is, how can we best guide children's moral education, and shape society's laws and institutions, to optimize the outcomes *we* desire for the future, based on our *own* moral instincts? But in doing this, we have to accept that our moral injunctions will ultimately bottom out at "because it is the right thing for a good person to do -- one who cares about others and about human destiny", or "because it would be selfish or evil to do this". We should not feel compelled to dig for a logically prior reason, such as "God told this to our prophets thousands of years ago", because then we forfeit all chance of broad agreement.]

He next rejects moral relativism: even if you think moral judgements are relative to a belief system, you can't both accept and reject a judgement conflicting with your own -- not if you take your own judgements seriously.
Finally he rejects "enlightened self-interest", because, he says, it might be in humans' enlightened self-interest to exterminate a species of intelligent extraterrestrials we might encounter (and it might be in their enlightened self-interest to exterminate us), but this wouldn't be moral.

[I agree -- not because (as McDermott supposes) I am "taking God's point of view", but because my moral instinct says it would be wrong! In this connection, it is interesting to think about what our capacity for empathy towards "others" might consist of at birth. After all, we are not born with detailed knowledge of what people are like -- just a rough, cartoonish idea of what a human face is like, and mechanisms that will allow us, later, to ascribe desires, beliefs, and intentions to certain beings, based on their behavior. I think what is present at birth is a tendency to give preference to, take a strong interest in, and potentially feel empathy for, beings we perceive as RESEMBLING OURSELVES, in appearance and behavior (especially inferred mental behavior). Note that learning what we ourselves are like (self-modelling), and learning which other beings resemble us, can both occur without significant prior knowledge of "what human beings are like". To see that appearance matters, imagine that sasquatches turned out to be real, but had the mental capacities of a chicken -- and tasted delicious. Would you feel right about breeding them for food?]

God without dualism
~~~~~~~~~~~~~~~~~~~

[An interesting talk you might look at in this connection is psychiatrist Dr. Andy Thomson's "Why We Believe in God(s)", at http://richarddawkins.net/videos/641385-j-anderson-thomson-jr-m-d-why-we-believe-in-god-s-a-concise-guide-to-the-science-of-faith He discusses the role of various human social cognitive faculties, and their evolution, in the acquisition of a belief in God(s): the impulse to attribute events to causal agency, the need for "attachment" to parents, models of mind, the ability to imagine dialogues -- thus talking to God, etc. An interesting point is that brain imaging indicates that when people are asked what God thinks/wants, the same region that is used for self-modelling is active (and this apparently is distinct from the region for modelling others); so when people are asked about God's attitude to, say, gay marriage, they respond with their own attitude. He also talks about the sources of altruism and moral feelings, e.g., experiments showing moral decision-making in infants 3 months of age, and how religion appeals to these human traits to bolster belief.]

Having rejected all the major contenders (as he sees it) for a defensible moral framework without appeal to a God, McDermott considers ways in which the existence of a God might be compatible with a materialist outlook on the world. Note that for McDermott, rejection of mind-body dualism does not entail automatic rejection of God. We still may speculate that the universe, though governed by immutable physical laws, was created by a supernatural God, and perhaps that this God intervenes in physically undetectable ways in the functioning of the physical world.

He then rejects a number of theories of the role God might play. First he reconsiders dualism, pointing out that once you accept a supernatural component in human nature, then the idea of a God who dictates morals and plays a causal role in the world is unproblematic.
The common man's defense of this view is that "religion is concerned with matters of the spirit and science with empirical matters" (p.235). The simultaneous truth of both dualism and materialism is also endorsed by certain intellectuals (postmodernists, epistemological relativists) who maintain that the truth of different worldviews is relative to the distinct cultural subcommunities that they serve. He finds this incoherent because, if it were true, it would follow that it might just as easily be false, depending on your cultural perspective! [I would add that science, unlike, say, literary criticism, uses rigorous logic and empirical validation to test its claims -- that is what gives science its special power.] There simply *is* an undeniable conflict between the belief that all events are governed by physical laws, and the belief that physical laws are regularly overruled by God.

He also notes that alleged contradictions within science, such as those between Newtonian mechanics and quantum mechanics, are nothing like the contradictions that postmodernists would like to entertain. They are merely a reflection of the fact that both theories are approximate, and neither covers all the phenomena that the other covers. He concludes that "the reason most people see no conflict between religion and science is that most people don't understand science".

Moving on to the notion that God might intervene in physically undetectable ways (e.g., Polkinghorne, Peacocke, and Kenneth Miller), McDermott rejects this as well: physical intervention would either be detectable (at least as a shift in probabilities of certain kinds of events [such as recovery from illness of people who pray]), or, if it remains truly undetectable, then we are left with a God whose interventions have no detectable effect, and hence play no explanatory role whatsoever in our understanding of the world.

Despite all that, McDermott turns out to be a deist with a wrinkle: God did not create the world and then abandon it; rather, he created the world and "poured himself into the world when he created it". He does not intervene, because he doesn't need to -- "the place where God intervenes in the world is *us*". Keep in mind that what drove McDermott to this view was his contention that our moral sense calls for an "overarching" moral framework; this framework, for him, is God. He doubts that we'll ever outgrow belief in God, as "we will always be painfully aware of our finiteness, and will always yearn for the infinite."

[I've pointed out that we could treat our most fundamental moral intuitions as "moral qualia", with the same kind of physical and evolutionary basis as sensory qualia or perceptions of pain and pleasure. This is an alternative (for those with skeptical leanings) to regarding them as an expression of God within us. Our yearning for the infinite (as experienced by people who think deeper thoughts than those who are content to follow football scores or celebrity love-lives) can, I think, be assuaged in other ways, e.g., creative activities. However, it is probably true that a large portion of humanity cannot exist comfortably without the certainty, solace, and ceremony of religion.]

He says "it is proper to be grateful to [God] that the universe is the way it is, and in particular to thank him for our existence and our occasional abilities to partake in his divinities." Though this may amount to praying to yourself, prayer may be "the proper way of addressing the divine in yourself" (p.239).
He disputes the view that religion is on balance a force for evil (e.g., see Steven Weinberg [or, long before, Bertrand Russell]). He suggests that if religious development had stopped with something like Roman polytheism, democracy could never have evolved (p.238). [I wonder what he would say about pre-Roman Greek democracy, which, though flawed and incomplete, nevertheless actualized certain democratic ideals. I also wonder what he would say about Buddhism or Confucianism, which teach such virtues as humaneness and refraining from harm to others, without appeal to a God.]

He also addresses the Problem of Evil by saying that "the only way for us to exist is for us to exist in the world as it is... The physical world is a set of axioms, and we are a theorem." To wish for a world without evil, with an easygoing Lenin, a hearing Beethoven, etc., "is to wish for nonexistence". [This sounds a lot like Leibniz's view that the world as it is is "the best of all possible worlds".]

When will there be conscious robots? (p.240-241)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

He doesn't expect anything like intelligent (and hence conscious) robots in his lifetime, but expects to see "the terms of the debate [about machine intelligence and consciousness] begin shifting". "People tend to believe that almost any organism that can move has experiences; it won't take much autonomy on the part of their machines to trigger their 'animate organism' detectors. ... By the time robots actually do have experiences, if they ever do, it will be an anticlimax."

Meanwhile, he urges a search for "a coherent synthesis of our religious intuitions and our scientific discoveries," as something more important than focusing on continued technological progress.