First class, Jan 17/13
~~~~~~~~~~~~~~~~~~~~~~
Instructor/TA introductions
(Len Schubert, CSB 733, http://www.cs.rochester.edu/~schubert/,
 course page http://www.cs.rochester.edu/~schubert/191-291/;
 Ian Perera, CSB 627, http://www.cs.rochester.edu/~iperera,
 course page http://www.cs.rochester.edu/~iperera/csc191_291.html)

191 prerequisites: sophomore standing or higher, some programming
experience.
291 prerequisites: 173 (recommended); one of 212 (HCI), 249 (machine
vision), 260 (topics in NLU), or 242 (AI) (or perhaps a theory
course -- 280 or 282).

Course logistics (see the web pages; Ian also has a web page)
- office hours: LS: Tue. and Thu. 4:50-5:50 pm, or by appointment
  via email; IP: Mon. 2:00-3:00, Fri. 2:30-3:30
- texts (start with Blackmore, ch. 1-2); Pinker, etc., articles
- approx. 5-6 homeworks (answer questions, write essays, Lisp
  exercises for 291 students): 60%/40%
- possibly teams presenting/leading discussion of some of the course
  material
- several pop quizzes (a lengthier one at the end): 25%
- participation: 15%
- projects (teams of 1-3?): 25%

handouts:
- Some web pages from the 191/291 website: course description, incl.
  goals of the course; tentative schedule.
- Pinker, "Could a computer ever be conscious?" (a talk)
  [Sci-fi; "raw feels" vs. self-monitoring; independence of the
  materials used; duplication; matter-transporter; can you murder a
  robot, inflict pain on a baby or a lobster or a worm? We won't
  know for quite some time.]
- Pinker, "The mystery of consciousness", TIME Archive, Jan 19, 2007.
  ["unresponsive" may not indicate "brain-dead" or even
  "unconscious"; easy and hard problems; physical basis of
  consciousness; illusion of control; binocular rivalry; brain
  waves; the hard problem; biology of consciousness and morality --
  placing higher value on a conscious self independent of biology
  may have dire consequences]
- "Who do you think you are?"
  [overview of brain theory in 10 easy, well-written pages; thorough
  coverage of the evidence that mental phenomena have a physical
  basis: Phineas Gage, Broca's and Wernicke's areas, drugs, fMRI;
  role of emotions; brain structure and function: brain stem &
  cerebellum, midbrain (esp. the limbic system), cortex (80%!);
  memory & learning; language and mind reading; consciousness]

Introductory remarks
~~~~~~~~~~~~~~~~~~~~
To be conscious is to be conscious/aware *of* something: (i)
ourselves, or (ii) a sensory impression (the redness of a rose, its
fragrance; the sound of a bell; a pain in the foot), an emotion, a
thought, or a mental image. Especially with respect to (ii), we are
interested in what we *experience* when conscious, including in
states where we are dreaming or hallucinating, or just imagining
something.

What's the mystery? Imagine walking from a windowless lecture
auditorium into the bright sunlight and looking up at the blue sky;
you don't just learn a *fact* (it's sunny outside) but *experience*
certain specific sensations (qualia -- brightness, blueness, etc.).

Puzzling questions concerning (i) and (ii):
- Can robots be self-aware? (A *relatively* easy question.)
- What are experiences ("qualia"), and how are they caused?
- How are the subjective ("first-person") and objective
  ("third-person") views related?
- Can robots have experiences? Could they feel pain? Does it make
  sense to ask "What is it like to be a robot?" (much as T. Nagel
  asked "What is it like to be a bat?", generating a voluminous
  "thread" of consciousness philosophy thereby)? Could it be immoral
  to "hurt" or "kill" future advanced robots?
The "traditional" -- and still the popularly believed -- answer: mind-body dualism, i.e., mind is nonphysical. Most modern scientists reject dualism; however, the problem has spawned thousands of publications, and the flood continues; [Aside: Richard Feynman's Nobel story about the Scandinavian princess who inquired about his field, and thought because it was physics, they couldn't talk about it because no-one at the table knew anything about it ... he replied that they couldn't talk about it because someone DID know something about it, whereas people can talk endlessly about things they know nothing about, like the weather, or poverty & crime, or or economics & politics, etc. So it is with consciousness] The "easy" problem and the "hard" problem (Chalmers). ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Easy: produce a detailed functional, causal description of how particular external stimuli impinge on the senses, and together with the internal brain processes determine the behavior of the organism (all aspects, at the microscopic and macroscopic level, including what you might say you feel, your pronouncements about consciousness, etc.!) So all "objective" (third-person) accounts are "easy". Hard: explain why there is any experience at all, and why experiences (visual, auditory, taste, pain, etc.) have the particular character that they have. One assumes that worms, trees, viruses, ... don't have experiences, and surely current robots don't either, even if they fake them... These questions are ones arising from a "subjective" (first- person) perspective. The "explanatory gap" ~~~~~~~~~~~~~~~~~~~~~ The vexing issue is that if we imagine having solved the "easy" problem, then in a sense there's nothing left to explain -- from an objective perspective! Yet, sensory experience is so vivid, so directly real, that its existence and character seem to demand an explanation. But if we try to add subjective experiences to the third-person account, we find no causal role for it! So it's emphatically, undeniably *there*, but plays no role?! This is "the explanatory gap". It has led some to claim that experiences (qualia) are simply identical with brain states (even though they don't seem to be, intuitively), or are "emergent" macroscopic properties of neural activity (Searle), or epiphenomenal (separate but lacking causal force), or quantum effects (Roger Penrose), or a hitherto unexplored aspect of information (Chalmers) -- or even beyond human grasp (Colin McGinn), or even an utter "illusion" (Susan Blackmore)! My own view is that one shouldn't mix the two perspectives: they are equally valid perspectives on the same reality. (I have an "artificial gravity" analogy to illustrate the point.) To me, the most pressing problem -- the *general* problem-- is to develop a general theory of sentient agency that predicts whether an agent -- however constructed, out of whatever materials, with whatever architecture -- will have particular kinds of conscious experiences... (Note: framed as an objective problem, this is in a sense part of the "easy" problem -- but I have upped the ante by asking for a general, abstract, hardware- independent theory of sentience.) Two philosophers whose views on the hard problem I find pretty congenial are M. Tye and J. Foss, though many others (including D. Chalmers, J. Searle, etc.) make interesting, thought-provoking points. 
Self-awareness
~~~~~~~~~~~~~~
Self-awareness is an aspect of the easy problem, because it is
relatively easy to give a third-person account of how a computer
might examine its own internal knowledge and processes, and use this
to check for malfunction, detect knowledge gaps, talk about what it
knows and doesn't know, etc. That particular "easy" problem is one
that my students and I have been working on, and I'll be foisting
some of our work and perspectives on you in the later parts of the
course.

Note that mere self-*monitoring* is surely not enough for an
interesting sense of "self-awareness". E.g., thermostats sense their
own state and "act" accordingly, but they are not self-aware in any
cognitively interesting sense. Even an operating system's capacity
for keeping track of the various processes and resources it manages
isn't a cognitively interesting form of self-awareness. So what is?
That's part of what we'll talk about later. (A small Lisp sketch of
this contrast appears at the end of these notes.)

Topic overview
~~~~~~~~~~~~~~
Our very selective trajectory through the consciousness landscape
will be something like this (interspersed with CSC 291 Lisp
exercises that should help provide a concrete idea of how a system
might display self-awareness):
- overview of the traditional concerns & theories of philosophers,
  and more recently neuroscientists and cognitive scientists,
  concerning consciousness, free will, dualism, etc. (PS: Blackmore
  doesn't concern herself seriously with AI.)
- Baars' global workspace theory -- a high-level account of human
  consciousness positing a "theater of consciousness" (in a way,
  contra Daniel Dennett, one of the most visible and influential
  philosophers writing on consciousness and AI). Baars is not
  directly concerned with AI or robots, but an intellectual disciple
  of his, Stan Franklin, builds AI systems in general conformity
  with Baars' ideas. (A toy sketch of the workspace-and-broadcast
  cycle also appears at the end of these notes.)
- Fellous and Arbib's collection of articles on emotion in people
  and robots (after all, emotion requires consciousness!)
- Drew McDermott's solution to (the conceptual part of) the qualia
  problem, preceded by a quick AI state-of-the-art survey, and
  followed by metaphysical remarks
- some extra notes/papers, somewhat technical; a primer on knowledge
  representation and reasoning (in preparation for the materials
  that follow); 2 systems ("Homer" and "ME") that show some degree
  of self-awareness; and finally some readings from Marvin Minsky's,
  Doug Hofstadter's, and/or Ray Kurzweil's latest thoughts and work
  (three authors whose approaches are quite far removed from one
  another and from everything else we've seen, but are interesting
  -- partly for that reason).
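
Lisp sketch: self-monitoring vs. rudimentary self-awareness
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A minimal, illustrative sketch of the contrast drawn above -- not
code from the course or from our research systems; all names here
(*known-facts*, introspect, etc.) are invented for illustration. A
thermostat merely reads its own state and acts, whereas even a first
step toward interesting self-awareness involves an agent querying
its *own* knowledge base and reporting what it knows and doesn't
know.

  ;; Mere self-monitoring, thermostat-style: the device senses its
  ;; own state and acts, but has no representation of itself as a
  ;; knower.
  (defun thermostat-step (current-temp set-point)
    (if (< current-temp set-point) 'heater-on 'heater-off))

  ;; A first step beyond that: an agent that can query its own
  ;; knowledge base, and so can talk about its knowledge gaps.
  (defparameter *known-facts*
    '((capital-of france paris)
      (capital-of italy rome)))

  (defun knows-p (fact)
    "Does the agent's own KB contain FACT?"
    (member fact *known-facts* :test #'equal))

  (defun introspect (fact)
    "Affirm FACT, or explicitly report the knowledge gap."
    (if (knows-p fact)
        (list 'i-know-that fact)
        (list 'i-dont-know-whether fact)))

  ;; (introspect '(capital-of italy rome))
  ;;   => (I-KNOW-THAT (CAPITAL-OF ITALY ROME))
  ;; (introspect '(capital-of spain madrid))
  ;;   => (I-DONT-KNOW-WHETHER (CAPITAL-OF SPAIN MADRID))

Of course, a table lookup wrapped in "I know" is still far short of
cognitively interesting self-awareness; what more is needed is part
of what we'll discuss later.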
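
Lisp sketch: a toy "global workspace" cycle
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A toy rendering of the broadcast idea in Baars' theory, under the
rough assumption that specialist processes post candidate contents
with activation levels, the most active content wins the workspace,
and the winner is broadcast back to all specialists. This is only a
schematic caricature for orientation; Franklin's actual systems are
far more elaborate, and the names below are invented.

  (defstruct coalition
    content             ; e.g., a symbolic description of a percept
    (activation 0.0))   ; strength with which it bids for attention

  (defun compete (coalitions)
    "Return the most activated coalition (the workspace winner)."
    (reduce (lambda (a b)
              (if (> (coalition-activation a) (coalition-activation b))
                  a b))
            coalitions))

  (defun broadcast (winner specialists)
    "Send the winning content to every specialist process."
    (dolist (s specialists)
      (funcall s (coalition-content winner))))

  ;; Example: the louder stimulus wins and is broadcast system-wide.
  ;; (broadcast
  ;;   (compete
  ;;     (list (make-coalition :content '(loud-noise) :activation 0.9)
  ;;           (make-coalition :content '(mild-itch)  :activation 0.2)))
  ;;   (list (lambda (c) (format t "motor system sees ~a~%" c))
  ;;         (lambda (c) (format t "memory system sees ~a~%" c))))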