S. Vere and T. Bickmore, "A basic agent" (Computational Intelligence 1990)

[Comments in square brackets are my own thoughts. Some remarks on planning
AI will be provided in class, if not previously discussed.]

VERE & BICKMORE, "A BASIC AGENT"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Their simulated agent HOMER is a submarine-like being (smaller than a
codfish) operating in a "Seaworld" containing an island with a dry dock,
pier, and fueling station, a couple of icebergs, a submarine patrolling
back and forth, a breakwater, and various other, nonmoving objects --
some boats, a barge, some waterbirds, a big codfish, mines, rocks, and
buoys.

HOMER can

- converse in English (800-word vocabulary), accepting information,
  answering questions, and obeying instructions;
- "see" and identify objects in its world when encountering them;
- pick up and drop off "packages";
- "take pictures" of objects;
- plan its actions, with allowance for time constraints;
- remember where it has been, what it has seen, what it has done, and
  what its current plans are.

Important ideas
~~~~~~~~~~~~~~~

@ temporal planning (borrowing from Vere's 1985 work on DEVISER), and
  replanning; e.g., HOMER can comply with "be at the dry dock tomorrow at
  noon", incorporating this into its other plans;

@ recognizing the connection between action verbs and planning operators
  (with preconditions and effects); see, e.g., the definition of "give",
  p.50 (using "antecedent" and "consequent"); the first sketch after this
  list illustrates this, together with the time windows mentioned above.
  [However, the verb representations have less information than the
  action representations actually used for planning, which contain
  details like time and fuel requirements -- and are in part hard to
  read];

@ recognizing the need for *episodic memory* as essential for
  *consciousness* (see p.51); [I'm not aware of any other existing
  end-to-end systems that explicitly address consciousness issues];

@ recognizing the need for "summarizing" perceptual experience, if it is
  to be communicable; e.g., "reaching" a place is a summary inference
  from the perception of first "not being at" some object, and then
  "being at" that object; similarly, "passing" an object involves the
  observation that the object lies *between* a prior location and a later
  location reached by HOMER; see the second sketch after this list.
  (Planning memory is similar to episodic memory, but not NL-accessible;
  e.g., the user can't ask, "When did you decide...?")

@ interpreting adjectives like "large" in terms of a range of expected
  values for objects of a given type (also illustrated in the second
  sketch below);

@ emphasis on an integrated range of abilities (a "decathlon" system,
  rather than one that excels in a selected specialty, see p.88);

@ recognition that eventually an imagistic reasoning capability will be
  required (p.59)
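
[To make the verb/operator connection concrete, here is a minimal sketch,
in Python, of an action verb treated as a planning operator with an
antecedent (preconditions) and a consequent (effects), plus a
DEVISER-style time window on a goal. The names and fields are my own
illustration and are much poorer than HOMER's actual action
representations.]

    # A minimal sketch (my notation, not HOMER's) of an action verb as a
    # planning operator, and of a goal carrying a time window.

    from dataclasses import dataclass

    @dataclass
    class Action:
        verb: str              # the English verb this operator corresponds to
        antecedent: frozenset  # conditions that must hold before the action
        consequent: frozenset  # conditions that hold after the action
        duration: float = 1.0  # crude stand-in for the time/fuel details

    # "x gives p to y": x must have p (and, in this toy version, both must
    # be at the same place); afterwards y has p and x no longer does.
    give = Action(
        verb="give",
        antecedent=frozenset({("has", "x", "p"),
                              ("at", "x", "loc"), ("at", "y", "loc")}),
        consequent=frozenset({("has", "y", "p"),
                              ("not", ("has", "x", "p"))}),
    )

    @dataclass
    class Goal:
        condition: tuple       # e.g., ("at", "HOMER", "dry-dock")
        window: tuple          # (earliest, latest) time it must be achieved

    # "Be at the dry dock tomorrow at noon" becomes a goal with a time
    # window that the planner must fit in among its other activities.
    be_at_dry_dock = Goal(condition=("at", "HOMER", "dry-dock"),
                          window=(12.0, 12.0))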
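
[The summary-inference and adjective ideas can be sketched the same way;
again this is my own toy illustration, not the paper's representation.
Episodic memory is treated as a time-stamped log of perceptions;
"reached" and "passed" events are derived from it, and "large" is judged
against an assumed size range for the object's type.]

    # A sketch (mine, not the paper's) of episodic memory as a time-stamped
    # perception log, with "summary" inferences derived from it.

    def infer_reached(log):
        """'Reached X at t' summarizes: first not-at X, then at X."""
        events = []
        for (t_prev, prev), (t, cur) in zip(log, log[1:]):
            if cur[0] == "at" and prev == ("not-at", cur[1]):
                events.append((t, ("reached", cur[1])))
        return events

    def infer_passed(path, objects, between):
        """'Passed obj' summarizes: obj lies between an earlier and a later
        position on HOMER's path; `between` is a geometric test assumed to
        come from the perception module."""
        events = []
        for (t_prev, p_prev), (t, p) in zip(path, path[1:]):
            for obj, obj_pos in objects.items():
                if between(obj_pos, p_prev, p):
                    events.append((t, ("passed", obj)))
        return events

    # "large" read relative to the expected size range for the type
    # (the figures below are invented, purely for illustration):
    TYPICAL_LENGTH_M = {"codfish": (0.5, 1.5), "buoy": (1.0, 3.0)}

    def is_large(obj_type, length_m):
        low, high = TYPICAL_LENGTH_M[obj_type]
        return length_m >= high   # "large" = at or beyond the top of the range

    log = [(1, ("not-at", "pier")), (5, ("at", "pier"))]
    print(infer_reached(log))     # -> [(5, ('reached', 'pier'))]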

Some problems
~~~~~~~~~~~~~

- ad hoc symbolic representations (their reply: building a working system
  is bound to require some sacrifice of theoretical hygiene);

- mixing of domain symbolism with internal book-keeping symbols in their
  knowledge representations; e.g., on p.55 the definition of "go" (from
  one place to another, as a state transition) contains the line
  (.token = (VERE.GENSYM)); the symbols here have nothing to do with
  going from one place to another, but rather refer to internal variable
  management; this is the sort of thing that has given "procedural
  representations" a bad reputation, in comparison with "declarative"
  ones (but ordinary language shows that procedural specifications can be
  just as comprehensible as declarative ones -- e.g., consider recipes,
  travel directions, or assembly instructions);

- the authors found the system harder and harder to extend as it grew;
  this is probably related to the previous two points!

- the only type of inferencing done seems to be planning and the
  "summary" inferences mentioned above; a really interesting agent would
  need to be able to "see the obvious" (e.g., that ramming a manned
  submarine could cause some people to die); but again, the kinds of KR
  used here would make this difficult (a toy sketch of what such an
  inference might look like follows this list);

- as the authors recognize, HOMER lacks initiative, preferences, desires,
  and values -- all those things that "bring an agent to life".
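
[To see how little "seeing the obvious" requires in the simplest case,
here is a toy forward-chaining rule for the submarine example; the
representation is entirely my own and nothing like it appears in the
paper -- which is the point.]

    # Toy illustration (mine, not the paper's) of one "obvious"
    # commonsense inference: a single forward-chaining rule over facts.

    def obvious_consequences(facts):
        """If x rams y and y is manned, conclude people aboard y may die."""
        derived = set()
        for fact in facts:
            if fact[0] == "rams":
                _, rammer, target = fact
                if ("manned", target) in facts:
                    derived.add(("may-die", "people-aboard", target))
        return derived

    facts = {("rams", "HOMER", "submarine1"), ("manned", "submarine1")}
    print(obvious_consequences(facts))
    # -> {('may-die', 'people-aboard', 'submarine1')}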