-- LenSchubert - 2011-09-29

Wish list for EPILOG enhancements (in no particular order):

0. Allow arbitrary quoted expressions (possibly containing metavariables) as terms. [Already done, I think.]

1. Allow "flat" formulas involving predicate arguments; e.g., for "Mary made John leave":

  [Mary make leave John] <=> [Mary (make leave) John] <=> [Mary ((make leave) John)] <=> (((make leave) John) Mary)

Here 'make' is a "subj-adding-op", transforming a monadic predicate into a 2-place predicate (i.e., one with an "added" subject). Linguistically, this is an "object-control verb". Another kind of example where we use such forms is "John has Mary as neighbor", [John have-as neighbor Mary], where 'neighbor' is monadic, but we consider (have-as neighbor) to be the inverse of 'neighbor-of', a binary predicate. I.e., [John have-as neighbor Mary], or [John (have-as neighbor) Mary], is equivalent to [Mary neighbor-of John].

In the following, we also have predicate arguments, but the main verb is an "obj-adding-op" (linguistically, a subject-control verb): "John promised Mary to leave", "John seemed smart to Mary", "John impressed Mary as smart". We take the LFs to be [John promise-to Mary leave], [John seem-to Mary smart], [John impress Mary smart], or equivalently, [John (promise-to Mary) leave], or [John ((promise-to Mary) leave)], or (((promise-to Mary) leave) John), etc.

2. By the same token, allow unflattened formulas (as in the last two wffs above). But perhaps this has unpleasant consequences for retrieval? I guess retrieval should always be preceded by normalization, which perhaps should always produce maximally flat forms?
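For what it's worth, the normalization item 2 calls for might, for the simple predications above, look something like the following rough sketch in plain Lisp (not EPILOG's actual normalizer; wffs are taken to be lists in infix form (subject predicate arg1 ...), and cases such as quantifiers and connectives are ignored here):

  (defun flatten-wff (wff)
    "Maximally flatten an infix EL predication, e.g.
     (Mary ((make leave) John)) => (MARY MAKE LEAVE JOHN)."
    (if (or (atom wff) (null (cdr wff)))
        wff
        (let ((subj (first wff))
              (pred (second wff))
              (args (cddr wff)))
          (if (consp pred)
              ;; Splice the curried predicate's operator and arguments in
              ;; front of the remaining arguments, then flatten again.
              (flatten-wff (cons subj (append pred args)))
              (list* subj pred (mapcar #'flatten-wff args))))))

  ;; (flatten-wff '(John ((promise-to Mary) leave)))  =>  (JOHN PROMISE-TO MARY LEAVE)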

3. Recognize syntactic extensions of atoms as indicating the general type of the atom; thus, e.g. (with no significance to capitalization):

  House.name indicates a term
  house.n indicates a (nominal) predicate
  house.v indicates a (main verbal) predicate
  Me.pro indicates a (pronominal) term
  No.det indicates a determiner (quantifier)
  No.s indicates a wff (sentence)
  out.p indicates a prepositional predicate ("walked out the door")
  out.v indicates a verbal predicate
  out.prt indicates a particle (though this will normally be attached to the verb, e.g., "find out" becomes 'find-out.v')
  out.a indicates an adjectival predicate ("The doctor is out")
  can.aux indicates a predicate modifier ("can play the piano")
  fast.a indicates an adjectival predicate
  fast.adv indicates a predicate modifier ("runs fast")
  perhaps.adv indicates a sentence modifier (should we use 'perhaps.sadv', since '.adv' also indicates a predicate modifier?)
  that.comp indicates a sentence reification operator; but we also use 'that'
  that.det indicates a (deictic) determiner (quantifier)
  that.pro indicates a (pronominal) term
  that.rel indicates a relativizer -- but it won't appear explicitly in LF

These extensions should help produce unambiguous parses of EL formulas, regardless of whether or not they are in "flattened" form.
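Recognizing these extensions could be as simple as a table lookup on the part of the atom after its last '.', e.g. something like the following (plain Lisp; the table and the names are illustrative, and '.adv' is left ambiguous between predicate modifier and sentence modifier, per the question above):

  (defparameter *extension-types*
    '(("name" . term)              ("pro"  . pronominal-term)
      ("n"    . nominal-pred)      ("v"    . verbal-pred)
      ("p"    . prepositional-pred)("a"    . adjectival-pred)
      ("prt"  . particle)          ("aux"  . pred-modifier)
      ("adv"  . pred-or-sentence-modifier)
      ("det"  . determiner)        ("s"    . wff)
      ("comp" . sentence-reification-op)
      ("rel"  . relativizer)))

  (defun atom-category (atm)
    "General type indicated by ATM's extension, e.g.
     (atom-category 'house.n) => NOMINAL-PRED; NIL if there is no extension."
    (let* ((name (string-downcase (symbol-name atm)))
           (dot  (position #\. name :from-end t)))
      (when dot
        (cdr (assoc (subseq name (1+ dot)) *extension-types* :test #'string=)))))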

4. It would be good if the wff parser printed informative error diagnostics.
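For instance (purely illustrative names), a diagnostic of roughly this shape, saying what was expected and where, would already be a big help:

  (define-condition wff-parse-error (error)
    ((wff      :initarg :wff      :reader wff-parse-error-wff)
     (position :initarg :position :reader wff-parse-error-position)
     (expected :initarg :expected :reader wff-parse-error-expected))
    (:report (lambda (c s)
               (format s "Ill-formed EL expression ~S: expected ~A at position ~D"
                       (wff-parse-error-wff c)
                       (wff-parse-error-expected c)
                       (wff-parse-error-position c)))))

  ;; e.g., on encountering (John) the parser might signal
  ;; (error 'wff-parse-error :wff '(John) :position 1
  ;;        :expected "a predicate after the subject")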

5. It would be nice if adding normalization rules were easy, i.e., if they could be stated in declarative form.
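The idea would be that a normalization rule is just a (pattern . rewrite) pair with ?-variables, so adding one means adding a datum rather than code. A toy version follows (illustrative only: the two rules shown, double negation and a De Morgan-style rewrite, are not EPILOG's actual rule set, and a real normalizer would also recurse into subformulas):

  (defun variable-p (x)
    "True for pattern variables like ?W."
    (and (symbolp x) (char= #\? (char (symbol-name x) 0))))

  (defun match (pat form &optional (bindings '((t . t))))
    "Small PAIP-style matcher: return extended bindings, or NIL on failure."
    (cond ((null bindings) nil)
          ((variable-p pat)
           (let ((b (assoc pat bindings)))
             (cond ((null b) (cons (cons pat form) bindings))
                   ((equal (cdr b) form) bindings)
                   (t nil))))
          ((atom pat) (and (eql pat form) bindings))
          ((atom form) nil)
          (t (match (cdr pat) (cdr form)
                    (match (car pat) (car form) bindings)))))

  (defun instantiate (pat bindings)
    "Substitute bound pattern variables in PAT."
    (let ((b (and (symbolp pat) (assoc pat bindings))))
      (cond (b (cdr b))
            ((atom pat) pat)
            (t (cons (instantiate (car pat) bindings)
                     (instantiate (cdr pat) bindings))))))

  (defparameter *norm-rules*
    '(((not (not ?w))    . ?w)
      ((not (?a and ?b)) . ((not ?a) or (not ?b)))))

  (defun normalize (wff &optional (rules *norm-rules*))
    "Apply the first matching rule, repeatedly, at the top level of WFF."
    (let ((rule (find-if (lambda (r) (match (car r) wff)) rules)))
      (if rule
          (normalize (instantiate (cdr rule) (match (car rule) wff)) rules)
          wff)))

  ;; (normalize '(not (not (John happy.a))))  =>  (JOHN HAPPY.A)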

6. We need a complete set of equivalences relating quantifiers and the operators *, **, @. E.g., we can move type predications and other atemporal formulas out of (or into) *, **, and @ environments. (Rules to be supplied by LKS.)

7. We need to allow for use of equality (viz., substitution of equals) in inference processes; this needs to be done "circumspectly", as it can generate huge numbers of possible actions. The EPILOG equality specialist should be used to help, if possible.
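One simple way to be "circumspect" is to enumerate substitution variants one occurrence at a time and cap how many are actually considered; e.g. (an illustrative sketch only, not what the equality specialist does):

  (defun one-step-substitutions (wff old new)
    "All wffs obtained from WFF by rewriting exactly one occurrence of OLD to NEW."
    (cond ((equal wff old) (list new))
          ((atom wff) '())
          (t (append
              (mapcar (lambda (x) (cons x (cdr wff)))
                      (one-step-substitutions (car wff) old new))
              (mapcar (lambda (x) (cons (car wff) x))
                      (one-step-substitutions (cdr wff) old new))))))

  (defun equality-variants (wff lhs rhs &optional (limit 10))
    "At most LIMIT variants of WFF licensed by the equality LHS = RHS,
     substituting in either direction."
    (let ((all (append (one-step-substitutions wff lhs rhs)
                       (one-step-substitutions wff rhs lhs))))
      (subseq all 0 (min limit (length all)))))

  ;; (equality-variants '(Cicero wise.a) 'Cicero 'Tully)  =>  ((TULLY WISE.A))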

8. We need to re-integrate the specialists into EPILOG 2. The idea of relying on EPILOG deliberately invoking specialists that it knows to be relevant to a goal formula was ill-considered. For example, arithmetic simplification should take place automatically, without such deliberate invocation. Perhaps we need to reintroduce specialist indicators for appropriate predicates and functions; or perhaps the indicators can be generated automatically from the code for the specialists (i.e., we look automatically to see which predicates and functions a specialist handles, and flag these as indicating the specialist).
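E.g., the automatically generated indicators might amount to an index from predicate and function symbols to specialists, consulted whenever a wff mentions one of them (all names below are illustrative, not EPILOG's):

  (defparameter *specialist-index* (make-hash-table :test #'eq))

  (defun register-specialist (name symbols)
    "Flag every symbol in SYMBOLS as indicating specialist NAME."
    (dolist (s symbols)
      (pushnew name (gethash s *specialist-index*))))

  (defun relevant-specialists (wff)
    "Collect the specialists indicated by any atom occurring in WFF."
    (let ((found '()))
      (labels ((walk (x)
                 (cond ((symbolp x)
                        (dolist (sp (gethash x *specialist-index*))
                          (pushnew sp found)))
                       ((consp x) (walk (car x)) (walk (cdr x))))))
        (walk wff))
      found))

  ;; (register-specialist 'arith '(+ - * / < > =))
  ;; (relevant-specialists '((x + 2) < 5))   =>  (ARITH)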

9. Make sure that EL test examples in the epi2 files are ones relevant to EPILOG 2. E.g., the QQ examples use the substitutional quantification syntax from EPILOG 1, not EPILOG 2.

10. Enable "reason-giving" in answering yes-no questions. These (to a first approximation) should be the leaves of the proof graph that delivered the answer. (The non-leaves will generally be generic pieces of "common knowledge", thus not needed by the question-asker). However, we should also be able to give the complete proof or disproof.
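To a first approximation, the "reasons" could then be read directly off the proof graph, e.g. as follows (a sketch; the PNODE record is just an illustrative stand-in for EPILOG's proof representation):

  (defstruct pnode wff parents)   ; parents = the nodes this wff was derived from

  (defun answer-reasons (root)
    "Leaf wffs of the proof graph under ROOT -- to a first approximation,
     the reasons to give the question-asker."
    (let ((reasons '()))
      (labels ((walk (n)
                 (if (pnode-parents n)
                     (mapc #'walk (pnode-parents n))
                     (pushnew (pnode-wff n) reasons :test #'equal))))
        (walk root))
      (nreverse reasons)))

  (defun full-proof (root)
    "All wffs in the proof graph, leaves first, for a complete (dis)proof."
    (let ((wffs '()))
      (labels ((walk (n)
                 (mapc #'walk (pnode-parents n))
                 (pushnew (pnode-wff n) wffs :test #'equal)))
        (walk root))
      (nreverse wffs)))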

11. (This is a biggie.) We need REASON MAINTENANCE in EPILOG, which provides links from each "believed" wff backward to the parents used in its derivation, and forward to its children, i.e., its direct inferences. This is essential for belief revision when something we learn is at odds with something already believed. Even when we simply want to say "No, scratch that, it's not true", we need to be able to retract not only the wff at issue, but also any inferences that may have been generated from it. I think this will be crucial not only in simple belief retraction, but also in merely altering certainties. For example, when we find a new fact supporting some conclusion that was already somewhat certain, we want to update the certainty of the conclusion. But this requires taking account of how it was derived: certainties depend on the logical and statistical interdependencies among the formulas used to arrive at them (cf. Bayesian networks).

This will also be crucial in re-evaluating the effects, certainties, and utilities in a plan after making a change to the plan, executing a step, or bringing to bear additional information (e.g., the user's response to a speech action). If we have the inference paths that led to our previous conclusions and estimates, then updating can avoid re-doing those parts of the inference paths that are not affected.
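The core bookkeeping is just bidirectional links plus propagation on retraction; a minimal sketch (illustrative names; proper certainty updating in the style of Bayesian networks is not attempted here):

  (defstruct belief wff certainty parents children retracted)

  (defun link-inference (child &rest parents)
    "Record that CHILD was inferred directly from PARENTS."
    (setf (belief-parents child) parents)
    (dolist (p parents) (push child (belief-children p)))
    child)

  (defun retract (b)
    "Retract belief B together with every inference derived from it.
     (A fuller treatment would first check whether a child still has an
     independent derivation before retracting it.)"
    (unless (belief-retracted b)
      (setf (belief-retracted b) t)
      (mapc #'retract (belief-children b))))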

12. (Another biggie.) We need a systematic forward inference capability, taking account of salience and inherent interestingness of individuals, predicates, and formulas, and carrying forward an "inference budget" in a forward inference chain, such that if we start a chain with a big budget (it's a very salient, interesting starting formula), then "uninteresting" wffs later in the chain are more likely to be pursued further than if the initial budget were small. Each step forward costs a bit of the budget, but interesting wffs derived along the way may add to the inference budget (perhaps more than they cost).
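Schematically (illustrative names: INFER-STEP and INTEREST stand in for whatever EPILOG would actually supply, and interest refunds are assumed to be usually smaller than the step cost so chains terminate):

  (defun forward-chain (wff budget infer-step interest &key (step-cost 1.0))
    "Return the wffs inferred forward from WFF while BUDGET lasts: each
     step spends STEP-COST, and (INTEREST new-wff) is refunded, so chains
     started with a big budget, or yielding interesting wffs, run longer."
    (if (< budget step-cost)
        '()
        (loop for new in (funcall infer-step wff)
              append (cons new
                           (forward-chain new
                                          (+ (- budget step-cost)
                                             (funcall interest new))
                                          infer-step interest
                                          :step-cost step-cost)))))

  ;; e.g. (forward-chain '(Rex dog.n) 5.0
  ;;                     #'direct-consequences                        ; hypothetical
  ;;                     (lambda (w) (if (interesting-p w) 0.8 0.0))) ; hypothetical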

13. We seem to need "throw" and "catch" mechanisms for building a dialogue agent based on EPILOG. The idea is that all decisions to act will be made by the planning executive, but this executive by itself doesn't come up with any ideas about what to do. Rather, it relies on the inference engine -- which contains the plan in some form such as (Life-plan = '( ... )) -- to throw out "suggestions" of what may be appropriate and beneficial things to do, and of what the probable effects and utilities or costs of various actions, and indeed of the plan as a whole, might be. The planning executive needs to "catch" these suggestions and modify the plan accordingly, either by making trial changes in the plan or by executing a currently executable initial step. In other words, EPILOG proposes, and the planning executive disposes.

As a simple example, when the user inputs "Hi", we store the assertion that the user said "Hi". (That's a perceptual event, so to speak -- the system "hears" the user's input.) This should trigger an inference rule to the effect that if the user just said X, then the user conveyed the content of X to the system, where 'content-of' or something like it is an evaluable function, which essentially parses and interprets what has been said. (Maybe it provides both a parse tree and an interpretation.) This in turn should feed into forward inference rules that say things like (in proper EL form) "User greets me with 'Hi'", or "User tells me that ...", or "User asks me whether ...", or "User requests that I ...", i.e., some surface speech act. This in turn triggers rules that make both factual inferences in conjunction with other contextual and background knowledge, and proposals to the effect that a felicitous response may be so-and-so (more generally, one conveying such-and-such content), so that the effect will be this-or-that; i.e., a suggestion annotated with its purpose or goal. These are then among the inferences that EPILOG needs to "throw", and the planner needs to "catch", presumably based on their syntactic form. So a simple case might be that EPILOG infers that a felicitous response may be to say "Hi" back to the user (if the system hasn't greeted yet, according to the conversation record -- another one of those parameters in EPILOG, e.g., Discourse-record = '(...)). It may also suggest asking for the user's name, if it isn't yet known.

The planner promptly places these tentatively at the beginning of the plan, i.e., in a form something like '(Perhaps [Me do so-and-so])', and this change to the plan should immediately trigger plan prediction and evaluation rules; i.e., EPILOG should work out the (new) consequences of the modified plan, that is, the anticipated consequences of actually doing what the plan now says will perhaps be done. The inferences should culminate in throwing (a) the change in expected cumulative utility engendered by doing the action (vs. not doing it?!), along perhaps with an estimated standard deviation in that utility, and (b) suggestions as to what might be felicitous changes to the plan, such as further additions, deletions, swaps, promotions/demotions, or expansions of high-level actions or goals. Again the planner catches these, puts them in its agenda, and decides whether to "try" these suggestions first or, if the evaluation of the "perhaps ..." step was positive, to commit to that step (removing the "perhaps") and only then take another idea from the agenda. Apart from the throwing and catching mechanisms, it can be seen that we will also require reason maintenance (point 11), so that updates to predictions from tentative steps and to estimated utilities can be carried out reasonably efficiently, rather than re-starting prediction and evaluation from scratch.
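A minimal picture of the throw/catch interface (illustrative names throughout; ordering the agenda by expected utility is just one possibility, and the EL forms in the usage comment are only schematic):

  (defstruct suggestion action purpose expected-utility)

  (defvar *agenda* '())

  (defun throw-suggestion (action purpose expected-utility)
    "Called from within forward inference when a felicitous action is
     inferred, e.g. greeting the user back."
    (push (make-suggestion :action action :purpose purpose
                           :expected-utility expected-utility)
          *agenda*))

  (defun catch-suggestion ()
    "Called by the planning executive: remove and return the currently
     most promising suggestion, or NIL if the agenda is empty."
    (when *agenda*
      (let ((best (reduce (lambda (a b)
                            (if (> (suggestion-expected-utility a)
                                   (suggestion-expected-utility b))
                                a b))
                          *agenda*)))
        (setf *agenda* (remove best *agenda*))
        best)))

  ;; e.g., from the greeting example above:
  ;; (throw-suggestion '(Me say-to.v User "Hi")
  ;;                   '(Me return-greeting-of.v User)
  ;;                   0.7)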

14. We need documentation! E.g., for the syntax: which atoms require type declarations so that the parser won't make wrong assumptions? And more generally, what are the most important facilities, how do you put in facts and inference rules, how do you ask yes-no or wh-questions, how do you get extra information about the basis for inferences, etc.

