Paper

Toward efficient default reasoning by David Etherington and James Crawford.
Summary

Default reasoning aims to formalize the notion of "jumping to conclusions". A brief but complete introduction to non-monotonic reasoning is available here.

It is harder than classical logic, even though it was meant to imitate human reasoning (jumping to conclusions, and therefore fast). Two things make it hard:

  • the need to check the consistency of a formula with respect to the KB many times during a proof;
  • the fact that the order in which default rules are applied matters.

The first is more important than the second, so in this paper the authors tackle the first problem.

There are fast sufficient conditions for consistency, but they are useless in practice because they apply only under conditions that are rarely satisfied:

  • a formula f in CNF is consistent with the KB if none of the literals of ¬f occurs in the KB, or if every literal of ¬f that does occur appears only in clauses whose other literals are pure. A pure literal is one whose complement never appears in the KB. (A sketch of this check follows.)
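
To make the condition concrete, here is a minimal sketch of the test, assuming a propositional KB represented as a set of clauses, each a frozenset of nonzero integer literals (with -x as the negation of x); the representation and names are illustrative, not from the paper:

    # Assumed representation: kb and f are sets of frozensets of ints.
    def pure_literals(kb):
        """Literals whose complement never appears anywhere in the KB."""
        lits = {lit for clause in kb for lit in clause}
        return {lit for lit in lits if -lit not in lits}

    def maybe_consistent(kb, f):
        """Sufficient (not necessary) syntactic test that CNF formula f is
        consistent with kb. True means consistency is guaranteed; False
        means "don't know", not "inconsistent"."""
        neg_f = {-lit for clause in f for lit in clause}  # literals of ¬f
        pure = pure_literals(kb)
        for clause in kb:
            for lit in clause & neg_f:
                # A literal of ¬f occurs in the KB: acceptable only if all
                # the other literals of that clause are pure.
                if not all(x in pure for x in clause - {lit}):
                    return False
        return True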

Proposed solution: two approximations:

  • consider a subset of the KB (called the context) as the KB against which to do the consistency checks;
  • use approximate, fast consistency checks.

"What a context should contain?" open question, the principle they suggest is a short memory principle. A context should contain at least:
  • ground facts about the constants referred by the formula whose consistency we have to check.
  • the rules that have as (part of the) antecendent or consequent a literal matching with the formulas whose consistency we have to check, and literals in the context.
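
A toy sketch of this selection principle (the representation is entirely hypothetical: facts as (predicate, constant) pairs, rules as frozensets of (predicate, term) literals, with predicate sharing as a crude stand-in for literal matching):

    def select_context(facts, rules, query_atoms):
        consts = {c for _, c in query_atoms}
        # 1. Ground facts about the constants mentioned in the query.
        ctx_facts = {f for f in facts if f[1] in consts}
        # 2. Rules sharing a predicate with the query, or with literals
        #    already pulled into the context, iterated to a fixed point.
        preds = {p for p, _ in query_atoms} | {p for p, _ in ctx_facts}
        ctx_rules = set()
        changed = True
        while changed:
            changed = False
            for rule in rules:
                if rule not in ctx_rules and any(p in preds for p, _ in rule):
                    ctx_rules.add(rule)
                    preds |= {p for p, _ in rule}
                    changed = True
        return ctx_facts, ctx_rules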

The two extremes of possible contexts are:

  • perfect context: contains exactly the formulas needed to decide whether a given formula is consistent with the KB.
  • random context: if the formula to check is consistent, it will be declared consistent; if it is not, the inconsistency may or may not be detected, depending on the knowledge in the context.

They use complete consistency checks within the limited context.
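
For a propositional KB, the complete check can be as simple as brute-force satisfiability, affordable only because it runs on the small context rather than the whole KB; a sketch, reusing the clause representation above:

    from itertools import product

    def consistent(clauses):
        """Complete but exponential consistency check."""
        vs = sorted({abs(lit) for c in clauses for lit in c})
        for bits in product((False, True), repeat=len(vs)):
            model = dict(zip(vs, bits))
            if all(any(model[abs(lit)] == (lit > 0) for lit in c)
                   for c in clauses):
                return True
        return False

    def default_applicable(context_clauses, justification_literal):
        """Apply a default only if its justification is consistent
        with the retrieved context (not the full KB)."""
        return consistent(set(context_clauses)
                          | {frozenset({justification_literal})})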

Commonsense defaults seem to have these properties (and so cooperate well with the limited consistency checking described here):

  • inconsistencies are easy to detect;
  • the default assumption is almost always true (inconsistencies are rare).

The other extreme (badly behaved defaults): default rules that are rarely true, with inconsistencies that are common and difficult to detect.

Results

They test their approximate method on randomly constructed 3-SAT problems. The problems are in the underspecified region (the number of clauses is twice the number of variables), since otherwise most defaults would be inconsistent, and human knowledge is assumed to be underspecified (i.e. sparse). To make it explicit: these experiments are on propositional KBs.

The radius roughly specifies how long the chains included in the context will be. The contexts are built with the goal of detecting inconsistency (there is no need to build anything to detect consistency).

Parameters varied: V, the number of variables; C, the number of clauses; L, the number of additional literals (unit clauses). What is the relation between clauses and literals? Literals are like facts; clauses are like rules.
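
A sketch of how such instances can be generated (the paper's exact generator may differ; here C = 2V keeps the problem underspecified, and L adds unit clauses playing the role of facts):

    import random

    def random_instance(V, C, L, rng=random):
        """Hypothetical generator: V variables, C ternary clauses,
        L additional unit clauses ("facts")."""
        kb = set()
        while len(kb) < C:
            trio = rng.sample(range(1, V + 1), 3)
            kb.add(frozenset(v if rng.random() < 0.5 else -v for v in trio))
        while len(kb) < C + L:
            v = rng.randint(1, V)
            kb.add(frozenset({v if rng.random() < 0.5 else -v}))
        return kb

    kb = random_instance(V=50, C=100, L=20)  # C = 2V: underspecified region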

Changing V:

  • accuracy (of detecting inconsistencies): roughly constant.
  • size of the context: roughly constant. The length of the chains useful for detecting inconsistencies depends on C/V.
  • time used for the consistency check: the more variables there are, the more important it is to use limited contexts, because the size of the limited context remains stable.

Changing L:

  • accuracy: minimum at L = 0.4V. With many L the problem is easier; with fewer L it is harder (but inconsistencies are rarer, so even when mistakes are made, they are not too frequent).
  • run time: the advantage of using limited contexts is most evident with fewer L.

Changing C:

  • accuracy: when the problem becomes overspecified, accuracy drops, probably because detecting inconsistencies requires a large context.

Comparison

Even though this experiment is done in propositional logic, the results on how the use of limited contexts affects the correctness of the answers are interesting, though the behavior is not unexpected. The limited contexts could be related to the retrieval mechanism of Epilog.

Observations (starting points for discussion)

The formula on page 4, defining how the radius is used to compute the context, seems wrong. The intended construction appears to be:

    C_{l,0} = {¬l} if ¬l ∈ KB, otherwise {}
    C_{l,r} = C_{l,r-1} ∪ ⋃ { {(x ∨ y ∨ ¬l)} ∪ C_{x,r-1} ∪ C_{y,r-1} : (x ∨ y ∨ ¬l) ∈ KB }

Basically, the context contains exactly what can be used to easily prove that l is inconsistent with the KB.
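
Under that reading the construction is directly implementable; a sketch over the same clause representation used above:

    def context(kb, lit, r):
        """Clauses within radius r of literal lit: the material usable in
        a short resolution proof that asserting lit is inconsistent."""
        unit = frozenset({-lit})
        if r == 0:
            return {unit} if unit in kb else set()
        ctx = context(kb, lit, r - 1)
        for clause in kb:
            # Clauses of the form (x v y v ¬lit); the unit clause {¬lit}
            # is already handled by the base case.
            if -lit in clause and len(clause) > 1:
                ctx.add(clause)
                for other in clause - {-lit}:
                    ctx |= context(kb, other, r - 1)
        return ctx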

Further readings

Check out Crawford's notion of the accessible portion of the KB and Levesque's notion of limited inference.

-- FabrizioMorbini - 20 Dec 2006
