Logic programming (7 and 12 December 2006)

    Final exam: Tues. 19 Dec., 8:30-11:30am, 209 CSB; cumulative.
    BRING QUESTIONS TUESDAY
    SCOQs will also be on Tuesday

=================================

Logic programming

predicate calculus

    predicates -- building blocks
        P(a1,a2,...,aK)
        limit(f,infinity,0)
        enrolled(you,csc254)

        These are interesting because we attach meaning to them, but
        within the logical system they are simply structural building
        blocks, with no meaning beyond that provided by
        explicitly-stated interrelationships.

    operators
        conjunction, disjunction, negation, implication
        universal and existential quantifiers

    statements
        sometimes true, sometimes false, often unknown
        axioms -- assumed true
        theorems -- provably true
        hypotheses (goals) -- things we'd like to prove true

    example statements:

        all f, x0, l [ limit(f, x0, l) <=>
            (all e [ e > 0 => (exists d [ d > 0 and
                all x [ (0 < |x-x0| < d) => (|f(x)-l| < e) ]])])]
        (the limit of f as x approaches x0 is l)

        all f, g [ f = O(g) <=>
            (exists c, n0 [ all n [ n > n0 => f(n) < c*g(n) ]])]
        (definition of big O)

Most statements can be written many ways.  That's great for people but
a nuisance for computers.  It turns out that if you make certain
restrictions on the format of statements, you can prove theorems
mechanically.  That's what logic programming systems do.
Unfortunately, the restrictions that we will put on our statements will
not allow us to handle most of the theorems you learned in math, but we
will have a surprising amount of power left anyway.

Prolog is not the only logic language, but it is by far the most widely
known and used.  Prolog can be thought of declaratively or
imperatively.  I'll emphasize the declarative semantics for now,
because that's what makes logic programming interesting.  We'll get
into the imperative semantics if time permits.

Prolog allows you to state a bunch of axioms.  Then you pose a query
(goal) and the system tries to find a series of inference steps (and
assignments of values to variables) that allow it to prove your query
starting from the axioms.

We insist that all statements be in the form of HORN CLAUSES consisting
of a HEAD and a BODY.  The head is a single TERM; the body is a list of
terms.  A term can be a constant, a variable, or a STRUCTURE consisting
of a FUNCTOR and a parenthesized list of arguments.  A structure can
play the role of a data structure or a predicate.  A constant is either
an ATOM or a NUMBER.  An atom is either what looks like an identifier
beginning with a lower-case letter, or a quoted character string.  A
number looks like an integer or real from some more ordinary language.
A variable looks like an identifier beginning with an upper-case
letter.  There are no declarations; all types are discovered
implicitly.

The meaning of the statement is that the conjunction of the terms in
the body implies the head.  A clause with an empty body is called a
FACT.  A clause with an empty head is a QUERY, or top-level GOAL.  A
clause with both sides is a RULE.

The Prolog interpreter has a collection of facts and rules in its
DATABASE.  Facts are axioms -- things the interpreter assumes to be
true.

    mother(mary, fred).
        % You can either think of this as a predicate asserting that
        % mary is the mother of fred, or as a data structure (tree) in
        % which the functor (atom) mother is the root, mary is the
        % left child, and fred is the right child.
    fat(albert).
    rainy(rochester).
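Given just these facts we can already pose simple queries.  Here is a
sketch of what a session with a typical interpreter looks like (exact
output formatting varies from one implementation to another; seattle is
just an atom that happens not to appear in our database):

    ?- rainy(rochester).
    yes

    ?- rainy(seattle).
    no

    ?- mother(X, fred).
    X = mary
    yes

A query fails ('no') when the interpreter cannot prove it from the
database -- here, because there is no rainy(seattle) fact.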
Rules are theorems that allow the interpreter to infer things.  To be
interesting, rules generally contain variables.

    employed(X) :- employs(Y, X).

can be read: for all X, X is employed if there exists a Y such that Y
employs X.  The scope of a variable is the clause in which it appears.
Variables whose first appearance is on the left-hand side of the clause
have implicit universal quantifiers.  Variables whose first appearance
is in the body of the clause have implicit existential quantifiers.

Note the direction of the implication!  The example does NOT say that X
is employed ONLY IF there is a Y that employs X.  Similarly,

    grandmother(A, C) :- mother(A, B), mother(B, C).

can be read: for all A, C [A is the grandmother of C if there exists a
B such that A is the mother of B and B is the mother of C].  We
probably want another rule that says

    grandmother(A, C) :- mother(A, B), father(B, C).

Again: variables introduced in the LHS are universally quantified;
variables introduced in the RHS are existentially quantified.

To run a Prolog program, one asks the interpreter a question.  This is
done by stating a theorem -- asserting a predicate -- which the
interpreter tries to prove.  If it can, it says 'yes'.  If it can't, it
says 'no'.  If your predicate contained variables, the interpreter
prints the values it had to give them to make the predicate true.

The interpreter works by what is called BACKWARD CHAINING: it begins
with the thing it is trying to prove and works backwards, looking for
things that would imply it, until it gets to facts.  It is also
possible in theory to work forward from the facts, trying to see if any
of the things you can prove from them are what you were looking for,
but that can be very time-consuming.  Fancier logic languages use both
kinds of chaining, with special smarts or hints from the user to bound
the searches.

The predicate you ask for is the interpreter's original GOAL.  In an
attempt to SATISFY that goal, it looks for facts or rules with which
the goal can be UNIFIED.  Unification is a process by which compatible
statements are "merged".  Any variables that do not yet have values,
but which correspond to constants or to variables with values in the
other clause, get INSTANTIATED with those values.  Anyplace where
uninstantiated variables correspond, those variables are identified
with each other, but remain without values.

The interpreter starts at the beginning of your database (this ordering
is part of Prolog, NOT of logic programming in general) and looks for
something with which to unify the current goal.  If it finds a fact,
great; it succeeds.  If it finds a rule, it attempts to satisfy the
terms in the body of the rule depth-first.

This process is motivated by the RESOLUTION PRINCIPLE, due to Robinson.
It says that if C1 and C2 are Horn clauses, where C2 represents a true
statement and the head of C2 unifies with one of the terms in the body
of C1, then we can replace the term in C1 with the body of C2 to obtain
another statement that is true if and only if C1 is true.  When it
attempts resolution, the Prolog interpreter pushes the current goal
onto a stack, makes the first term in the body the current goal, and
goes back to the beginning of the database and starts looking again.
If it gets through the first goal of a body successfully, the
interpreter continues with the next one.  If it gets all the way
through the body, the goal is satisfied and it backs up a level and
proceeds.
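To make this concrete: suppose the database contains the fact
mother(mary, fred) from above, plus one more (invented here for the
sake of the example):

    mother(ann, mary).

Given the first grandmother rule, a typical session might look like
this (a sketch, not a verbatim transcript):

    ?- grandmother(G, fred).
    G = ann
    yes

The goal unifies with the head of the rule, identifying A with G and
instantiating C to fred.  The first subgoal, mother(G, B), unifies with
the fact mother(ann, mary), instantiating G to ann and B to mary.  The
second subgoal is then mother(mary, fred), which is a fact in the
database, so the original goal succeeds.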
If it fails to satisfy the terms in the body of a rule, the interpreter
undoes the unification of the left-hand side (this includes
uninstantiating any variables that were given values as a result of the
unification) and keeps looking through the database for something else
with which to unify.  This process is called BACKTRACKING.  If the
interpreter gets to the end of the database without succeeding, it
backs out a level (that's how it might fail to satisfy something in a
body) and continues from there.

We can visualize backtracking search as a tree in which the top-level
goal is the root and the leaves are facts.  The children of the root
are all the rules and facts with which the goal can unify.  The
interpreter does an OR across them: one of them must succeed in order
for the goal to succeed.  The children of a node in the second level of
the tree are the terms in the body of the rule.  The interpreter does
an AND across these: all of them must succeed in order for the parent
to succeed.  The overall search tree then consists of alternating AND
and OR levels.

---------------------------------

PROLOG IS *NOT* PURELY DECLARATIVE.  The ordering of the database and
the left-to-right pursuit of subgoals gives a deterministic imperative
semantics to searching and backtracking.  Changing the order of
statements in the database can give you different results.  It can lead
to infinite loops.  It can certainly result in inefficiency.

---------------------------------

Example: tree relationships

    parent(a,b).            % a is the parent of b
    parent(a,d).
    parent(a,k).
    parent(k,l).
    parent(k,m).
    parent(b,e).
    parent(b,f).
    parent(f,g).
    parent(f,h).
    parent(f,i).

These facts describe the tree

    a
        b
            e
            f
                g
                h
                i
        d
        k
            l
            m

    ancestor(X,Y) :- parent(X,Y).       % X is an ancestor of Y
    ancestor(X,Y) :- parent(Z,Y), ancestor(X,Z).

Then the question

    ?- ancestor(U,h).

generates the answers

    U = f;
    U = b;
    U = a;
    no

The question

    ?- ancestor(b,U).

generates all nodes in the subtree rooted in b, but it will do it in a
very inefficient way: because U is uninstantiated, the first subgoal in
the 2nd ancestor rule has _two_ uninstantiated arguments, and will run
through all the parental relationships in the database, looking for
pairs (Z,U) for which it can prove that b is an ancestor of Z.

If we change the order of the two ancestor rules, we get a different
execution order for our initial query:

    ?- ancestor(U,h).
    U = a;
    U = b;
    U = f;
    no

If we change the order of the subgoals in the compound rule,

    ancestor(X,Y) :- ancestor(X,Z), parent(Z,Y).

we run into an infinite loop when X is uninstantiated!  (This is worse
than the inefficiency we get with the original order when Y is
uninstantiated.)

---------------------------------

Another example: sorting

Prolog has a built-in shorthand notation for lists.  [a, b, c] is short
for '.'(a, '.'(b, '.'(c, []))), which is a perfectly good structure,
but rather hard to read.  We can sort as follows (permutation is built
into many Prologs, or is easy to write):

    sort(L1,L2) :- permutation(L1,L2), sorted(L2).

    sorted([]).
    sorted([X]).
    sorted([X,Y|L]) :- X =< Y, sorted([Y|L]).

This "generate and test" approach is hopelessly inefficient, though: in
the worst case it considers all n! permutations of the list.  A more
practical approach is quicksort:

    quicksort([], []).
    quicksort([A|L1], L2) :- partition(A, L1, P1, S1),
        quicksort(P1, P2), quicksort(S1, S2),
        append(P2, [A|S2], L2).

    partition(A, [], [], []).
    partition(A, [H|T], [H|P], S) :- A >= H, partition(A, T, P, S).
    partition(A, [H|T], P, [H|S]) :- A =< H, partition(A, T, P, S).

Note that this partitions using the first element of the list, which is
not a very good idea (it leads to O(n^2) time on an already-sorted
list!), but there's no easy way to look at other elements in the list,
since lists aren't random access.

-------------------------------------

Arithmetic

The '=' operator determines whether its operands can be unified:

    ?- A = 37.
    A = 37
    yes

    ?- 2 = 2.
    yes

This is shorthand for =(A,37) and =(2,2).
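The same goes for whole structures: unification matches functors and
corresponding arguments recursively, instantiating variables on either
side.  A sketch (the functor foo is made up for illustration):

    ?- foo(X, b) = foo(a, Y).
    X = a
    Y = b
    yes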
Math operators are functors (structure names), not functions:

    ?- (2+3) = 5.
    no

That's shorthand for =(+(2,3),5): the structure +(2,3) does not unify
with the number 5.  To do math you have to use the built-in operator
'is':

    ?- is(X, 1+2).
    X = 3
    yes

    ?- X is 1+2.
    X = 3
    yes

    ?- 1+2 is 4-1.
    no      % the LHS of 'is' is not evaluated; it is unified with the
            % value of the RHS, and the structure 1+2 is not the
            % number 3

    ?- X is Y.      % error: the RHS of 'is' must already be
                    % instantiated

-------------------------------------

One more example: tic-tac-toe.  This program finds the next move, given
a board configuration, with the computer as player 'x'.  It does not
play a whole game (see the book for an extended version that does).  It
depends on the ordering of rules.  move(A) is the root rule; A is a
result parameter.

    ordered_line(1,2,3).
    ordered_line(4,5,6).
    ordered_line(7,8,9).
    ordered_line(1,4,7).
    ordered_line(2,5,8).
    ordered_line(3,6,9).
    ordered_line(1,5,9).
    ordered_line(3,5,7).

    line(A,B,C) :- ordered_line(A,B,C).
    line(A,B,C) :- ordered_line(A,C,B).
    line(A,B,C) :- ordered_line(B,A,C).
    line(A,B,C) :- ordered_line(B,C,A).
    line(A,B,C) :- ordered_line(C,A,B).
    line(A,B,C) :- ordered_line(C,B,A).

    empty(A) :- not x(A), not o(A).
    same(A,A).
    different(A,B) :- not same(A,B).

    move(A) :- good(A), empty(A).

    % strategy:
    good(A) :- win(A).
    good(A) :- block_win(A).
    good(A) :- split(A).
    good(A) :- strong_build(A).
    good(A) :- weak_build(A).

    % defaults:
    good(5).
    good(1).  good(3).  good(7).  good(9).
    good(2).  good(4).  good(6).  good(8).

    win(A) :- x(B), x(C), line(A,B,C).
    block_win(A) :- o(B), o(C), line(A,B,C).
    split(A) :- x(B), x(C), different(B,C),
        line(A,B,D), line(A,C,E), empty(D), empty(E).
    strong_build(A) :- x(B), line(A,B,C), empty(C), not(risky(C)).
    risky(C) :- o(D), line(C,D,E), empty(E).
    weak_build(A) :- x(B), line(A,B,C), empty(C),
        not(double_risky(C)).
    double_risky(C) :- o(D), o(E), different(D,E),
        line(C,D,F), line(C,E,G), empty(F), empty(G).

-------------------------------------

Imperative control flow

To avoid unnecessary search (and even, in some cases, to get the logic
to work right), the user is allowed in Prolog to outlaw specific
instances of backtracking.  The cut ('!') is a special subgoal that
always succeeds the first time, and CANNOT succeed again.  Putting it
in a right-hand side prevents attempts at re-satisfaction of subgoals
to its left, including unification with the head of the rule.  Prolog
programmers consider cut to be a necessary evil; it is as controversial
as 'goto'.  Consider

    member(X, [X|T]).
    member(X, [H|T]) :- member(X, T).

    prime_candidate(X) :- member(X, candidates), prime(X).

If prime is really expensive to compute, we don't want to consider X
twice just because it appears in candidates twice.

    member(X, [X|T]) :- !.
        % cut commits us to this rule; won't let us try the next rule
    member(X, [H|T]) :- member(X, T).

Prolog also provides a 'not' operator.  not(P) succeeds if the
interpreter is unable to prove P.  Note that this is different from
saying that P is false:

    takes(jane_doe, csc254).

    ?- not takes(X, csc254).
    no

In other words, "I was able to find an X such that X takes 254 (the
nested query succeeded); therefore the negated query fails."  If we
were hoping to get

    X = ying_li

or maybe

    X != jane_doe

we'll be disappointed.

Not can actually be defined in terms of the cut and two other
built-ins, call and fail:

    not(P) :- call(P), !, fail.
    not(P).

Call tries to satisfy P as a goal.  (Prolog is homoiconic: programs are
structures.  A Prolog program can write new pieces of itself on the fly
and execute them with call.  It can also add to and subtract from its
own database; more on this in the book.  Lisp is also homoiconic.)
Fail always fails.
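As a small, hedged illustration (=.., call, and assert are standard
built-ins, but the session shown is hypothetical and output details
vary by implementation): a program can build a goal as an ordinary
structure and then execute it, or extend its own database on the fly:

    ?- G =.. [takes, jane_doe, csc254], call(G).
    G = takes(jane_doe, csc254)
    yes

    ?- assert(takes(ying_li, csc254)), takes(ying_li, X).
    X = csc254
    yes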
The cut and fail can be used to implement imperative control-flow
constructs.  To build an if-then-else:

    statement :- condition, !, then_part.
    statement :- else_part.

To enumerate:

    print_partitions(L) :- append(A, B, L),
        write(A), write(' '), write(B), nl,
        fail.

    ?- print_partitions([a, b, c]).
    [] [a, b, c]
    [a] [b, c]
    [a, b] [c]
    [a, b, c] []
    no

Because of the fail, the query is ultimately doomed.  Along the way,
though, it prints exactly what we need.

To iterate:

    natural(1).
    natural(N) :- natural(M), N is M+1.

    my_loop(N) :- natural(I), I =< N,
        write(I), nl,       % or whatever you want for the loop body
        I = N, !, fail.

This idiom is called "generate and test".  The test at the end of the
first line covers the case when the loop should not be executed at all.
The cut prevents future failures from causing us to backtrack into the
loop.  The final fail is optional: it makes my_loop as a whole fail, so
the whole clause fails the same way print_partitions does; without it,
my_loop succeeds when its test succeeds.
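Under these definitions we would expect a session like the following (a
sketch; natural generates 1, 2, 3, ... in turn, and the conjunct I = N
keeps failing, driving the loop, until I reaches N):

    ?- my_loop(3).
    1
    2
    3
    no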