Functional programming (17 and 22 Sept. 2009)

T2 due Today (Thursday the 17th)
A2 due Monday Sept. 28, 11:59pm
A3 will be semantic analysis, in Scheme

========================================

] In the new assignment, you'll need to really get your mind into the
] functional groove.  DON'T TRY TO GET BY WITHOUT!  Here's a good starting
] exercise.  Grammars are represented like this:
]
]     > calc-gram
]     (("P" ("SL" "$$"))
]      ("SL" ("S" "SL") ())
]      ("S" ("id" ":=" "E") ("read" "id") ("write" "E"))
]      ("E" ("T" "TT"))
]      ("T" ("F" "FT"))
]      ("TT" ("ao" "T" "TT") ())
]      ("FT" ("mo" "F" "FT") ())
]      ("ao" ("+") ("-"))
]      ("mo" ("*") ("/"))
]      ("F" ("id") ("num") ("(" "E" ")")))
]
] And parse tables are represented like this:
]
]     > (parse-table calc-gram)
]     (("P" (("$$" "id" "read" "write") ("SL" "$$")))
]      ("SL" (("id" "read" "write") ("S" "SL")) (("$$") ()))
]      ("S" (("id") ("id" ":=" "E")) (("read") ("read" "id")) (("write") ("write" "E")))
]      ("E" (("(" "id" "num") ("T" "TT")))
]      ("T" (("(" "id" "num") ("F" "FT")))
]      ("TT" (("+" "-") ("ao" "T" "TT")) (("$$" ")" "id" "read" "write") ()))
]      ("FT" (("*" "/") ("mo" "F" "FT")) (("$$" ")" "+" "-" "id" "read" "write") ()))
]      ("ao" (("+") ("+")) (("-") ("-")))
]      ("mo" (("*") ("*")) (("/") ("/")))
]      ("F" (("id") ("id")) (("num") ("num")) (("(") ("(" "E" ")"))))
]
] Given the way I chose to structure the code, there are times when it's
] handy to be able to extract the grammar back out of the parse table.
] Here's a routine that does so:
]
]     (define grammar-of
]       (lambda (parse-tab)
]         ; Extract grammar from parse-tab, so we can invoke the various routines
]         ; that expect a grammar as argument.
]         (map (lambda (L) (append (list (car L)) (map cadr (cdr L))))
]              parse-tab)))
]
] If you understand how it does that, you're probably in good shape.
] If you don't understand it, you need to study it more carefully, review
] Sec. 10.5 in the text, talk to the TA or the instructor, etc.

=================================
Functional programming

Functional languages such as Lisp/Scheme and ML/Haskell/OCaml/F# are an
attempt to realize Church's lambda calculus in practical form as a
programming language.

The key idea: do everything by composing functions.  No mutable state;
no side effects.

So how do you get anything done in a functional language?

---------------------------------
RECURSION

Takes the place of iteration.

Some tasks are "naturally" recursive.  Consider for example the function

                  { a              if a = b
    gcd(a, b) =   { gcd(a-b, b)    if a > b
                  { gcd(a, b-a)    if b > a

(Euclid's algorithm).  We might write this in C as

    int gcd(int a, int b) {     /* assume a, b > 0 */
        if (a == b) return a;
        else if (a > b) return gcd(a-b, b);
        else return gcd(a, b-a);
    }

Other tasks we're used to thinking of as naturally iterative:

    typedef int (*int_func) (int);
    int summation(int_func f, int low, int high) {     /* assume low <= high */
        int total = 0;
        int i;
        for (i = low; i <= high; i++) {
            total += f(i);
        }
        return total;
    }

But there's nothing sacred about this "natural" intuition.  Consider:

    int gcd(int a, int b) {     /* assume a, b > 0 */
        while (a != b) {
            if (a > b) a = a-b;
            else b = b-a;
        }
        return a;
    }

    typedef int (*int_func) (int);
    int summation(int_func f, int low, int high) {     /* assume low <= high */
        if (low == high) return f(low);
        else return f(low) + summation(f, low+1, high);
    }

More significantly, the recursive solution doesn't have to be any more
expensive than the iterative solution.
In Scheme, the gcd function would be written

    (define gcd
      (lambda (a b)
        (cond ((= a b) a)
              ((> a b) (gcd (- a b) b))
              (else (gcd a (- b a))))))

<< explain define, lambda, cond, Cambridge prefix >>

Note that the recursive call is the LAST thing gcd does -- no further
computation after the return.  This is called TAIL RECURSION.  Scheme
implementations will translate this as, roughly:

    gcd(a, b) {
    top: if a == b return a
         elsif a > b
             a := a - b
             goto top
         else
             b := b - a
             goto top
    }

Scheme programmers get good at writing functions that are naturally
tail recursive.  For example, instead of

    (define summation
      (lambda (f low high)
        (if (= low high)
            (f low)                                     ; then part
            (+ (f low) (summation f (+ low 1) high))))) ; else part

we could write

    (define summation
      (lambda (f low high subtotal)
        (if (= low high)
            (+ subtotal (f low))
            (summation f (+ low 1) high (+ subtotal (f low))))))

To get rid of the unfortunate extra parameter, we then wrap it:

    (define summation
      (lambda (f low high)
        (letrec ((sum-helper
                   (lambda (low subtotal)
                     (let ((new_subtotal (+ subtotal (f low))))
                       (if (= low high)
                           new_subtotal
                           (sum-helper (+ low 1) new_subtotal))))))
          (sum-helper low 0))))

<< explain let, let*, letrec, lexical nesting >>

(This tail recursive code exploits the associativity of addition; a
compiler is unlikely to do it for us automatically.  There exist
automatic mechanisms to turn non-tail-recursive functions into
tail-recursive ones, using what's known as *continuation passing
style*, but this wouldn't be as efficient in this case.)

--------

Sometimes you'll hear someone argue that recursion is *algorithmically
inferior* to iteration.  Fibonacci numbers are sometimes given as an
example:

    (define fib
      (lambda (n)
        (cond ((= n 0) 1)
              ((= n 1) 1)
              (else (+ (fib (- n 1)) (fib (- n 2)))))))

This takes O(2^n) time, where O(n) is possible.  In a von Neumann
language we are taught to write

    int fib(int n) {
        int f1 = 1;  int f2 = 1;
        int i;
        for (i = 2; i <= n; i++) {
            int temp = f1 + f2;
            f1 = f2;  f2 = temp;
        }
        return f2;
    }

But there's no reason why we have to do this.  In Scheme we write

    (define fib
      (lambda (n)
        (letrec ((fib-helper
                   (lambda (f1 f2 i)
                     (if (= i n)
                         f2
                         (fib-helper f2 (+ f1 f2) (+ i 1))))))
          (fib-helper 0 1 0))))

Thinking about recursion as a direct, mechanical replacement for
iteration is the wrong way to look at things.  One has to get used to
thinking in a recursive style.

NB: One can actually do better than O(n).  In particular, F(n) is the
nearest whole number to phi^n/sqrt(5), where phi = (1 + sqrt(5))/2,
but this has high constant-factor costs and problems with numeric
precision.  For modest n, the O(n) algorithm is perfectly respectable.

NB2: Scheme has imperative features, so we *can* write the iterative
version.  It runs against the grain of the language, however (like
writing C-like code in C++, only worse), and you won't be allowed to
do it for the upcoming assignment.

NB3: Recursion isn't enough by itself to create a really useful
functional language.  You also need *higher-order functions*
(functional forms).  More on this later.
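One more example of the accumulator trick used in summation and fib
above, to close out this section.  (This is my own sketch, not from the
text or the assignment; the names fact and fact-helper are arbitrary.)

    (define fact
      (lambda (n)
        (letrec ((fact-helper
                   (lambda (i acc)          ; acc carries the product so far
                     (if (> i n)
                         acc
                         (fact-helper (+ i 1) (* acc i))))))
          (fact-helper 1 1))))

    (fact 5) ==> 120

As with summation, the helper threads a running result through the
recursion, so the recursive call is the last thing the function does.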
---------------------------------
A more complete list of necessary features for functional programming,
many of which are missing in some imperative langs:

    recursion
    1st-class and higher-order functions (including unlimited extent)
    serious polymorphism
    powerful list facilities
    fully general aggregates
    structured function returns
    garbage collection

Lisp also has
    homoiconicity
    self-definition
    read-eval-print

ML/Haskell/F# have
    Milner type inference
    pattern matching
    implicit currying
    syntactic sugar: list comprehensions, monads
    these are not necessarily present in other functional langs

LISP
    Pure (original) Lisp
    Interlisp
    MacLisp
    Emacs Lisp
    Common Lisp
    Scheme
    others

Pure Lisp is purely functional; other Lisps have imperative features.

All early Lisps dynamically scoped.  Not clear whether this was
deliberate or if it happened by accident.

Scheme and Common Lisp statically scoped (Common Lisp provides dynamic
scope as an option for explicitly-declared "special" variables).

Common Lisp now THE standard Lisp.  Very big; complicated.  The Ada of
functional programming.

Scheme is a particularly elegant Lisp.  Only dialect in the list above
to implement functions as *truly* first-class objects: can be
manipulated in *any* way that objects of other types can (passed as
parameters, returned as values, stored in variables or data structures,
read/written, etc.).

Scheme also makes *continuations* first-class objects.  (A continuation
is a code pointer and a referencing environment.  It encapsulates a
potential future computation.  A subroutine call can be described as a
goto that passes a continuation for the return point as a parameter,
to be used as the target of a future goto-continuation call.)

Scheme is a very good teaching language (used for first courses at MIT
and elsewhere).

There are lots of other functional programming languages.  Lisp and ML
are the roots of the two main trees.  ML was developed by Robin Milner
in the mid-to-late 1970s.  The two main dialects of ML today are SML
and OCaml.  Microsoft's new F# is an OCaml dialect.  Haskell is also
increasingly popular, and is the leading language for research in
functional programming.

--------------------------
Introduction to programming in Scheme.  This is still R5RS.  R6RS was
codified in 2007; it's quite a lot bigger, but we won't be using it.

Interpreter runs a read-eval-print loop.  Things typed into the
interpreter are evaluated (recursively) once.

Anything in parentheses is a function call (unless quoted).
Parentheses are NOT just grouping, as they are in Algol-family
languages.  Adding a level of parentheses changes meaning.  For example:

    (car '(1 2 3)) ==> 1
    ((car '(1 2 3))) ==> error   (the inner expression evaluates to 1,
                                  which is then applied as a function)

The '==>' arrow means 'evaluates to'.  It's part of my notes, not part
of the language.

Evaluating a lambda expression produces a function.

Built-ins
    Boolean values #t and #f
        #f is equivalent to the empty list () in some implementations
    Numbers
    Lambda expressions
    Quoting
        (+ 3 4) ==> 7
        (quote (+ 3 4)) ==> (+ 3 4)
        '(+ 3 4) ==> (+ 3 4)

Mechanisms for creating new scopes
    (let ((square (lambda (x) (* x x)))
          (plus +))
      (sqrt (plus (square a) (square b))))
    let*
    letrec
    (a sketch contrasting all three appears below)

Mechanisms for creating bindings in outer scope
    (define hypot
      (lambda (a b)
        (let ((square (lambda (x) (* x x)))
              (plus +))
          (sqrt (plus (square a) (square b))))))

[ There is a shorthand syntax for function definitions that omits the
  word 'lambda'.  Most Lisps have such a shorthand, but it varies from
  one dialect to another. ]
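Since let* and letrec were only named above, here is a minimal sketch
of how the three binding forms differ.  (My own examples, not from the
text; my-even? and my-odd? are arbitrary names.)

    ; let evaluates all the right-hand sides first, in the enclosing scope:
    (let ((x 2)
          (y 3))
      (* x y))                          ==> 6

    ; let* evaluates them left to right, so later bindings can use earlier ones:
    (let* ((x 2)
           (y (+ x 1)))
      (* x y))                          ==> 6

    ; letrec puts all the names in scope before evaluating the right-hand
    ; sides, so (mutually) recursive function definitions work:
    (letrec ((my-even? (lambda (n) (if (= n 0) #t (my-odd? (- n 1)))))
             (my-odd?  (lambda (n) (if (= n 0) #f (my-even? (- n 1))))))
      (my-even? 10))                    ==> #t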
Conditional expressions
    (if (< 2 3) 4 5) ==> 4
    (cond ((< 3 2) 1)
          ((< 4 3) 2)
          (else 3)) ==> 3
    [ Most Lisps use the literal 'T' to introduce the default clause. ]

Imperative stuff
    assignments
    sequencing (begin)
    iteration
    I/O (read, display)
    (load "file")

Few dozen standard functions (this is not a complete list)
    arithmetic
    boolean operators
    equivalence
        eqv?
            always terminates; is guaranteed to work "right" on
            Booleans, symbols, numbers, characters, the empty list, and
            references to the same object (pair, vector, string,
            function) in memory
        eq?
            may be faster than eqv?; typically implemented with simple
            pointer comparison; may not work "right" on numbers and
            characters, but will still work right on Booleans, symbols,
            the empty list, and references to the same object; will
            always return #f if eqv? and/or equal? would return #f; may
            return #f when eqv? and/or equal? would return #t
        equal?
            recursive structural compare; may not terminate

            You can think of equal? like this:

                (define equal?
                  (lambda (l1 l2)
                    (cond ((atom? l1) (eqv? l1 l2))  ; atom? = "not a pair"; not standard Scheme
                          ((atom? l2) #f)
                          ((equal? (car l1) (car l2))
                           (equal? (cdr l1) (cdr l2)))
                          (else #f))))

    list operators
        car
        cdr
        cadr, cddr, etc.
        cons
        append
        list
        memq, memv, member
            takes an object and a list as arguments
            returns the longest suffix of the list, if there is one,
                whose first element is eq/eqv/equal to the given object
            else returns #f
        assq, assv, assoc
            takes an object and a list as arguments
            the list is expected to be a list of 2-element lists
            returns the first 2-element list, if any, whose first
                element is eq/eqv/equal to the given object
            else returns #f
    pair?
    null?
    symbol?
    number? complex? real? rational? integer?

    NB: These question-mark predicates are used by built-ins like
    arithmetic to do run-time type checking.  User-defined code can use
    them also, to catch errors earlier and produce better messages.

--------------------------
| You may notice that programs look like lists.  In fact, they ARE
| lists, and a Scheme program is free to construct lists on the fly and
| use them as functions -- extending the program as it goes along.
| Formally, lists are really nested series of pairs, which can be
| written with 'dot' notation:
|     (A . (B . (C . (D . ()))))
| In a "proper" list the second element of the last pair is always the
| empty list.
| This notation is really ugly, so the alternative:
|     (A B C D)
| is almost always used instead (though it doesn't work for improper lists).
|
| Example of lists as programs:
|
|     (define foo (lambda (a b)
|         (lambda (x) (a (b x)))))
|     ((foo car cdr) '(1 2 3)) ==> 2
|
|     (define bar (lambda (a b)
|         (eval (list 'lambda '(x) (list a (list b 'x)))
|               (interaction-environment))))   ; R5RS eval wants an explicit environment
|     ((bar car cdr) '(1 2 3)) ==> 2
|
| We say that Lisp is HOMOICONIC (so is Prolog).
| It can be implemented with a METACIRCULAR INTERPRETER.
|
| Most functional languages are NOT homoiconic.
|
| We can use meta-circularity to define the semantics of Scheme, formally.
| Suppose M is a denotational function mapping Scheme expressions to their
| meaning, where the meaning is a mathematical object.
|
| Also suppose I is the Scheme interpreter (itself a Scheme expression).
| For all Scheme expressions E, M(E) = (M(I)) (E), or put another way,
| M(I) = M.
|
| Now let H(F) = F(I) where F is any function that takes a Scheme
| expression as its argument.  We have H(M) = M(I) = M, so M is a *fixed
| point* of H.  We can use H and the tools of denotational semantics to
| obtain a rigorous definition of M.  (Beyond the scope of this course.)
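The DFA simulator in the next section looks up transitions with assoc.
As a quick warm-up (my own toy example, not part of the simulator),
here is why assoc, and not assq, is the right tool when the keys are
themselves lists:

    (define trans '(((q0 0) q2) ((q0 1) q1)))

    (assq  (list 'q0 0) trans) ==> #f          ; eq? compares identity, and the
                                               ; freshly built list is a new object
    (assoc (list 'q0 0) trans) ==> ((q0 0) q2) ; equal? compares structure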
--------------------------
Example program in Scheme.  Simulation of a DFA.

We'll invoke the program by calling a function called 'simulate',
passing it a DFA description and an input string.

The automaton description is a list of three items: the start state,
the transition function, and the set of final states (which in this
case has only one element, but might in general have any number).

The transition function is represented by a list of pairs.  The first
element of each pair is another pair, whose first element is a state
and whose second element is an input symbol.  If the current state and
next input symbol match the first element of a pair, then the finite
automaton enters the state given by the second element of the pair.

For example, if we type in

    (define zero-one-even-dfa
      '(q0                                                  ; start state
        (((q0 0) q2) ((q0 1) q1) ((q1 0) q3) ((q1 1) q0)    ; transition function
         ((q2 0) q0) ((q2 1) q3) ((q3 0) q1) ((q3 1) q2))
        (q0)))                                              ; set of final states

    [This DFA accepts strings containing an even number of 0s and an
     even number of 1s.]

    (simulate zero-one-even-dfa '(0 1 1 0 1))               ; input string

then the Scheme interpreter should print out the list

    (q0 q2 q3 q2 q0 q1 reject)

If we change the input string to 010010 it should print

    (q0 q2 q3 q1 q3 q2 q0 accept)

Here is a program that does it:

    (define simulate
      (lambda (dfa input)
        (cons (current-state dfa)                 ; start state
              (if (null? input)
                  (if (infinal? dfa) '(accept) '(reject))
                  (simulate (move dfa (car input)) (cdr input))))))

    ;; access fns for machine description
    (define current-state car)
    (define transition-function cadr)
    (define final-states caddr)

    (define infinal?
      (lambda (dfa)
        (memq (current-state dfa) (final-states dfa))))

    (define move
      (lambda (dfa symbol)
        (let ((cs (current-state dfa)) (trans (transition-function dfa)))
          (list (if (eq? cs 'error)
                    'error
                    (let ((pair (assoc (list cs symbol) trans)))
                      (if pair (cadr pair) 'error)))    ; new start state
                trans                                   ; same trans fn
                (final-states dfa)))))                  ; same final states

The basic idea is this: simulate takes a DFA and an input string as
argument.  If the input string is empty, it checks to see if the start
state of the DFA is a final state.  If the input string is not empty,
simulate calls itself recursively, passing a one-symbol-shorter input
string and a DFA that has been modified to have a different start
state, namely the one that the old DFA moved to when given the initial
input symbol.

--------------------------
Evaluation order

Applicative order
    what you're used to in imperative languages
    usually faster
Normal order
    like call-by-name: don't evaluate arg until you need it
    sometimes faster
    terminates if anything will (Church-Rosser theorem)

In Scheme
    functions use applicative order
        defined with lambda
    special forms (aka macros) use normal order
        defined with syntax-rules

A *strict* language requires all arguments to be well-defined, so
applicative order can be used.  A *non-strict* language does not
require all arguments to be well-defined; it requires normal-order
evaluation.  Lisp and ML are strict.  Haskell is non-strict.

Lazy evaluation gives the best of both worlds.  But not good in the
presence of side effects.

delay and force in Scheme
    delay creates a "promise"

    (define naturals
      (letrec ((next (lambda (n) (cons n (delay (next (+ n 1)))))))
        (next 1)))
    (define head car)
    (define tail (lambda (stream) (force (cdr stream))))

    (head naturals) => 1
    (head (tail naturals)) => 2
    (head (tail (tail naturals))) => 3
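Once head and tail are defined, it's easy to pull a finite prefix out
of the infinite stream.  A small sketch (take is my own helper here,
not a standard R5RS procedure):

    (define take
      (lambda (stream n)
        (if (= n 0)
            '()
            (cons (head stream) (take (tail stream) (- n 1))))))

    (take naturals 5) => (1 2 3 4 5)

Only the elements actually demanded are ever computed; the rest of the
stream stays a promise.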
--------------------------
Higher-order functions

Take a function as argument, or return a function as a result.

Examples
    apply
        (apply + '(1 2 3)) ==> 6
    map
        (map * '(2 4 6) '(3 5 7)) ==> (6 20 42)
    compose (not pre-defined in some implementations)
        (define compose (lambda (f g) (lambda (x) (f (g x)))))
        ((compose car cdr) '(1 2 3)) ==> 2
    for-each
        Much like map, but executed for its side effects; has an
        undefined return value.  Used only in imperative programs.
    call/cc

With call/cc, recursion is just something you can do with higher-order
functions.

Higher-order functions are great for building things:

    (define fold
      (lambda (f i l)        ;; aka reduce
        (if (null? l)
            i                ; initial i is commonly the identity element for f
                             ; in nested calls it's a subtotal
            (fold f (f i (car l)) (cdr l)))))

    (fold + 0 '(1 2 3 4 5)) => 15
    (fold * 1 '(1 2 3 4 5)) => 120

    (define total (lambda (l) (fold + 0 l)))
    (total '(1 2 3 4 5)) => 15

    (define total-all (lambda (l) (map total l)))
    (total-all '((1 2 3 4 5) (2 4 6 8 10) (3 6 9 12 15))) => (15 30 45)

Currying (after Haskell Curry, the same guy Haskell is named after):

    (define curried-plus (lambda (a) (lambda (b) (+ a b))))
    ((curried-plus 3) 4) ==> 7

    (define plus-3 (curried-plus 3))
    (plus-3 4) ==> 7

    (define curry (lambda (f) (lambda (a) (lambda (b) (f a b)))))
    (((curry +) 3) 4) ==> 7

    (define curried-plus (curry +))
    (map (curried-plus 3) '(1 2 3)) ==> (4 5 6)

NB: This function requires UNLIMITED EXTENT
    (define inc ((curry +) 1))

ML, Miranda, and Haskell have especially nice syntax for curried
functions.

    fun curried_plus a = fn b : int => a + b;
can be abbreviated
    fun curried_plus a b : int = a + b;

Then juxtaposition makes use of curried functions really clean.
Compare

    curried_fold plus 0 [1, 2, 3, 4, 5];      (* ML or Haskell *)
to
    (((curried_fold +) 0) '(1 2 3 4 5))       ; Scheme

Cf. pointer dereferencing syntax:
    a^.b^.c          (* Pascal *)
    (*((*a).b)).c    /* C */

==================================

Advantages of functional languages:
    lack of side effects makes programs easier to understand
    lack of explicit evaluation order (in some languages) offers
        possibility of parallel evaluation (e.g. MultiLisp)
    lack of side effects and explicit evaluation order simplifies some
        things for a compiler (provided you don't blow it in other ways)
    programs are often surprisingly short (see the quicksort sketch
        after this list)
    language can be extremely small and yet powerful

Problems:
    difficult (but not impossible!) to implement efficiently on von
        Neumann machines
        lots of copying of data through parameters
        (apparent) need to create a whole new array in order to change
            one element
        very heavy use of pointers (space and time and locality problem)
        frequent procedure calls
        heavy space use for recursion
        requires garbage collection
    requires a different mode of thinking by the programmer
    difficult to integrate I/O into purely functional model
        leading approach is the monads of Haskell -- sort of an
        imperative wrapper around a purely functional program; allows
        functions to be used not only to calculate values, but also to
        decide on the order in which imperative actions should be
        performed.
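To back up the claim above that functional programs are often
surprisingly short, here is a complete quicksort -- a sketch of my own,
not from the text.  (filter is not in R5RS, so it's defined here too.)

    (define filter
      (lambda (p l)
        (cond ((null? l) '())
              ((p (car l)) (cons (car l) (filter p (cdr l))))
              (else (filter p (cdr l))))))

    (define qsort
      (lambda (l)
        (if (null? l)
            '()
            (let ((pivot (car l))
                  (rest (cdr l)))
              (append (qsort (filter (lambda (x) (< x pivot)) rest))
                      (list pivot)
                      (qsort (filter (lambda (x) (>= x pivot)) rest)))))))

    (qsort '(3 1 4 1 5 9 2 6)) ==> (1 1 2 3 4 5 6 9)

It builds new lists rather than sorting in place -- which also
illustrates the "lots of copying" problem above.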
--------------------------
| Lambda calculus
|
| A notation/model of computation based on purely syntactic symbol
| manipulation, in which everything is a function.
|
| Developed by Alonzo Church in the '30's as a model for computability
|     Church was one of a crowd that also included Chomsky, Turing,
|     Kleene, and Rosser
|     everybody heard of Church's thesis?
|
| [ We can define things like integers in terms of a distinguished
| function (like the identity function) that represents zero, and a
| successor function that gives us all the other numbers.  This makes
| it easy to define the arithmetic operators within the notation.  In
| practice this is kind of a nuisance; I will just assume the
| existence of arithmetic and of distinguished "constant functions"
| for numbers. ]
|
| example lambda expressions
|     identity    Lx.x
|     const7      Lx.7
|     plus        Lx.Ly.x + y
|     square      Lx.x * x
|     hypot       Lx.Ly.sqrt (plus (square x) (square y))
|
| Recursively, a lambda expression is
|     (1) a name,
|     (2) an abstraction consisting of a lambda, a name, a dot, and a
|         lambda expression,
|     (3) an application consisting of two adjacent lambda expressions
|         (juxtaposition means function application), or
|     (4) a parenthesized lambda expression.
|
| Usually application associates left-to-right, so f A B means (f A) B,
| rather than f (A B).  Also, application has higher precedence than
| abstraction, so Lx.A B is Lx.(A B), rather than (Lx.A) B.  Note that
| ML follows these rules.
| Parentheses are used for clarity, or to break the rules: by
| convention, usually use them around any abstraction that is used as a
| function or an argument:
|     (Lf.f 2) (Lx.plus x x)
| and around any application that is used as an argument:
|     double (minus 5 2)
| These rules mean that the scope of the dot extends right all the way
| to the first unmatched right parenthesis, or the end of the whole
| expression if there is no such parenthesis.
|
| In (Lx.Ly.Lz.e) a b c, the initial function takes a single argument
| and returns a function (of one argument) that returns a function (of
| one argument).  To reduce the expression, you substitute a for any
| x's in Ly.Lz.e, then you substitute b for any y's in what remains,
| and then c for any z's in what remains.
|
|     (Lx.Ly.x + y) 3 4
|     (Ly.(3 + y)) 4
|     (3 + 4)
|     7
|
| free and bound variables
|     a variable is bound if it is introduced by a lambda.
|     For example, in Lx.Ly.(* x y) we have two nested lambda expressions.
|     x is free in the inner one (Ly.(* x y)), but bound in the outer.
|     Bindings have scopes, just like they do in programming languages.
|
| evaluation of lambda expressions through
|     (1) substituting in arguments (beta reduction)
|             (Lx.times x x) y  =>  times y y
|     (2) renaming variables (alpha conversion)
|         (often to avoid naming conflicts)
|             (Lx.times x x) y  ==  (Lz.times z z) y
|     (3) simplification "out of order" (eta reduction)
|             (Lx.f x)  =>  f
|
| This last rule I found hard to understand.
| It is NOT the same as beta reduction.
| "(Lx.f x)" is a function.  x is not its argument; x is inside.
| But if we apply the function to, say, y:
|     (Lx.f x) y
| we get the same result as we get by applying f to y.
|
| To make things concrete, suppose f = (Lz.times z z), i.e., squaring.
| Then
|     (Lx.f x) 3 == (Lx.(Lz.times z z) x) 3
|         =>beta (Lz.times z z) 3    [outer]
|         =>beta times 3 3.
| Using eta reduction,
|     (Lx.f x) 3 =>eta f 3 == (Lz.times z z) 3 =>beta times 3 3.
| Same result.
|
| In English, f is the function "square your argument".
| (f x) is the expression "square x", where x is a free variable.
| (Lx.f x) is "square x" AS A FUNCTION OF X, or, put another way,
| "square your argument".
| The only real difference between the two expressions is that f is
| less messy-looking, while (Lx.f x) reminds you explicitly that f is a
| function (i.e. that it takes an argument).
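|
| For what it's worth, the same phenomenon shows up in Scheme (a sketch
| of my own, not from the text): wrapping a function in a lambda that
| just passes its argument along changes nothing but the amount of ink.
|
|     (define square (lambda (z) (* z z)))
|     (define square2 (lambda (x) (square x)))   ; the eta-expanded version
|
|     (square 3)  ==> 9
|     (square2 3) ==> 9
|
| Likewise, (map square '(1 2 3)) and (map (lambda (x) (square x)) '(1 2 3))
| are interchangeable.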
|
| an example from the text:
|
|     (Lf.Lg.Lh.f g (h h)) (Lx.Ly.x) h (Lx.x x)
|         This is a function -- (Lf.Lg.Lh.f g (h h)) -- applied to
|         three arguments -- (Lx.Ly.x), h, and (Lx.x x)
|     =>beta  (Lg.Lh.(Lx.Ly.x) g (h h)) h (Lx.x x)
|     =>alpha (Lg.Lk.(Lx.Ly.x) g (k k)) h (Lx.x x)
|         This conversion is necessary to avoid capturing h in the
|         following step:
|     =>beta  (Lk.(Lx.Ly.x) h (k k)) (Lx.x x)
|     =>beta  (Lx.Ly.x) h ((Lx.x x) (Lx.x x))
|     =>beta  (Ly.h) ((Lx.x x) (Lx.x x))
|         The function in this last step doesn't use its argument!
|     =>beta  h
|
| Beta reduction is like substitution of macro parameters, except
| naming conflicts aren't allowed.  Alpha conversion serves to get rid
| of the naming conflicts.  The combination is like call-by-name
| parameters.
|
| Note that at the last line we had a function applied to an
| application.  We had the choice of substituting the application in
| for y (which we did) or evaluating it first.  If we'd tried the
| latter, we'd have ended up in an infinite recursion.
|
| These options illustrate the difference between NORMAL ORDER
| evaluation and APPLICATIVE ORDER evaluation.  The Church-Rosser
| theorem states that if any evaluation order will terminate, normal
| order will.  Along the way, however, it may do extra work.
|
| For reasons of efficiency, most functional languages use applicative
| order evaluation.  You can get normal order when you want it in Scheme
| with 'delay' and 'force'.  Also, most built-in primitives are
| *functional forms*, rather than functions: they evaluate their
| arguments when they have to [(+ 2 3) produces 5, not (+ 2 3)], but
| they use normal order along the way.  For example, if and cond don't
| evaluate unnecessary arguments at all (neither do and and or --
| short-circuit evaluation).  Common Lisp allows you to define new
| functional forms.  In lambda calculus, you can use normal order
| evaluation whenever you want (evaluation order isn't part of the basic
| rules), and built-in functions like math aren't special (we use
| functions to represent numbers), so functional forms and functions
| aren't different.
|
| ----------------
| Multiple parameters (tuples)
|
| Let
|     select_first  == Lx.Ly.x
|     select_second == Lx.Ly.y
|
| Then let
|     cons == La.Ld.Lx.x a d
|     car  == Ll.l select_first
|     cdr  == Ll.l select_second
|
|     car (cons A B) == (Ll.l select_first) (cons A B)
|         =>beta  (cons A B) select_first
|         ==      ((La.Ld.Lx.x a d) A B) select_first
|         =>*beta (Lx.x A B) select_first
|         =>beta  select_first A B
|         ==      (Lx.Ly.x) A B
|         =>*beta A
|
|     cdr (cons A B) == (Ll.l select_second) (cons A B)
|         =>beta  (cons A B) select_second
|         ==      ((La.Ld.Lx.x a d) A B) select_second
|         =>*beta (Lx.x A B) select_second
|         =>beta  select_second A B
|         ==      (Lx.Ly.y) A B
|         =>*beta B
|
| Book shows how to do Booleans and control flow.
|
| ----------------
| Recursive functions
|
| Note that our usual specification of recursive functions uses names
| that are referred to recursively:
|
|     factorial(n) = if n = 0 then 1 else n * factorial(n-1)
|
| How do we do this in pure lambda calculus?
| Depends on the notion of fixed point.
|
| Use beta abstraction to get
|
|     factorial = (Lf.Ln. if n = 0 then 1 else n * f(n-1)) factorial
|
| This is of the form factorial = F factorial
| What we need is a *fixed point* of F.
|
| One can prove that Y F works, where Y == Lh.(Lx.h (x x)) (Lx.h (x x)).
| More on this in the text.
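|
| If you want to see the fixed-point idea work in Scheme -- a sketch of
| my own, not from the text -- you need the applicative-order variant of
| Y, often called Z, which eta-expands the self-application so that it
| isn't evaluated too early:
|
|     (define Y
|       (lambda (h)
|         ((lambda (x) (h (lambda (a) ((x x) a))))
|          (lambda (x) (h (lambda (a) ((x x) a)))))))
|
|     (define fact
|       (Y (lambda (f)
|            (lambda (n)
|              (if (= n 0) 1 (* n (f (- n 1))))))))
|
|     (fact 5) ==> 120
|
| No define refers to itself; the recursion comes entirely from Y.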
|
| ----------------
| Currying
|
| functions of several arguments v. nested functions of one argument
|     L[x,y].expr is equivalent to Lx.(Ly.expr).
| currying (named after Haskell Curry) is the process of turning
| the former into the latter.
| Example: (plus x y) is really ((plus x) y).
|     (plus x) is a function of one argument that returns the
|     sum of its argument and x.
|
| We already saw how to do this in Scheme.
|
| ----------------
| higher-order functions (functional forms)
|     (apply above)
|     (curry above)
|     compose
|     map (apply-to-all)
|     construct    (construct (f g h) x) == ((f x) (g x) (h x))
|     etc.
|
| Example:
|     compose f g == Lx.(f (g x))
|     given f = Lz.(square z), g = Ly.(abs y)
|     compose f g
|         == Lx.(Lz.(square z) (Ly.(abs y) x))
|         == Lx.(square (Ly.(abs y) x))
|         == Lx.(square (abs x))
|
|     (f g x) == ((f g) x)
|     (compose f g) x == (f (g x))
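|
| A Scheme version of construct, again a sketch of my own (it is not a
| built-in): it takes any number of functions and returns a function
| that applies each of them to its argument.
|
|     (define construct
|       (lambda fns                          ; variadic: fns is a list of functions
|         (lambda (x)
|           (map (lambda (f) (f x)) fns))))
|
|     ((construct car cdr length) '(1 2 3)) ==> (1 (2 3) 3)
|
| Note that it's built out of nothing but map and closures.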