Naming, scoping, binding, and related notions
(2 and 4 October 2012)

A3 due soon: Monday 16 Oct., midnight
Midterm in class on Thurs. Oct. 18

------------------------------

A name is exactly what you think it is.  Most names are identifiers,
though symbols (like '+') can also be names.

A binding is an association between two things, such as a name and the
thing it names.

The scope of a binding is the part of the program (textually) in which
the binding is active.

BINDING TIME is the point at which a binding is created or, more
generally, the point at which any implementation decision is made.
Examples include

    language design time
        program structure, possible types
    language implementation time
        I/O, arithmetic overflow, type equality (if unspecified in manual)
    program writing time
        algorithms, names
    compile time
        plan for data layout
    link time
        layout of whole program in memory
    load time
        choice of physical addresses
    run time
        value/variable bindings, sizes of strings
        subsumes
            program start-up time
            module entry time
            elaboration time (point at which a declaration is first "seen")
            procedure entry time
            block entry time
            statement execution time

The terms STATIC and DYNAMIC are generally used to refer to things bound
before run time and at run time, respectively.  Clearly "static" is a
coarse term.  So is "dynamic".

IT IS DIFFICULT TO OVERSTATE THE IMPORTANCE OF BINDING TIMES IN
PROGRAMMING LANGUAGES.

In general, early binding times are associated with greater efficiency;
later binding times are associated with greater flexibility.  Languages
with lots of early binding tend to be compiled; languages with lots of
late binding tend to be interpreted.

Today I want to talk in particular about the binding of identifiers to
the things they name.  I'll use the name "object," informally, for
anything that can have a name.

------------------------------------------------------

Scope Rules -- control bindings
-----------

Fundamental to all programming languages is the ability to name objects,
i.e., to refer to data using symbolic identifiers rather than addresses.

Not all data is named!  For example, dynamic storage in C or Ada 95 or
Fortran 90 is referenced through pointers, not names.  Similarly,
dynamic storage in Java or C# is referred to indirectly through
references.

Things we have to keep track of:
    creation of objects
    creation of bindings
    references to variables (which use bindings)
    (temporary) deactivation (hiding) of bindings
    reactivation of bindings
    destruction of bindings
    destruction of objects

    If an object outlives its binding, it's garbage.
    If a binding outlives its object, it's a dangling reference.

The period of time from creation to destruction is called the LIFETIME
of a binding.  The textual region of the program in which the binding is
*active* is its SCOPE.

In addition to talking about the "scope of a binding", we sometimes use
the word "scope" as a noun all by itself, without naming a particular
binding.  A "scope" is a program section of maximal size in which no
bindings are destroyed.

In most languages with subroutines, we OPEN a new scope on subroutine
entry.  We create bindings for new local variables, deactivate bindings
for global variables that are redeclared (these variables are said to
have a "hole" in their scope), and then make references to variables.
On subroutine exit, we destroy bindings for local variables and
reactivate bindings for nonlocal variables that were deactivated.
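To make the "hole in scope" idea concrete, here is a small C++ sketch of
my own (not from the original notes): the local declaration of x hides
the global binding for the duration of the call.

    #include <iostream>

    int x = 1;                       // global binding for x

    void f() {
        int x = 2;                   // new binding; the global x now has a
                                     // "hole" in its scope
        std::cout << x << "\n";      // 2: the local binding is used here
    }                                // local binding destroyed on exit

    int main() {
        f();
        std::cout << x << "\n";      // 1: the global binding is active again
    }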
Algol 68 introduced the term ELABORATION for the process of creating
bindings when entering a scope.  Ada re-popularized the term.
Elaboration time is a useful concept.  In Ada, storage may be allocated,
tasks (threads) started, even exceptions propagated as a result of the
elaboration of declarations.

In some scripting languages (Python and Ruby among them), declarations
really _are_ executed:

    class Foo:
        if A > B:
            def bar(self):
                ...
        else:
            def bar(self):
                ...

REFERENCING ENVIRONMENT (of a statement or expression) - the set of
active bindings.  A referencing environment corresponds to a collection
of scopes that are examined (in order) to find a binding.  SCOPE RULES
determine that collection and its order.

With STATIC (LEXICAL) SCOPE RULES, a scope is defined in terms of the
physical (lexical) structure of the program.  The determination of
scopes can be made by the compiler.  All bindings for identifiers can be
resolved by examining the program.  Most programming languages today
employ static scope rules.

Typically, we choose the closest enclosing binding, as originally
proposed in Algol 60.  An identifier is known in the scope in which it
is declared and in each enclosed scope, unless it is redeclared in an
enclosed scope.  To resolve a reference to an identifier, we examine the
local scope and statically enclosing scopes until a binding is found.
There are important variants on this; see "declaration order" below.
Note that subroutines are permitted to nest in many languages, though
not in C and its descendants.

A slightly newer example of static scope rules (1980s) is the
import/export strategy of modular languages such as Modula-2.  Within a
module, the block structure of Algol 60 is assumed.  However, an
identifier declared within a module may be referenced in the enclosing
scope only if it is exported.  Similarly, an identifier outside a module
may be referenced within the module only if it is imported.

The exact rules that control visibility of names across module
boundaries vary from language to language.  Some languages (e.g.,
Modula-2) require explicit export from the defining module AND explicit
import into the using module.  Some languages (e.g., C++, with its
namespace mechanism) require only explicit import: everything in the
namespace is implicitly exported.

Modules that require explicit import are said to be CLOSED scopes.
Subroutines and nested blocks in most languages are said to be OPEN
scopes: identifiers that are not redeclared are automatically inherited
from the enclosing scope.

Classes in object-oriented languages are a generalization of modules.
These have even more sophisticated (static) scope rules.  In particular,
methods of a class can generally access members of the class regardless
of whether they are nested inside a common lexical (textual) construct.

[ Euclid is an example of a language with lexically-nested scopes in which
[ all scopes are closed.  The Euclid rules were designed to avoid ALIASES,
[ which complicate optimization and correctness arguments (more on this
[ later).  In general closed scopes make it less likely that you will use a
[ variable by mistake.  They force you to document side effects by
[ explicitly importing any external variables that are read or written.
[ Euclid prevents you from passing a variable by reference to a procedure
[ that imports the same variable.

Note that the bindings created in a subroutine are destroyed at
subroutine exit.  Modules and classes give you closed scopes without the
limited lifetime: bindings to variables declared in a module are
inactive outside the module, not destroyed.
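As a rough C++ sketch of my own (the namespace and its members are
invented for illustration), a namespace can play the "module as manager"
role: its variable keeps its value for the whole run, and a
using-declaration acts as an explicit import.

    #include <iostream>

    namespace counter {                      // module-like scope
        int count = 0;                       // binding persists for the whole run
        void increment() { ++count; }
        int value() { return count; }
    }

    int main() {
        using counter::increment;            // explicit "import" into this scope
        increment();
        increment();
        std::cout << counter::value() << "\n";   // 2: state survived both calls
    }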
The same sort of effect can be achieved in many languages with 'own'
(Algol term) or 'static' (C term) variables.

[ Evolution of data abstraction facilities
[     none
[         Fortran, Basic
[     subroutine nesting
[         Algol 60, Pascal, ...
[     own (static) variables
[         Algol 68, Fortran ("save"), C, ...
[     module as manager
[         Modula, C files (sorta)
[     module as type
[         Simula (predates Modula; clearly before its time), Euclid
[     classes, with inheritance and subtype polymorphism
[         Simula, Smalltalk, C++, Eiffel, Java, C#,
[         Python, Ruby, Perl 6, ...
[     object-BASED (objects without classes)
[         Self, JavaScript
[
[ ** Modern OO languages represent a re-unification of the encapsulation
[ (information hiding) of module languages with the abstraction
[ (inheritance and dynamic type binding) of Smalltalk.  Both historic
[ threads have their roots in Simula.

------------------

Declaration Order

Does the scope of a binding include the portion of "the scope" before
the declaration?  (Should it?)

Pascal says the scope of an identifier is its entire block, excluding
sub-blocks in which the identifier is redeclared.  Within its block, the
identifier must be declared before it is used.  Consider this:

    const A = 10;
    ...
    procedure P;
        const B = A;
        ...
              A = 15;

Pascal rules say the second declaration of A covers all of P, so the
declaration of B refers to A before it is declared.  Where should you
report the error?

Or this:

    const foo = 10;
    ...
    procedure P1;
        ...
        procedure P2;
            var A : integer;
        begin
            ...
            A := foo;    {illegal, because of the declaration below!}
            ...
        end {P2};
        ...
        procedure foo;

Clearly the programmer meant to use the outer foo.  Note that there can
be arbitrary distance between the (illegal) use and the inner
declaration.

--------
Ada, C, C++, and Java say the scope extends from the declaration to the
end of the enclosing scope (a C++ sketch appears after the Lisp aside
below).
--------
C++ and Java dispense with declare-before-use for members, but not for
locals.  Java dispenses with it for classes.  Modula-3 dispenses with it
entirely.  Python dispenses with declarations: a variable is local iff
it is written.
--------
Modula-3, C#, and Python resurrect the Pascal whole-scope rule.  Only C#
has the gotchas above; the other two don't have declare-before-use.

[ --------
[ Lisp has let, let*, and letrec.
[ The Scheme manual describes these operationally:
[     let introduces all names (and values) at the end of the list
[     let* introduces names _and values_ one at a time
[     letrec introduces names _without_ values, then evaluates definitions,
[         then introduces values at the end of the list
[ But I prefer to describe them declaratively, in keeping with all the
[ discussion of static scope rules above:
[     the scope of a name introduced by let is the body of the let (only)
[     the scope of a name introduced by let* is the rest of the
[         declaration list AND the body
[     the scope of a name introduced by letrec is the entire
[         declaration list and the body, but the _definition_ is available
[         only in the body (and not in the rest of the declaration list)
[
[ Thus I can write
[     (let* ((a 3)
[            (b (* a 2)))
[       b)                    ==> 6
[ but letrec returns an error in the same code: 'a' is not defined.
[
[ And I can write
[     (letrec ((step (lambda (n) (cond ((= n 1) 0)
[                                      ((odd? n) (f n))
[                                      (#t (g n)))))
[              (f (lambda (n) (+ 1 (step (+ n 1)))))
[              (g (lambda (n) (+ 1 (step (/ n 2))))))
[       (step 13))            ==> 6
[ but let* returns an error in the same code: 'f' is not defined.
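Here is the C++ sketch promised above (my own illustration, not from the
notes).  Because the local foo's scope begins only at its declaration,
the earlier use quietly refers to the outer foo instead of triggering a
Pascal-style error:

    const int foo = 10;        // outer declaration

    int p() {
        int a = foo;           // refers to the outer foo (= 10): the local
                               // foo below is not yet in scope
        const int foo = 5;     // from here to the end of p, foo means this local
        return a + foo;        // 15
    }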
------------------

Declarations v. definitions

    useful for getting around declare-before-use for recursive
    declarations
        C, C++, Ada, Scheme
    useful for information hiding
        most OO and module-based languages

------------------------------------------------------

Storage Management
------------------

Static allocation for
    code
    globals
    "own" variables
    explicit constants (including strings, sets, other aggregates)
        scalars may be stored in the instructions themselves

Central stack (chap. 8) for
    parameters
    local variables
    temporaries
    bookkeeping information

    Why a stack?
        allocate space for recursive routines
        reuse space
        minimize management overhead

Heap (chap. 7) for dynamic allocation

Maintaining the run-time stack

    Contents of a stack frame
        bookkeeping: return PC (dynamic link), saved registers, line
            number, static link, etc.
        arguments and returns
        local variables
        temporaries

    Maintenance of the stack is the responsibility of the "calling
    sequence" and the subroutine "prolog" and "epilog" (more on this in
    chap. 8)
        space is saved by putting as much in the prolog and epilog as
            possible
        time *may* be saved by putting stuff in the caller instead, or by
            combining what's known in both places (interprocedural
            optimization)

    Local variables and arguments are assigned fixed OFFSETS from the
    stack pointer or frame pointer at compile time.

    Access to non-local variables

        STATIC LINKS
            Each frame points to the frame of the (correct instance of)
            the routine inside which it was declared.  In the absence of
            formal subroutines, "correct" means closest to the top of
            the stack.
            You access a variable in a scope k levels out by following
            k static links and then using the known offset within the
            frame thus found.

It is also worth noting that many Algol-family languages allow you to
declare nested scopes *within* the body of a subroutine.  Declarations
in these nested scopes hide outer variables with the same name, just as
declarations at the tops of subroutines do.  These nested scopes are
generally considered to be a good idea.  We will see later that they can
be managed with zero run-time cost.
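A small C++ sketch of my own illustrating that last point: both
block-local variables get fixed offsets in f's single stack frame at
compile time (and a compiler may even let them share a slot), so the
nested scopes add no run-time work.

    void f() {
        {
            int i = 42;          // visible only in this block
            // ... use i ...
        }                        // i's binding destroyed here
        {
            double d = 3.14;     // may reuse the frame slot i occupied
            // ... use d ...
        }
    }   // one frame, offsets fixed at compile time; nothing extra at run time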
------------------------------------------------------

The key idea in static scope rules is that bindings are defined by the
physical (lexical) structure of the program.

With DYNAMIC SCOPE RULES, bindings depend on the current state of
program execution.  They cannot always be resolved by examining the
program, because they depend on calling sequences.  To resolve a
reference, we use the most recent, active binding made at run time.

Dynamic scope rules are usually encountered in interpreted languages; in
particular, early LISP dialects assumed dynamic scope rules.  Such
languages do not normally have type checking at compile time, because
type determination isn't always possible when dynamic scope rules are in
effect.

Example: static vs dynamic scope rules

    int a

    proc first:
        a := 1

    proc second:
        int a
        first()

    a := 2; second(); write(a)

If static scope rules are in effect, the program prints a 1.  If dynamic
scope rules are in effect, the program prints a 2.  Why the difference?
At issue is whether the assignment to the variable "a" in procedure
"first" changes the variable "a" declared in the main program or the
variable "a" declared in procedure "second".

Static scope rules require that the reference resolve to the most
recent, compile-time binding, namely the global variable "a".

Dynamic scope rules, on the other hand, require that we choose the most
recent, active binding at run time.  At run time we create a binding for
"a" when we enter the main program.  Then we create another binding for
"a" when we enter procedure "second".  This is the most recent, active
binding when procedure "first" is executed.  Thus, we modify the
variable local to procedure "second", not the global variable.  The
write in the main program then prints the global variable (which still
holds 2), because the binding for the "a" local to procedure "second" is
no longer active.

----------

Alternative model of dynamic scope: instead of saying that a newly
encountered definition hides an existing definition, say that it saves
the current value for later restoration, and creates a new *value* for
the existing object (or makes it undefined if the definition provides no
value).  The two ways of thinking correspond closely to the two standard
implementations.  Perl encourages you to use this alternative (central
reference table) way of thinking.

Modern instances of dynamic scope: Tcl, Perl 5 (programmer chooses),
Unix environment variables.

----------

Why dynamic scope?
    simple implementation for interpreted languages
        probably why Lisp did it; not clear they thought about it carefully
    implicit "parameters" to subroutines
        but better alternatives exist: e.g., static variables that can be
        modified by auxiliary routines, or default and optional parameters
    lack of static structure (e.g., Unix environment variables)

Why not?
    confusing

What to use instead?
    static variables
    default parameters

The PLP CD describes one attractive symbol table organization, due to
LeBlanc and Cook.

================================================

Binding Rules
-------------

Recall that a REFERENCING ENVIRONMENT of a statement at run time is the
set of active bindings.  A referencing environment corresponds to a
collection of scopes that are examined (in order) to find a binding.
SCOPE RULES determine that collection and its order.

BINDING RULES determine which instance of a scope should be used to
resolve references when calling a procedure that was passed as a
parameter, returned from a function, or stored in a variable.

    *** They govern the binding of referencing environments to formal
    procedures.

SHALLOW BINDING - the nonlocal referencing environment of a procedure
instance is the referencing environment in force at the time it (the
procedure) is invoked.  Original LISP works this way by default.

DEEP BINDING - the nonlocal referencing environment of a procedure
instance is the referencing environment in force at the time the
procedure's declaration is elaborated.  For procedures passed as
parameters, this environment is the same as would be extant if the
procedure were actually called at the point where it was passed as an
argument.  When the procedure is passed as an argument, this referencing
environment is passed as well.  When the procedure is eventually invoked
(by calling it through the corresponding formal parameter), this saved
referencing environment is restored.  Original LISP's 'function' worked
this way, as do subroutines in most modern languages (Ada, Scheme,
Common Lisp, ML, Haskell, Ruby, ...).

Subroutine closure = code reference plus bundled referencing
environment.  Several possible implementations; the simplest is a code
address plus a static link.

Example:

    ; Scheme:
    (define C (lambda (p)
                (let ((x 4))
                  (p 2))))
    (define A (lambda ()
                (let* ((x 3)
                       (B (lambda (y) (+ y x))))
                  (C B))))
    ; (A) --> 5, not 6, despite the "newer" definition of x as 4

    -- Haskell:
    c p = let x = 4 in p 2
    a = let x = 3
            b y = x + y
        in c b
    -- a --> 5
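Roughly the same experiment in C++, as a sketch of my own using a lambda
(not from the notes): the closure's environment is fixed when the lambda
is created, so the "newer" x inside c is irrelevant and the result is
again 5.

    #include <functional>
    #include <iostream>

    int c(const std::function<int(int)>& p) {
        int x = 4;                // irrelevant to p: p carries its own environment
        (void)x;
        return p(2);
    }

    int a() {
        int x = 3;
        auto b = [x](int y) { return y + x; };   // capture x = 3 at creation time
        return c(b);
    }

    int main() {
        std::cout << a() << "\n";   // prints 5, not 6
    }

Capturing x by value also sidesteps the limited-extent issue discussed
next; capturing a local by reference and letting the closure outlive the
enclosing call would leave a dangling reference.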
-------------

First- and second-class subroutines

    first class:  can pass, return, and store
    second class: can pass, but not return or store

    Why not return or store?  Limited v. unlimited extent.

Example:

    ; Scheme:
    (define G (lambda (p) (p 2)))
    (define D (lambda ()
                (let* ((x 3)
                       (E (lambda (y) (+ y x))))
                  E)))
    (define F (lambda () (G (D))))
    ; (F) --> 5, despite the fact that D already returned

    -- Haskell:
    g p = p 2
    d = let x = 3
            e y = x + y
        in e
    f = g d
    -- f --> 5

Note 1:  The difference between deep and shallow binding is not apparent
unless you pass procedures as parameters, return them from functions, or
store references to them in variables.  Binding rules are therefore
irrelevant in languages that lack formal subroutines: you don't need
closures if you don't have formal subroutines.

Note 2:  To the best of my knowledge, no language with static (lexical)
scope rules has used shallow binding.  Some languages with dynamic scope
rules offered only shallow binding (e.g., SNOBOL).  Others (e.g., early
LISP dialects) offered both.  Hence, the issues are separable.

Note 3:  In a language with lexical scope, the difference (if anybody
cared) would only be noticeable for non-local references, that is,
references that are neither local nor global.  Binding rules have no
relevance to (lexical) local or global references, since all local
references are always bound to the currently executing instance and
there is only one instance of the main program containing the global
variables.  Binding rules are therefore irrelevant in languages such as
C, which lack nested subroutines, or Modula-2, which allows only
outermost subroutines to be passed as parameters, and would also be
irrelevant in a language with nested subroutines but no recursion (I'm
not aware of any like that).  So closures are trivial with static scope
and no nested subroutines.

Example of why deep binding matters for static scope:

    procedure C;
    begin
    end;

    procedure A (p1 : procedure; i : integer);

        procedure B;
        begin (* B *)
            writeln(i);
        end;

    begin (* A *)
        if i = 1 then A(B, 2) else p1;
    end (* A *);

    begin (* main *)
        A(C, 1);
    end (* main *).

There are two activations of A when B is finally called.  One of them,
the deep version, is the one that was active when B was passed as a
parameter.  Under deep binding, this program prints a 1; under shallow
binding, it prints a 2.

We will cover implementation techniques for scope and binding rules when
we talk about run-time storage management and the implementation of
subroutines in chapter 8.

NB: object-oriented languages without first-class subroutines can get
some of the same effect using OBJECT CLOSURES: create an object whose
fields hold the values that would have been in the referencing
environment of a subroutine closure; pass the object to somebody; let
them invoke one of its methods.  The operator() mechanism of C++ makes
this look like ordinary subroutine invocation.
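A minimal C++ sketch of an object closure (the class and its names are
my own, for illustration): the data member stands in for the captured
referencing environment, and operator() makes the invocation look like
an ordinary call.

    #include <iostream>

    class AddX {                 // object closure: x plays the role of the
        int x;                   // captured referencing environment
    public:
        explicit AddX(int x) : x(x) {}
        int operator()(int y) const { return y + x; }   // call syntax: addx(y)
    };

    int main() {
        AddX add3(3);                  // "close over" x = 3
        std::cout << add3(2) << "\n";  // prints 5, like the earlier examples
    }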
--------------------------------------------------------------

Binding within a scope

Aliases: using more than one name for the same thing

    Problems:
        potentially confusing
        inhibit code improvement (e.g., promotion to registers)

    What are aliases good for?  Consider the uses of Fortran EQUIVALENCE:
        space saving -- modern data allocation methods are better
        multiple representations -- unions are better
        linked data structures -- legit

    Also, aliases arise in parameter passing as an unfortunate side
    effect.  Euclid scope rules are designed to prevent this.

-----------------------

Overloading: using the same name for multiple things

    Some overloading happens in almost all languages:
        integer + v. real +
        read and write in Pascal
        function return in Pascal

    Some languages get into overloading in a big way:
        Ada
        C++

            overload norm;    // declaration needed only in very early C++
            int norm (int a) { return a > 0 ? a : -a; }
            complex norm (complex c) { /* ... */ }

    Overloading is also known as "ad hoc polymorphism".

-----------------------

Polymorphism means, literally, "having many forms".  There are several
different variants.

    The simplest is ad hoc polymorphism (overloading), which really
    doesn't deserve the name.

    Subtype polymorphism in OO languages allows code to do the "right
    thing" to parameters of different types in the same type hierarchy,
    by calling the virtual function appropriate to the concrete type of
    the actual parameter.  (A C++ sketch appears at the end of these
    notes.)

    Parametric polymorphism

        explicit (generics)
            You specify parameter(s) (usually type(s)) when you declare
            or use the generic.  Templates in C++ are an example of this:

                typedef set<string>::const_iterator string_handle_t;
                set<string> string_map;
                ...
                pair<string_handle_t, bool> p = string_map.insert(ident);
                // *p.first is the string we inserted
                // p.second is true iff it wasn't there before

            Implemented via macro expansion in C++ v1; built-in in
            Standard C++.

            (BTW: be warned when using nested templates in C++: an
            instantiation that ends in two closing angle brackets, such
            as pair<A, vector<B>>, won't work, because >> is a single
            token; you have to say pair<A, vector<B> >.  Yuck.  Java and
            C# fix this problem.)

            Similar mechanisms exist in Clu and Ada (shown):

                generic
                    type ITEM is private;
                    with function "*"(U, V : ITEM) return ITEM;
                function SQUARING (X : ITEM) return ITEM;

                function SQUARING (X : ITEM) return ITEM is
                begin
                    return X * X;
                end;

                function SQUARE is new SQUARING (MATRIX, MATRIX_PRODUCT);

            Usually, though not always, implemented by creating multiple
            copies of the code, one for each (set of) type(s).

        implicit (true)
            You don't have to specify the type(s) for which the code
            works; the language implementation figures it out and won't
            let you perform operations on objects that don't support
            them.  Functional languages generally support true
            parametric polymorphism, either in the run-time system (Lisp
            and its descendants) or in the compiler (ML and its
            descendants, incl. Haskell).

            More on this later in the semester.
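Finally, the subtype-polymorphism sketch promised earlier (entirely my
own example, not from the notes): the loop does the "right thing" for
each element because the virtual call dispatches on the concrete type of
each object.

    #include <iostream>
    #include <memory>
    #include <vector>

    struct Shape {                                   // common base type
        virtual double area() const = 0;
        virtual ~Shape() {}
    };

    struct Circle : Shape {
        double r;
        explicit Circle(double r) : r(r) {}
        double area() const override { return 3.14159265 * r * r; }
    };

    struct Square : Shape {
        double s;
        explicit Square(double s) : s(s) {}
        double area() const override { return s * s; }
    };

    int main() {
        std::vector<std::unique_ptr<Shape>> shapes;
        shapes.emplace_back(new Circle(1.0));
        shapes.emplace_back(new Square(2.0));
        for (const auto& p : shapes)
            std::cout << p->area() << "\n";          // virtual dispatch picks
                                                     // the right area() each time
    }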