Notes for CSC 2/454, Nov. 8 -- 15, 2012

A5 is up on the web.  Trivia is due Tues. Nov. 13; main due Wed. Nov. 28.
I've allotted extra time for this assignment in recognition of the fact
that it spans Thanksgiving break.

---------------------

We have all developed an intuitive notion of what types are.
What's behind the intuition?

    - collection of values from a "domain" (the denotational approach)
    - internal structure of a bunch of data, described down to the level
      of a small set of fundamental types (the structural approach)
    - equivalence class of objects (the implementor's approach)
    - collection of well-defined operations that can be applied to
      objects of that type (the abstraction approach)

What are types good for?

    implicit context (resolution of polymorphism and overloading)

    checking -- make sure that certain meaningless operations do not
        occur.  Type checking cannot prevent all meaningless operations,
        but it catches enough of them to be useful.

STRONG TYPING means, informally, that the language prevents you from
applying an operation to data on which it is not appropriate.

STATIC TYPING means that the compiler can do all the checking at
compile time.

Common Lisp is strongly typed, but not statically typed.  Ada is
statically typed.  Pascal is almost statically typed.  Java is strongly
typed, with a non-trivial mix of things that can be checked statically
and things that have to be checked dynamically.

With the proliferation of scripting languages, static v. dynamic typing
has become a controversial topic.  The dynamic camp argues that static
type declarations add too much noise and confusion to programs, making
it harder to express what you want quickly.  The static camp argues
that you ought to catch as much as you can ahead of time.

---------------------

A TYPE SYSTEM has rules for

    - type equivalence (when are the types of two values the same? --
      that is, what exactly _are_ the types in the program?)
    - type compatibility (when can a value of type A be used in a
      context that expects type B?)
    - type inference (what is the type of an expression, given the
      types of the operands [and maybe the surrounding context]?)

---------------------

Type compatibility / type equivalence

Compatibility is the more useful concept, because it tells you what you
can DO.  The terms are often (incorrectly, but I do it too) used
interchangeably.

Most languages say type A is compatible with (can be used in a context
that expects) type B if it is equivalent to B or can be coerced to it.

Two major approaches to equivalence: structural equivalence and name
equivalence.  Name equivalence is based on declarations.  Structural
equivalence is based on some notion of meaning behind those
declarations.  Name equivalence is more fashionable these days, but not
universal.

Structural equivalence depends on simple comparison of type
descriptions:
    substitute out all names; expand all the way to built-in types.
    original types are equivalent if the expanded type descriptions
        are the same

[ Pointers complicate matters, but the Algol folks figured out how to
[ handle it in the late 1960's.  The simple (not quite correct) approach
[ is to pretend all pointers are equivalent.  The correct approach is
[ to apply a "set of subsets" algorithm to the graph of types that point
[ to each other, the same way one turns a non-deterministic FSM into
[ an equivalent deterministic FSM.

Name equivalence depends on actual occurrences of declarations in the
source code.  Languages vary some in the equivalence rules for type
aliases (e.g. typedef int serial_number).  This is the strict v. loose
name equivalence talked about in the text.

Algol-68 uses structural equivalence, as did many early Pascal
implementations (the ISO standard uses name equivalence).  Java uses
name equivalence.  ML is more-or-less structural (see below).  C uses
a hybrid (structural except for structs).
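Here's a small, hedged C sketch of that hybrid (the type and variable
names are invented for the example):

    /* typedefs are just aliases: structurally, serial_number IS int */
    typedef int serial_number;

    struct point_a { int x, y; };
    struct point_b { int x, y; };       /* same structure, different name */

    int main(void) {
        serial_number s = 42;
        int n = s;                      /* fine: the alias is fully compatible */

        struct point_a p = {1, 2};
        /* struct point_b q = p; */     /* compile-time error: the two struct
                                           types are distinct, even though
                                           their layouts are identical        */
        (void) n; (void) p;
        return 0;
    }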
Both forms of type equivalence have nontrivial implementation issues
for separate compilation:
    timestamp header files?
    checksum header files?  avoid comments, format?
    how to handle compatible upgrades?  finer grain?
    "name mangling" -- enforce with the standard linker

-----------------------------

Coercion

When an expression of one type is used in a context where a different
type is expected, one normally gets a type error.  But what about

    var a : integer;
        b, c : real;
    ...
    c := a + b;

Many languages allow things like this, and COERCE an expression to be
of the proper type.

Coercion can be based just on the types of the operands, or can take
into account the expected type from the surrounding context as well.

Fortran has lots of coercion, all based on operand type.

C has lots of coercion, too, but with simpler rules:
    all floats in expressions become doubles
    short int and char become int in expressions
    if an operator is applied to an int and a double, the int is promoted
    if necessary, precision is removed when assigning into the LHS

In effect, coercion rules are a relaxation of type checking.  Some
languages (e.g. Modula-2 and Ada) forbid coercion.  C++, by contrast,
goes hog-wild with coercions; they're one of the hardest parts of the
language to understand.

Make sure you understand the difference between
    type conversions (explicit)
    type coercions (implicit)
    non-converting type casts (breaking the typing rules)
Sometimes the word 'cast' is used for conversions, which is
unfortunate.  C is guilty here.
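A small C sketch of the three-way distinction (variable names invented;
the memcpy trick is one portable way to get a non-converting
reinterpretation of bits, assuming float and unsigned int are the same
size, as they are on most machines):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        int    a = 3;
        double b = 0.5;

        double c = a + b;        /* coercion: a is implicitly promoted to double */
        int    d = (int) b;      /* conversion: explicit, changes representation */

        /* non-converting "cast": reinterpret the bits of a float as an
           unsigned int, without changing them                                   */
        float f = 1.0f;
        unsigned int bits;
        memcpy(&bits, &f, sizeof bits);

        printf("%f %d %08x\n", c, d, bits);
        return 0;
    }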
-----------------------------

Type inference and polymorphism

simple case: needed whenever an operator takes operands of different
types and doesn't coerce one.  What is the type of the result?
E.g. strings, sets, subranges, expressions in Perl.

complicated case: ML, Miranda, Haskell

    1   fib :: Int -> Int
    2   fib n =
    3       let helper f1 f2 i
    4               | i == n    = f2
    5               | otherwise = helper f2 (f1 + f2) (i + 1)
    6       in
    7           helper 0 1 0

    i is Int, because it is added to 1 at line 5
    n is Int, because it is compared to i at line 4
    all three args at line 7 are Int consts, and that's the only use of
        helper (given the scope of the let), so f1 and f2 are Int;
        also, the 3rd arg is consistent with the known Int type of i
        (good!), and the types of the arguments to the recursive call
        at line 5 are similarly consistent
    since helper returns f2 (known to be Int) at line 4, the result of
        the call at line 7 will be Int
    since fib immediately returns this result as its own result, the
        return type of fib is Int

    % I'm glossing over how the let construct allows the compiler to use the
    % types of helper's actual parameters to deduce helper's own types --
    % something it can't do at the global level.

    fib itself is of type Int -> Int
    helper is of type Int -> Int -> Int -> Int

Polymorphism results when the compiler finds it doesn't need to know
certain things.  For example:

    compare x p q
        | x == p    = if x == q then "both"   else "first"
        | otherwise = if x == q then "second" else "neither"

compare has type a -> a -> a -> string (strictly, Haskell infers
Eq a => a -> a -> a -> String, since == is used).  a is a *type
variable*, so compare is polymorphic.

Any time the ML or Haskell compiler determines that A and B have to
have the same type, it tries to *unify* them.

(Unification is used for a very different purpose in Prolog, which
tries to unify the RHSs of rules with the LHSs of things that might
imply them.  In Prolog, unification assigns values to variables.  In
ML, it assigns types to type variables.  Unification is also used to
type-check C++ templates.)

For example, in the expression

    if x then e1 else e2

x has to be of type Bool, and e1 and e2 have to be of the same type.
If e1 is (so far) known to be of type (a, Int) (a 2-element tuple) and
e2 is known (so far) to be of type ([Char], b), then a is [Char]
(string) and b is Int, and the expression as a whole is of type
([Char], Int).

Like Lisp, ML-family languages make heavy use of lists, but ML's lists
are homogeneous -- all elements have to have the same type.  Ex:

    append l1 l2 = if null l1
                   then l2
                   else head l1 : append (tail l1) l2

head is a built-in function of type [a] -> a (like car in Lisp);
tail is a built-in function of type [a] -> [a] (like cdr in Lisp).
: is a *constructor* -- used for piecing together values of composite
types (like cons in Lisp).

[ Other ML types:
[     records (named fields; order doesn't matter)
[     tuples (ordered records; names are just integers)
[     datatypes (mainly used for recursive data structures)  Ex:
[
[     datatype 'a tree = empty | node of 'a * 'a tree * 'a tree;
[     val x_zw = node ('R',
[                      node ('X', empty, empty),
[                      node ('Y',
[                            node ('Z', empty, empty),
[                            node ('W', empty, empty)));
[
[ Powerful *pattern matching* against constructors and values.  Ex:
[
[     compare (1, 2, 3);
[     let val t = ("larry", "moe", "curly") in compare (t) end;
[     let val d = (2, 3) in
[         let val (a, b) = d in
[             compare (1, a, b)
[         end
[     end;
[
[     fun swap (a, b) = (b, a);
[
[ Note that swap doesn't *change* anything -- it takes a tuple as argument
[ and returns you one with elements in the other order.

===============================

A smorgasbord of types

ORTHOGONALITY is a useful goal in the design of a language,
particularly its type system.  A collection of features is orthogonal
if there are no restrictions on the ways in which the features can be
combined (analogy to vectors).

For example, Pascal is more orthogonal than Fortran (because it allows
arrays of anything, for instance), but it does not permit variant
records as arbitrary fields of other records (for instance).

Orthogonality is nice primarily because it makes a language easy to
understand, easy to use, and easy to reason about.

common terms:

    discrete types -- countable
        integer
        boolean
        char
        enumeration
        subrange
    scalar types -- one-dimensional
        discrete
        real
        complex         // yeah, this is 2-dimensional; so shoot me
    composite types
        records/structs
        (unions)
        arrays
        strings
        sets
        pointers
        lists
        files
        mappings, bags, tuples      // common in scripting languages

--------------------

Records

    usually laid out contiguously
    possible holes for alignment reasons
    permit copying but NOT comparison with simple block operations
    smart compilers may re-arrange fields to minimize holes (largest
        first works well) -- but C compilers promise not to

[ "With" statements
[ Give you some of the utility of pointers to elaborated objects in C (or
[ references in C++), without some of the dangers.  Save keystrokes and
[ sometimes cycles (e.g. in "with A[f(x)] do").  In Modula-3:
[
[     with e = whatever, f = whatever do
[         e.field1 := f.field1;
[         e.field3 := f.field3;
[         e.field7 := f.field7;
[     end;

Unions (variant records)

(cf. "equivalence" in Fortran)

    overlay space
    cause problems for type checking
        (1) lack of a tag (discriminant) means you don't know what is there
        (2) ability to change the tag and then access fields is hardly better
            - can make fields "uninitialized" when the tag is changed
              (this generally requires extensive run-time support)
            - can require assignment of the entire variant (with tag), as in Ada

Several languages (including Algol 68, Ada, and Haskell) require access
to variant portions of a record to be confined to a "conformity clause"
(kind of a case statement) that ensures type safety.

[ Some languages (e.g. Pascal) insist that all variant portions be at
[ the end of records.  This really isn't necessary.  It can allow you
[ to allocate smaller-size records if you can verify at compile time
[ that certain variants will never be used.  Alternatively, the
[ compiler can re-order the fields itself to put the variants at the end.

Pascal is noteworthy for syntactic integration of records and variants:

    this_employee.hourly_salary

instead of

    this_employee.U1.S1.hourly_salary

NB: The problem of uninitialized variables is more general than variant
records.  Some languages say variables start out with certain values
(e.g. 0 for globals [but not locals!] in C).  Most just say it's
erroneous to use an uninitialized variable.  A few actually try to
prevent you from accessing one.  In general, the only way to do this is
to initialize variables automatically with a special "uninitialized"
value and check most references at run time.  The compiler CANNOT catch
all cases.
--------------------

Arrays

Two layout strategies for arrays:

    contiguous elements
        column major -- basically used only in Fortran
        row major -- used by everybody else; makes
            array [a..b, c..d] the same as array [a..b] of array [c..d]

    row pointers
        an option in C; only option in Java and some scripting langs
        allows rows to be put anywhere -- nice for big arrays on
            machines with segmentation problems
        avoids multiplication -- nice for 1970s machines with slow multiply
        nice for matrices whose rows are of different lengths
            e.g. an array of strings
        requires extra space for the pointers

Descriptors (dope vectors) are required when bounds are not known at
compile time.  The dope vector is usually separate from the pointer to
the data.  When bounds are known, much of the arithmetic can be done at
compile time.

Suppose

    A : array [L1..U1] of array [L2..U2] of array [L3..U3] of glarch;

Let
    D1 = U1-L1+1
    D2 = U2-L2+1
    D3 = U3-L3+1

Let
    S3 = sizeof glarch
    S2 = D3 * S3
    S1 = D2 * S2

The address of A[i][j][k] is

    (i - L1) * S1 + (j - L2) * S2 + (k - L3) * S3 + address of A

We could compute all that at run time, but we can make do with fewer
subtractions:

    == (i * S1) + (j * S2) + (k * S3) + address of A
       - [(L1 * S1) + (L2 * S2) + (L3 * S3)]

The stuff in square brackets is a compile-time constant that depends
only on the type of A.

We can combine this easily with records.  Another example: suppose A is
a messy local variable.  The address of A[i].B[3][j] is

    i * S1 - L1 * S1 + B's field offset + (3 - L2) * S2
    + j * S3 - L3 * S3 + fp + A's offset in frame
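Here's a hedged C sketch of the 3-D computation above, with the
constant part hoisted.  The bounds, names, and element type are all
invented for the example; a real compiler would fold CONST_PART into a
single literal at compile time.

    #include <stdio.h>

    #define L1 1
    #define U1 3
    #define L2 0
    #define U2 4
    #define L3 5
    #define U3 9

    typedef double glarch;                      /* stand-in element type */

    enum {
        D1 = U1 - L1 + 1,  D2 = U2 - L2 + 1,  D3 = U3 - L3 + 1,
        S3 = sizeof(glarch),  S2 = D3 * S3,  S1 = D2 * S2,
        CONST_PART = L1 * S1 + L2 * S2 + L3 * S3    /* hoisted at compile time */
    };

    /* address of A[i][j][k], with the bounds factored out */
    static glarch *elem_addr(glarch *A, int i, int j, int k) {
        return (glarch *)((char *)A + i * S1 + j * S2 + k * S3 - CONST_PART);
    }

    int main(void) {
        static glarch A[D1][D2][D3];            /* storage; C itself indexes from 0 */
        glarch *p = elem_addr(&A[0][0][0], 2, 3, 7);
        *p = 42.0;
        printf("%f\n", A[2 - L1][3 - L2][7 - L3]);      /* prints 42.000000 */
        return 0;
    }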
Arrays also vary in lifetime (how long the object exists) and shape
(dimensions and bounds).

common options:
    global lifetime, static shape                   Pascal, C globals
    local lifetime, static shape                    Pascal, C locals
    local lifetime, shape bound at elaboration      Ada locals
    arbitrary lifetime, shape bound at elaboration  Java arrays
    arbitrary lifetime, dynamic shape               Icon strings, APL, Perl arrays

The first two classes are just familiar global and local variables.
With dynamic shape you need dope vectors.  The third class can still be
put in a procedure's activation record: you put the dope vector and a
pointer at a fixed offset from the FP, and the data itself higher up in
the frame.  The fourth and fifth have to be allocated off a heap.  Note
that deallocating a dynamic array on procedure exit requires some extra
code -- it doesn't happen automatically via the pop of the stack frame.

Slices (Fortran 90)

    matrix(3:6, 4:7)            columns 3-6, rows 4-7
    matrix(6:, 5)               columns 6-end, row 5
    matrix(:4, 2:8:2)           columns 1-4, every other row from 2-8
    matrix(:, (/2, 5, 9/))      all columns, rows 2, 5, and 9

Slices can be assigned into each other as if they were smaller arrays.
Similar features exist in APL.

----------------------

Strings

are really just arrays of characters.  They are often special-cased, to
give them flexibility (like polymorphism or dynamic sizing) that is not
available for arrays in general.  It's easier to provide these things
for strings than for arrays in general because strings are
one-dimensional and (more important) non-circular.

----------------------

Sets

You learned about a lot of possible implementations in 172.  Bit
vectors are what usually get built into compiled programming languages.
Things like intersection, union, and membership can be implemented
efficiently with bitwise logical instructions.  Some languages place
draconian limits on the sizes of sets to make it easier for the
implementor.  There is really no excuse for this.

Scripting languages typically use hash tables.  They may use trees, or
thread the hash table, for fast enumeration.

======================

Pointers and recursive types

pointers serve two purposes:
    efficient (and sometimes intuitive) access to elaborated objects
        (as in C)
    dynamic creation of linked data structures, in conjunction with a
        heap storage manager

Note that pointers are NOT the same thing as addresses.  Pointers are
an abstraction.  Addresses are an implementation.  Pointers are NOT
always implemented as addresses:
    - machines with segments
    - error checks (e.g. locks and keys -- see below)
    - swizzling
    - cursors
    - C++ overloading of *, ->

Several languages (e.g. Pascal) restrict pointers to accessing things
in the heap.

Pointers are used with a value model of variables.  They aren't needed
with a reference model.

Ex: tree in Lisp:

    (R (X () ()) (Y (Z () ()) (W () ())))       (hard to draw in Ascii :-)

ML equivalent:

    datatype 'a tree = empty | node of 'a * 'a tree * 'a tree;
    val x_zw = node ('R',
                     node ('X', empty, empty),
                     node ('Y',
                           node ('Z', empty, empty),
                           node ('W', empty, empty)));

implementation:

                node 'R'
               /        \
         node 'X'      node 'Y'
          /    \        /     \
      empty   empty  node 'Z'  node 'W'
                      /   \     /   \
                   empty empty empty empty

    (all the empty leaves can share a single representation)

Good implementations of languages with a reference model of variables
represent primitive (immutable) types the same way you would for a
language with a value model of variables.
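For comparison, here's a hedged sketch of the same tree in C, a
value-model language: explicit pointers, heap allocation, and NULL
standing in for 'empty'.  The names are invented; a real program would
also have to free the nodes (or rely on a collector).

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        char value;
        struct node *left, *right;      /* NULL plays the role of 'empty' */
    };

    static struct node *mk(char v, struct node *l, struct node *r) {
        struct node *n = malloc(sizeof *n);
        if (!n) abort();
        n->value = v;
        n->left  = l;
        n->right = r;
        return n;
    }

    int main(void) {
        struct node *x_zw =
            mk('R', mk('X', NULL, NULL),
                    mk('Y', mk('Z', NULL, NULL),
                            mk('W', NULL, NULL)));
        printf("%c %c %c\n", x_zw->value, x_zw->left->value,
               x_zw->right->left->value);           /* R X Z */
        return 0;
    }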
Problems:

syntax of pointer dereferencing
    C/Pascal -- always explicit
    Algol-68 style
        we do implicit dereferencing of r-values anyway; why not do it
        all the time?
    Ada style
        do implicit dereferencing for record field references;
        introduce special syntax to get at the entire referenced object

dangling pointers
    due to explicit deallocation of heap objects
        only in languages that *have* explicit deallocation
    due to implicit deallocation of elaborated objects
        only in languages that let you create pointers to these

    Algol-68 avoids the second problem (in theory) by insisting that a
    pointer have a lifetime no greater than that of the referenced
    object.  This requires run-time checks, which are not usually
    implemented.

    two implementation mechanisms to catch dangling pointers:
        lock-and-key
            not an option for pointers to elaborated objects
        tombstones
            tombstones themselves live a long time
            possibility of including an indication of the outermost
                scope with the pointer

garbage collection
    many languages leave it up to the programmer to design without
        garbage creation.  This is VERY hard.
    others arrange for automatic garbage collection

    reference counting
        does not work for circular structures
        works great for strings
        should also work to collect unneeded tombstones, though you
            have to make sure that when you delete a struct containing
            pointers, the compiler decrements the reference counts of
            the tombstones for those pointers.  The key observation is
            that tombstones are used with explicit object deletion:
            ref. counts only fail to reclaim a tombstone when the user
            fails to reclaim the object.

    tracing
        commonplace in Lisp dialects
        complicated in languages with rich type structure, but possible
            if the language is strongly typed; achieved successfully in
            Cedar, Ada, Java, C#, Modula-3, ML, scripting langs., ...
        variants
            mark-and-sweep
                takes time proportional to total heap size
                can use pointer reversal for space-efficient tracing
            stop-and-copy
                takes time proportional to amount of space currently in use
                performs compaction, to cure external fragmentation
            generational (used in most production systems)
                avoids, heuristically, wasting time on memory that is
                    unlikely to have become garbage
                has to be able to fall back to the previous techniques
                requires "write barriers" in program code to track
                    old-to-new pointers

    hybrids also possible: e.g., reference count most of the time, and
        do a mark-and-sweep once in a while to catch circular structures.

In general, garbage collection requires a type-safe language and an
implementation that tags all objects at run time with a reference to a
type descriptor.  A conservative approximation is possible in almost
any language (the Xerox Portable Common Runtime approach).

----------------------

C pointers and arrays

The basic idea: an array variable is (in most respects) treated like a
pointer to the array's first element; subscripting is defined in terms
of pointer arithmetic:

    E1[E2]  ==  (*((E1)+(E2)))  ==  (*((E2)+(E1)))      !

So given

    int n, *p;

you can say not only

    n = p[3];

but also

    n = 3[p];       // surprise!

Subscripting scales to the size of array elements in C precisely
because pointer arithmetic does.

When is an array not a pointer?
    (a) in a variable definition, where the array allocates space
    (b) in a sizeof, where the array represents the whole thing

    double A[10];
    double *p = A;

    sizeof(A)    == 80      // the whole array
    sizeof(A[0]) ==  8      // one element
    sizeof(p)    ==  4      // a pointer (on a 32-bit machine; 8 on 64-bit)

(There's a small runnable demo of this at the end of these notes.)

Variable definitions:

    int *a[n]       n-element array of row pointers
    int a[n][m]     2-d array

Beware the difference between definitions, which allocate space, and
declarations, which merely introduce names.  Since function prototypes
(headers) are just declarations, and don't allocate space, and since
arrays are passed as pointers, the following parameter declarations are
equivalent:

    int *a   ==  int a[]        pointer to int
    int **a  ==  int *a[]       pointer to pointer to int

Note that these equivalences do NOT hold for definitions.  The compiler
has to be able to tell the size of the things to which you point, so
the following aren't valid, even as parameter declarations:

    int a[][]       bad
    int (*a)[]      bad

C declaration rule: read right as far as you can (subject to
parentheses), then left, then out a level and repeat.

    int *a[n]           n-element array of pointers to integers
    int (*a)[n]         pointer to n-element array of integers
    int (*f) (int *)    pointer to function taking pointer to integer
                        as argument, and returning integer

The choice between pointer arithmetic and subscripts is largely a
matter of taste.  Pointer arithmetic used to be faster with stupid
compilers.  With modern compilers it's often the other way around,
particularly given the tendency of aliases to disable optimization.
Cf. the choice between row-pointer and contiguous layout: that
tradeoff, too, has reversed with time.
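A small (hedged) demo pulling the last two sections together:
array/pointer duality, the sizeof distinction, and the same loop
written with subscripts and with pointer arithmetic.  Variable names
are invented; the element and pointer sizes printed depend on the
machine.

    #include <stdio.h>

    #define N 5

    int main(void) {
        double A[10];
        double *p = A;                      /* the array "decays" to &A[0]        */

        A[3] = 2.5;
        printf("%f %f %f\n", A[3], *(A + 3), 3[p]);     /* all the same element   */

        printf("%zu %zu %zu\n",
               sizeof(A),                   /* whole array: 80 with 8-byte doubles */
               sizeof(A[0]),                /* one element: 8                      */
               sizeof(p));                  /* a pointer: 4 or 8, machine-dependent */

        int a[N] = {1, 2, 3, 4, 5};
        long sum1 = 0, sum2 = 0;

        for (int i = 0; i < N; i++)         /* subscript style           */
            sum1 += a[i];
        for (int *q = a; q < a + N; q++)    /* pointer-arithmetic style  */
            sum2 += *q;

        printf("%ld %ld\n", sum1, sum2);    /* 15 15 */
        return 0;
    }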