Notes for CSC 2/454, Aug. 26 and 31, 2020

CSC 2/454  Programming Language Design and Implementation
=========================================================

Course Introduction

Language Design and Language Implementation go together
    an implementor has to understand the language
    a language designer has to understand implementation issues
    ** a good programmer has to understand both

LOTS of programming languages
    Wikipedia's list has 671 entries as of Aug. 2020
        those are just the "notable" ones

Why are there so many programming languages?
    evolution -- we've learned better ways of doing things over time
    diverse ideas about what is pleasant to use
    orientation toward special purposes (SQL)
    orientation toward special hardware (assembly, CUDA)
    market factors: desire to control, or avoid what others control
        (COBOL, PL/I, Ada, Swift, ...)

What makes a language successful?
    easy to learn (BASIC, Scheme, LOGO, Python)
    "powerful" -- easy to express complicated things (if fluent)
        (C++, Common Lisp, Haskell, Perl, APL)
    easy to implement (BASIC, Forth)
    possible to compile to very good (fast/small) code (C, Fortran)
    exceptionally good at something important (PHP, Ruby on Rails, R, SQL)
    backing of a powerful sponsor (COBOL, Ada, Visual Basic, C#, Swift)
    wide dissemination at minimal cost (Pascal, Java, Python, Ruby)
    market lock-in (Javascript)

Why do we have programming languages? -- what is a language _for_?
    abstraction of virtual machine -- way of specifying what you want the
        hardware to do without getting down into the bits
        * languages from the implementor's point of view
    way of thinking -- way of expressing algorithms
        * languages from the user's point of view

This course tries to balance coverage of these two angles.  We will talk
about language features for their own sake, and about how they can be
implemented.

    * Knuth: Computer Programming is the art of explaining to another
      human being what you want the computer to do.

This course should help you
    learn new languages more easily
    pick the right language for the task at hand (given a choice)
    choose among alternative ways to express things in a given language
    understand what a compiler does to your code
        for performance and (sometimes) correctness
        debugging
    emulate useful features in languages that lack them
    use language & compiler technology in your own projects
        almost every complex system has an input language
    prepare for 2/455 :-)

Key to all of this is understanding the _concepts behind_ language design
-- thinking about languages NOT in terms of syntax but in terms of
    naming & binding (early? late?)
    data types and abstraction mechanisms
    control flow
    closures
    concurrency
    ...

Units on
    syntax
    semantics
    functional programming
    names
    scripting
    control flow
    type systems
    concurrency
    composite types
    subroutines
    objects
    run-time systems
    (see the web site)

Traditional to group languages in terms of "paradigm"
    imperative
        von Neumann     (Fortran, Ada, Pascal, Basic, C, ...)
        object-oriented (Smalltalk, Eiffel, C++, Java, C#, Swift, OCaml, ...)
        scripting       (perl, Python, PHP, Ruby, Javascript, Matlab, R, ...)
    declarative
        functional      (Scheme/Lisp, ML/OCaml/Haskell/F#)
        logic, constraint-based  (Prolog, OPS5, spreadsheets, XSLT)

Not clear this ever really made sense: the categories are not mutually
exclusive, and have been getting less so over time.  Today, it is probably
best to talk about the paradigms a language _supports_ rather than "the"
paradigm to which it belongs.  We'll discuss all of this much more as the
semester goes on.
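As a tiny, invented-for-these-notes illustration of what it means for one
language to support more than one paradigm, here is the same computation
written twice in Python -- once in an imperative style and once in a
functional style:

    # Imperative style: compute by repeatedly modifying a variable.
    def sum_of_squares_imperative(ns):
        total = 0
        for n in ns:
            total += n * n
        return total

    # Functional style: compute by recursion, with no assignments.
    def sum_of_squares_functional(ns):
        if not ns:
            return 0
        return ns[0] * ns[0] + sum_of_squares_functional(ns[1:])

    print(sum_of_squares_imperative([1, 2, 3]),
          sum_of_squares_functional([1, 2, 3]))   # 14 14

Same answer either way; the difference is in how the work is expressed.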
For now:

    Imperative languages emphasize computation by modifying variables.
    This allows you to do unbounded amounts of work in loops.

    Functional languages emphasize computation by creating, manipulating,
    and invoking functions.  This allows you to do unbounded amounts of
    work via recursion.

    Object-oriented languages emphasize structuring the code around
    abstract data types and their operations (methods).

    Scripting languages emphasize delayed decision making and programmer
    flexibility.

    Logic languages emphasize the search for values that satisfy certain
    constraints.  We'll touch on them a few times this semester, but they
    won't get as much emphasis as the others (sorry!).

So: paradigms sort of give us a Venn diagram:

    << Venn diagram of languages, including
       Haskell, C, OCaml, C++, Ruby, Scala, Perl, eLisp >>

Imperative languages have historically dominated -- usually, today, with
object-oriented features.
    bulk of our attention in this course
    BUT one unit and lots of scattered attention to functional languages
    lots of functional features making their way into mostly-imperative
        languages -- Scala, Swift, Ruby, Python, ...
        lambda expressions
        functions as arguments and return values
        list comprehensions
        continuations

The imperative and functional paradigms tend to encourage different ways
of thinking about algorithms.  I'll be talking about this a lot, and
encouraging you to think in both ways (because neither is better).

Consider insertion sort.  In no particular language:

    imperative

        sort(A):
            for i in len(A)-2 downto 0
                v = A[i]
                for j in i+1 to len(A)-1
                    // A[j..len(A)-1] is sorted
                    if A[j] > v
                        break
                    A[j-1] = A[j]
                A[j-1] = v

    functional

        sort(A):
            if len(A) < 2
                return A
            else
                let v be A[0] and R be A[1..]
                return insert(v, sort(R))
        where
        insert(v, S):
            if len(S) == 0
                return { v }
            else
                let w be S[0] and T be S[1..]
                if v < w
                    return v . S
                else
                    return w . insert(v, T)

These implement the same algorithm.  They are likely to compile to nearly
identical machine code.  But
    (a) The functional version has no assignments.
    (b) The imperative version has a more obvious implementation.
    (c) When I wrote these in C and Scheme, I had to fix two bugs in the
        C version, but the Scheme one ran the first time.

Will probably draw examples from about 40 languages this semester.
Will do projects in 6 or 8 of them.

By the time we're done, you should be able to pick up a new language in a
weekend (though becoming an expert will still take time).

Please watch the lecture on course administration.

========================================
Course Administration

"Flipping" the course this semester; using the whole-class meeting times
for unscripted Q&A over Zoom.  Otherwise similar to past years.

NAVIGATION:

All course materials will be online; no handouts.  Combination of open
web, Blackboard, Panopto (video), and Piazza (Q&A).

The course home page is
    http://www.cs.rochester.edu/courses/254/fall2020/
Log in and browse.  Pay particular attention to the course description,
schedule, policies, and grading standards.

Log into Blackboard and Piazza as well (links are on the course home
page).  The TAs and I will post announcements to Blackboard.  Everyone
should check both the announcements and the Piazza discussion board every
day.  Lecture notes are available on the web site.

The hub of the course is the schedule page:
    http://www.cs.rochester.edu/u/scott/courses/254/schedule.shtml
It will guide you through all requirements.

    << visit >>

PREREQUISITES:

CSC 173 and 252, or equivalent.  If you have not had 252 (it's a new
pre-req), you can get by, but read chapter 5 (on the web) on your own.
Most of the students in the class are undergrads, but about 10% are grad
students, who take it as 454 instead of 254.  Grad students will be
expected to do some extra work, and will be graded on a separate curve.

The text for the course is _Programming_Language_Pragmatics_, 4th edition.
Previous editions will not suffice.  So-called "supplemental" sections
(some of which I'll be assigning) are available online at Elsevier's web
site (again, with a link from the course home page).

FULL CLASS ZOOM MEETINGS:

These will be held in the scheduled "lecture" time slot: Monday and
Wednesday, 10:25-11:40 US Eastern time.

To make this new online version of the class work, WE NEED PIAZZA
QUESTIONS.  Before each whole-class meeting, you must
    - read the assigned sections of the textbook
    - watch the pre-recorded lecture segments
    - participate on Piazza
        - ask questions
        - improve questions
        - upvote questions

WORKSHOPS:

Required for 254; recommended for 454.  UG TAs are the leaders.  Ignore
what you signed up for at registration; we'll assign based on time and
online/in-person preferences; see the form on Blackboard.

PROGRAMMING PROJECTS:

6 planned:
    (1) familiarization assignment (combinatorial search) in several
        different languages
    (2) syntax error recovery (recursive descent review)
    (3) simple translation (tree traversal, OCaml)
    (4) cross indexer (scoping, scripting)
    (5) storage management (tombstones)
    (6) concurrency (probably in Java)

These will be similar BUT NOT THE SAME as assignments I've used in the
past.  Expect to work hard.  Comparable amount of code to 173, but MUCH
more difficult.  About two weeks per project NEEDED.

Each project will begin with a pre-assignment ("trivia") whose goal is to
force you to *look* at things early.  Disproportionate share of final
course grade (~10%).

COMPUTING RESOURCES:

Everyone (including grad students) will need a CSUG acct.  CS majors
should have one already.  If you don't, contact one of the grad TAs.

The Wegmans Hall majors lab and Hylan Hall minors lab are likely to be
closed all fall.  Use the csug cycle servers (cycle1, cycle2, cycle3).
Labstaff is looking into making the lab machines available for remote
access as well.

QUIZZES and EXAMS:

Quiz on Blackboard once per unit (~13 times this semester), based on the
textbook and pre-recorded lectures.
Midterm and cumulative final exams (relatively low stakes).

GRADING (tentative):
    Programming projects
        10%  "trivia" pre-assignments
        36%  main projects
    Exams
        12%  midterm
        12%  final
    Keeping up
        10%  weekly quizzes
        10%  workshop participation
        10%  Q&A contributions

NO LATE ASSIGNMENTS OF ANY KIND WILL BE ACCEPTED.  EXCEPTIONS ONLY UNDER
THE MOST DIRE OF CIRCUMSTANCES.  TURN IN WHAT YOU HAVE; I'M GENEROUS WITH
PARTIAL CREDIT.

COLLABORATION AND ACADEMIC HONESTY:

Exams are individual effort only.  On-line but time-limited, and closed
book.  Quizzes are also individual effort, but open book.

COLLABORATION ON IDEAS is encouraged, but you have to work through
everything yourself.
    You can explore whatever you want with a friend on a whiteboard,
        THEN ERASE IT
    NO NOTES -- just memories

COLLABORATION ON ARTIFACTS (copying) is EXPRESSLY FORBIDDEN, unless you
    - have permission
    - clearly indicate in your README file which parts were copied and
      from whom
    - don't expect points for the copied parts (but may get the
      satisfaction of being able to see the whole thing work)

Encouraged to help others: it won't hurt your grade.  Everybody gets an A
if they deserve it.
*** SEE FULL DETAILS ON ACADEMIC HONESTY ON THE WEB PAGE ***

Apparent violations will be referred to the Honesty Board.  (I tend to get
several a year; don't be one of them.)

GETTING HELP

This is a hard course.  Don't wait to seek help.
    - read the book and watch the lectures
    - attend the Q&A sessions and workshops
    - talk to fellow students
    - go to TA office hours or send them mail
    - make an appt to talk to me
    - check out the CSUG and CCAS tutoring services

FIRST ASSIGNMENT (for Monday Aug. 31):

    (1) Get a copy of the book if you haven't already.
    (2) Finish reading (all of) chapter 1.
    (3) Explore both the website and the Blackboard site for the course.
        Make sure you can access Piazza.
    (4) Watch all the chapter 1 pre-recorded lectures.
    (5) Take quiz Q1 on Blackboard.
    (6) Make sure your CSUG account is up and working.
    (7) Complete "Initial Trivia assignment (T0)" on Blackboard.
    (8) Take the workshop time slot preference survey.
    (9) Check out the "Unix tools" assignment (A0) and work through it if
        it isn't all familiar material.

That probably looks like a _lot_, but only #s 2 and 4 (and maybe 9) will
be time-consuming.  There will be similar requirements in future weeks.
I won't be putting lists like this in future lecture notes; follow the
schedule page on the web site.

========================================
Compilation and Interpretation

Consider our insertion sort in C:

    void sort(int A[], int n) {
        for (int i = n-2; i >= 0; i--) {
            int v = A[i];
            int j;
            for (j = i+1; j < n; j++) {
                /* A[j..] is sorted */
                if (A[j] > v) break;
                A[j-1] = A[j];
            }
            A[j-1] = v;
        }
    }

If we store this in a file, it's about 275 characters of text.  How do you
_execute_ that?  Not immediately obvious, certainly, and a lot less
obvious if it's 275 million characters.

But suppose it's a tree data structure in memory:

    << draw AST (abstract syntax tree) >>

Hopefully most of you believe that (given some time) you could write a
program that would take any such tree and "execute" it.

That's what an INTERPRETER does: after first translating the source
program into a data structure that makes its meaning more obvious, it
walks the data structure (in this case, a tree) and does "the obvious."
Most scripting languages (Perl, Python, Ruby, Javascript) are implemented
in roughly this fashion.

    source program ==\
                       > interpreter ===> output
    input ===========/

Note that the interpreter stays around at execution time.  Comparatively
simple.  Very flexible.  But generally kind of slow.

At the other extreme (as in, say, Fortran or C) we can translate a program
to machine language ahead of time.

    source program ===> compiler ===> target program

    AND

    input ===> target program ===> output

A common intermediate is to employ a non-machine-language _intermediate
form_ (AST or other data structure) and to separate the creation of the
internal form from the "execution" part.  Java does this:

    source program ===> compiler ===> intermediate program

    AND THEN EITHER

    intermediate program ==\
                             > interpreter ===> output
    input =================/

    OR

    intermediate program ===> compiler 2 ===> target program

    AND

    input ===> target program ===> output

The former option (final step is interpreter) was common in early Java
implementations.  Most now do the second option: "just-in-time" (JIT)
compilation.

Advantages
    - Intermediate program (Java byte code) is significantly smaller than
      textual source: good for shipping over the web.
    - JIT compilation is faster than source-to-machine translation,
      because the intermediate program has lots of semantic information
      built in (doesn't have to be figured out again).
    - Intermediate program is completely portable and self-contained:
      "run anywhere" on a VIRTUAL MACHINE.

SO: Compilation & interpretation are more shades of gray than distinct
alternatives.

In some systems, you'll see "pre-processing" prior to compilation or
interpretation.  The key difference between pre-processing and compilation
is that compilation entails semantic *understanding* of what is being
processed; pre-processing does not.
    A compiler produces either error messages or output that will pass
        through further steps -- more compilation, assembly,
        interpretation, execution -- without syntactic or static semantic
        errors.  A pre-processor will often let errors through.
    A compiler hides further steps; a pre-processor does not.

How you view all this also depends on how deep you look.
    In some sense, execution on a microcoded machine or a modern x86
        (with front-end translation) IS interpretation
    many compiled languages have interpreted pieces
        e.g. printf in C
    most use "virtual instructions" -- library routines that are called
        automatically by the compiler
            math
            I/O
            string manipulation
            set and map operations
    some compilers produce nothing but virtual instructions
        e.g. Pascal P-code, Java byte code, Microsoft CIL

what makes compilation hard? -- late binding
    names to objects -- scope rules
    types to objects/names -- type rules
    programs to code -- dynamic classes in Java, new functions at run
        time in Scheme

why might interpretation be desirable?
    necessary for late binding, which may increase programmer productivity
    small code size
    good diagnostics
    no (or reduced) compilation step -- fast startup from source code
    (possibly) enhanced portability
    automatic inclusion of the latest libraries

commonly interpreted languages
    Scheme
    Prolog
    Shell
    most scripting languages (Python, Ruby, PHP, JavaScript)

    Compilers exist for some of these, but they aren't pure: selective
    compilation of compilable pieces and extra-sophisticated
    pre-processing of the remaining source.  Interpretation (or dynamic
    compilation) of parts of the code, at least, is still necessary for
    the reasons above.

unconventional compilers
    text formatters
    silicon compilers
    database query language processors
    XSLT

[ bootstrapping: running a tool through itself to get a better version.
[     - incremental development of self-hosting compilers
[     - early Pascal distribution

========================================
Phases of compilation:

    character stream
            scanner (lexical analysis)
    token stream
            parser (syntax analysis)
                -- drives the whole process (syntax-directed compilation)
    parse tree (concrete syntax tree)
            semantic analysis                           FE (front end)
    abstract syntax tree (AST)
    --------
            intermediate code generation                ME (middle end)
    intermediate form (IF) of some sort
            optimization
    modified IF
    --------
            target code generation                      BE (back end)
    assembler / relocatable / load-and-go (e.g. perl)
        -- assembler or machine code (or other target language)
            machine-specific optimization
    ------
    symbol table (consulted and updated by all phases)

difference between phases and passes
    some phases are grouped together into passes, and may not be clearly
    distinguished.  Most compilers do not build an explicit parse tree.
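To make the phase structure concrete, here is a skeletal Python sketch,
invented for these notes and not drawn from any real compiler: one
function per phase, data flowing from each phase to the next, and a
symbol table shared by all of them.  The phase bodies are trivial
placeholders, not real compiler code:

    def scan(chars):                    # character stream -> token stream
        return chars.split()            # placeholder; real scanners use a DFA

    def parse(tokens):                  # token stream -> parse tree
        return ("program", tokens)      # placeholder "tree"

    def analyze(tree, symtab):          # parse tree -> (decorated) syntax tree
        for tok in tree[1]:             # placeholder check: record identifiers
            if tok.isalpha():
                symtab.setdefault(tok, {"seen": True})
        return tree

    def gen_ir(tree, symtab):           # syntax tree -> intermediate form
        return list(tree[1])

    def optimize(ir):                   # IF -> better IF (optional)
        return ir

    def gen_code(ir, symtab):           # IF -> target code
        return "\n".join("; " + instr for instr in ir)

    def compile_program(source):
        symtab = {}                     # shared by every phase
        tree = analyze(parse(scan(source)), symtab)
        return gen_code(optimize(gen_ir(tree, symtab)), symtab)

    print(compile_program("write 2 + 2"))

In a real compiler some of these phases would be fused into a single pass,
as noted above.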
compiler families
    facilitated by front end / back end split:
        N + M compilers instead of N * M

automatic tools
    scanner -- usefulness not certain; definitely handy during language
        development
    parser -- big BIG win
        (and yet gcc reverted to [impure] recursive descent a few years
        ago -- mainly for quality of error messages, I think)
    attribute evaluator -- big win for syntax-directed editors,
        incremental compilation, and language research; not used all that
        much in practice
    data-flow engine -- can simplify global optimization
    affine math framework -- great for loop optimizations
    code generator -- great for portability; fairly widely used

================================================
More on the various phases:

All phases rely on a symbol table that keeps track of all the identifiers
in the program and what the compiler knows about them.  This symbol table
may be retained (in some form) for use by a debugger or run-time
optimizer, garbage collector, reflection mechanism, dynamic code
generator, or virtual machine, even after normal compilation has
completed.

--------
Scanning divides the program into "tokens", which are the smallest
meaningful units.  This saves time, since character-by-character
processing is slow and we can tune the scanner better if its job is
simple.  It also avoids complexity (lots of it) in later stages: you can
design a parser to take characters instead of tokens as input, but it
isn't pretty.

The scanner also typically
    removes comments
    produces a listing if desired
    saves text of strings, identifiers, numbers
    evaluates numeric constants (maybe)
    tags tokens with line numbers, for good diagnostics in later phases

Consider an (extremely simple) language to describe the input to a
hand-held calculator.  Tokens for such a language might include:

    id      = letter ( letter | digit ) *    [ except "read" and "write" ]
    literal = digit digit *
    ":=", "+", "-", "*", "/", "(", ")"
    $$      [end of input / off]

(These are regular expressions.)

--------
Parsing discovers the "context free" structure of the program.

Mini theory lesson:
    Scanners and parsers are *recognizers* for regular and context-free
        languages, respectively.
    Regular expressions and context-free grammars are *generators* for
        regular and context-free languages, respectively.
    The generator is what you want to succinctly describe the language.
    The recognizer is what you want to tell whether a given string is in
        the language.
    Scanner and parser generators like lex and yacc, or antlr, transform
        a generator (RE, CFG) into a recognizer (scanner, parser).

    << Who has had 280? >>

[ scanning is recognition of a regular language, e.g. via DFA;
[ parsing is recognition of a context-free language, e.g. via PDA ]

Continuing the calculator example, let us suppose that
    All variables are integers.
    There are no declarations.
    The only statements are assignments, input, and output.
    Expressions get to use the four arithmetic operators and parentheses.
    Operators are left associative, with the usual precedence.
    There are no unary operators.

Here's a grammar, in EBNF:

    <program>   -> <stmt list> $$
    <stmt list> -> <stmt list> <stmt>  |  ε
    <stmt>      -> id := <expr>  |  read id  |  write <expr>
    <expr>      -> <term>  |  <expr> <add op> <term>
    <term>      -> <factor>  |  <term> <mult op> <factor>
    <factor>    -> ( <expr> )  |  id  |  literal
    <add op>    -> +  |  -
    <mult op>   -> *  |  /

The initial, "augmenting" production is for the parser's convenience.  $$
is generated by the scanner; it isn't part of the user's program.

You may recall from 173 that the "extra" levels of this grammar, and the
choice of ordering within productions, serve to produce parse trees that
capture precedence and associativity.

By the way, there is an infinite number of grammars for any given
language.  This is just one.
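Returning to scanning for a moment: here is a minimal sketch (mine, not
from the book) of a scanner for the calculator tokens above, written in
Python using the re module rather than a hand-built DFA.  The keyword
handling reflects the [ except "read" and "write" ] note in the token
list:

    import re

    TOKEN_RE = re.compile(r"""
          (?P<literal>\d+)
        | (?P<id>[A-Za-z][A-Za-z0-9]*)
        | (?P<assign>:=)
        | (?P<op>[+\-*/()])
        | (?P<eof>\$\$)
        | (?P<skip>\s+)
    """, re.VERBOSE)

    def scan(src):
        tokens = []
        pos = 0
        while pos < len(src):
            m = TOKEN_RE.match(src, pos)
            if not m:
                raise SyntaxError(f"bad character {src[pos]!r} at position {pos}")
            pos = m.end()
            kind = m.lastgroup
            if kind == "skip":          # discard white space
                continue
            if kind == "id" and m.group() in ("read", "write"):
                kind = m.group()        # keywords get their own token kinds
            tokens.append((kind, m.group()))
        return tokens

    print(scan("write sum / 2 $$"))
    # [('write', 'write'), ('id', 'sum'), ('op', '/'), ('literal', '2'),
    #  ('eof', '$$')]

A production scanner would also track line numbers and hand tokens to the
parser on demand instead of building a list, but the core job is the same.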
[ FWIW, the above grammar is SLR(1), but not LL(1).  It can't be parsed
[ top-down (e.g. by recursive descent).  Even though you're going to be
[ using a recursive descent parser for the second assignment, I am using
[ SLR(1) here because it's more "natural", and captures precedence and
[ associativity more clearly.  See next set of notes for more on LL(1),
[ SLR(1), etc.

Using our grammar for the calculator language, consider the following
input program to print the sum and average of two numbers:

    read A
    read B
    sum := A + B
    write sum
    write sum / 2
    $$

There are 16 tokens in this program, which the scanner will pass on to the
parser.  The parser will discover the structure of the program and build a
parse tree:

    << show >>

--------
Semantic analysis is the discovery of *meaning* in the program.

The compiler actually does what is called STATIC semantic analysis.
That's the meaning that can be figured out at compile time.  Some things
(e.g. array subscript out of bounds) can't be figured out until run time.
Things like that are part of the program's DYNAMIC semantics.  The
compiler has to generate code to perform dynamic checks at run time (or
leave things unchecked).

Static semantics includes things like
    making sure identifiers are declared before use
    type checking for assignments and operators
    checking types and numbers of parameters to subroutines
    making sure functions contain return statements
    making sure there are no repeats among switch statement labels
    and so on

Semantic analysis for the calculator language is essentially
non-existent.  Because it is such a brain-damaged example, there are very
few things that CAN go wrong.  Since there are no branches in our control
flow, however, we can check to make sure no variable is used before it is
given a value, and that no variable is given a value that is never used.
(Note that this is not possible in a more general language, unless you
impose restrictions on merging code paths, as Java and C# do.  A good
compiler may catch some errors, even if it can't catch all of them.)

    << show examples of use of uninitialized variable and of variable
       with unused value (both at EOP and at overwrite) >>

In addition to performing static checks, semantic analysis often
simplifies the structure of the parse tree, getting rid of useless nodes,
reducing the number of different kinds of nodes (e.g. expr v. term
v. factor), and compressing vine-like list structures.

Parse tree = concrete syntax tree.  Abstract syntax tree (AST) is more
useful to the rest of the compiler, but the name is a mouthful.  We
generally talk about the "parse tree" and the "syntax tree".  The latter
is the AST.

Annotations (decorations) on the nodes of a syntax tree contain
information (e.g. types) required for semantic analysis.

As a practical matter, semantic checks may be performed (a) while parsing,
(b) during a later, separate traversal of the parse tree (assuming the
compiler actually builds that tree explicitly), or (c) during a traversal
of the syntax tree (assuming the compiler builds *that*).  Real compilers
vary.

The syntax tree for our sum-and-average program might look like this:

    << show.  include symbol table pointers.  in symbol table entries,
       indicate that A, B, and sum are all set before use and used after
       setting >>

Whenever we see an identifier during semantic analysis, we look it up in
our symbol table.  If it isn't there already, we add it.  For each symbol,
we keep track of whether it has (a) no value, (b) an unused value, or (c)
a used value.  Initially, every variable has no value.
Whenever we give a symbol a value, we check to see if it already has an
unused value.  If so, we print a warning message.  In either case, we note
that it now has an unused value.

Whenever we try to use a symbol's value, we check to see if it currently
has no value.  If so, we print an error message.  Otherwise, we note that
it now has a used value.

At the end of the program, we scan the whole symbol table to see if
anything has an unused value.  If so, we print a warning message.

--------
After semantic analysis (assuming the program passes all checks), the
compiler generates code in some intermediate form (IF).

Intermediate forms are often chosen for machine independence, ease of
optimization, or compactness (these are somewhat contradictory).  They
come in many varieties, sometimes categorized as
    high level -- e.g., an abstract syntax tree like Diana or GIMPLE
    medium level -- e.g., JBC/CIL/web assembly, or a control flow graph
        with idealized assembler within basic blocks
    low level -- typically the assembler of the target machine, or
        something very close

Many compilers move the code through more than one IF.  The organization
shown above is common, and roughly matches gcc: the semantic analyzer
generates a syntax tree (gcc uses a format called GIMPLE); intermediate
code gen produces pseudo-assembler (gcc uses a language called RTL); final
code gen produces real assembler.

For intermediate code generation, a typical compiler will again traverse
the syntax tree, producing additional decorations.  It can be hard to draw
the line between the decorations that are for semantic analysis (figuring
out what the program means) v. those that are for code generation
(figuring out how it is supposed to work).  Decorations for code
generation might (in a real compiler) include
    sizes of variables
    locations of variables in memory (absolute, or offset in stack frame)
    whether values are known at compile time
    statistics on the range of case statement labels
    names and locations of temporary variables created to hold
        intermediate results of complicated computations

--------
Optimization takes an intermediate-code program and produces another one
that does the same thing faster, or in less space.  (The term is a
misnomer; we just *improve* code.)  Optimization phases are optional.
Most compilers have multiple such phases, effecting various inter-related
improvements.  You'll learn more about this if you take 255 (or read
chapter 17 on the PLP CS).

THIS IS WHERE THE COMPILER SPENDS MOST OF ITS TIME.  ALSO THE PART THAT
ACCOUNTS FOR MOST OF THE COMPILER'S SIZE AND COMPLEXITY.

--------
The target code generation phase produces assembly language or (sometimes)
relocatable machine language.  Or -- in pedagogical environments,
scripting languages, or JITing virtual machines -- load-and-go absolute
machine language.

Certain machine-specific optimizations (use of special instructions or
addressing modes, etc.) may be performed during or after target code
generation.  In the organization described in chapter 17 of the book,
register allocation and instruction scheduling happen after target code
gen.

For pedagogical purposes, if we don't do any optimization, we can
essentially merge intermediate and target code generation.

The target code generator will have to decide how to use the resources of
the target machine.  Certain registers may be dedicated to special
purposes, for example.  The layout of main memory will have to be
established.
Using standard x86-32 register conventions (but a very naive code
generator), we might get the following code for our sum-and-average
program:

            .data
    A:      .long   0
    B:      .long   0
    sum:    .long   0

            .text
    __start:
            call    input
            movl    %eax, A
            call    input
            movl    %eax, B
            movl    A, %eax
            movl    B, %ebx
            addl    %ebx, %eax
            movl    %eax, sum
            movl    sum, %eax
            push    %eax
            call    output_int
            addl    $4, %esp
            movl    sum, %eax
            movl    $2, %ebx
            cltd
            idivl   %ebx
            push    %eax
            call    output_int
            addl    $4, %esp
            leave
            ret

This is obviously not the best code for our program.  You can see where it
came from, though.  Compilers without optimizers tend to produce code that
is awful.  To do a good job with this language, we would want at least to
apply a "peephole" optimizer to the assembly code.
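For instance, the listing above stores %eax into sum and then immediately
reloads sum into %eax.  A minimal sketch of a peephole pass (invented
here, with made-up names; not how any real assembler or gcc pass is
structured) that applies just that one rule -- delete a load that
immediately re-reads the value just stored from the same register --
might look like this in Python:

    import re

    # store: movl %reg, location      load: movl location, %reg
    STORE = re.compile(r"movl\s+(%\w+),\s+(\w+)")
    LOAD  = re.compile(r"movl\s+(\w+),\s+(%\w+)")

    def peephole(lines):
        out = []
        for line in lines:
            s = out and STORE.fullmatch(out[-1].strip())
            l = LOAD.fullmatch(line.strip())
            if s and l and s.group(1) == l.group(2) and s.group(2) == l.group(1):
                continue    # the register still holds the value; drop the reload
            out.append(line)
        return out

    code = [
        "movl %eax, sum",
        "movl sum, %eax",   # redundant: %eax already holds sum
        "push %eax",
    ]
    print("\n".join(peephole(code)))

A real peephole optimizer applies many such pattern-based rules over a
sliding window of instructions, repeating until nothing more matches.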