Review of assembly-level architecture
(10-15 October 2002)

    skim chapter 5; read sections 9.1, 9.3-9.5; skim rest of chapter 9

--------------------------------

As I mentioned early in this course, a compiler is simply a translator.  It
translates programs written in one language into programs written in another
language.  This other language can be almost anything.  Most of the time,
however, it's the machine language for some available computer.

Most of you have some exposure to assembly-level computer architecture,
either from CSC 252 last spring or from comparable experience elsewhere.  As
a review (and for those for whom it is new), I'd like to go over some of the
material most relevant to language implementation, so that you can better
understand
    what the compiler has to do to your program
    why certain things are fast and others slow
    why certain things are easy to compile and others aren't

Just as there are many different programming languages, there are many
different machine languages, although machine languages, especially in recent
years, show considerably less diversity than programming languages.

Traditionally, each machine language corresponds to a different computer
ARCHITECTURE; the IMPLEMENTATION is how the architecture is realized in
hardware.  Formally, an architecture is the interface to the hardware -- what
it looks like to a user writing programs on the bare machine.  In the last 15
years the line between these has blurred to the point of disappearing:
compilers have to know a LOT about the implementation to do a decent job.
Changes in hardware technology (e.g., how many transistors can you fit on one
chip?) have made new implementation techniques possible, but in many cases
only if the architecture was also modified.  An example of this sort of
feedback from technology advances into architecture (and thereby compilers)
can be seen in the so-called RISC (reduced instruction set computer)
revolution ~18 years ago.

In the discussion below, I will focus on modern RISC architectures, with a
limited amount of coverage of their predecessors, the CISC architectures.
CSC 252 looks at some length at how and why the RISC revolution took place,
and what its impact has been on the implementation of programming languages.
There is also a fair amount of architectural coverage, esp. wrt the memory
hierarchy, in CSC 2/456.

NB: Much of the information below has been heavily over-simplified, in order
to fit into the small amount of time available to cover it.

---------------------------------
Memory Hierarchy

Memory is too big to fit on one chip with a processor.  Because memory is
off-chip (in fact, on the other side of the bus), getting at it is much
slower than getting at things on-chip.  Most computers therefore employ a
MEMORY HIERARCHY, in which things that are used more often are kept close at
hand.  On a 400 MHz machine, a typical memory hierarchy includes

    1-2 ns          registers
    2.5-100 ns      one or more levels of cache (typically 2 or 3: one or
                        two on-chip, one off)
    50 ns - 1 us    main memory
    5-15 ms         disk
    1-50 s          tape

Some of these levels are visible to the programmer; others are not.  For our
purposes here, the levels that matter are registers and main memory.

Registers are special locations that can hold (a very small amount of) data
that can be accessed very quickly.  A typical RISC machine has a few (often
two) sets of registers that are used to hold integer and floating-point
operands.
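Because even a cache hit can cost several cycles, the operands a program uses
most should live in registers.  As a small, hypothetical sketch on a
load/store machine like the MIPS (the $gp-relative location of x is made up
for the example):

    # x = x + 1, assuming x is a global kept at offset 0 from $gp
    lw   $t0, 0($gp)        # fetch x from memory (cache or main memory)
    addi $t0, $t0, 1        # arithmetic operates only on registers
    sw   $t0, 0($gp)        # write the new value back

    # x = x + 1, assuming the register allocator has kept x in $s0
    addi $s0, $s0, 1        # one instruction, no memory traffic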
In addition to these general-purpose registers, a machine has several
special-purpose registers, including

    program counter (PC) -- holds the address of the next instruction to be
        executed.  Usually incremented during the fetch-execute cycle.

    processor status register -- holds a variety of bits of little interest
        in this course (privilege level, interrupt priority level, trap
        enable bits).  Also, on some machines (not the MIPS), holds a
        collection of bits called the CONDITION CODES.  These describe the
        results of the most recent arithmetic/logic operation, typically one
        bit each to indicate whether it produced a zero, a negative value,
        and/or arithmetic overflow.

Data Representation

Memory is usually (but not always) "byte-addressable", meaning that each
8-bit piece has a unique address.  Data longer than 8 bits occupy multiple
bytes.  Typically, an integer occupies 16, 32, or (recently) 64 bits, and a
floating-point number occupies 32, 64, or (recently) 128 bits.

It is important to note that, unlike data in high-level programming
languages, memory is untyped.  Bits are just bits.  *Operations* are typed,
in the sense that different operations *interpret* the bits in memory in
different ways.

Typical DATA FORMATS include
    instruction
    integer (various lengths)
    floating point (various lengths)
    character (on some machines)

Big-endian v. little-endian

2's complement arithmetic

Floating-point arithmetic
    IEEE standard, 1985
    denormal numbers, graceful underflow
    NaNs

Conditional branches

Conditional branches are generally controlled in one of two ways.  On some
machines (e.g. the SPARC), they use condition codes.  If, for example, we
want to jump to a new location (to implement the else part of an if
statement) when the value in R1 is greater than the value in R2, we can
subtract one register from the other and then branch if the condition code
for a negative result is set.  On other machines (e.g. the MIPS), the
conditional branches themselves compare two registers (or compare a register
against zero) and branch if the specified condition (greater, equal, less
than or equal to, etc.) is satisfied.  It is widely though not universally
believed that condition codes are a bad idea, because they make it difficult
to implement certain performance-enhancing hardware tricks.

There are currently 5 major RISC architectures, about to become 3:
    PowerPC (IBM, Motorola)
    SPARC (Sun)
    MIPS (SGI)
    PA-RISC (HP) -- dropped in favor of ia64
    Alpha (Compaq/HP) -- dropped in favor of ia64

Compiling for modern machines

instruction scheduling -- usual goal: minimize pipeline stalls

    delay slots -- loads and branches take longer than "ordinary"
    instructions.  Loads have to go to memory, which is slow.  Branches
    disrupt the pipeline.  Early RISC machines often provided "delay slots"
    for the second (maybe third) cycle of a load or branch instruction,
    during which something else can occur.  The instruction in a branch
    delay slot gets executed whether the branch occurs or not.  The
    instruction in a load delay slot can't use the loaded value.

    As pipelines have grown deeper, people have generally realized that
    delay slots are more trouble than they're worth.  Most current processor
    implementations interlock all loads, so you don't have to worry about
    the correctness issues of load delay.  Some machines still have branch
    delay slots (so they can run code written in the late '80s), though
    later implementations usually provide a "nullifying" alternative that
    skips the instruction in the slot if static branch prediction is wrong.
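    As a small made-up illustration of what the scheduler does with a load
    (the registers and code are invented for the example; the reordering is
    the standard trick):

        # naive order: the add needs $t0 in the cycle right after the load,
        # so an interlocked pipeline stalls (on an older machine, the load
        # delay slot would have to hold a nop)
        lw   $t0, 0($a0)
        add  $v0, $v0, $t0      # waits for the load
        addi $a0, $a0, 4        # independent of the load

        # after scheduling: the independent increment is moved between the
        # load and its use, hiding (part of) the load latency
        lw   $t0, 0($a0)
        addi $a0, $a0, 4
        add  $v0, $v0, $t0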
multiple functional units -- superscalar machines can issue (start) more than
    one instruction per cycle, if those instructions don't need the same
    functional units.  For example, there might be two instruction fetch
    units, two instruction decode units, an integer unit, a floating-point
    adder, and a floating-point multiplier.

register allocation -- because memory is so much slower than registers
    (several hundred times slower at present), keeping the "right" things in
    registers is extremely important.  Also, RISC machines often have at
    least two different classes of registers (so they don't have to support
    all operations on all registers), which the compiler has to keep track
    of.  Some (e.g. the SPARC) have a complicated collection of overlapping
    REGISTER WINDOWS.  Finally, good register allocation sometimes conflicts
    with good instruction scheduling -- code that makes ideal use of
    functional units may require more registers than code that makes poorer
    use of functional units.  Good compilers spend a *great* deal of effort
    trying to make sure that the data they need most is in registers, and
    trying to minimize the amount of time spent moving data back and forth
    between registers and memory.

limited instruction formats/more primitive instructions -- many operations
    that are provided by a single instruction on a CISC machine take
    multiple instructions on a RISC machine.  For example, some RISC
    machines don't provide a 32-bit multiply; you have to build it out of
    4-bit (or whatever) multiplies.  Also, to make all instructions the same
    length, data values and parts of addresses are often scaled and packed
    into odd pieces of the instruction.  Loading from a 32-bit address
    contained in the instruction stream takes two instructions, because one
    instruction isn't big enough to hold the whole address *and* the code
    for "load."  The first instruction loads part of the address into a
    register.  The second instruction adds the rest of the address into the
    register and performs the load.

    [Actually, this last bullet doesn't affect a compiler much; it's easy to
    generate the trickier sequences.  What it does affect is the ease with
    which *human beings* can read and write assembler.  It also makes
    linkers messier.]

==================================
Overview of the MIPS instruction set
(see the SPIM manual for more detail)

First some general comments:

The MIPS processors no longer have load delay slots; these were eliminated in
the second version of the instruction set.  The processor has hardware
*interlocks* to stall the pipeline if you try to use the result of a load
before it is available.  Also, the assembler schedules branch delay slots, so
you don't have to.

Many of the arithmetic/logic instructions come in several variants, depending
on whether they take one of their operands from an "immediate" constant field
within the instruction, as opposed to taking both from registers.  Strictly
speaking, these are different instructions (the processor doesn't have
addressing modes), but the assembler simply picks the right one based on the
syntax of the operands, e.g.

    add $1, $2, $3
    add $1, $2, 0x12

Here 'add' is a "pseudo-instruction" -- the assembler picks addi or add.
Some pseudo-instructions actually generate multi-instruction sequences.
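For instance, la, and lw from a global variable, each become the
two-instruction address sequence described earlier.  This is only a sketch,
not the assembler's literal output: the global 'counter' is made up, and the
%hi/%lo notation is borrowed from the GNU assembler just to show which half
of the address each instruction carries.

    la   $t0, counter          # pseudo-instruction: address of a global
        # roughly:  lui  $t0, %hi(counter)          upper 16 bits
        #           ori  $t0, $t0, %lo(counter)     lower 16 bits

    lw   $t1, counter          # pseudo-instruction: load a global
        # roughly:  lui  $at, %hi(counter)          $at = assembler temporary
        #           lw   $t1, %lo(counter)($at)     register+offset does the rest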
Another example is div:

    div $1, $2, $3

divides register $2 by register $3 and puts the quotient in register $1.
Actually, the hardware divide puts the quotient in special register LO and
the remainder in special register HI; 'div' is a pseudo-instruction that
issues the hardware divide instruction, followed by error checks and an
'mflo' instruction.  The SPIM manual describes the pseudo-instructions (for
the most part); you have to read between the lines to figure out what the
hardware does.

Registers:

There are 32 general-purpose registers, plus HI and LO, some other special
registers of interest to the kernel, and 32 floating-point registers, used
by so-called co-processor instructions.

Two of the general-purpose registers have special behavior:
    (1) return addresses are stored in $31 (ra)
    (2) $0 (zero) always contains a zero: writing to it is a no-op; reading
        from it gets a zero; specifying it as the base register in a
        register+offset computation gets you just the offset

There are *conventions* for use of the other general registers (the example
at the end of these notes shows several of them in use):

    temporary reserved by assembler for use in pseudo-ops:  $1 (at)
    temporaries for expression evaluation and function return:  $2, $3 (v0, v1)
    arguments:  $4..$7 (a0..a3)
    temporaries (not saved across subroutine calls):  $8..$15, $24, $25 (t0..t9)
    temporaries (saved across subroutine calls):  $16..$23 (s0..s7)
    reserved for the kernel:  $26, $27 (k0, k1)
    base register for global variables:  $28 (gp)
    stack pointer:  $29 (sp)
    frame pointer:  $30 (fp)

Instruction categories:

    arithmetic, logic:
        add, sub (there is no subtract-immediate; the assembler uses addi
            with a negated constant)
        abs -- absolute value pseudo-op
        div, rem, mul
        or, nor, xor
        neg, not
        ro[lr]      -- rotates
        s[lr][la]   -- shifts

    data movement:
        li
        m[ft]{hi,lo}

    comparison (mostly pseudo-ops):
        seq, sge, sgt, sle, slt, sne

    branch/jump:
        b
        beq[z]
        bge[.uz]
        bgezal -- and link
        bgt[z]
        ble[.uz]
        bltzal
        blt[.uz]
        bne[z]
        j[.r]
        jal[.r]

    load/store:
        la
        l[bhwd]
        lw[lr]
        ul[hw]
        s[bhwd]
        sw[lr]
        us[hw]

    floating point

    special/privileged
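Finally, to tie the register conventions and the branch pseudo-ops together,
here is a small, hypothetical SPIM-style leaf routine (the labels 'max' and
'a_bigger' are invented).  Arguments arrive in a0 and a1, the result is
returned in v0, and bgt is a pseudo-op that the assembler expands into an
slt/bne pair -- the MIPS style of comparing two registers and branching:

    # int max(int a, int b) -- a in $a0, b in $a1, result in $v0
max:
    bgt  $a0, $a1, a_bigger     # pseudo-op; roughly: slt $at, $a1, $a0
                                #                     bne $at, $zero, a_bigger
    move $v0, $a1               # a <= b, so return b
    jr   $ra                    # return address was left in $31 (ra) by jal
a_bigger:
    move $v0, $a0               # return a
    jr   $ra

    # a caller would write:  jal max   with the arguments already in $a0, $a1

Because max calls no other routine, it is a leaf routine and need not save
$ra or touch the stack.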