Lecture notes for CSC 252, Tues. Jan. 21, 2014ff

Read chapter 2
Assignment 1 is on the web: datalab (aka "bit twiddling")
Pre-assignment ("trivia") due Thursday noon

------------------------------
Bits and bytes

Almost all computers are binary.
    Natural representations: on/off, charged/discharged, high/low voltage,
    magnetic orientation, solid/burnt out, etc.
Pain in the neck for people.  Decimal/binary conversion is non-intuitive.

*** LEARN HEX ***

We use hexadecimal because, while the binary pattern corresponding to a
particular decimal number is not at all obvious, conversion back and forth
between binary and hex is trivial:

    0 0 0 0     0
    0 0 0 1     1
    0 0 1 0     2
    0 0 1 1     3
    0 1 0 0     4
    0 1 0 1     5
    0 1 1 0     6
    0 1 1 1     7
    1 0 0 0     8
    1 0 0 1     9
    1 0 1 0     a
    1 0 1 1     b
    1 1 0 0     c
    1 1 0 1     d
    1 1 1 0     e
    1 1 1 1     f

    0xabcd   = 1010 1011 1100 1101b
    0x400    = 100 0000 0000b = 2^10 = 1024 = 1K (when measuring memory sizes)
    0x100000 = 1 0000 0000 0000 0000 0000b = 2^20 = 1048576 = 1M

------------------------------
Memory

Can be considered a large single-dimensional array.
Memory address = index into that array, starting at 0.
(Actually an illusion; more on that later.)

Addresses are at the level of the (8-bit) byte -- the smallest addressable
unit.  Bytes were chosen historically because 8 is the smallest power of 2
that can hold all the characters of a Western character set.

The bits of a base-2 number represent the number of 1s, 2s, 4s, 8s, etc. in
the encoded number.  The 1s bit is the "least significant bit (LSb)".  The
2^k bit, where k+1 is the length of the number, is the "most significant
bit (MSb)".  The byte containing the LSb is the "least significant byte
(LSB)"; the byte containing the MSb is the "most significant byte (MSB)".

The question arises: if you store an n-byte quantity at location 0, is byte
0 the MSB or the LSB of that n-byte quantity?

Byte Ordering
    Little Endian byte ordering: x86, other legacy machines
        byte 0 is the LSB
    Big Endian byte ordering: IBM z Series, other legacy machines
        byte 0 is the MSB
    Most modern machines other than x86 & z are bi-endian; they can be set
    either way.  Examples include Power, ARM, SPARC, MIPS, & Itanium.
    Nuisance when machines need to communicate; they have to agree on a
    common "network byte order" (big endian).

Consider the numbers 0x37 and 0x12345678 stored at addresses 432 and 436,
respectively.  Now consider memory as an array of words:

    big-endian:                      little-endian:
    432: 00 00 00 37 :435            435: 00 00 00 37 :432
    436: 12 34 56 78 :439            439: 12 34 56 78 :436
    --------------------->           <---------------------
     increasing addresses             increasing addresses

Advantage of big-endian: bytes are in obvious consecutive order if you read
them left-to-right (of course, there is no left and right inside the
computer -- the bits may not even all be in the same chip).

Advantage of little-endian: small integers of different precisions, stored
at the same address, place the same bytes in memory.

    long n = 3;
    short *p = &n;      /* type error, not caught by old compilers */
    printf("%d\n", *p);

-------------------

It is important to note that, unlike data in high-level programming
languages, memory is untyped.  Bits are just bits.  *Operations* are typed,
in the sense that different operations *interpret* the bits in memory in
different ways.

Typical DATA FORMATS include
    instruction
    integer (various lengths, signed and unsigned)
    floating point (various lengths)
    character (on some machines)

You can point the program counter of your computer at a string of floating
point numbers, and it will do *something* (probably not anything very
useful).  The computer doesn't care.
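Since memory is untyped, you can inspect the byte order of your own machine
by looking at an integer one byte at a time.  A minimal sketch (my example,
not part of the assignment):

    #include <stdio.h>

    int main(void) {
        unsigned int x = 0x12345678;
        unsigned char *p = (unsigned char *) &x;  /* view the same bits as bytes */

        /* On a little-endian machine byte 0 holds the LSB (0x78);
           on a big-endian machine it holds the MSB (0x12). */
        printf("byte 0 = 0x%02x -> %s-endian\n",
               p[0], p[0] == 0x78 ? "little" : "big");
        return 0;
    }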
------------------------------
Data Types and Sizes

Bit: 0, 1

Bit String: sequence of bits of a particular length.
    Nomenclature for groups of bytes varies from machine to machine.
    On the x86:
        nybble     =   4 bits
        byte       =   8 bits
        word       =  16 bits
        doubleword =  32 bits
        quadword   =  64 bits
    On most other machines:
        word       =  32 bits
        doubleword =  64 bits
        quadword   = 128 bits

Character:
    FieldData   6 bits (uppercase letters only)
    ASCII       7 bits
    EBCDIC      8 bits
    Unicode    32 bits; all natural languages in the low 16 bits
               (basic multilingual plane -- BMP)
    UTF-8      ASCII-compatible Unicode encoding
               (Unicode Transformation Format, 8-bit)
        high bit zero: ASCII character
        high bits 110: two-byte character
            (5 sig bits in first byte, 6 sig bits in second byte)
        high bits 1110: three-byte character
            (4 sig bits in first byte, 6 each in second & third --
             enough for the BMP)
        etc.  (6 bytes enough for any Unicode char)
        continuation bytes begin with 10
    ASCII historically the most common.  Unicode increasingly important
    with the globalization of computing.  Most web pages use UTF-8, though
    they may also represent non-ASCII characters as "character entities",
    e.g. &nbsp; or &#160;

String: varies from language to language.
    In C, a sequence of bytes ending with a 0 (ASCII NUL).

Integers: usually "2's complement" representation
    byte, half word, word, and maybe doubleword lengths
    Note that a limited # of bits implies the certainty of overflow.
    In C on a 32-bit x86:
        long long : 64 bits    roughly 0..2e19, or -9e18..9e18
        long, int : 32 bits    roughly 0..4B, or -2B..2B
        short     : 16 bits    roughly 0..64K, or -32K..32K
        char      :  8 bits    0..255, or -128..127

Decimal (BCD) (rare): digits 0-9 encoded as 0000b through 1001b, two
    decimal digits packed per 8-bit byte.  Useful for financial programs;
    gives the same round-off behavior we expect longhand.  BCD *is*
    supported on the x86; it just isn't used much.  It's used a lot on
    z Series machines.

Floating Point: usually IEEE 754 representation
    single precision      32 bits    roughly 10^(+/-38)
    double precision      64 bits    roughly 10^(+/-308)
    extended precision    longer; architecture-specific
    value = sign * mantissa * 2^exponent
    (There's now a 128-bit version as well.  Not yet widely used.)

Addresses (natural representation of a pointer or a reference):
    In standard parlance, addresses are *by definition* one word in length
    -- that's what it means to have a 32-bit machine: addresses are 32
    bits long.  Modern Intel and AMD processors -- the x86 -- are
    (unfortunately) a 64-bit extension of a 32-bit extension (the 80386)
    of a 16-bit architecture (the 8086/80286), which is itself an
    extension of an 8-bit architecture (the 8080), and the nomenclature
    got stuck at the 16-bit stage.  As a result, an address is one
    "doubleword" (32 bits) in length on the x86-32, and a "quadword" on
    the x86-64.

Most machines require aligned operands: to read/write a 2^k byte operand
in a single instruction, you must use an address evenly divisible by 2^k.
Otherwise you get a "bus error" (alignment fault).  Some machines have a
maximum alignment.  On the SPARC, for example, 64-bit quantities have to
be (merely) 32-bit aligned.

Again the x86 is an exception.  All operand types can be accessed at any
alignment, but accesses will be faster if they respect the 2^k boundaries.

The choice of addresses for data and code is machine- and OS-dependent.
More on this later in the semester.

Instructions:
    RISC: 4 bytes (32 bits)
    CISC: 1-N bytes, where N varies from machine to machine
          (12 on the x86, much larger on the VAX)
    Encodings to be discussed later.
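The alignment rule above is easy to check in software: an address is
2^k-aligned iff its low k bits are zero.  A quick sketch (the helper name
is mine, not a standard routine):

    #include <stdio.h>
    #include <stdint.h>

    /* A 2^k-byte operand at address a is aligned iff a is evenly
       divisible by 2^k, i.e., iff the low k bits of the address are 0. */
    int is_aligned(const void *a, size_t size) {  /* size assumed a power of 2 */
        return ((uintptr_t) a & (size - 1)) == 0;
    }

    int main(void) {
        double d;
        char buf[16];
        printf("%d\n", is_aligned(&d, sizeof d));          /* almost certainly 1 */
        printf("%d\n", is_aligned(buf + 1, sizeof(int)));  /* usually 0 */
        return 0;
    }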
-------------------
Bit-level operations in C
(Gotta *really* understand these for the first assignment.)

logical: 0 = false; non-zero = true; ops produce 0 or 1
    ||   &&   !
bitwise
    |   &   ^   ~   >>   <<
    arithmetic v. logical shifts: depend on type in C
    Note that Java has no unsigned numeric types.  That's why it has both
    >> (arithmetic) and >>> (logical) right shifts.

Bitwise operations very nice for sets drawn from a limited domain.
E.g., the set of (extended) ASCII characters fits in 256 bits = 8 words on
the x86.  Only 8 instructions to perform union (OR) or intersection (AND);
16 instructions for difference (AND NOT) (fewer on some machines).

Treating non-zero as true is handy in C:
    if (!p) ...              // null pointer test
    if (p && *p == 3) ...    // safe dereference
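As an illustration of the set idea, here is a sketch of a 256-element
character set as a bit vector, with one OR or AND per word for union and
intersection (the names charset, cs_add, etc. are mine, not from any
library):

    #include <stdio.h>

    /* A set of the 256 extended-ASCII characters as a bit vector:
       8 x 32-bit words = 256 bits. */
    typedef unsigned int charset[8];

    void cs_add(charset s, unsigned char c)          { s[c >> 5] |= 1u << (c & 31); }
    int  cs_member(const charset s, unsigned char c) { return (s[c >> 5] >> (c & 31)) & 1; }

    void cs_union(charset d, const charset a, const charset b) {
        for (int i = 0; i < 8; i++) d[i] = a[i] | b[i];   /* one OR per word */
    }
    void cs_intersect(charset d, const charset a, const charset b) {
        for (int i = 0; i < 8; i++) d[i] = a[i] & b[i];   /* one AND per word */
    }
    void cs_difference(charset d, const charset a, const charset b) {
        for (int i = 0; i < 8; i++) d[i] = a[i] & ~b[i];  /* AND NOT */
    }

    int main(void) {
        charset digits = {0}, vowels = {0}, both = {0};
        for (char c = '0'; c <= '9'; c++) cs_add(digits, c);
        cs_add(vowels, 'a');  cs_add(vowels, 'e');
        cs_union(both, digits, vowels);
        printf("%d %d\n", cs_member(both, '7'), cs_member(both, 'z'));  /* 1 0 */
        return 0;
    }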
------------------------------
The reality behind the illusion: the memory hierarchy

Typical figures for a contemporary 2GHz processor:

                          latency     capacity        cost
    registers              .5ns          512B
    L1 cache                1ns         128KB
    L2 cache                5ns           4MB
    L3 (off-chip) cache    20ns          16MB
    RAM                   100ns           4GB         4c/MB
    flash                  50us         256GB         $2/GB
    disk                   10ms           2TB         8c/GB   \
    tape                    20s    1TB/cartridge      5c/GB    \ approx
    CD                    100ms         700MB        10c/GB    / equal
    DVD                   100ms         4.7GB         5c/GB   /

    Flash, disk, and tape have comparable bandwidths: 15-50MB/s.

Note that tape is no longer cheaper than disk.  Consequently many
organizations now use (off-site) disks for backup, rather than tape.  But
tape lasts longer, is more reliable, and has higher bandwidth once mounted.

Registers and cache
    Registers are faster than cache (available in one cycle, as opposed to
    2 or more).
    Registers are visible to the assembly-language programmer; caches
    aren't.  Consequently registers can be named with only 3-5 instruction
    bits, while cached values require construction of a full memory
    address.  Similarly, the programmer can deliberately optimize the use
    of the registers, while the hardware's policy for cache management is
    usually unalterable.
    Registers hold data only; cache holds data and instructions.
    Cache requires extra space for tag and valid bits.

The assembly-level programmer generally thinks in terms of registers and
memory.  Cache just makes memory look faster.  Flash, disk, and tape
usually look different, because they're accessed through the file system's
I/O interface.  The operating system also uses flash and disks to extend
the virtual address space, using main memory as a cache for data that
really resides on disk.

--------
Technology trends and their influence on design and organization

DRAM density driven by shrinking feature size
    historically, increased capacity by a factor of 4 (twice the density
    in 2D) every 3 years:
        1980     64Kb
        1983    256Kb
        1986      1Mb
        1989      4Mb
        1992     16Mb
        1996     64Mb
        1999    256Mb
        2002      1Gb
        2010      4Gb   (note delay)
        2014      8Gb   (anticipated)
    As of 2013, 4Gb chips are still fairly expensive, and mainly used for
    servers.  1Gb DRAM chips give you up to 8GB on one DIMM, and
    capacities beyond that in smaller form factors for laptops & phones
    haven't been in great demand, given the slow move to 64-bit address
    spaces (not even possible on ARM-based tablets & phones until a few
    months ago).  Also, error correction becomes a more significant
    problem at higher densities, due to cosmic rays.  Not clear DRAM
    density can go much higher.  Other technologies are on the horizon.

CMOS VLSI density has similarly doubled roughly every 18 months since
first introduced in the 1960s.  This observation was first made by Gordon
Moore, co-founder of Intel, in 1965.  It hasn't stopped yet, but again
it's slowing down.

Historically, many people extrapolated "Moore's Law" to predict a doubling
of processor performance every 18 months; this isn't what Moore originally
had in mind, but it held roughly true until about 2004.  (More on this in
a minute.)  A little math shows this translating into a factor of 100
every decade.  That's 1M X in 30 years.

[ In graduate school I shared a VAX 11/780 with about 30 other students.
[ The machine had 1 MIP of processing power and was originally limited to
[ 1MB of memory.  It debuted in 1978 (my freshman year of college) and
[ cost almost $200,000.
[
[ 24 years (16 doublings = 65K) later, a 2002 PowerBook had a 667MHz G4
[ processor, capable of about 7GFlops, or perhaps 7,000X the processing
[ power of the VAX, which is a little ahead of the curve.  The PowerBook
[ would of course be used by only one person, at about 1/30 the
[ (constant-dollar) cost (which is about right, given that a PowerBook
[ isn't shared).  The PowerBook was limited to 1GB (1000X) the memory,
[ which is a little behind the curve.  Desktop machines of the same era
[ were available with more.

Processor speed improvements driven by a combination of
    shrinking feature size, which facilitates clock speed increase due to
        shorter wires and lower capacitance
    architectural innovations
        instruction-level parallelism (pipeline depth, width)
        speculation
        locality exploitation (caches)

This all ended a few years back.  Around 2004 we (a) ran out of new
options for speculation and pipelining; (b) ran out of cooling.  To a
first approximation, the power consumption of a processor is directly
proportional to its area (in mm^2) and its clock frequency.  A modern
desktop processor can burn 150W, and clocks are stuck around 2-3GHz.
(Laptops run cooler, to extend battery life and avoid burning laps.)
High-end Macs are liquid cooled (like your car) -- too complex/expensive
for cheap machines.  (Some servers are refrigerated -- _way_ too
complex/expensive for personal machines.)  And that doesn't even consider
the dollar and environmental costs of the power and cooling requirements.
Modern data centers are limited by electric supply, not space.

Note that memory speed has been going up at only about 10%/yr.  (That's
why caches have become so important.)

Disk capacity has increased historically at something like 60%/yr, though
this, too, is slowing down.  When I came here in 1985 the total department
disk farm was less than 2GB (total across all machines).  The disk of
choice was a CDC Eagle, 350MB for $5,000 (and physically about half the
size of a dishwasher) -- call it $15K/GB.  Now you can buy hard disks at
Best Buy for 5c/GB.  That's about 300,000X in $/GB in 28 yrs.

Implications: death of supercomputers, supergraphics engines, tape drives.

==============================
Integer arithmetic

Unsigned arithmetic
    An N-digit base-B number d_(N-1) ... d2 d1 d0 has the value
        d0 + d1 * B + d2 * B^2 + ... + d_(N-1) * B^(N-1)
    This works in decimal or in binary.  (Also in octal or hexadecimal.)
    Pretty straightforward.

In C: signed and unsigned integers
    long long, long, int, short are by default signed
    char depends on the machine; signed on the x86

How to handle negative numbers?

The "obvious" solution is to add a sign bit (this is called
"sign-magnitude" notation).  Fairly straightforward, but it requires
separate add and subtract circuits, and logic to determine the sign of a
product or quotient.  It also has two zeros, which is kind of weird.

The standard modern solution is called "2's complement".
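The same bits read differently depending on whether an operation treats
them as signed or unsigned -- a quick preview of what 2's complement buys
you (my example, not from the notes):

    #include <stdio.h>

    int main(void) {
        unsigned int u = 0xFFFFFFFF;  /* 32 bits, all ones */
        int s = (int) u;              /* same bit pattern, signed interpretation
                                         (implementation-defined, but -1 on the x86) */

        printf("%u\n", u);    /* 4294967295 : unsigned reading */
        printf("%d\n", s);    /* -1         : 2's complement reading */
        return 0;
    }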
Consider the following intuition.  If we work with unsigned numbers and
ignore overflow, we're really operating on the ring of integers modulo 2^n
(anybody taken abstract algebra?).  Put another way, A "+" B is really
(A + B) mod 2^n.

But if we change our interpretation of some of the bit patterns, we can
roll the ring along the number line to wherever we want it.  In
particular, instead of leaving it at [0..2^n), we can put it at
[-2^(n-1)..2^(n-1)) or (-2^(n-1)..2^(n-1)].  The former is particularly
attractive, because all and only the "negative" numbers have a leading bit
of one.  We can call this the "sign bit", though it's really just an
ordinary bit (in contrast to the special sign bit of sign-magnitude
notation).

Example with 4-bit numbers:

    0 1 1 1      7
    0 1 1 0      6
    0 1 0 1      5
    0 1 0 0      4
    0 0 1 1      3
    0 0 1 0      2
    0 0 0 1      1
    0 0 0 0      0
    1 1 1 1     -1
    1 1 1 0     -2
    1 1 0 1     -3
    1 1 0 0     -4
    1 0 1 1     -5
    1 0 1 0     -6
    1 0 0 1     -7
    1 0 0 0     -8

Notice there's a negative number that doesn't have a positive equivalent.
But there's only one zero, which is nice.

The unsigned sum of a number and its negative is 2^n, hence the name 2's
complement.

To create the negative of a number, flip all the bits and add one.  Why
does that work?  Well, if you just flip all the bits and add, you'll
clearly get a string of n 1's, which is 2^n - 1.  If we want k + (-k) to
be 2^n instead of 2^n - 1, we need the extra 1.  Note that the "negative"
of -2^(n-1) is itself.  This is the one number that doesn't have an
inverse.

To get a wider (more bits) representation of the same number, replicate
the high bit left-ward:

    0110        4-bit representation of 6
    00000110    8-bit representation of 6
    1110        4-bit representation of -2
    11111110    8-bit representation of -2

Why does that work?  It's clearly right for positive numbers.  So consider
the n-bit inverse of a positive number.  It's obtained by flipping the n
bits and adding one.  The add doesn't overflow, because we know all
non-negative numbers have a negative counterpart.  So the negative of the
zero-extended (n+i)-bit positive number has all ones (from flipping) in
the leading bits.

Really nice property: addition "works" on both positive and negative
numbers (including mixed), so long as the result fits in the number of
bits available.

To subtract, compute the inverse of the second operand and add.  (This is
how subtracters really work.  It's actually almost free.  You flip the
bits of the second operand and generate a "carry" into the least
significant place of the adder.)  Using 4-bit numbers, 4 - (-2) (decimal)
is

      0100                   1      carry
    - 1110    ==>         0100
                        + 0001      bits flipped
                        ------
                          0110  --> 6

OVERFLOW occurs when the value of the result doesn't fit in the number of
bits available.

For UNSIGNED arithmetic, it happens when we have a carry out of the
leftmost place.  That's obvious for addition.  Consider 4-bit numbers:

      1010      10
    + 0111       7
    ------      --
    1 0001      17

For SIGNED 2's complement arithmetic, overflow detection is a little more
complicated.  It happens when we add two non-negative numbers, discard any
carry out of the top-most place, and are left with an apparently negative
result.  Similarly, it happens when we add two negative numbers, discard
any carry out of the top-most place, and are left with an apparently
non-negative result.  (Note that addition of a negative and a non-negative
number is guaranteed not to overflow, because it ends up closer to zero
than the larger-magnitude operand.)
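That sign-based rule is easy to express in C.  A sketch (my own helper;
note that signed overflow is technically undefined behavior in standard C,
so the addition itself is done in unsigned arithmetic, which wraps):

    #include <stdio.h>
    #include <limits.h>

    /* Overflow iff the operands have the same sign and the (wrapped)
       sum has the other sign. */
    int add_overflows(int a, int b) {
        unsigned int s = (unsigned int) a + (unsigned int) b;  /* wraps mod 2^32 */
        int sum = (int) s;   /* implementation-defined, but the usual wraparound on x86 */
        return (a >= 0) == (b >= 0) && (sum >= 0) != (a >= 0);
    }

    int main(void) {
        printf("%d\n", add_overflows(INT_MAX, 1));  /* 1: two positives, "negative" sum */
        printf("%d\n", add_overflows(-5, 7));       /* 0: mixed signs can't overflow */
        return 0;
    }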
[ A less obvious (but equivalent and easier to implement) way to detect
[ overflow is to see if the carries out of the top *two* places are
[ different.  Let's look at examples.  Consider 4-bit addition.  We can
[ specify numbers from -8 through +7.
[
[ If we add a positive and a negative number we can't get overflow:
[ we get a carry out of the most significant place (MSP) iff we have a
[ carry into the MSP:
[
[     00        <-- carry
[      1001     -7
[    + 0010      2
[    ------     --
[      1011     -5
[
[     11        <-- carry
[      1010     -6
[    + 0111      7
[    ------     --
[      0001      1
[
[ If both numbers are positive we can't get a carry out of the MSP; we
[ have overflow if we have a carry into the MSP:
[
[     01        <-- carry
[      0101      5
[    + 0110      6
[    ------
[      1011
[
[ If both numbers are negative we definitely get a carry out of the
[ MSP; we have overflow if we *don't* have a carry into the MSP (so
[ the sign bit ends up zero):
[
[     10        <-- carry
[      1010     -6
[    + 1011     -5
[    ------
[      0101

Subtraction of unsigned numbers is similar: the HW negates the subtrahend
(the thing to be subtracted), adds with n+1 bits, and announces overflow
iff the (n+1)-bit sum appears to be negative:

      1010      10             0 1010      10
    - 1100      12    ==>    + 1 0100     -12
    ------      --           --------     ---
                               1 1110     bit #4 is set -> overflow

      1100      12             0 1100      12
    - 1010      10    ==>    + 1 0110     -10
    ------      --           --------     ---
                               0 0010     bit #5 would be set, if we kept
                                          it, but bit #4 is not
                                          -> no overflow

How to check for overflow in assembler?  Most computers (including the
x86) have a single set of add and subtract instructions; you check for the
right kind of overflow by checking the right combination of condition code
bits.  On the x86, the CF flag indicates unsigned overflow; the OF flag
indicates signed overflow.  So in the 12 - 10 example above, neither bit
would be set; in the 10 - 12 example, the OF bit would not be set, but the
CF bit would be.  (We'll talk more about flags [condition codes] later.)

The x86 also provides ADC (add with carry) and SBB (subtract with borrow)
instructions that add or subtract two operands AND the pre-operation carry
bit.  These are used for extended-precision arithmetic.  Other machines
have similar mechanisms.

------------------------------
Building an ALU

The ALU takes two operands A and B, and an opcode OP, and generates
A OP B.  The easiest way to implement this is to build circuits for *all*
OPs, feed A and B into all of them, and use a multiplexor on OP to select
the desired result.  (Note that this isn't very energy-efficient.  As
energy and power become bigger concerns, designers can be expected to more
carefully avoid unnecessary work.)

A ripple-carry adder is trivial.  It has size (in gates) linear in the
size of the operands, but takes time linear in the size of the operands,
too.  Not so great.

Addition is a combinational function.  It's easy to show that we can
implement any such function (including an adder) with only two logic
levels.  Unfortunately, it has size exponential in the size of the
operands.  Consider 2-bit addition:

     A     B     S
    ---   ---   ---
    0 0 + 0 0 = 0 0
    0 1 + 0 0 = 0 1     s0 =   (~a1 &  a0 & ~b1 & ~b0)
    1 0 + 0 0 = 1 0
    1 1 + 0 0 = 1 1          | ( a1 &  a0 & ~b1 & ~b0)
    0 0 + 0 1 = 0 1          | (~a1 & ~a0 & ~b1 &  b0)
    0 1 + 0 1 = 1 0
    1 0 + 0 1 = 1 1          | ( a1 & ~a0 & ~b1 &  b0)
    1 1 + 0 1 = 0 0
    0 0 + 1 0 = 1 0
    0 1 + 1 0 = 1 1          | (~a1 &  a0 &  b1 & ~b0)
    1 0 + 1 0 = 0 0
    1 1 + 1 0 = 0 1          | ( a1 &  a0 &  b1 & ~b0)
    0 0 + 1 1 = 1 1          | (~a1 & ~a0 &  b1 &  b0)
    0 1 + 1 1 = 0 0
    1 0 + 1 1 = 0 1          | ( a1 & ~a0 &  b1 &  b0)
    1 1 + 1 1 = 1 0

A straightforward (if non-optimal) formula for s0 (the 1's bit of S) is
shown in the right column above.  This is a huge formula.  The one for s1
is just as bad.  For a 32-bit adder we're talking billions of gates.
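To see the ripple-carry scheme and the carry-based overflow rules in
action, here is a bit-serial C model of a 4-bit adder (my sketch, purely
for illustration):

    #include <stdio.h>

    /* Add two 4-bit quantities one full adder at a time, reporting both
       kinds of overflow. */
    static unsigned add4(unsigned a, unsigned b) {
        unsigned sum = 0, carry = 0, carry_into_msp = 0;
        for (int i = 0; i < 4; i++) {
            unsigned ai = (a >> i) & 1, bi = (b >> i) & 1;
            unsigned s  = ai ^ bi ^ carry;           /* sum bit of a full adder */
            if (i == 3) carry_into_msp = carry;      /* carry *into* bit 3 */
            carry = (ai & bi) | (ai & carry) | (bi & carry);  /* carry out */
            sum |= s << i;
        }
        printf("sum = %x   unsigned overflow: %u   signed overflow: %u\n",
               sum,
               carry,                       /* carry out of the MSP            */
               carry ^ carry_into_msp);     /* carries into/out of MSP differ  */
        return sum;
    }

    int main(void) {
        add4(0xA, 0x7);  /* 10 + 7: unsigned overflow; no signed overflow (-6 + 7 = 1) */
        add4(0x5, 0x6);  /*  5 + 6: no unsigned overflow; signed overflow (result "-5") */
        return 0;
    }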
How about a compromise?  Several possibilities.  Examples include "carry
lookahead," "carry select," and "carry forward" adders.  These do addition
in time O(log N) and space O(N log N), where N is the length of the
operands.  Basically we build an N-bit adder out of two N/2-bit adders and
a little glue.  Each sub-adder computes its result with and without an
initial carry.  It also produces "generate" (g) and "propagate" (p) bits.
Bit g says we'll produce a carry no matter what.  Bit p says we'll produce
a carry if given a carry.  Note that g -> p.  The glue looks at the output
of the right half to decide which output to use from the left half.  Glue
from different levels has to operate sequentially, hence log time.  The
size of the glue at each level is linear in the length of the operands;
hence the log N factor space blowup.

We could in principle build an ALU that also does shifting,
multiplication, and division, with a multiplexor choosing the desired
result.  Most processors have separate shifters and multipliers, however.
The shifter is separate because it doesn't share any significant hardware
with the rest of the ALU, so there's no advantage to combining it.  The
multiplier and (if provided) divider are separate because they tend to be
significantly slower than addition/and/or/etc.  If we insisted on
multiplying all the time we'd slow everything else down.

--------
Multiplication

[ Several early RISC machines, including SPARC and PA-RISC, had "multiply
[ step" instructions that would do, say, one iteration of Booth's
[ multiplication algorithm (not covered here) in one cycle.  You'd use
[ this weird instruction to build a fast software multiplier.  Other early
[ RISC machines, including the i860 and the M88K, did integer multiply in
[ the floating point unit, which already had a separate pipeline.

Most modern desktop processors have a separate integer multiplier/divider,
but it takes more cycles than addition/subtraction.  How many more depends
on how much chip area they devote to it.

The naive algorithm for N-bit multiplication takes N times as long as
addition.  Some processors reduce the number of iterations (i.e., latency)
by implementing a binary tree of adders, or by grouping bits of the
multiplier and doing table lookup of partial sums.  Or they may increase
bandwidth by employing multiple adders so that multiplication can be
pipelined (this does not reduce the latency of an individual multiply).
Example: the Intel Core Duo can do an integer multiply in 3 cycles, but
can do only one of them at a time.

The x86 has a somewhat baroque set of integer multiply instructions.
Oversimplifying a bit, there are (1) signed and unsigned versions, and
(2) versions that truncate the result to 32 bits OR that place the upper
and lower halves of the 64-bit result into a *pair* of registers.

------------------------------
Division

Turns out you can use the same hardware for integer multiplication and
division.  (The explanation is subtle and non-obvious; see the P&H text if
you're curious.)

Signed division is most easily accomplished by converting to non-negative
numbers, remembering the signs, and patching up at the end.

One subtlety: the sign of the remainder.  The usual rule: the remainder
has the same sign as the dividend, regardless of the sign of the divisor.
This ensures that -(x/y) == (-x)/y.  Example: -7 / +2 = -3, rem -1, NOT
-4, rem +1.  By this definition, the C % operator is remainder.  In
Python, a % b always has the same sign as the divisor b, NOT the dividend
a.  Ada has both: rem has the same sign as the dividend; mod has the same
sign as the divisor.
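The C rule is easy to check (a quick sketch; C99 and later pin down
truncation toward zero):

    #include <stdio.h>

    int main(void) {
        /* C truncates toward zero, so the remainder takes the sign of
           the dividend: */
        printf("%d %d\n", -7 / 2, -7 % 2);    /* -3 -1 */
        printf("%d %d\n",  7 / -2, 7 % -2);   /* -3  1 */

        /* The identity (a/b)*b + a%b == a always holds: */
        printf("%d\n", (-7 / 2) * 2 + (-7 % 2));   /* -7 */
        return 0;
    }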
As with multiplication, the division operations on the x86 are somewhat
baroque.  There are signed and unsigned versions, and versions that read
their dividend from a single register or a register pair.  The result of a
divide (of whatever length) is always placed in a pair of registers: the
quotient in one and the remainder in the other.

[ One particularly strange wrinkle: the product of a 16-bit signed
[ multiply, and the dividend of a 16-bit signed divide, are split across
[ two registers, even though all 32 bits *could* fit in a single
[ register.  This is a legacy of the 16-bit 80286, in which registers
[ were only 16 bits wide.

==============================
IEEE floating point

Integer math is based on place value.  1's place, 2's place, 4's place,
8's place, ...

Math with fractions can also be done with place value.  Hand-held
financial calculators do this in decimal (at least in the user interface),
keeping 2 decimal digits to the right of the decimal point.  This is FIXED
POINT math.  Basically you compute everything in pennies.

We can do the same thing in binary, so we have a 1/2's place, 1/4's place,
1/8's place, etc.  The problem is that given a fixed word length, for
every fractional place we keep, we lose an integer place.  32-bit unsigned
integers run roughly from 0..4 billion.  If we keep 10 fractional places
we have only 22 integer places, and the range is reduced to roughly 0..4
million.

FLOATING POINT math takes its cue from scientific notation.  We decide how
many SIGNIFICANT DIGITS (in binary, bits) we want, and we use extra bits
to indicate the power of two.  In other words, everything is

    mantissa X 2^P

A NORMALIZED binary floating-point number has a mantissa between 1 and 2:

    +/- 1.sig X 2^P

Because there's only one choice for the digit to the left of the binary
point in binary, we can actually leave it implied, and have an extra bit
to use in memory.  The fractional SIGNIFICAND 'sig' above is what's
actually stored.  It turns out the power P isn't stored in its obvious
form either; more on this in a minute.

It used to be that every machine had its own proprietary floating point
representation.  That changed with the 1985 IEEE 754 floating point
standard, which almost all modern processors support.

                      single precision   double precision      toy
    sign s                 1 bit              1 bit            1 bit
    exponent e             8 bits            11 bits           3 bits
    significand f         23 bits            52 bits           4 bits
    bias                 127               1023                3
    ~range               10^(+/- 38)       10^(+/- 308)        2^(+/- 4)

(There's also a 2008 update to the standard that defines 16-bit and
128-bit binary formats, plus 32-, 64-, and 128-bit decimal formats.  I
won't be discussing those.)

In the IEEE standard, the sign bit comes first, followed by the exponent,
and then the significand.

sign first => the normal integer sign test works (a sign of 1 means
    negative), even though FP is sign-magnitude, rather than 2's comp.

exponent next => big numbers look bigger than small numbers, but only if
    you're careful about negative exponents.  The BIAS ensures that big
    exponents look bigger than small exponents, regardless of sign.
    Remember the idea of rolling mod-2^N numbers along the number line?
    If we roll them back up so that (for single precision) 0..127 are the
    negative exponents and 128..255 are the non-negative exponents, then
    integer comparison operations do the right thing (at least when the
    leading sign bit is zero).

The bias is the value you subtract from the (unsigned) exponent to get the
real power of two.  It's always one less than a power of two: 127 in
single precision, 1023 in double precision; in general 2^(|e|-1) - 1,
where |e| is the number of exponent bits.
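A quick way to see the bias in action is to pull the fields out of a
float's bit pattern.  A sketch (mine; it assumes a 32-bit unsigned int and
IEEE single-precision floats):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        float f = -5.0f;                 /* the worked example below decodes this by hand */
        unsigned int bits;
        memcpy(&bits, &f, sizeof bits);  /* reinterpret the same 32 bits as an integer */

        unsigned int sign = bits >> 31;
        unsigned int exp  = (bits >> 23) & 0xFF;   /* biased exponent */
        unsigned int frac = bits & 0x7FFFFF;       /* 23-bit significand field */

        printf("sign=%u  exponent=%u (power %d)  significand=0x%06x\n",
               sign, exp, (int) exp - 127, frac);
        /* prints: sign=1  exponent=129 (power 2)  significand=0x200000 */
        return 0;
    }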
Single precision exponents:            8-bit toy (1+3+4):

    255   special cases (more below)      7
    ---                                  ---
    254   2^(+127)                        6   2^3
    ...                                  ...
    128   2^1                             4   2^1
    127   2^0                             3   2^0
    ...                                  ...
    001   2^(-126)                        1   2^(-2)
    ---                                  ---
    000   special cases (more below)      0

Consider the 8-bit example.  The largest possible value is
+ 1.1111 x 2^3 = 15 1/2.  The smallest positive is
+ 1.0000 x 2^(-2) = 1/4.

Below the largest value is 15, then 14.5, all the way down to 8 by halves.
    Then all the way down to 4 by quarters.
    Down to 2 by eighths.
    Down to 1 by sixteenths.
    Down to 1/2 by 32nds.
    Down to 1/4 by 64ths.
But then we have a big gap between 1/4 and 0 -- 16x larger than the gap
between 1/4 and 17/64.  The situation is much worse with wider fractions
and exponents.

SUBNORMAL numbers (formerly known as DENORMAL) allow for GRACEFUL
UNDERFLOW: for values very near zero we admit the possibility of few
significant bits, with an implied leading bit of zero.  We reserve two
exponents for special cases: an exponent of 0 means subnormal or zero; an
exponent of 2b+1 (255 in single precision) means infinity or NaN (not a
number).

With subnormals, the smallest toy value is

    0 000 0001    + 2^(1-3) x (0 + .0001) = 1/4 (1/16) = 1/64

That has only one significant bit, but there's no gap.  The largest
subnormal is

    0 000 1111    + 2^(1-3) x (0 + .1111) = 1/4 (15/16) = 1/4 - 1/64 = 15/64

Representation of values, with special cases:

                       e          f     value
    normalized    1 <= e <= 2b          +/- 1.f x 2^(e-b)
    zero               0          0     +/- 0
    subnormal          0        !=0     +/- 0.f x 2^(1-b)
    infinity         2b+1         0     +/- oo
    NaN              2b+1       !=0     NaN

Comments:
    - There are two zeros, one of which has all zero bits, just like an
      integer.
    - Bit patterns representing non-negative numbers are ordered the same
      way as integers, so regular integer comparison routines work.  (This
      is actually kind of cool: it means there's an ordered mapping
      between the positive integers and the positive FP numbers.  If you
      do an integer increment on a positive FP number, you get the next
      larger FP number.)
    - Infinity behaves in the expected way.  E.g., if N > 0, N/0 = oo.
      Similarly, arctan(oo) = pi/2.
    - NaNs are produced by erroneous computations.  They propagate, so you
      can do a single error check at the end.  There are actually two
      kinds of NaNs: signalling and non-signalling.  Signalling NaNs
      produce an exception if used; great for catching uses of
      uninitialized variables.
    - A double has more significand bits than a 32-bit integer -- namely
      52 (53 counting the implied leading bit).  A standard programmer
      hack is to use a double when you need to count higher than 4
      billion.  When used to hold integers, a double gets you about
      +/- 2^53 ~= 9 x 10^15 = +/- 9 quadrillion -- millions of times the
      range of a 32-bit long.
    - While 2's complement addition and multiplication obey associative
      and distributive laws (at least if you ignore overflow of
      intermediate results), floating point addition and multiplication
      do not.

Example: What is the decimal value of the following single-precision FP
number?

    1 : 1000 0001 : 0100 0000 0000 0000 0000 000

Clearly negative.  The exponent is 129, implying a power of 129 - 127 = 2.
The significand is 1/4 (decimal), implying a mantissa of 1 + 1/4 = 5/4.
5/4 x 2^2 = 5.  Answer: -5 (decimal).

Example 2: 0.15 decimal = what in binary FP?

The sign bit is clearly 0.  The units bit is clearly 0 (.15 < 1).  Keep
multiplying by 2 to find the fractional bits:

                   remainder    X2      fractional bit
    ones                                0.
    halves           .15       .30      0
    quarters         .30       .60      0
    eighths          .60      1.20      1
    sixteenths   >   .20       .40      0
    32nds        >   .40       .80      0
    64ths        >   .80      1.60      1      repeats
    128ths       >   .60      1.20      1
    ...

Normalized, this is 1.00110011... x 2^(-3).
The 23-bit significand is 0011 0011 0011 0011 0011 001.  The power is -3;
the exponent is -3 + 127 = 124 = 0111 1100b.

In double precision, the significand would be 52 bits instead of 23 (with
the same repeating pattern), and the exponent would be 11 bits instead of
8:  -3 + 1023 = 000 1111 1111b - 11b = 011 1111 1100b.

Rounding

Notice we had to make a rounding decision in the above example.  We
truncated when the next bit was 1.  That's rounding down.  We could have
rounded up to 0011 0011 0011 0011 0011 010.  More on this later.

Floating point addition
    align operands
        (Shift the smaller-magnitude operand right until its exponent
        matches that of the larger-magnitude operand.  This may sacrifice
        precision, or even cause the number to disappear entirely.)
    add
    repeat
        normalize the sum by shifting left or right and
            incrementing/decrementing the exponent
            if overflow or underflow, generate an exception
        round the significand to the appropriate number of bits
    until normalized

Example from H&P: 0.5 + -0.4375

Rather than do full IEEE FP, let's assume we keep 4 bits of precision,
including the implied bit.

    A =  0.5    =  1.000b x 2^(-1)
    B = -0.4375 = -7/16 = -7/2^4 = -111b x 2^(-4) = -1.110b x 2^(-2)

First we find the smaller exponent (B) and shift to match:

     1.000 b -1           1.000 b -1
    -1.110 b -2   ==>    -0.111 b -1

Then we add:

      1.000 b -1
    + 1.001 b -1     (neg plus pos
    ------------      -> no overflow)
      0.001 b -1     (toss carry)

Then normalize:

      1.000 b -4

We note that the exponent (-4) is within the allowable range, implying
there was no FP overflow or underflow.  Finally we note that the sum fits
in the number of available significand bits, so no rounding is required.

Double check: 1/2 - 7/16 = 1/16.

Floating point multiplication
    don't have to align operands
    add exponents
        requires a little care with the bias
    multiply significands
    repeat
        normalize the product by shifting left or right and
            incrementing/decrementing the exponent
            if overflow or underflow, generate an exception
        round the significand to the appropriate number of bits
    until normalized
    set the sign, based on the signs of the operands

More on rounding

Truncation is usually a bad idea.  The IEEE standard requires that results
be the same as if you computed the result exactly and then rounded.  There
are four available rounding modes:
    round toward 0 (truncate, if you really want it)
    round toward +oo
    round toward -oo
    round to nearest even (the default)
The default can be shown (though I won't) to minimize the mean error
introduced by rounding.
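You can watch rounding at work from C (a sketch; the exact digits assume
IEEE doubles and the default round-to-nearest mode):

    #include <stdio.h>

    int main(void) {
        /* 0.15 has no finite binary representation, so it gets rounded
           (the bit pattern worked out above): */
        printf("%.20f\n", 0.15);            /* 0.14999999999999999445... */
        printf("%.20f\n", (double) 0.15f);  /* single precision rounds differently */

        /* Because of rounding and alignment, FP addition is not associative: */
        double big = 1e20;
        printf("%g\n", (big + -big) + 3.14);   /* 3.14 */
        printf("%g\n", big + (-big + 3.14));   /* 0 : the 3.14 vanishes when aligned */
        return 0;
    }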
[ Correct rounding requires extra bits.
[
[ First, we may need one more bit during the computation just to get all
[ the significant bits we need for the result:
[
[        1.11                  1.01
[      x 1.11                x 1.01
[     -------               -------
[         111                   101
[        111                   101
[       -----               -------
[       10101                1.1001
[      111                      grs
[     -------
[     11.0001
[         grs
[
[ In the case on the right we've ended up with one fewer digit to the
[ left of the binary point; we have to shift left to keep even with the
[ left-hand case.  The bit we use for this is called the GUARD BIT
[ (marked g in the examples).
[
[ If we don't need to shift the guard bit over, we use it to round.
[ If we do shift it over, we need *another* bit to round.  The IEEE
[ standard calls this the ROUND BIT.
[
[ Rounding to the nearest even is tricky in one case.
[ The rule says: if the discarded bits are less than 1/2 ULP, round
[ down.  If more than 1/2 ULP, round up.  If exactly 1/2 ULP, round
[ whichever way makes the LSB 0 (nearest even digit).
[
[ Suppose we want to round 1.01101 to 5 significant bits (4 plus the
[ implied unit).  The rule says round down (nearest even): 1.0110.
[ But how about rounding 1.011010000000000001?  Now the IEEE standard
[ (give the result of rounding an exact result) says we should round
[ *up*.  This is handled by a 3rd rounding bit: the STICKY BIT.  It is
[ set to one if any digits have been shifted off the end of our
[ available precision.  It allows us to distinguish between exactly
[ half and slightly more than half.

----------------------
Floating point on real machines

Most modern ISAs have 32 64-bit FP registers.

The x86 has eight 80-bit legacy ("x87") "extended real" FP registers: a
sign bit, 15 bits of exponent, and 64 bits of significand.  These convert
back and forth to IEEE format in memory.  Their original ISA was
stack-based, to save bits.  The x86-64 adds 16 IEEE-format FP registers,
which modern compilers use preferentially; the x87 regs are deprecated.
The x87 stack-based instructions are sort of doubly-deprecated.

x86 BCD support consists of special load and store instructions that
convert between BCD (in memory) and extended real format in the x87
registers.

For further information, read the Intel Architecture manual, available at
the course web site.  Section 3.14 in the textbook focuses on x87 FP.
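On typical x86 compilers the 80-bit x87 format shows up as C's long
double; a quick way to peek (a sketch -- the sizes and digit counts are
implementation-dependent):

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        printf("sizeof(double) = %zu, sizeof(long double) = %zu\n",
               sizeof(double), sizeof(long double));
        printf("double: %d significand bits; long double: %d\n",
               DBL_MANT_DIG, LDBL_MANT_DIG);   /* typically 53 and 64 on x86 */
        return 0;
    }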