Lecture notes for CSC 252, Tues. Mar. 18 ff, 2014

Announcements

    A5 is on the web (there was no A4)
        trivia due 11am, Thurs. 20 Mar.
        main assignment due Monday, 31 Mar., 11:59pm
    If you didn't read chapter 5 carefully before, you'll want to do so now.
        You won't do well on this assignment without it.  Also read chapter 6.
    You may also want to read the "web aside" on "blocking" right away.
        Link on the assignment web page and the "resources" page.
    I'm planning to skip most of chapter 7.  I want the extra lecture for
        the current unit.

--------------------------------
From the midterm

    Make sure you understand the questions you missed.  I may reprise
    them, or put very similar questions on the final.

    Check out the note on Blackboard.

    Questions?

========================================
[ Greek prefixes
[
[   zetta   10^21    ~= 2^70     ZB
[   exa     10^18    ~= 2^60     EB
[   peta    10^15    ~= 2^50     PB
[   tera    10^12    ~= 2^40     TB
[   giga    10^9     ~= 2^30     GHz, GB
[   mega    10^6     ~= 2^20     MHz, MB
[   kilo    10^3     ~= 2^10     KHz, KB
[
[   milli   10^(-3)  ~= 2^(-10)  ms
[   micro   10^(-6)  ~= 2^(-20)  us
[   nano    10^(-9)  ~= 2^(-30)  ns
[   pico    10^(-12) ~= 2^(-40)  ps
[   femto   10^(-15) ~= 2^(-50)  fF (capacitance of a DRAM cell)

You need to get used to common units and develop a feel for their size.
With current technology (these are moving targets):

    The clock on a 2.5GHz processor ticks 2.5B times/sec; its cycle time
        is 0.4ns = 400ps.
    A register can be accessed in one cycle.
    L1 cache can be accessed in about 1ns (2-4 cycles).
    L2 or L3 cache in 5-20ns.
    Main memory in 60-100ns.
    Disk in 5-12ms.

    L1 cache holds 32-256KB.
    L2 and L3 caches hold 1-32MB.
    Main memory on a PC holds 256MB to 8GB (much more on a mainframe or
        supercomputer).
    A single disk holds 100GB-2TB.
    Large databases are measured in PB.
    Annual stored content generated by the human race was estimated by
        the US Chamber of Commerce as approx. 1 ZB (10^21 bytes) in 2012.
        Up from about 1 EB at the turn of the century -- and growing.
        90% of all stored data was generated in the past 2 years.

--------------------------------
The Memory (Storage) Hierarchy

    registers
    L1 cache
    L2 cache
    L3 cache
    main memory
    local disk
    remote or tertiary storage (tape, optical jukebox, etc. -- seldom
        used anymore; disks are too cheap)

*** The goal of the memory hierarchy is to give the appearance of a
uniform store with the latency of the top levels and the capacity of the
bottom levels.

How can we do it?  Use CACHING to exploit LOCALITY.

    temporal locality
        if we use something, we're likely to use it again soon
    spatial locality
        if we use something, we're likely to use something nearby soon

Caching
    Keep the WORKING SET (the things we're using a lot at present) in the
    smaller, faster levels of the storage hierarchy, and other stuff in
    the larger, slower levels.

[ REUSE DISTANCE: rd(a), where a is a memory reference, is the amount of
[ data accessed one or more times since the most recent previous reference
[ to the datum accessed in a.  Best known quantification of locality.  Can
[ be defined for arbitrary units of data granularity, e.g. bytes, words,
[ cache lines, pages.

<< draw working set diagram: cache size v. cache miss rate >>

Caching happens at every level of the hierarchy.  In most cases it's
handled automatically, so your program doesn't know what's going on.  But
the programmer, compiler, OS, etc. can often help improve locality, and
thus cache effectiveness.

We'll concentrate in particular on (1) caching of main memory in L1/2/3
cache, and (2, later) caching of disk in main memory.

                        cache       virtual memory
    technology          SRAM        DRAM
    management          HW          HW/OS
    block size          ~64B        ~8KB
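
To make the two kinds of locality concrete, here is a minimal C sketch
(my own example, not from the text):

    double sum_array(const double *A, int n) {
        double sum = 0.0;           /* temporal locality: sum is reused on
                                       every iteration (and lives in a register) */
        for (int i = 0; i < n; i++)
            sum += A[i];            /* spatial locality: A[0], A[1], ... are
                                       adjacent, so each 64B cache line fetched
                                       supplies 8 consecutive doubles */
        return sum;
    }

Traversing A with a large stride instead would destroy the spatial
locality without changing the amount of work done.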
--------------------------------
memory technology

    SRAM
    DRAM (many variants)
    EEPROM (Flash)
    PCM, MRAM, STT-RAM, ...

    DRAM
        1 transistor and a capacitor per bit
        must be refreshed after read, and every few ms even when not read
        used for main memory on most machines
        different process technology; difficult to put on the same die as logic
    SRAM
        6 transistors per bit
        8-16X faster than DRAM
        8-16X more expensive
        4-8X less dense
        doesn't require refresh
        used for cache, supercomputer memories
            (also registers, TLB, branch prediction table, etc.)

    A conventional DRAM chip has r X c cells, each of which contains w
    bits.  A 2Gb chip might have 256M = 2^28 cells, each with 8 = 2^3
    bits.  To minimize pin count and simplify internal circuitry, the
    chip must be "strobed" (fed address bits) twice: once for the row and
    once for the column.  If we have 2^14 rows and 2^14 columns, we'd
    feed in 14 row bits, then 14 column bits, then read out 8 bits.
    (There are also usually ECC bits -- increasingly important as the
    size of each cell goes down.  Our 8 bits might be encoded as 10.)

    Memory is commonly packaged on DIMMs (dual in-line memory modules).
    These produce 64 bits at a time.  We could build a 2GB DIMM from 8 of
    the chips above.  The DIMM controller feeds the row and column bits
    to all 8 chips simultaneously, then assembles the 8B output.

    Emerging technologies: PCM, MRAM (magnetoresistive), others

------------------------------
[ bus architecture
[
[ Processor, memory, and devices are connected by a collection of BUSes:
[ shared communication lines.
[
[ The more I/O connections you have, and the more general (i.e. simple)
[ they are, the slower the bus tends to be.  Really cheap systems may have
[ a single bus.  Fancier systems (including most current PCs) have
[ multiple buses, with more general I/O interfaces available on slower,
[ subsidiary buses.
[
[ Common arrangement
[
[             system bus                 memory bus
[     CPU <-------------> I/O bridge <-------------> memory
[                             ^
[                             |
[        <----+--------------+----------------->
[             |                      PCI bus
[             |
[             | USB bus
[             v
[
[ *** The I/O bridge allows CPU and memory to talk to each other without
[ bothering devices, and disk (on the PCI bus) and memory to talk to each
[ other without bothering the CPU(s).
[
[ bus design alternatives
[     expensive                 cheap
[     fast                      slow
[     physically short          physically long
[     wide                      narrow
[     synchronous               asynchronous
[     complicated interface     simple interface
[     multiple masters          single or no master
[     block transfers           word at a time
[     split transaction         single transaction
[
[ Clock skew limits the physical length of synchronous buses.
[ All devices on a synchronous bus must run at the same speed.
[ Asynchronous buses can be very long if you're willing to tolerate long
[ handshaking delays: witness (non-switched) Ethernet: ~1km length.
[
[ bus arbitration: who is allowed to send when?
[     central arbiter (e.g. PCI)
[         generally requires extra control lines
[     daisy chain (e.g. SCSI, USB, FireWire)
[         simple, but slower, and not as fair
[     collision detection (e.g. Ethernet)
[         not used in backplanes (too slow); works well for LAN
[
[ Memory-mapped I/O (usual case) v. special I/O instructions
[     (antiquated option on the x86)
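[
[ (A hedged illustration of memory-mapped I/O, with made-up addresses and
[ register layout -- not any real device:
[
[     #include <stdint.h>
[
[     /* Suppose the device exposes a status register and a data register
[        at fixed addresses that have been mapped into our address space.
[        'volatile' keeps the compiler from eliding or caching the accesses. */
[     #define DEV_STATUS (*(volatile uint32_t *) 0xFFFF0000)
[     #define DEV_DATA   (*(volatile uint32_t *) 0xFFFF0004)
[
[     uint32_t read_one_word(void) {
[         while ((DEV_STATUS & 0x1) == 0)     /* spin until "data ready" bit is set */
[             ;
[         return DEV_DATA;                    /* an ordinary load reaches the device */
[     }
[
[ With special I/O instructions (x86 in/out), the device registers would
[ instead live in a separate I/O port space, reached only by those
[ instructions rather than by ordinary loads and stores.)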
--------------------------------
disk technology

    platter (1 to maybe 15)
    surface (one side of a platter)
    sector (typically 512 bytes, hundreds per track)
    track (10s of thousands per platter)
        grouped into zones: each track in a given zone has the same
        number of sectors; tracks in outer zones have more than those in
        inner zones
    cylinder: corresponding tracks on different surfaces -- can be
        accessed without seeking

    Capacity has grown tremendously over time, though the rate has slowed
    down recently.

    typical PC hard disk today:
        1TB capacity
        10,000 rpm ==> 3ms average rotational latency
            (0.5 rev / 10,000 rpm * 60 s/min = 3ms)
        4ms typical seek latency (varies from 0 to maybe 10ms)
        maybe 80MB/s transfer rate
    All these numbers are moving targets, though rotational speed has
    grown only about 4X since the 1960s, mostly enabled by reductions in
    physical size.

    contributors to access time:
        seek time
        rotational latency
        transfer time
        controller overhead
        OS overhead

flash
    typical read time: 25us
    typical write time: 250us, +1500us to erase a block
    read bandwidth: 40MB/s
    write bandwidth: 8MB/s
    wears out after 10^4 to 10^5 writes to a given block

Trends

    Improvements in the 30 years between 1980 and 2010:

                            factor          1980        2010
        CPU   speed         3,000X          1 MIPS      3 GIPS
        SRAM  $/MB          300X            $20K        $60
              latency       200X            300ns       1.5ns
        DRAM  $/MB          160,000X        $8K         $0.05
              latency       10X             400ns       40ns
        disk  $/GB          5,000,000X      $500K       $0.10
              latency       15X             90ms        6ms

    Implication: caching is a LOT more important than it used to be.

    Note, however, that the memory/CPU latency gap has more or less
    stopped growing.  At the same time, the memory/CPU _bandwidth_ gap is
    still growing, and we're at the limit of physical pins.  Future
    memory will have to move on-chip, or into a 3D stack.

----------------------------------------
Optimizing performance (A5)

    improve locality
        traverse memory in cache-line order
        fuse loops that traverse the same array
        block for temporal locality
    reduce number of operations
        move invariants out of loops
        eliminate redundant computations
        unroll loops
            bookkeeping instructions
            redundancy across iterations
    increase computational ILP
        multiple accumulators (software pipelining) -- see the sketch below
        re-association
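
To make the last two groups of ideas concrete, here is a hedged sketch
(my own example, not from the text or the assignment) of loop unrolling
with multiple accumulators:

    /* Naive version: a single accumulator serializes the additions. */
    double dot_naive(const double *a, const double *b, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    /* Unrolled 2x with two accumulators: fewer bookkeeping (loop-control)
       instructions per element, and the two add chains are independent,
       so they can proceed in parallel (more ILP). */
    double dot_unrolled(const double *a, const double *b, int n) {
        double sum0 = 0.0, sum1 = 0.0;
        int i;
        for (i = 0; i + 1 < n; i += 2) {
            sum0 += a[i]   * b[i];
            sum1 += a[i+1] * b[i+1];
        }
        for (; i < n; i++)          /* clean up a leftover element, if any */
            sum0 += a[i] * b[i];
        return sum0 + sum1;
    }

Note that combining the accumulators re-associates the floating-point
additions, which can change the rounded result slightly -- one reason
compilers won't do this on their own without permission.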
================================
Caches

terminology
    cache miss
    page fault
    hit rate
    miss rate
    hit time
    miss penalty
    thrashing
    multi-level inclusion
        common for levels shared across cores
        wastes space, but simplifies coherence (don't have to check the
            upper level)
        allows the lower level to use a larger block size
        with inclusion, the lower-level cache must have at least as much
            associativity as the sum of the levels above, or those levels
            will be effectively restricted as well
        with exclusion, _swap_ on miss
    split v. unified
        Typical modern machines have separate I and D caches at L1.
        A split L1 cache makes it easier to read both instructions and
        data in every cycle.  Code is read-only in most systems.  If you
        know you can't modify code, you never have to worry about
        pipeline hazards due to writes into the instruction space.  And
        if you have separate I & D L1 caches, you never have to kick
        anything out of L1 because of a write.
    blocks, blocksize
    lines, linesize
        The text uses "line" to mean "block plus tag and valid/dirty
        bits"; other authors use the terms interchangeably.
    associativity
        fully associative
        N-way associative
        direct-mapped
    sets
    ways

indexing (set selection) via middle bits of the physical address
    (ignore virtual memory for now)
word selection via low-order bits

    M = 2^m byte address space; m-bit address
    Divide the address into a t-bit tag, an s-bit set selector, and a
        b-bit byte selector; m = t + s + b.
    B = 2^b bytes per block (line)
    S = 2^s sets
    E ways = # of lines per set
    T = 2^t blocks (in the maximum potential address space) map to the
        same set
    C = B x E x S cache capacity
    (A small worked sketch appears at the end of this section.)

    In a direct-mapped cache, E = 1, so C = B x S.
    In a fully-associative cache, S = 1, s = 0, and C = B x E.

Some machines use different block sizes at different levels.
The 4-core Intel Core i7 (described in the book) has

    L1-I (on-chip, per core)             B = 64B, E = 8,  S = 64,   C = 32KB,  4-cycle
    L1-D (on-chip, per core)             B = 64B, E = 8,  S = 64,   C = 32KB,  4-cycle
    L2   (on-chip, unified, inclusive,
          per core)                      B = 64B, E = 8,  S = 512,  C = 256KB, 11-cycle
    L3   (on-chip, inclusive, shared)    B = 64B, E = 16, S = 8192, C = 8MB,   30-40-cycle

    Notes:
        The i7 is now available with 6 cores.
        The number in Core i3, i5, i7 is NOT the number of cores; it's
            just marketing.  The i5 has "turbo boost," which the i3 does
            not.  The i7 has 2-way hyperthreading, which the i5 does not.
        Shared caches need high associativity to avoid conflicts between
            cores.  Cache coherence and consistency are a big problem;
            more on this later in the semester.
        Intel likes to build L1 with high associativity to minimize the
            number of sets, and thus the number of bits used as index.
            This facilitates virtual indexing; more on this in chapter 9.

write-back v. write-through v. uncached (I/O)

    Write buffer
        Holds to-be-completed writes within the processor core.  The
        pipeline doesn't have to wait for these to reach memory, but
        reads have to check the write buffer to see if there is any
        as-yet-uncompleted write to the location we're reading.  Does
        this in parallel with access to L1.
    Store buffers
        Similar to the write buffer.  May appear at each level of the
        hierarchy, to hold write-back data that has not made it to the
        next level yet, but whose original space has already been reused.
    Write merging (in write buffer or store buffers)

misses (the 4 Cs)
    cold-start (compulsory)
    capacity
        need a bigger cache
    conflict
        need more associativity, or a reorganized program
    coherence
        need less sharing (true or false)

Design tradeoffs

    cache size
        bigger --> higher hit rate, higher hit time
    block size
        bigger --> higher hit rate due to spatial locality
        bigger --> lower hit rate due to reduced temporal locality
            (fewer blocks in a cache of a given size)
        bigger --> higher miss penalty
    associativity
        higher --> higher hit rate and lower likelihood of thrashing, due
            to fewer conflict misses
        higher --> higher hit time
        higher --> smaller cache size (for a given area), due to
            associative circuitry and longer tags
    write strategy
        write-through --> simpler, with lower miss penalty due to lack of
            dirty lines
        write-back --> better use of bus and memory bandwidth

In general, caches higher up in the hierarchy are more likely to be
small, with small blocks, low associativity (even direct-mapped), and
write-through; caches lower in the hierarchy are more likely to be large
and more associative, with large blocks and write-back.  Also, shared
caches tend to be inclusive.  But these are not hard-and-fast rules.
Also, the trend over time is toward larger caches and larger blocks -- an
attempt to use density (which is increasing rapidly) and bandwidth (which
is increasing moderately) to tolerate latency (which isn't decreasing
much at all).
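
As a worked illustration of the t/s/b decomposition, here is a small C
sketch using the L1-D geometry above (B = 64 so b = 6; S = 64 so s = 6);
the example address is arbitrary:

    #include <stdint.h>
    #include <stdio.h>

    #define B_BITS 6                    /* 64-byte blocks */
    #define S_BITS 6                    /* 64 sets        */

    int main(void) {
        uint64_t addr   = 0x7ffdca1b5a48;                           /* made-up address */
        uint64_t offset = addr & ((1u << B_BITS) - 1);              /* low b bits      */
        uint64_t set    = (addr >> B_BITS) & ((1u << S_BITS) - 1);  /* next s bits     */
        uint64_t tag    = addr >> (B_BITS + S_BITS);                /* everything else */
        printf("offset %llu, set %llu, tag %#llx\n",
               (unsigned long long) offset, (unsigned long long) set,
               (unsigned long long) tag);
        return 0;
    }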
--------------------------------
The "Memory Mountain"

Read throughput.  Consider the following program:

    double A[8M]                                // 64MB

    for elems = 256 to 8M by powers of 2 do     // 2KB to 64MB
        for stride = 1 to 16 do
            for i = 0 to elems by stride do
                read A[i]                       // warm cache
            start timer
            for i = 0 to elems by stride do
                read A[i]
            end timer

    plot (bytes moved / time) for each pair

The result is a "memory bandwidth mountain", as in fig. 6.43 in the text.
From a Core i7 system.  Cache sizes as above.
Cycles of latency to L1, L2, L3, main memory: ...

    - Four levels corresponding to working sets that fit in L1, L2, L3,
      and main memory (don't explore far enough out to induce paging).
    - Slopes on the L2, L3, and memory levels, due to the effect of
      spatial locality in higher levels.
    - Perpendicular to these levels, there's a flat ridge at strides of 1
      and 2, apparently due to the HW prefetch engine.
    - Artifacts due to loop overhead when the working set is small.  In
      particular, that accounts for the "back slope" and for the fact
      that the ridge isn't level.  Other minor artifacts, too (e.g. the
      bump at the right of the L2 level); not sure why.

Cuts through the surface (figs 6.44 and 6.45) are also interesting.  You
can really see temporal and spatial locality.  Note the "in-between" bars
at 32K, 256K, and 8M in fig. 6.44.  While the indicated caches are big
enough to hold all the data at these points, they aren't big enough to
hold all the data *and* all the instructions, stack, kernel structures,
etc.

================================
Cache-friendly code

[ Bad temporal locality:
[
[     for (i = 0; i < N; i++) {
[         row_sum[i] = 0;
[     }
[     for (i = 0; i < N; i++) {
[         for (j = 0; j < N; j++) {
[             row_sum[i] += A[i][j];
[         }
[     }
[
[ If A is big, it's likely that row_sum[i] won't be in the cache anymore
[ (at least not in the L1 cache) the 2nd time we use it.  Better version:
[
[     for (i = 0; i < N; i++) {
[         row_sum[i] = 0;
[         for (j = 0; j < N; j++) {
[             row_sum[i] += A[i][j];
[         }
[     }
[
[ Bad spatial locality:
[
[     int grand_total = 0;
[     for (j = 0; j < N; j++) {
[         for (i = 0; i < N; i++) {
[             grand_total += A[i][j];
[         }
[     }
[
[ This walks through A in the "wrong order" -- column-wise.
[ (Unless we're writing in Fortran.)
[ Better version reverses the loops:
[
[     int grand_total = 0;
[     for (i = 0; i < N; i++) {
[         for (j = 0; j < N; j++) {
[             grand_total += A[i][j];
[         }
[     }
[
[ *** A really good compiler will do a lot of this for you.  With just a
[ so-so compiler you need to do it yourself.  It's almost always worth
[ doing if it doesn't mess up the readability of the code.  If it does
[ make the code uglier, do it only if profiling indicates you have
[ performance problems in this particular place.
[
[ Much more on cache-friendly code coming up below.

----------------------------------------
Consider matrix multiplication:

    double A[N,N], B[N,N], C[N,N]

    for i = 0 to N-1
        for j = 0 to N-1
            C[i,j] = 0

    for i = 0 to N-1
        for j = 0 to N-1
            for k = 0 to N-1
                C[i,j] += A[i,k] * B[k,j]

Ignore the initialization of C from now on; focus on the nested loops.

Notice that the i, j, and k loops are completely symmetric.  So we can
permute them into any of six orders:

    for i     for j     for j     for k     for k     for i
      for j     for i     for k     for j     for i     for k
        for k     for k     for i     for i     for j     for j

In the original (i, j, k) version of the loop nest, a decent compiler
should keep C[i,j] in a register during the inner loop.  It might not be
able to if A, B, and C were by-reference parameters, so that aliases are
a potential problem.
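
As a concrete (and hypothetical -- not from the text) illustration of the
aliasing concern, suppose the multiply were packaged as a C function:

    /* If a caller invokes matmul(n, X, Y, X), then C aliases A, and a
       store to C[i][j] may change an element of A that the inner loop is
       about to read -- so the compiler can't safely keep C[i][j] in a
       register across the k loop.  Declaring the parameters 'restrict'
       is one way to promise the compiler this never happens. */
    void matmul(int n, double A[n][n], double B[n][n], double C[n][n]);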
We can force it to (since we know we won't create an alias) by
introducing a local variable:

    for i
        for j
            sum = 0
            for k
                sum += A[i,k] * B[k,j]
            C[i,j] = sum

There are similar optimizations possible in the other 5 cases.

The (j, i, k) version looks like the above.  In both cases k is the inner
index, and the inner loop accesses A and B but not C.

The (j, k, i) and (k, j, i) versions look like this:

    for j
        for k
            r = B[k,j]
            for i
                C[i,j] += A[i,k] * r

Here i is the inner index, and the inner loop accesses A and C but not B.

The (k, i, j) and (i, k, j) versions look like this:

    for k
        for i
            r = A[i,k]
            for j
                C[i,j] += r * B[k,j]

Here j is the inner index, and the inner loop accesses B and C but not A.

These three main variants have different memory reference patterns and
different locality for large arrays.  If we assume a 64-byte (8-double)
block size and a matrix size large enough that the L1 can't hold a whole
row, we have (counts are per iteration of the inner loop):

    inner loop      loads   stores   tot-mem   A-miss   B-miss   C-miss   tot-miss
    AB (k inner)      2       0         2       0.125     1        0        1.125
    AC (i inner)      2       1         3       1         0        1        2
    BC (j inner)      2       1         3       0         0.125    0.125    0.25

Note that the table in the book (Fig. 6.47) has the block size for the i7
wrong.

The AC variant (i inner) is clearly bad.  It doesn't access B in the
inner loop, but it walks both A and C in column-major order, for really
bad locality.  The AB variant wins on total memory accesses, but the BC
variant wins on cache misses, and is likely to run faster.  It definitely
does on the Core i7 tested by the authors (Fig. 6.48).

On the i7, the best version (BC -- j inner) suffers no slowdown as the
rows get very large.  That's because the hardware prefetcher detects and
exploits the stride-1 access pattern, and has enough bandwidth to keep
up.  For machines that don't prefetch as well, or for loop nests that the
prefetcher doesn't detect, blocking (tiling) may be needed.

--------------------------------
Blocking (tiling)

    Works for many codes based on multi-dimensional arrays.
    Really messes up the appearance of the code.
    Can be done by some really good high-end compilers.
        Not done by most C compilers.
    Worth doing by hand in truly performance-critical code on some
        machines (but realize it's going to make maintenance a *lot*
        harder).

The idea: treat the arrays not as arrays of elements (ints, doubles), but
as arrays of little arrays.  Iterate over the little arrays, which fit in
the L1, doing (as much as possible) all the work on a given set of little
arrays before moving on to the next set.

In an unfortunate overloading of terminology, the little arrays are
called "blocks" -- not to be confused with cache blocks (the things that
reside in lines).  They are also sometimes called "tiles".

We can apply blocking to any of the 6 variants of matrix multiply above.
It introduces additional levels of loop nest, which go around the
existing loops.  We could choose to block A and/or B, and then nest the
new, outer loops in any order (the access pattern for C is induced by the
other two).  There are a *lot* of resulting possibilities -- too many to
enumerate here.

One option that works well is to use the (i, j, k) version above, and
apply blocking to B (which would otherwise be walked in column-major
order).
We get something like this:

    const bsize = 25    // or whatever -- the right value depends on the size of the L1 cache

    init: for i, j { C[i,j] = 0 }

    for kk = 0 to N-1 by bsize
        for jj = 0 to N-1 by bsize
            for i = 0 to N-1
                for j = jj to jj+bsize-1
                    sum = C[i,j]
                    for k = kk to kk+bsize-1
                        sum += A[i,k] * B[k,j]
                    C[i,j] = sum

The inner (k) loop moves across a 1 X bsize sliver of A and down one
column of a bsize X bsize block of B.  The next loop out (j) iterates
over the columns of that block of B, using the same sliver of A over and
over, and updating the elements of a 1 X bsize sliver of C.  The next
loop out (i) moves down one sliver in A and C, and repeats the above,
using the same bsize X bsize block of B.  The outer two loops iterate
over the blocks of B, repeating the above.

Figure 2 in the web supplement helps make this clearer.  When studying
it, keep the order of the loop nests in mind:

    (1) k varies fastest, followed (moving outward) by j, i, jj, and kk.

    (2) We never return to a block of B, but we do return to A and C.
        Specifically, for every new block of B we scan an entire N X
        bsize vertical stripe of A, and update the corresponding N X
        bsize vertical stripe of C.

    (3) A given element C[x,y] gets updated in n/bsize iterations of the
        outer kk/jj loops -- specifically the ones corresponding to the
        blocks of B containing column y.

So if the L1 is too small to hold a whole row of any matrix, we fault B
into the L1 exactly once, and the blocks of A & C n/bsize times, for a
total of n^2(1 + (2n/bsize)) L1 misses.

The bottom line: as shown in figure 3 of the web supplement, blocking
virtually eliminates the performance degradation associated with
increases in array size on the old Pentium III: we get a nice flat
performance curve.  (For small arrays, however, the flat curve is *above*
the sloping curves for simpler algorithms, due to the extra loop
overhead.  And on some machines [e.g., the i7], the HW is smart enough
that blocking doesn't always help.)
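
For reference, here is a hedged, self-contained C version of the blocked
loop nest above (my transcription, not the web supplement's code; like
the pseudocode, it assumes the matrix dimension is a multiple of bsize):

    #define BSIZE 25    /* tune to the L1 size; the notes use 25 */

    /* Blocked matrix multiply, C = A * B, following the kk/jj/i/j/k nest
       above.  Assumes n is a multiple of BSIZE. */
    void blocked_matmul(int n, double A[n][n], double B[n][n], double C[n][n]) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                C[i][j] = 0.0;

        for (int kk = 0; kk < n; kk += BSIZE)
            for (int jj = 0; jj < n; jj += BSIZE)
                for (int i = 0; i < n; i++)
                    for (int j = jj; j < jj + BSIZE; j++) {
                        double sum = C[i][j];
                        for (int k = kk; k < kk + BSIZE; k++)
                            sum += A[i][k] * B[k][j];
                        C[i][j] = sum;
                    }
    }

With BSIZE = 25, one block of B is 25 x 25 x 8B = 5000B, which fits
comfortably in a 32KB L1 alongside the slivers of A and C.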