Lecture notes for CSC 252, Thurs. Feb. 27 2014

Announcements
    A4 due tomorrow night.
    Midterm this coming Tuesday.
    Bob Colwell a week from today in the business school.
    A5 will be assigned after break.
    Read (rest of) chapter 5 (you read 5.7 for the previous unit).

--------------------------------
Performance

Latency: time to do a task
    v.
Bandwidth/throughput: number of tasks per unit of time

"Real world" example

                            Boeing 747      Concorde
        speed               610 mph         1350 mph
        time, NY->London    5.7 hrs         2.6 hrs
        max passengers      470             132
        Kpmph               287             178
        (Kpmph = thousands of passenger-miles per hour)

Networking example

                            latency     bandwidth       time to transmit 1TB
                                                        to the west coast
        T1 phone line       20ms        1.5Mb/s         2 mo.
        satellite link      0.5s        0.5Gb/s         4.5 hrs
        Fed Ex              16 hrs      ~ unlimited     16 hrs

    Almost nothing beats the bandwidth of a truck full of hard drives.

For this course we'll concentrate on the time to execute a single
program or program fragment.  The most common metric for this is cycles
per instruction (CPI) or, in user-level loops, cycles per iteration or
per array element.  (Sometimes we also talk about the inverse:
instructions per cycle [IPC].  A superscalar machine typically has a
CPI of < 1 and an IPC of > 1.)

*** Performance is notoriously difficult to measure in a meaningful way,
*** and performance claims are notoriously prone to misrepresentation and
*** abuse.  Caveat emptor!

And while processor performance is a multidimensional issue in and of
itself, overall system performance depends on many things, only one of
which is the processor.  Others include memory, bus, disk, network,
operating system, and the overall job mix.

------------

Given what we learned about processor architecture in the previous
chapter, it should be clear that clock rate alone is not a good
predictor of performance.

Clock speeds for recent laptop processors fall in the 1--3 GHz range.
A 1GHz clock (as in an iPad) gives you a CYCLE TIME of 1ns.  My laptop
has a 1.8GHz clock, so its cycle time is 556ps.

Clock rate can be increased via deeper pipelining, at the expense of a
higher percentage overhead for latches (pipeline registers).

------------

[ Recall that 2^10 ~= 10^3.  Computer scientists play fast and loose with
[ these.  When measuring RAM sizes, 1GB usually means 2^30 bytes.  1GHz
[ usually means 10^9 Hz.  The difference is small enough not to matter for
[ back-of-the-envelope calculations, though it adds up at the high end.
[
[   1K: (2^10-10^3)/10^3     = 2.4%
[   1M: (2^20-10^6)/10^6     = 4.8%
[   1G: (2^30-10^9)/10^9     = 7.3%
[   1T: (2^40-10^12)/10^12   = 9.9%

------------

The key performance equation:

    CPU time = # instructions X average CPI X cycle time

Your program will run faster if it has fewer instructions, or if the
average instruction takes fewer cycles, or if the clock is faster.

Instruction count depends on (at least)
    your program
    the compiler
    the ISA
CPI depends on (at least)
    the program and compiler (really!)
    the ISA
    the microarchitecture
        pipelining
        out-of-order execution
        speculation (branch prediction, value prediction, speculative
            synchronization in threaded code)
Clock rate depends on (at least)
    the microarchitecture
    the VLSI design technology
    how much heat you're willing/able to tolerate; how much energy
        you're willing to burn

A superscalar processor might have a BEST CASE (minimum) CPI of 0.25 --
it can start four new instructions every cycle.  How close it comes to
this depends on a LOT of factors, many of which we touched on in
chap. 4.  Typical achieved CPI is not nearly so good, maybe 0.6 (an IPC
of about 1.7).
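To make the equation concrete, here is a small back-of-the-envelope
sketch in C.  It is just an illustration for these notes: the
instruction count and CPI below are made-up numbers, not measurements
from any real machine; only the 1.8 GHz clock comes from the laptop
example above.

    /* Back-of-the-envelope use of the key performance equation.
       The instruction count and CPI are hypothetical. */
    #include <stdio.h>

    int main(void) {
        double insns      = 2.0e9;   /* dynamic instruction count (assumed) */
        double cpi        = 0.6;     /* average cycles per instruction (assumed) */
        double clock_hz   = 1.8e9;   /* 1.8 GHz clock => cycle time ~556 ps */
        double cycle_time = 1.0 / clock_hz;

        /* CPU time = # instructions * average CPI * cycle time */
        double cpu_time = insns * cpi * cycle_time;
        printf("CPU time: %.3f s\n", cpu_time);   /* 2e9 * 0.6 / 1.8e9 = 0.667 s */
        return 0;
    }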
--------------------------------
How do you get fast code?  In order of importance:

1. Use asymptotically good data structures and algorithms.  Nothing
   gives you a bigger win.  A naive implementation of quicksort beats
   the pants off the world's most super-tweaked bubble sort for all but
   the smallest inputs.

2. Use a good language and compiler.  Gcc is a _lot_ better than it
   used to be, but commercial compilers are still often better.  C and
   C++ are usually faster than equivalent Java or C# code, but the gap
   has narrowed a _lot_ in the last decade, and depending on the choice
   of compiler/VM, Java or C# may actually be _faster_.  Don't make
   assumptions about the relative quality of code without trying it out.

   Note that compiler technology is a moving target.  Pointer
   arithmetic was put into the C language largely because 1970s
   compilers couldn't generate as good code from array-based source.
   As B&O'H note, modern compilers generally do just as well (sometimes
   better!) with the array-based source.  Many good modern compilers
   (including the latest versions of gcc) will do many of the
   optimizations B&O'H suggest doing by hand in chapter 5.  Don't
   assume you have to do it by hand until you verify that your compiler
   can't.

3. Don't do things that prevent the compiler from doing its job.
   There's some good advice in this vein in sections 5.3-5.6 (and in
   these notes, below).

4. Once you have fully developed, documented, and debugged your code,
   decide whether it's fast enough.  If not, profile it to find out
   where it's spending its time.  I used to cover profilers in this
   class, but there isn't really enough time.

5. If truly necessary, hand-tune the hot code segments, PERHAPS at the
   cost of readability, modularity, abstraction, maintainability,
   safety, etc.  Much of the time you'll find that you don't really
   have to sacrifice anything.  There will be one or two places that
   take unreasonably large amounts of time because of goofy problems
   that just tripped the compiler up.  The key to fixing those problems
   is to apply your skill at reading the output of the compiler,
   recognizing major mistakes, and finding small ways to fix them.  And
   you probably don't have to do even this very often.

-----------

Chapter 5 of the text is one of the few places where I'm uncomfortable
with the emphasis of the authors' presentation.  I don't think they
place sufficient emphasis on the caveats in step 5 above.  In
particular, I STRONGLY caution against *premature optimization*.  Many,
many pieces of software have been butchered by programmers who decided
to sacrifice readability, modularity, abstraction, maintainability,
safety, etc. in an attempt to improve performance before they knew that
they needed to -- sometimes before they even knew that their changes
*would* improve performance.  RESIST THIS TEMPTATION.

-----------

Common mistakes that can prevent the compiler from doing its job.

    #include <string.h>     /* for strlen */

    void to_lower(char *s) {
        int i;
        for (i = 0; i < strlen(s); i++) {
            if (s[i] >= 'A' && s[i] <= 'Z') {
                s[i] -= ('A' - 'a');
            }
        }
    }

Here strlen takes time proportional to the length of the string, and it
gets called on every iteration of the loop, leading to quadratic
running time instead of linear.  Any halfway decent compiler tries to
hoist invariant computations from the bodies of loops, but most
compilers aren't smart enough to tell that strlen() has no side
effects, and that its behavior won't change as a result of activity
within the loop.
But you the programmer know this, so write the following instead:

    void to_lower(char *s) {
        int i;
        int len = strlen(s);
        for (i = 0; i < len; i++) {
            if (s[i] >= 'A' && s[i] <= 'Z') {
                s[i] -= ('A' - 'a');
            }
        }
    }

In general, the compiler can't tell when it's safe to eliminate
subroutine calls.

Another example (similar to one in the book):

    void sum(int A[], int len, int *result) {
        int i;
        *result = 0;
        for (i = 0; i < len; i++) {
            *result += A[i];
        }
    }

Here we'd like the compiler to accumulate the sum into a register and
then write it back at the end, eliminating unnecessary loads and
stores, but it can't, because it doesn't know whether result might
point at an element of A.  (An extremely smart compiler might generate
code to test that, and to execute faster code after verifying that no
aliasing has occurred, but that's too much to expect in practice.)

But you the programmer are probably comfortable insisting on a lack of
aliasing.  In fact, if there *were* any aliasing, you'd almost
certainly want the result to get updated only at the end!  So write the
following instead:

    void sum(int A[], int len, int *result) {
        int i;
        int res = 0;
        for (i = 0; i < len; i++) {
            res += A[i];
        }
        *result = res;
    }

Things you can do to help the compiler:
    - hoist non-obvious invariants out of loops
    - avoid aliases (pointers in particular) that inhibit caching in
      registers
    - toward that end, use subscripts instead of pointers when either
      would do
    - in C++, put small method bodies in your header files so they can
      be inlined
    - in C and C++, label parameters and methods /const/ whenever
      appropriate (then the compiler knows they can safely be cached
      in registers)
    - in Java, label methods /final/ whenever appropriate (then the
      compiler knows it can use static method dispatch)
    - in C, use the /restrict/ qualifier to indicate that there is no
      alias for the pointed-at data in the current scope

--------------------------------
Amdahl's Law

(Gene Amdahl was one of the principal designers of the IBM mainframes
of the 1950s and 60s, including the 360 family.)

The MAXIMUM BENEFIT you can get from improving some factor in system
performance is bounded by that factor's original contribution to
overall performance.

Suppose we improve some task so that it runs S times as fast as before
(i.e., it takes 1/S the time it used to); meanwhile the remainder of
the task is unaffected.  If the improved part used to consume a
fraction F (0 <= F <= 1) of the total time T0, then the new time, T1,
is bounded below as follows:

    T1 >= T0 * ((1-F) + F/S)   -->   T0 * (1-F)   as S --> oo

That's an inequality (not an equality) because fractions of the task
may not be additive -- they may not be on the CRITICAL PATH.

(Overly) simplistic example:  Suppose your processor executes one
instruction at a time, and your program contains

                                                contribution
    instruction type    frequency    cycles     to CPI      % of time
    arithmetic          50%          1          0.5         23%
    load                20%          5          1.0         45%
    store               10%          3          0.3         14%
    branch              20%          2          0.4         18%
                                                ----
                                                2.2

If a better data cache could reduce average load time from 5 cycles to
3, overall CPI would improve from 2.2 to 1.8 AT BEST, i.e. by 18%.

If a better branch predictor reduced the average branch from 2 cycles
down to 1.5, overall CPI would improve from 2.2 to 2.1 AT BEST, i.e. by
4.5%.

If we had two ALUs that could work in parallel and we managed to always
keep them busy (an unrealistically optimistic assumption), we'd see an
arithmetic contribution to CPI of 0.25, reducing overall CPI to 1.95 AT
BEST, i.e. by 11%.
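Here is a minimal sketch in C of the Amdahl bound, just to check the
three numbers above.  The helper name amdahl_bound is my own, not
something from the text.

    /* Amdahl's Law sanity check.  amdahl_bound() returns the best-case
       new time for a task of original time t0 when a fraction f of it
       is sped up by a factor of s. */
    #include <stdio.h>

    static double amdahl_bound(double t0, double f, double s) {
        return t0 * ((1.0 - f) + f / s);
    }

    int main(void) {
        double cpi = 2.2;           /* overall CPI from the table above */

        /* loads: 1.0 of the 2.2 cycles (45%), sped up 5/3 by a better cache */
        printf("better cache: %.2f\n", amdahl_bound(cpi, 1.0 / 2.2, 5.0 / 3.0));

        /* branches: 0.4 of the 2.2 cycles (18%), sped up 2/1.5 */
        printf("better bpred: %.2f\n", amdahl_bound(cpi, 0.4 / 2.2, 2.0 / 1.5));

        /* arithmetic: 0.5 of the 2.2 cycles (23%), sped up 2x by a second ALU */
        printf("second ALU:   %.2f\n", amdahl_bound(cpi, 0.5 / 2.2, 2.0));
        return 0;
    }

This prints 1.80, 2.10, and 1.95, matching the best-case CPIs claimed
above.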
========================================
Profiling

Lots of profilers around.  Huge area of research.

Some tools (mostly used in chip design labs) are all hardware.  Most
rely on a combination of (a) sampling and (b) instrumentation.

(a) Sampling is a technique in which code outside your program
    periodically pauses your program to see where it is.  If your
    program runs for 10s and we sample the PC every 1ms, then we get
    10,000 samples.  If 9,000 of those are scattered around inside some
    single loop, we can be reasonably sure that the loop consumes about
    90% of the run time of the program.

(b) Instrumentation is extra code added to your program, usually by the
    compiler but sometimes by a binary re-writing tool, that actively
    collects information about what your program is doing.  The most
    common form of instrumentation tracks the flow of control:
    subroutine calls and maybe branches.  It's relatively cheap to
    track subroutine calls.  Tracking all branches is kind of
    expensive, even with really clever instrumentation.

    Tracking may mean just collecting statistics (how many times is foo
    called?).  More ambitiously, it may mean generating a *log* of the
    order in which calls or branches occur.  Logs can get enormous
    really fast.

Some tools make use of (c) hardware counters:

(c) Most recent processors have special registers inside the processor
    (at least a few, perhaps a few dozen) that can be configured to
    count the number of occurrences of various hardware events
    (branches, stores, cache misses, mispredictions, bubbles, ...).
    Vendor-specific tools configure these and use them to collect
    useful statistics.  Not generally portable, but really useful.

    Profs. Shen & Dwarkadas, and grad student Xiao Zhang, have done
    some interesting recent work on making hardware counters a "first
    class" resource managed by the operating system: exported with a
    standard interface, virtualized across context switches, etc.

Under Linux the standard profiling tool is called gprof.  It's pretty
old technology, but still quite useful.  It employs sampling and modest
instrumentation.  Read the man page.

--------------------------------
Benchmarks

Note that performance varies not only from machine to machine, but from
program to program on a given machine.  The best way to tell which
computer system will run your program fastest is to run your program on
all of them and see!  For most users this isn't practical: you have
lots of programs you care about, and you can't play with machines you
haven't bought.

Ergo benchmarks: "standard" programs that testing agencies can run on a
bunch of machines to give you a sense of how *your* programs will work.
--> This depends critically on the assumption that your programs
    resemble the benchmarks.

Most widely used benchmark: the SPEC (Standard Performance Evaluation
Corporation) CPU suite.  New versions are issued every few years: SPEC
'95, SPEC 2000, SPEC 2006.  There are separate CPU suites for
integer-intensive and FP-intensive programs.  There are also less
widely used suites for web transactions, graphics, Java, file systems,
etc.  See www.spec.org.

--------
Averaging performance

Be careful what you're measuring!  The key question is: how do the
things you're "averaging" naturally combine to create a total?

ARITHMETIC MEAN
    ave = (1/n) * sum_i(Ti)

    Use this when the things you're "averaging" add up to a natural
    total.  Example: Instruction A takes tA cycles; instruction B takes
    tB cycles; what is the average CPI?

HARMONIC MEAN
    ave = n / sum_i(1/Ri)

    Used for rates, in which the *reciprocals* add up to a natural
    total.  Example: I render the same image on my new graphics system
    three times.
    The first time it paints r1 triangles per second; the second time
    it paints r2 triangles per second; the third time it paints r3
    triangles per second.  What is the average rendering rate?  What
    I'm really asking is: what rendering rate, achieved in all three
    runs, would give me the same total execution time?  It's the
    denominators (seconds) that naturally add up to create the total
    time.

    (Remember the old riddle: if I drive halfway home at 15 miles per
    hour, how fast do I have to drive the rest of the way to average 30
    mph overall?  You can't: at 15 mph, the first half of the trip has
    already used up all the time a 30 mph average would allow for the
    whole trip.)

GEOMETRIC MEAN
    ave = (prod_i Fi)^(1/n)

    Used when things naturally *multiply* together to create a "total".
    Example: technological improvements give me a 10% increase in clock
    rate this year, 20% the year after that, and 15% the year after
    that (for a total multiplicative improvement of 1.10 * 1.20 * 1.15
    = 1.518).  What is the "average" yearly improvement?  What I'm
    really asking is: what constant rate, achieved every year, would
    have given me the same result?  The answer is *not* 1.15.  That
    would give just over 1.52.  The right answer is the cube root of
    1.518, which is a little over 1.149.
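To make the three means concrete, here is a small sketch in C (my own
illustration, not from the text).  The growth factors are the ones from
the geometric-mean example above; the rendering rates are hypothetical
numbers standing in for r1, r2, r3.

    /* Arithmetic, harmonic, and geometric means. */
    #include <stdio.h>
    #include <math.h>

    static double arith_mean(const double *x, int n) {
        double sum = 0.0;                   /* the values themselves add up */
        for (int i = 0; i < n; i++) sum += x[i];
        return sum / n;
    }

    static double harm_mean(const double *x, int n) {
        double recip_sum = 0.0;             /* the reciprocals add up */
        for (int i = 0; i < n; i++) recip_sum += 1.0 / x[i];
        return n / recip_sum;
    }

    static double geom_mean(const double *x, int n) {
        double prod = 1.0;                  /* the factors multiply together */
        for (int i = 0; i < n; i++) prod *= x[i];
        return pow(prod, 1.0 / n);
    }

    int main(void) {
        /* hypothetical rendering rates (triangles/sec) for the harmonic example */
        double rates[]  = { 100.0, 200.0, 400.0 };
        /* yearly clock improvements from the geometric example */
        double growth[] = { 1.10, 1.20, 1.15 };

        printf("harmonic mean of rates:  %.1f\n", harm_mean(rates, 3));
        printf("geometric mean growth:   %.4f\n", geom_mean(growth, 3));  /* ~1.1493 */
        printf("arithmetic mean growth:  %.4f\n", arith_mean(growth, 3)); /* 1.15 -- overstates it */
        return 0;
    }

Link with -lm for pow.  Note that the arithmetic mean of the growth
factors (1.15) overstates the true "average" yearly improvement, which
is the geometric mean (about 1.149).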