2/458: Parallel and Distributed Systems
March 3, 2008

reminders:
  - project/presentation proposal due today
  - MPI/OpenMP TSP due Friday
  - no class next week

==========================================================
Describing parallel performance

How do we know if something is parallelizing well?  Should we expect
things to run twice as fast on twice as many processors?

Measures:
  let T(1) be the time to solve our favorite problem on 1 processor,
      using the best available single-processor algorithm
  let T(p) be the time to solve it on our p-processor machine with a
      parallel algorithm

  speedup     S = T(1)/T(p)     (effective processors)
  efficiency  E = S/p

How to lie with statistics: cripple the sequential program.  This is a
*very* common practice: people compare the performance of their parallel
program on p processors to its performance on 1 processor, as if this
told you something you care about, when in reality their parallel
program on one processor runs *much* slower than the best known
sequential program does.
Moral: any time anybody shows you a speedup curve, demand to know what
algorithm they're using in the numerator.

Amdahl's law:
  model execution time as a serial portion T_s plus a parallel(izable)
  portion T_p

  total work  c = T_s + T_p = T(1)
  T(p) = T_s + T_p/p

  S = (T_s + T_p) / (T_s + T_p/p) = c / (T_s + T_p/p)  -->  c/T_s  as p --> oo

==========================================================
This looks like terrible news!

---------------------
Gustafson

But: do we want a given problem to run faster, or do we want to solve
bigger problems?  Folks at Sandia National Labs argued [CACM May 1988]
that bigger machines are used for bigger problems.

  [Often true, though not always; consider weather prediction: for
  years it was the case that mathematical models could provide an
  "acceptably good" prediction of tomorrow's weather based on today's
  measurements, but only with many days of computing.  In that
  situation, we wanted machines that could solve a given problem
  faster.  A few years back (I'm not sure exactly when) we crossed the
  threshold to the point where we could get an acceptably good
  prediction in a couple of hours.  Now we want to solve larger
  problems: reduce the grid size of the fluid dynamics code to get a
  better-quality prediction.]

Note that the problem size is often under direct programmer control in
scientific applications -- it may mean larger experiments (e.g. bigger
galaxies) or greater accuracy (e.g. finer grids).

They propose the concept of "scaled speedup": hold the total *time*
T_s + T_p constant  [note the new meaning of T_p: the parallel time on
each of the p processors, not the total parallelizable work]

  total work = T_s + pT_p    [assuming we scale problem size linearly with p]

                   total work   T_s + pT_p
  scaled speedup = ---------- = ----------
                   total time   T_s + T_p

                   T_s + pT_p   T_s + (c-T_s)p
                 = ---------- = -------------- = p(1 - T_s/c) + T_s/c
                       c              c

  [here c = T_s + T_p, the constant total time]

which rises linearly with p, which is nice.

  [NB: "linear speedup" is generally taken to mean speedup that is not
  only linear, but that has a constant very close to 1.  This is an
  abuse of terminology, but probably too pervasive to reverse.]
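To make the two formulas concrete, here is a small Python sketch (mine,
not from either paper) that plugs an assumed 10%/90% serial/parallel
split into Amdahl's speedup and Gustafson's scaled speedup; the values
t_s and t_p are illustrative assumptions, not measurements.

    # Amdahl vs. Gustafson for an assumed serial/parallel split.
    # t_s and t_p below are made-up illustrative numbers.

    def amdahl_speedup(t_s, t_p, p):
        # fixed problem size: S = (t_s + t_p) / (t_s + t_p/p)
        return (t_s + t_p) / (t_s + t_p / p)

    def gustafson_scaled_speedup(t_s, t_p, p):
        # fixed total time t_s + t_p; total work grows to t_s + p*t_p
        return (t_s + p * t_p) / (t_s + t_p)

    t_s, t_p = 1.0, 9.0        # assume 10% serial, 90% parallelizable
    for p in (1, 2, 4, 8, 16, 64, 1024):
        s = amdahl_speedup(t_s, t_p, p)
        print(f"p={p:5d}  Amdahl S={s:6.2f}  E={s/p:5.2f}  "
              f"scaled S={gustafson_scaled_speedup(t_s, t_p, p):8.2f}")

    print("Amdahl limit as p -> oo:", (t_s + t_p) / t_s)   # = c/T_s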
==========================================================
Karp & Flatt -- experimentally measured serial fraction

It's hard to tell whether a given speedup is "good".  Arguably better
intuition comes from *measuring* the serial fraction f = T_s/T(1).

Take Amdahl's Law:
  T(p) = T_s + T_p/p

rewrite as
                   T(1)(1-f)
  T(p) = T(1)f  +  ---------
                       p

divide both sides by T(1):

   1         1 - f
  --- = f + -------
   S           p

and solve for f:

  p/S = pf + (1-f) = (p-1)f + 1

      p/S - 1    p - S     1/S - 1/p          1 - 1/S
  f = ------- = ------- = -----------  =  1 - -------
       p - 1    S(p-1)      1 - 1/p           1 - 1/p

(The last version is more numerically stable than the second-to-last.)

The key argument for this metric is that (unlike speedup and efficiency)
it should ideally remain constant as p increases.  Thus small changes
are easy to see:
  -- If you have load imbalance, you tend to see jumps in f as p
     increases.
  -- If you have too much synchronization overhead or memory contention,
     you tend to see smoothly increasing f as p increases.
  -- If speedup is disappointing but f remains constant as p increases,
     you probably just don't have enough parallelism.
  -- If you get a sudden drop in f with an increase in p, you have some
     sort of "superlinear speedup" (see below).

The authors also give an alternative definition for the scaled-speedup
case.

==========================================================
Helmbold & McDowell -- superlinear speedup

There has been quite a lot of discussion over the years of speedup
"anomalies", particularly reports of the occurrence of (or impossibility
of!) "superlinear" speedup.  Helmbold and McDowell [TPDS Apr. 1990]
provide a formal model of speedup that captures these seemingly
contradictory results.  They note that "linear subunitary scaled speedup
is possible even in the presence of sequential work... [whereas in the]
nonscaled case... [speedup] is always bounded by a constant in the
presence of sequential work."

They also categorize sources of speedup anomalies:
  1  reduced overhead -- some operations get cheaper because you've got
     fewer processes per processor
  2  increased cache size -- similar to the above: memory latency
     appears to go down because the total aggregate cache size went up
  3  latency hiding -- if you have multiple processes per processor, you
     can do something else while waiting for a slow remote op to
     complete
  4  randomization -- simultaneous speculative pursuit of several
     possible paths to a solution

It should be noted that any time "superlinear" speedup occurs for
reasons 3 or 4, the sequential algorithm could (given free context
switches) be made to run faster by mimicking the parallel algorithm.
This is sometimes practical, e.g. with speculative computation in
combinatorial search.

==========================================================
Crowl -- Presenting Performance

what to measure
    single ("typical") process CPU time?  total CPU time?  wall-clock
    time?
    time variation: report a single run?  the average (and SD)?  the
    min?  the max?
    warm up the cache?  will it be warm in practice?

what to report
    time?  speedup?  scaled speedup?  efficiency?  serial fraction?

how to present
    linear time   -- hard to interpret; Crowl says "don't use"
    linear speed  -- likes this one, but be sure to use a zero origin
    log-log time  \
    log-log speed /  good for relative performance

recommendations:
    Use linear speed when the graph needs to show absolute performance;
    use log-log time otherwise.  The former slopes up (usually), while
    the latter slopes down, making the two easy to distinguish.

    Use heavy dots and light lines to make clear what you measured and
    to encourage the eye to compare dots, not lines.

    Present error bars if the data are uniformly distributed; show all
    points otherwise.  Give an average only when the variance is
    insignificant (and say so in the text).
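As a concrete companion to the Karp-Flatt metric and the reporting
advice above, here is a small Python sketch (mine, not Karp & Flatt's or
Crowl's) that turns a table of wall-clock times into speedup,
efficiency, and the measured serial fraction; the timings in it are
invented purely to illustrate the arithmetic.

    # Experimentally measured serial fraction (Karp & Flatt style).
    # The timing table is made up; substitute real wall-clock measurements.

    times = {1: 100.0, 2: 52.0, 4: 27.5, 8: 15.2, 16: 9.1}   # p -> T(p)

    t1 = times[1]
    for p in sorted(times):
        if p == 1:
            continue
        s = t1 / times[p]                        # speedup S = T(1)/T(p)
        f = (1.0/s - 1.0/p) / (1.0 - 1.0/p)      # serial fraction
        print(f"p={p:3d}  S={s:5.2f}  E={s/p:4.2f}  f={f:5.3f}")

    # Roughly constant f across p suggests limited parallelism; a smooth
    # rise suggests overhead or contention; jumps suggest load imbalance.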