CSC 2/458: Parallel and Distributed Systems
Feb. 25ff 2019
==========================================================

Describing parallel performance

How do we know if something is parallelizing well?  Should we expect
things to run twice as fast on twice as many processors?

Measures:
    let T(1) be the time to solve our favorite problem on 1 processor,
        using the best available single-processor algorithm
    let T(p) be the time to solve it on our p-processor machine with a
        parallel algorithm
    speedup S = T(1)/T(p)    (effective processors)
    efficiency E = S/p

How to lie with statistics: cripple the sequential program.  This is a
*very* common practice: people compare the performance of their
parallel program on p processors to its performance on 1 processor, as
if this told you something you care about, when in reality their
parallel program on one processor runs *much* slower than the best
known sequential program does.

Moral: anytime anybody shows you a speedup curve, demand to know what
algorithm they're using in the numerator.

Amdahl's law:
    model performance as a serial portion T_s plus a parallel(izable)
        portion T_p
    total work c = T_s + T_p = T(1)
    T(p) = T_s + T_p/p
    S = (T_s + T_p) / (T_s + T_p/p) = c / (T_s + T_p/p)
      --> c / T_s  as p --> oo

This looks like terrible news!

----------------------------------------
Gustafson

But: do we want a given problem to run faster, or do we want to solve
bigger problems?  Folks at Sandia National Labs argued [CACM May 1988]
that bigger machines are used for bigger problems.

[Often true, though not always; consider weather prediction: for years
it was the case that mathematical models could provide an "acceptably
good" prediction of tomorrow's weather based on today's measurements,
but only with many days of computing.  In that situation, we wanted
machines that could solve a given problem faster.  A couple of decades
ago (I'm not sure exactly when) we crossed the threshold to the point
where we could get an acceptably good prediction in a couple of hours.
Now we want to solve larger problems: reduce the grid size of the
fluid dynamics code to get a better-quality prediction.]

Note that the problem size is often under direct programmer control in
scientific applications -- it may mean larger experiments (e.g. bigger
galaxies) or greater accuracy (e.g. finer grids).

They propose the concept of "scaled speedup":
    constant total time = T_s + T_p    [note the new meaning of T_p]
    total work = T_s + pT_p    [assuming we scale problem size linearly with p]

    scaled speedup = total work / total time
                   = (T_s + pT_p) / (T_s + T_p)
                   = (T_s + pT_p) / c
                   = (T_s + (c - T_s)p) / c
                   = p(1 - T_s/c) + T_s/c

which rises linearly with p, which is nice.

[NB: "linear speedup" is generally taken to mean speedup that is not
only linear, but that has a constant very close to 1.  This is an
abuse of terminology, but probably too pervasive to reverse.]

----------------------------------------
Karp & Flatt -- experimentally measured serial fraction

It's hard to tell whether a given speedup is "good".  Arguably better
intuition comes from *measuring* the serial fraction f = T_s/T(1).

Take Amdahl's Law:
    T(p) = T_s + T_p/p
rewrite it in terms of f:
    T(p) = T(1)f + T(1)(1-f)/p
divide both sides by T(1):
    1/S = f + (1-f)/p
and solve for f:
    p/S = pf + (1-f) = (p-1)f + 1

    f = (p/S - 1) / (p - 1)
      = (1/S - 1/p) / (1 - 1/p)
      = ((1 - 1/p) - (1 - 1/S)) / (1 - 1/p)
      = 1 - (1 - 1/S) / (1 - 1/p)

(The last version is more numerically stable than the second-to-last.)
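As a concrete (made-up) illustration, here is a small Python sketch of
the metrics above -- speedup, efficiency, Amdahl's predicted speedup,
Gustafson's scaled speedup, and the Karp-Flatt serial fraction.  The
function names and all timings are invented for the example:

    # Made-up illustration of the metrics above; nothing here is a real
    # measurement.  T1 is the time of the best sequential algorithm;
    # Tp is the measured time of the parallel program on p processors.

    def speedup(T1, Tp):
        return T1 / Tp

    def efficiency(T1, Tp, p):
        return speedup(T1, Tp) / p

    def amdahl_speedup(f, p):
        # predicted speedup when a fraction f of the work is inherently serial
        return 1.0 / (f + (1.0 - f) / p)

    def gustafson_scaled_speedup(f, p):
        # scaled speedup at constant total time, with serial fraction f = T_s/c
        return p * (1.0 - f) + f

    def karp_flatt(S, p):
        # experimentally measured serial fraction (numerically stable form)
        return 1.0 - (1.0 - 1.0 / S) / (1.0 - 1.0 / p)

    T1 = 100.0                                   # hypothetical sequential time
    for p, Tp in [(2, 55.0), (4, 32.5), (8, 21.3)]:   # hypothetical T(p)
        S = speedup(T1, Tp)
        print(p, round(S, 2), round(efficiency(T1, Tp, p), 2),
              round(karp_flatt(S, p), 3))

With these made-up timings, f stays near 0.1 even as efficiency drops,
which is the point of the metric.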
The key argument for this metric is that (unlike speedup and
efficiency) it should ideally remain constant as p increases.  Thus
small changes are easy to see:
-- If you have load imbalance, you tend to see jumps in f as p
   increases.
-- If you have too much synchronization overhead or memory contention,
   you tend to see smoothly increasing f as p increases.
-- If f remains constant as p increases, but performance still sucks,
   you probably just don't have enough parallelism.
-- If you get a sudden drop in f with an increase in p, you have some
   sort of "superlinear speedup" (see below).
[A short sketch at the end of these notes tabulates f across p to
illustrate these cases.]

The authors also give an alternative definition of scaled speedup.

----------------------------------------
Helmbold & McDowell -- superlinear speedup

There was quite a lot of discussion through the 80s of speedup
"anomalies", particularly reports of the occurrence of (or
impossibility of!) "superlinear" speedup.  Helmbold and McDowell [TPDS
Apr. 1990] provide a formal model of speedup that captures these
seemingly contradictory results.  They note that "linear subunitary
scaled speedup is possible even in the presence of sequential work...
[whereas in the] nonscaled case... [speedup] is always bounded by a
constant in the presence of sequential work."

They also categorize sources of speedup anomalies:
1  reduced overhead -- some operations get cheaper because you've got
   fewer processes per processor
2  increased cache size -- similar to the above: memory latency
   appears to go down because the total aggregate cache size went up
3  latency hiding -- if you have multiple processes per processor, you
   can do something else while waiting for a slow remote op to
   complete
4  randomization -- simultaneous speculative pursuit of several
   possible paths to a solution

Note that anytime "superlinear" speedup occurs for reasons 3 or 4, the
sequential algorithm could (given free context switches) be made to
run faster by mimicking the parallel algorithm.  This is sometimes
practical, e.g. wrt speculative computation in combinatorial search.
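Returning to the Karp-Flatt diagnostics above, here is a minimal
sketch (again with invented timings) that tabulates f as p grows.  In
this example f rises smoothly, which the heuristics above would
attribute to synchronization overhead or memory contention:

    # Invented timings chosen so that the measured serial fraction f rises
    # smoothly with p (suggesting growing synchronization/contention cost).
    T1 = 100.0                                       # hypothetical sequential time
    measurements = [(2, 52.5), (4, 29.5), (8, 18.6), (16, 13.75)]   # (p, T(p))

    print(" p      S      f")
    for p, Tp in measurements:
        S = T1 / Tp
        f = 1.0 - (1.0 - 1.0 / S) / (1.0 - 1.0 / p)  # Karp-Flatt serial fraction
        print(f"{p:2d}  {S:5.2f}  {f:5.3f}")

Swapping in timings where f jumps at one particular value of p would
illustrate the load-imbalance case instead.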