Notes for CSC 2/458, 20 Feb. 2006
==========================================================

Levels/categories of parallelism
    instruction-level
        superscalar
        superpipelined
        LIW/VLIW/EPIC (Multiflow, ia64, Transmeta Crusoe)
    vectors
        Cray, IBM, NEC, Fujitsu
        MMX, ...
    "multithreaded"
        MT (Denelcor HEP, Tera MTA)
        SMT (Alpha EV-8, Pentium-4 with "Hyperthreading", future Sun processors)
    dataflow (research only)
    SIMD (ILLIAC, CM-2)
    MIMD
        multiprocessors
        multicomputers

vector machines
    vector instructions -- parallelism in the ISA
        more explicit than the ILP of superscalar OOO processors
        finer grain than thread-level parallelism
    consider the loop nest
        for i := 0 to n-1
            for j := 0 to n-1
                A[i, j] := 0
    for thread parallelism (as in assignment 2) we'd parallelize the
    outer loop:
        // on processor pid
        for i := (n/p * pid) to (n/p * (pid+1)) - 1
            for j := 0 to n-1
                A[i, j] := 0
    on a vector machine we (or rather, the compiler) would block and
    parallelize the inner loop:
        for i := 0 to n-1
            for j := 0 to n-1 step v
                A[i, j:j+v-1] := 0
    The code transformation that extracts v-element operations from longer
    loops is known as "strip mining".  It's basically a one-dimensional
    form of blocking (tiling).  (See the C sketch at the end of this
    section.)
    Fortran 90 and a few other languages have whole-array operators that
    make some of this particularly easy to express:
        A[0:n-1, 0:n-1] := 0
    Using these operators not only makes the code easier to read, it makes
    it easier for the compiler to find opportunities to vectorize.
    Key concepts:
        vector registers
        pipelining within vector instructions
        chaining across instructions
    A couple of comments on the section from Stallings:
    - The discussion is rather dated (sorry about that).
    - The array slice notation above is actually closer to real F90 syntax
      than the extended FORTRAN of Fig. 18.13.
    - Combination instructions (e.g. multiply-add) are not limited to
      vector machines.  The Power4 has an FMA (fused multiply-add).
    - High-end vector processors have traditionally had SRAM main memory
      and very high memory bandwidth, so they can chain loads to
      arithmetic operations to stores at full computational speed.
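    To make the strip-mining transformation concrete, here is a minimal C
    sketch.  N, V, and the array A are arbitrary illustrative choices, and
    memset stands in for the single vector store a vector machine would
    issue per strip:

        /* strip mining: the j loop advances one V-element strip at a time */
        #include <string.h>

        #define N 1000      /* assumed matrix dimension */
        #define V 64        /* assumed hardware vector length */

        static double A[N][N];

        void zero_matrix(void)
        {
            for (int i = 0; i < N; i++) {
                for (int j = 0; j < N; j += V) {         /* one strip per iteration */
                    int len = (N - j < V) ? N - j : V;   /* final strip may be short */
                    memset(&A[i][j], 0, len * sizeof A[i][j]);
                }
            }
        }

    A vectorizing compiler performs essentially this rewrite automatically
    when it can prove the iterations independent.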
multicomputers and multiprocessors
    tradeoffs
        multicomputers generally have lower unit cost and better scalability
        multiprocessors support more fine-grain operations and a more
            familiar programming model

interconnection networks
    issues
        direct (e.g. hypercube) versus indirect (e.g. butterfly)
            (are switching nodes special?)
        multicomputer (big messages) versus multiprocessor (little messages)
        degree
        diameter
        packaging considerations
            mapping into two- or three-dimensional space
            minimizing maximum wire length
    bus
    ring
    star
    tree
    log-depth indirect (just mention that a lot of these exist; the
        differences between them are not especially important)
        butterfly (FFT)
        banyan
        shuffle-exchange
        ADM
        omega
        fat tree
    mesh ("NEWS" in 2 dimensions)
    torus (2D or 3D)
    hypercube
        2^k nodes, degree = k, diameter = k
        nodes connected to i are those whose numbers differ from i in
            exactly one bit
    cube-connected cycles (CCC)
        degree 3, t * 2^t nodes, diameter ~2t
            (constant degree with logarithmic diameter)
        disadvantage: you don't double size by connecting together two of
            the next smaller variety
        << draw cube with 3 nodes at each corner (t=k=3) >>
    deBruijn
        2^k nodes, degree = 4, diameter = k
            (constant degree with logarithmic diameter)
        disadvantage: as with CCC, a big machine is not two smaller
            machines connected together
        nodes connected to i are those whose numbers are i shifted one bit
            left or right, with a one or a zero shifted in
            (see the C sketch at the end of this section)
        8-node example:
                001 ------------- 011
               / | \             / | \
            000  |  010 === 101  |  111
               \ | /             \ | /
                100 ------------- 110

    routing and contention
        routing algorithms
            static
            dynamic but deterministic
                *many* variants
            randomized
        store-and-forward
            each packet is received in full at a given intermediate node
            before a routing decision is made for the next hop
            significant space and time overhead
        virtual cut-through
            reserve room for whole packets, but start routing flits ASAP,
            generally before the whole packet arrives.  If the output
            channel is not available, the rest of the packet slides into
            the buffer.
        "wormhole routing" for direct networks
            route flits ASAP, as in VCT, but without space to hold whole
            packets at intermediate nodes.  If the output channel is not
            available, the rest of the packet blocks the whole path back
            to the source.  Deadlock avoidance is tricky.
        hot spots in log-depth indirect networks
            tree saturation in store-and-forward networks (RP3)
            arbitrary delays in back-out-and-retry networks (Bfly)
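    The neighbor functions promised above, as a small C sketch.  Node
    numbering follows the hypercube and deBruijn descriptions; the
    function names and the printout in main() are just illustrative.

        #include <stdio.h>

        /* hypercube: the k neighbors of i differ from i in exactly one bit */
        void hypercube_neighbors(unsigned i, int k, unsigned out[])
        {
            for (int b = 0; b < k; b++)
                out[b] = i ^ (1u << b);
        }

        /* deBruijn: neighbors of i are i shifted left or right by one bit,
           with a zero or a one shifted in (up to 4 distinct neighbors) */
        void debruijn_neighbors(unsigned i, int k, unsigned out[4])
        {
            unsigned mask = (1u << k) - 1;
            out[0] = (i << 1) & mask;            /* shift left, bring in 0 */
            out[1] = ((i << 1) | 1u) & mask;     /* shift left, bring in 1 */
            out[2] = i >> 1;                     /* shift right, bring in 0 */
            out[3] = (i >> 1) | (1u << (k - 1)); /* shift right, bring in 1 */
        }

        int main(void)
        {
            unsigned h[3], d[4];
            hypercube_neighbors(5, 3, h);   /* node 101 in the 8-node example */
            debruijn_neighbors(5, 3, d);
            printf("hypercube neighbors of 101: %o %o %o\n", h[0], h[1], h[2]);
            printf("deBruijn  neighbors of 101: %o %o %o %o\n",
                   d[0], d[1], d[2], d[3]);
            return 0;
        }

    For k = 3 each octal digit printed is exactly the 3-bit node name used
    in the drawing above.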
where is the memory?
    local only (multicomputer)
        CalTech Cosmic Cube (path-breaking research machine; its
            interconnection network evolved into the commercial Myrinet)
        Dead commercial machines:
            Transputer-based systems
            N-Cube
            Intel iPSC, Touchstone, Paragon
            CM-5
        IBM SP series
        modern clusters
    shared memory
        all global ("dance hall")
            broadcast
                Sequent, Encore, SGI, Sun, a host of others
            indirect (mostly ancient machines)
                Monarch (private local memory as a "poor man's
                    instruction cache")
                Cedar (research: local private plus global shared)
                Ultracomputer (research: no local memory at all)
                Tera MTA (more recent; only one delivered)
        NUMA (NCC)
            ancient
                IBM RP3, 8CE (research)
                BBN Butterfly, GP1000, TC2000
            Cray T3, X series
            Mercury
        coherently cached (CC-NUMA)
            broadcast-based v. directory-based
            example: simple, early invalidate-based broadcast protocol
                (NOT the best-performing)

---------------------------
Strong anecdotal evidence suggests that programmers prefer a shared-memory
model, preferably with automatic parallelization, but maybe with explicit
parallelism.

*** Shared memory is a programming model, not a machine type or a
*** performance model.  Multiprocessors can support a message-passing
*** programming model.  Multicomputers can support a shared-memory model.
*** Respect for locality is crucial, and unavoidable with modern
*** technology.
---------------------------

We studied snoop-based shared-memory machines earlier.
How about bigger machines?

From a hardware-centric point of view, you can imagine starting with a
very simple multicomputer and adding various sorts of hardware support:
    user-level messages
        CM-5, Memory Channel, Myrinet, Infiniband, Giganet
        active messages
    message processor (Intel Paragon)
    uncached dance-hall memory (Illinois Cedar)
    uncached distributed shared memory (BBN, Cray, Mercury machines)
    combining (of memory requests in the network)
    hardware synchronization mechanisms
        barriers, eurekas
        locks
        full/empty bits
        fetch-and-phi
    hardware aids for software management of cached distributed shared memory
        fine-grain access control in the VM system
        Dir1SW (one-entry cache-coherence directories)
        Alewife -- LimitLESS directories
        remote TLB shoot-down
    full-blown large-scale cache coherence

Where is the knee of the price/performance curve?  What *should* machines
look like?  The jury is still out, but large-scale CC-NUMA is gaining
popularity.

scalable coherence protocols
    Sun
        broadcast requests, for snoop-based coherence; switched replies
    SGI, Compaq, IBM
        directory-based, with (I believe) full coverage for limited-size
        machines
    IEEE SCI (IBM NUMA-Q, HP/Convex Exemplar)
        directory-based, but with sharing *chains*
        dead
    KSR (ring-based broadcast, COMA)
    lots of research alternatives, inc. HW/SW hybrids
        (Petersen, Alewife LimitLESS, CICO/Dir1SW, SW COMA, kthanasi, ...)

NUMA memory management (generalization of DSM)

Other issues
    TLB, page table consistency
    false sharing
        definition(s)
        mitigated some by relaxed consistency, esp. lazy release
            consistency (used in SW DSM)
        (see the padding sketch at the end of these notes)
    S-DSM
    latency tolerance
        prefetching (DASH claims a 2x win for some applications)
        threaded architectures
            HEP (Burton Smith, Denelcor; old)
            J-machine (Bill Dally, MIT)
            Alewife (Anant Agarwal, MIT)
            Tera (Burton Smith, Tera Corp)
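The padding sketch promised under "false sharing" above: a minimal
C/pthreads example.  The 64-byte line size, the iteration count, and all
identifiers are illustrative assumptions.  The point is that the two
un-padded counters share a cache line, so each update invalidates the line
in the other processor's cache even though there is no true sharing;
padding each counter to a full line removes the effect.  (S-DSM sees the
same phenomenon at page granularity.)

    /* compile with -pthread; time the two runs against each other */
    #include <pthread.h>
    #include <stdio.h>

    #define LINE  64              /* assumed cache line size */
    #define ITERS 100000000L

    /* adjacent longs: almost certainly on one cache line -> false sharing */
    static volatile long packed[2];

    /* one counter per line: logically identical, no false sharing */
    struct padded { volatile long count; char pad[LINE - sizeof(long)]; };
    static struct padded spread[2];

    static void *bump_packed(void *arg) {
        long id = (long) arg;
        for (long i = 0; i < ITERS; i++) packed[id]++;
        return NULL;
    }

    static void *bump_spread(void *arg) {
        long id = (long) arg;
        for (long i = 0; i < ITERS; i++) spread[id].count++;
        return NULL;
    }

    static void run(void *(*body)(void *)) {
        pthread_t t[2];
        for (long id = 0; id < 2; id++)
            pthread_create(&t[id], NULL, body, (void *) id);
        for (long id = 0; id < 2; id++)
            pthread_join(t[id], NULL);
    }

    int main(void) {
        run(bump_packed);   /* expect this to be noticeably slower ... */
        run(bump_spread);   /* ... than this on a coherently cached machine */
        printf("%ld %ld\n", packed[0], spread[1].count);
        return 0;
    }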