2/458: Parallel and Distributed Systems
Jan. 16ff 2008

<< reading for next time: PLP 8.6, 12.1-12.3; Sutter and Larus >>
<< assignment 0: see the web >>

Why parallelism?
    speedup
    conceptual clarity, esp. for servers
    coping with real-world parallelism, distribution

dimensions of the problem space
    hardware
        vector, VLIW/EPIC, pipelined/superscalar, MIMD, SIMD
        UMA, NUMA (remote caching?), NORMA
        SMP, SMT, CMP, SoC
    concurrent (logically parallel), parallel (physically parallel),
        distributed (no shared memory) -- nested classes
    data parallel v. task parallel
    shared memory v. message passing

Administrivia
    prerequisites: CSC 254 and CSC 2/456, or equivalent
    web site: www.cs.rochester.edu/u/scott/458
    discussion group: which would you prefer -- Blackboard or a wiki?
    grading:
        probably no exams
        maybe some quizzes -- haven't decided
        maybe 3 whole-class programming assignments; major individual projects
        presentations
        ** class participation
    more informal than most classes
        syllabus very loose; can adjust to fit interests of class
        I ABSOLUTELY EXPECT **EVERYONE** TO COME TO CLASS PREPARED AND TO
        PARTICIPATE ACTIVELY IN DISCUSSION.  IF YOU CANNOT DO SO, PLEASE
        DO NOT ATTEND.

    core topics:
        Implementation of threads.
        Parallelization strategies: speedup, efficiency, Amdahl's law, etc.
        Parallel programming models and interfaces: language (e.g. Java)
            threads, pthreads, MPI, OpenMP, sockets, remote procedure
            call (RPC).
        Parallel machine architectures: multiprocessors, multicomputers,
            and clusters; multicore and multithreaded chips;
            interconnection networks.
        Coherence and consistency: memory models, cache coherence protocols.
        Synchronization: hardware primitives, clocks, mutual exclusion,
            transactions, lock-free data structures.
        Parallel semantics: memory models, the consensus hierarchy,
            linearizability and serializability, nonblocking progress
            conditions.

    intended emphasis: how will we program manycore chips?
        parallel languages/models:
            dataflow/functional (Sisal, pH), streaming (StreamIt),
            massively threaded (Cilk), data parallel (C*),
            PGAS (CAF, UPC, Titanium), HPCS (X10, Chapel, Fortress),
            p.o. iterators (Charm++, Galois)
        map/reduce
        transactional memory

    possible additional topics:
        Parallel program optimization techniques: synchronization
            granularity, dependences, scheduling, load balancing.
        supercomputers and supercomputing clusters; vector processing;
            GRID computing
        software distributed shared memory
        process migration
        scalable locks; scheduling and synchronization; transactional memory
        Distributed file systems: NFS, xFS, Coda, etc.
        P2P systems, overlay networks
        system-area networks: InfiniBand, GSN, etc.
        the world wide web
        Fault tolerance and reliability: two-phase commit, reliable group
            communication, checkpointing, message logging, Byzantine
            agreement.
        component models: CORBA, .NET, JavaBeans
        << brainstorm additional topics >>

Rest of today's class will be review, to make sure we're all on the same
page.  Be sure to ask questions if you aren't following any of this.

<< learn who everyone is >>
<< survey:
    name, year (G/U), dept
    preferred user account name
    courses taken
        languages/compilers
        operating systems
        architecture
            did you cover pipelining in detail?  cache coherence?
    experience with
        pthreads
        explicitly parallel languages (Java, C#, Ada, HPF, ...)
        Unix socket programming
        event-driven programming
        MPI
        multiprocessors
    topics you'd most like to see covered
>>

------------------------------------
A little history

Early computers were single user, with busy-wait (polling) I/O.
The first motivation for concurrency/parallelism came from coping with
devices: busy waiting for devices wasted *very* expensive cycles.

switching between (batch) users on I/O
    Allowed cycles to be used for somebody else while the current
    application waited.  This is concurrency, but with no interaction
    between concurrent entities.  Multiprogramming.

asynchronous I/O interrupts
    Race conditions in accessing memory locations from normal code and
    interrupt handlers.  Early to mid 1960s.
    *** First interacting concurrent entities

programmable I/O (e.g. IBM channels)
    Nontrivial memory activity from the device.
    *** First interacting _parallel_ entities

interprocess communication in timesharing systems
    Quasi-parallel *user* programs (concurrent [i.e. logically parallel]
    but not physically parallel).  Early 1970s.  Internet servers did the
    same thing later that decade.

networks
    Led to truly parallel distributed programs.  Early 1970s.

multiprocessors
    Led to truly parallel non-distributed programs.  Mid to late 1960s in
    high-end scientific and business machines.  Early 1970s in academia.
    Small-scale multiprocessors (via multi-ported memory) by mid to late
    1970s.  Multicomputers and shared-bus multiprocessors by early 1980s.
    Network-based multiprocessors (BBN) by the mid 1980s.

1990s dominated by ILP -- clobbered scalability.

SMTs and CMPs by the early 00s.  Communication/computation ratios back to
1990 levels.  Can we really use it this time??

------------------------------------
Shared-memory synchronization

busy-wait v. scheduler-based
    the latter requires the former
    spin-then-yield is sort of a hybrid

condition synch v. mutual exclusion
    the latter is harder: universal quantifier, rather than existential

the *consensus problem*: every thread posits a value; all threads must
    eventually agree on one of them.  Mutual exclusion provides a
    solution: whichever thread gets into its critical section first
    writes down _the_ value.  But that's blocking: a thread may have to
    wait an indefinite period of time.  *Wait-free* consensus requires
    that every thread know _the_ value after a bounded number of its own
    steps.  That's harder.

what primitives are atomic?

reads and writes only
    of historical interest only
    can solve mutual exclusion (and thus consensus) for n threads, but
        cannot solve wait-free consensus for even 2 threads
    The first 2-thread solution was published by Dijkstra in 1965 and
        attributed to Dekker.  Peterson published a substantially simpler
        solution in 1981 (sketched in C below).
    A tree of Peterson locks can be used for n-thread mutual exclusion.
        It takes O(lg n) time and O(n) space.
    An arguably more attractive solution was published by Lamport in
        1987.  It takes O(1) time in the absence of contention and O(n)
        time when threads collide.  Also O(n) space in the absence of
        bounds on relative rates of progress:

        start:
            X := pid
            if Y <> free goto start
            Y := pid
            if X <> pid
                /* Make sure no one else is in the critical section.
                   Several methods are possible (none shown here); all
                   known techniques require O(n) time with n threads in
                   the system, and either O(n) space or bounds on
                   relative rates of progress. */
                if Y <> pid goto start
            -- critical section
            Y := free

    Open question (I think): can you combine O(lg n) time in the worst
        case with O(1) time in the no-contention case?
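Since the notes above only name Peterson's algorithm, here is a minimal
sketch of the 2-thread lock in C11, for concreteness.  The type and
function names (pete_lock, pete_acquire, ...) are mine, and the default
sequentially consistent atomics stand in for the SC memory the 1981
algorithm assumes; treat it as an illustration, not a tuned
implementation.

    /* Peterson's 2-thread mutual exclusion, sketched with C11 atomics.
       Hypothetical names; seq_cst ordering throughout. */
    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct {
        atomic_bool interested[2];   /* interested[i]: thread i wants in   */
        atomic_int  victim;          /* the thread that must wait on a tie */
    } pete_lock;

    void pete_init(pete_lock *l) {
        atomic_store(&l->interested[0], false);
        atomic_store(&l->interested[1], false);
        atomic_store(&l->victim, 0);
    }

    void pete_acquire(pete_lock *l, int self) {      /* self is 0 or 1 */
        int other = 1 - self;
        atomic_store(&l->interested[self], true);    /* announce intent    */
        atomic_store(&l->victim, self);              /* defer to the other */
        /* spin while the other thread is interested and I'm the victim */
        while (atomic_load(&l->interested[other]) &&
               atomic_load(&l->victim) == self)
            ;  /* busy-wait */
    }

    void pete_release(pete_lock *l, int self) {
        atomic_store(&l->interested[self], false);
    }

The point for the consensus discussion: this uses only loads and stores
(no read-modify-write instructions), and it does give 2-thread mutual
exclusion -- but it is blocking, so it does not give wait-free consensus.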
atomic ops                                      consensus number
    test-and-set (TAS)                              2
    swap                                            2
    fetch-and-increment (FAI)                       2
    compare-and-swap (CAS)                          oo
    load-linked / store-conditional (LL/SC)         oo

test-and-set lock:

    type lock = Boolean := false

    procedure acquire(L : ^lock)
        repeat until test_and_set(L) = false

    procedure release(L : ^lock)
        L^ := false

test-and-test-and-set lock:

    procedure acquire(L : ^lock)        // "test-and-test-and-set" lock
        while test_and_set(L) = true
            repeat until L^ = false

    more on busy-wait synch later in the term

emulation of arbitrary fetch-and-phi using CAS:

    procedure fetch_and_phi(L : location; P : value -> value)
        repeat
            value old := *L
            value new := P(old)
        until CAS(L, old, new)

or with LL/SC:

    procedure fetch_and_phi(L : location; P : value -> value)
        repeat
            value old := LL(L)
            value new := P(old)
        until SC(L, new)

three key differences between CAS and LL/SC:
    (1) You can have only one LL outstanding.
    (2) SC can fail spuriously.
    (3) CAS can't tell if the value in L has changed and then changed
        back since the load; SC can.

LL/SC has a natural implementation in cache coherence protocols.
CAS is found on the S/370 and its successors, x86, ia64, and SPARC.
LL/SC is found on MIPS, Alpha, and PowerPC.

----------------
The A-B-A problem

If memory is dynamically allocated, I have to worry that a CAS will
succeed even when it shouldn't, because the pointer now refers to a *new*
block that happens to have the same address as the no-longer-existent
block to which it used to point.  This is a serious problem for certain
algorithms.

Suppose in a Treiber stack I read T = tos, read N = T->next, and then try
to pop via CAS(tos, T, N).  But just before my CAS I go to sleep.
Somebody else comes along, does a bunch of pops & pushes, and leaves the
stack pointing to the same node as before, but with a different /next/.
When I wake up my CAS may succeed, even though N is the wrong value to
use -- we may easily corrupt the stack.  (See the C sketch below.)

Counted pointers for CAS.

LL/SC isn't vulnerable to A-B-A in the same way: SC fails if anybody
wrote the word since the LL, even if they wrote the same value.

General solutions (PODC 2002):
    "repeat offenders problem (ROP)" -- Herlihy, Luchangco, Martin, and Moir
    "hazard pointers" -- Michael
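To pin down where the A-B-A window sits, here is a Treiber stack sketch
using C11 atomics.  The struct and function names are mine, and this is
deliberately the *unsafe* textbook version: as written, pop has exactly
the A-B-A vulnerability described above, and is safe only if nodes are
never freed and reused while another thread may still hold a stale
pointer to them (a type-stable pool, counted pointers, hazard pointers,
or LL/SC would close the hole).

    /* Treiber stack sketch in C11 (illustrative, A-B-A-prone as noted). */
    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct node {
        struct node *next;
        int value;
    } node;

    typedef struct {
        _Atomic(node *) tos;   /* top of stack; initialize to NULL */
    } tstack;

    void ts_push(tstack *s, node *n) {
        node *t = atomic_load(&s->tos);
        do {
            n->next = t;       /* link in front of the current top */
        } while (!atomic_compare_exchange_weak(&s->tos, &t, n));
        /* on failure, t is reloaded with the current top and we retry */
    }

    node *ts_pop(tstack *s) {
        node *t = atomic_load(&s->tos);
        while (t != NULL) {
            node *n = t->next;     /* read next through my snapshot of tos */
            /* A-B-A window: if t is freed, recycled, and pushed back here
               with a different next pointer, this CAS can still succeed
               and corrupt the stack. */
            if (atomic_compare_exchange_weak(&s->tos, &t, n))
                return t;
            /* CAS failed: t now holds the new top; loop and try again */
        }
        return NULL;               /* stack was empty */
    }

A counted-pointer fix packs a generation count alongside the pointer and
CASes both at once (a double-width CAS where the hardware has one), so a
recycled address no longer looks unchanged.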