2/458: Parallel and Distributed Systems
Feb. 13 and 18, 2008

Coherence, Consistency, and Memory Models

<< Be thinking about projects.  See (still somewhat dated) lists of
   ideas on the course web site.  There isn't enough time for everybody
   to do a lecture.  Those who do lectures won't have as high an
   expectation for the rest of their project.  First come first
   served. >>

---------------

Discussion of Culler & Singh sections 5.1 - 5.3

Things to note:

Everything in this chapter assumes an atomic-transaction bus.  Lots of
newer machines have split-transaction buses, which make things quite a
bit more complicated.  In particular, they introduce _transient_ states
in the coherence protocols.  NB: bus "transactions" are very different
from TM transactions.

Distinction between coherence and consistency:

-- Coherence insists that values written to a given location become
   visible to everyone in the same order.

   Example:  x == 0
       A: x := 1        B: x := 2
   It is not acceptable for C to read x == 1 and then read x == 2,
   while D reads x == 2 and then reads x == 1.

   << See definition across the 276-277 page break >>

-- Consistency insists that values written to *different* locations
   become visible to everyone in the same order.

   Example:  x == 0, y == 0
       A: y := 1        B: x := 1
       A: i := x        B: j := y
   Is it possible for both i and j to be zero afterward?  Sequential
   consistency says no.  Some models say yes.

Note that the state machine is PER CACHE, PER LINE.  (Terminology
convention: a line is a place in the cache; a block is data that can
reside in a line.)

The Sequent Symmetry Model A (an early commercial bus-based
multiprocessor) had a write-through cache -- probably a 2-state
protocol like that of fig. 5.5.  The Symmetry Model B had a four-state
protocol similar to Illinois MESI, with an adaptation for migratory
data.

Significance of write-no-allocate for the 2-state protocol: writes to
an invalid line leave you in the invalid state.

Why don't processors necessarily generate memory events in order?
    compiler optimizations
    pipelining -- out-of-order completion
    write buffers
    out-of-order issue
In addition, outside the processor we may have
    split-transaction buses
    multiple buses
    interleaved memory
All of these obey single-threaded data dependences (only).

Advantage of 3-state MSI: write-back instead of write-through.

Advantage of 4-state MESI: transition from single reader to single
writer without a bus transaction; important for single-threaded
performance.  Need (something equivalent to) a "shared" signal line on
the bus for MESI.

The 4-state Dragon protocol is really 5-state.  State 'I' is only
needed during cold start, however, and doesn't appear in the picture --
we never downgrade to I, because it's an update protocol.  The "5th"
state is used to distinguish between shared-and-I'm-the-owner
(responsible for responding to Bus Reads, and for write-back on
replacement) and shared-and-somebody-else-is-the-owner (the other owner
might be a cache or memory).
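As a concrete (if much-simplified) picture of the above, here is a
sketch of the MESI state changes in C.  It is illustrative only --
names are made up, and a real controller also issues bus transactions
and supplies data, which are elided here:

    /* Per-cache, per-line MESI state machine (sketch). */
    typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_state;

    /* State change when ANOTHER cache's transaction is snooped. */
    mesi_state on_snoop(mesi_state s, int bus_rdx) {
        if (bus_rdx)                  /* someone else intends to write */
            return INVALID;           /* (flush data first if MODIFIED) */
        /* plain BusRd: */
        if (s == MODIFIED || s == EXCLUSIVE)
            return SHARED;            /* one variant; could drop to
                                         INVALID -- see the list below */
        return s;
    }

    /* State change on this processor's own access.  shared_line says
       whether another cache asserted the bus's "shared" signal. */
    mesi_state on_access(mesi_state s, int is_write, int shared_line) {
        if (is_write)                 /* BusRdX (or upgrade) first,
                                         unless already M or E */
            return MODIFIED;
        if (s == INVALID)             /* read miss: issue BusRd */
            return shared_line ? SHARED : EXCLUSIVE;
        return s;                     /* read hit: no change */
    }

Note how the "shared" signal line is exactly what lets on_access choose
between SHARED and EXCLUSIVE on a read miss.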
LOTS of minor protocol variants:

- On a bus read of a line you hold exclusive, do you drop to shared or
  invalid?  (Is data mostly read, or migratory?)  Some machines
  (including the Symmetry Model B) attempt to adapt.

- 5-state MOESI protocol: relieves memory of the responsibility of
  picking up every bus write; a cache that drops out of M on a bus read
  becomes OWNER, responsible for supplying data on a bus read, and for
  write-back on eviction.  In general, protocols vary on when memory
  picks up data, and on whether it or some cache responds on a miss.
  SunFire is MOESI within 4-processor clusters.

- When upgrading from S to M, you already have the data, so BusRdX is
  wasteful: you don't need anybody to send the data back to you.  We
  can add a BusUpgrade message to handle this case.

- In response to a BusRd or BusRdX, who should supply the data if it's
  in more than one place?  Cache-to-cache transfers may or may not be
  faster than memory-to-cache transfers, depending on implementation
  technology and interconnect topology.  Regatta has a 7-state L2
  protocol and a 5-state L3 protocol, the complexity of which is
  largely there to cope with who's responsible for answering.

- In an update protocol, do you update on all bus writes, or do you
  invalidate, but then re-validate when somebody misses?

- Original Alpha protocol: on a bus write, if the data is in L1, update
  it but leave it in L2 only; if the data is in L2 only, invalidate.

- DEC Firefly protocol is like 4-state Dragon -- memory is always
  up-to-date, so there's no notion of cache ownership.

Note that although one can trivially ensure sequential consistency at
the level of the bus in a snooping system, this does NOT mean that most
SMPs are, or should be, sequentially consistent.  HW write buffers,
out-of-order issue, and compiler instruction reordering can all mess up
consistency (though not coherence).  Fences are needed to avoid write
buffer and OOO-issue reordering; 'volatile' variable declarations may
be needed to avoid compiler reordering.

===============

>> Extending TSP assignment to Friday
>>     next: other version(s): MPI, OpenMP, RSTM
>>
>> Haven't heard from anybody about class presentations yet.
>> Note that everyone will at least have to do a brief (20 min)
>> presentation on his/her project.

Some comments on Adve and Gharachorloo

Figures 8 & 9 (TR version -- Tables 1 & 2, IEEEC version) are important
but dated.  A nice update can be found in Doug Lea's Java cookbook
(http://gee.cs.oswego.edu/dl/jmm/cookbook.html).

Definition of cache coherence (not universally agreed upon) (p. 9):
(1) writes are eventually visible everywhere
(2) writes to the same location are serialized the same way everywhere
(i.e., if everybody stops writing, then everybody eventually agrees on
the final value of every location)

Definition of sequential consistency (p. 2, p. 5):
(1) (the appearance of) a total global order for all memory operations
(2) every processor's program order is consistent with that global
    order

Sequential consistency may be making a comeback.  Work about 5 years
ago at Wisconsin (Milo Martin's thesis work under Mark Hill, who was
also Sarita Adve's advisor) suggests that it may be possible to
implement efficiently after all.  Note that SGI's machines have always
been sequentially consistent.

Note that if we want SC, the compiler has to avoid classical
optimizations on shared locations.
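As a concrete illustration (a minimal sketch; names are made up), here
is a classical optimization that is perfectly legal for single-threaded
code but breaks SC:

    int flag = 0;      /* shared with another thread; no volatile,
                          no fences */

    void wait_for_flag(void) {
        while (flag == 0)
            ;   /* the compiler sees no local store to flag, so
                   register promotion legally rewrites this as
                       if (flag == 0) for (;;) ;
                   -- an infinite loop, even after another thread
                   writes flag */
    }

Declaring flag 'volatile' (or using a proper atomic type) forbids this
particular rewrite.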
---------------

Note on Power4: it has four fences: the PowerPC 'sync' instruction and
three others.  From an IBM white paper:

sync
    Creates a memory barrier.  On a given processor, any load or store
    instructions ahead of the sync instruction in the program sequence
    must complete their accesses to memory first, and then any load or
    store instructions after sync can begin.

lwsync (lightweight sync)
    Creates a memory barrier that provides the same ordering function
    as the sync instruction, except that a load caused by an
    instruction following the lwsync may be performed before a store
    caused by an instruction that precedes the lwsync, and the ordering
    does not apply to accesses to I/O memory (memory-mapped I/O).  If
    you use lwsync after every load or store, you get roughly the
    behavior of Sun TSO.

isync (instruction sync)
    Causes the processor to discard any future, possibly speculatively
    executed, instructions, so they have to be fetched over again.  It
    is used in locking code (e.g. __check_lock()) to ensure that no
    loads following entry into a critical section can access data
    before the lock is acquired (because of aggressive out-of-order and
    speculative execution in the processor).

eieio (enforce in-order execution of I/O)
    Creates a memory barrier that provides the same ordering function
    as the sync instruction, except that the ordering applies only to
    accesses to I/O memory.  Don't use this for processor-processor
    synchronization; it's meant for device drivers.

Neither sync nor isync actually subsumes the other, but the stuff that
isync does and sync doesn't matters only in kernel mode.  Isync is
cheaper, but doesn't force all previously issued instructions to
complete; it just makes sure that subsequent instructions haven't
started.  (For kernel-level code, isync enforces some dependences that
the processor doesn't normally enforce, even with sync.  In particular,
if you change processor status registers [e.g. the page table pointer],
isync makes sure that subsequent instructions see the change.)

Example: If processor A does a bunch of data stores and then sets a
flag, it needs to put a sync or lwsync (lwsync is cheaper) right before
the flag store, to make sure the data stores have globally performed.
If processor B spins on the flag and then tries to read the data, it
needs to do a sync or isync (isync is cheaper) right after testing the
flag, to make sure subsequent loads haven't been performed
speculatively.  (A code sketch of this pattern appears at the end of
this section.)

---------------

Programmer-centric philosophy: don't give me this complexity!  Tell me
what I have to do to make things act as if they were sequentially
consistent.

Adve's proposal: program using a synchronization library (e.g. barriers
and mutex locks).  If all accesses to shared data in different threads
are separated by uses of library primitives, and those primitives are
implemented correctly, then you're safe.

DRF0: put a pair of synch operations in between every race
DRF1: put a release/acquire pair in between every race
PLpc (Properly Labeled [processor consistent]; from Kourosh's thesis
    work): put a release/spin-acquire pair in between every race

But suppose you don't want to use the library (e.g. because you're
writing a nonblocking data structure), or suppose you're trying to
write the library itself.  Here are the commercially significant
options today:

SC (MIPS)

TSO (Sun), PO (Pentium)
    If your thread writes one variable and then reads another variable,
    and those have to happen in order but data dependences don't force
    it, put a "membar #StoreLoad" in between.  That's it.  Sun defines
    two weaker models (PSO, RMO), the first of which was implemented in
    some past machines, but neither of which was ever supported by
    Solaris.  Intel defines a weaker model (SPO), but it has never been
    implemented either.

Power (IBM)
    If your thread performs any two memory operations whose order is
    not constrained by data dependences, but you want it constrained,
    put a "sync" in between.  For better performance, use "lwsync"
    between anything other than a write and a read; even better, use
    "isync" between a test and a subsequent read.
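Here is the flag-handoff example from above as a code sketch, assuming
GCC-style inline asm on a Power target (function and variable names are
made up for illustration):

    int data;                /* ordinary shared data */
    volatile int flag;       /* 'volatile' only to defeat compiler
                                reordering */

    void producer(void) {
        data = 42;
        __asm__ __volatile__("lwsync" ::: "memory");
                             /* order data store before flag store */
        flag = 1;
    }

    void consumer(void) {
        while (flag == 0) { }                    /* spin on the flag */
        __asm__ __volatile__("isync" ::: "memory");
                             /* branch + isync: later loads can't have
                                been performed speculatively */
        int copy = data;                         /* sees 42 */
        (void) copy;
    }

The load-compare-branch-isync sequence in consumer is the standard
Power acquire idiom the white paper alludes to.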
In addition to the above, it's worth knowing about several other
models, because they've been conceptually influential, or because they
show up in SW systems:

processor consistency
    Two different (incomparable) definitions in the literature, the
    first by Goodman, the second by the DASH project.  The basic idea
    in both is that operations _by a given processor_ P are seen by
    others in the order P did them.

Dubois/Scheurich/Briggs weak ordering (cf. DRF0)

release consistency (cf. DRF1)

See fig. 11 of A&G (not in the IEEEC version); it categorizes refs as

    shared
        ordinary
        special
            nsync
            sync
                release
                acquire
                    spinning      \  dynamic
                    non-spinning  /  distinction

---------------

The Java (5) Memory Model

See resources on the course web page.

Can the compiler reorder the following?

                                           2nd op
                           normal load    volatile load   volatile store
                           normal store   monitor enter   monitor exit
                           (ordinary op)  (acquire)       (release)

       normal load
       normal store            yes            yes             no
       (ordinary op)

  1st  volatile load
  op   monitor enter           no             no              no
       (acquire)

       volatile store
       monitor exit            yes            no              no
       (release)

What fences do you need to prevent the HW from reordering?

                                           2nd op
                           normal    normal     volatile load   volatile store
                           load      store      monitor enter   monitor exit

       normal load                                              LoadStore

       normal store                                             StoreStore

  1st  volatile load
  op   monitor enter      LoadLoad  LoadStore   LoadLoad        LoadStore

       volatile store
       monitor exit                             StoreLoad       StoreStore

Caveat: the kind of fence provided by a CAS or LL/SC varies from
processor to processor.  On some machines you can get away with lighter
fences around monitor enter or exit, depending on how those are
implemented, because the atomic op already does some of the work.
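To make the caveat concrete, here is a minimal sketch -- in C11-atomics
terms, which postdate these notes but are convenient -- of a spinlock
whose built-in orderings play the role of the monitor-enter/exit fences
in the tables above.  Names are illustrative:

    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct { atomic_bool held; } spinlock;

    void spin_lock(spinlock *l) {        /* ~ monitor enter (acquire) */
        bool expected = false;
        while (!atomic_compare_exchange_weak_explicit(
                   &l->held, &expected, true,
                   memory_order_acquire, /* later ops can't move above */
                   memory_order_relaxed))
            expected = false;            /* CAS failed; reset and retry */
    }

    void spin_unlock(spinlock *l) {      /* ~ monitor exit (release) */
        atomic_store_explicit(&l->held, false,
                              memory_order_release);
                                         /* earlier ops can't move below */
    }

Because the orderings ride along on the CAS and the store, no separate
fence instructions are needed on machines whose atomic ops already
provide them -- which is exactly the machine-dependent point of the
caveat.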