CSC 2/458: Parallel and Distributed Systems
Jan. 30ff 2019

First assignment due Sunday Feb. 3.

Coherence, Consistency, and Memory Models
---------------

Reading: Dubois, Annavaram, & Stenstrom, Section 5.4
         SMS, Chapter 2

Basic multicore memory architecture.
    The typical modern multicore machine is NUMA (see picture).

Distinction between coherence and consistency:

-- Coherence insists that values written to a *given* location become
   visible to everyone in the same order.  Example:

       x == 0
       A: x := 1        B: x := 2

   It is not acceptable for C to read x == 1 and then read x == 2,
   while D reads x == 2 and then reads x == 1.

-- Consistency insists that values written to *different* locations
   become visible to everyone in the same order.  Example:

       x == 0, y == 0
       A: y := 1        B: x := 1
       A: a := x        B: b := y

   Is it possible for a and b both to be zero afterward?  Sequential
   consistency says no.  Some models say yes.

In real machines, coherence and consistency are maintained by a cache
coherence protocol.  We'll start by considering just coherence.

The simplest protocols are for machines with a shared broadcast bus
on which cache controllers can "snoop".  The typical modern machine
has 1 or 2 levels of private cache, with the coherence protocol
attached to the 2nd or 3rd level.  The upper, private levels are
usually (but not always) _inclusive_ and write-through, so anything
they need to know about, the level below is guaranteed to notice.

Protocols are described by state machines, executed independently by
each cache for each line.  (Terminology convention: a line is a place
in the cache; a block is data that can reside in a line.)

    VI      works ok for write-through caches
            (e.g., Sequent Symmetry Model A)
    MSI     allows multiple concurrent readers
            (write-back instead of write-through)
    MESI    avoids upgrade transactions for private data
    MOESI   avoids going to memory when adding a sharer.
            A cache that drops out of M or E on a bus read becomes
            the OWNER, responsible for supplying data on a bus read
            and for write-back on eviction.

NB: bus "transactions" are very different from TM transactions.

Need (something equivalent to) a "shared" signal line on the bus for
MESI and MOESI.

On a multicore machine with a unified inclusive L3, MESI is probably
all you need: the L3 can probably respond as fast as another L2 can.

Significance of write-no-allocate for the 2-state protocol: writes of
an invalid line leave you in the invalid state.

Many more complicated protocols:
    split transactions -- introduce transient states
    update (e.g., Dragon)
    migration (possibly when reading a line that is in M elsewhere)
    non-broadcast (directory-based -- more on this later)

<< GO OVER MESI STATE MACHINE IN DETAIL >>
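As a concrete illustration (a sketch only -- the function and event
names are invented, and transient states, data supply, and write-back
are all elided), here is the per-line state machine a snooping MESI
controller might run, in C++:

    // Per-line MESI states and the events one controller can see.
    enum class State { I, S, E, M };
    enum class Event {
        ProcRead, ProcWrite,    // requests from the local core
        BusRd, BusRdX, BusUpgr  // transactions snooped on the bus
    };

    // Next state for one cache line.  'shared' is the bus's shared
    // signal line, sampled when our own BusRd completes.
    State next_state(State s, Event e, bool shared) {
        switch (s) {
        case State::I:
            if (e == Event::ProcRead)               // issue BusRd
                return shared ? State::S : State::E;
            if (e == Event::ProcWrite)              // issue BusRdX
                return State::M;
            return State::I;                        // snooped traffic: ignore
        case State::S:
            if (e == Event::ProcWrite)              // issue BusUpgr
                return State::M;
            if (e == Event::BusRdX || e == Event::BusUpgr)
                return State::I;
            return State::S;                        // read hit, or BusRd
        case State::E:
            if (e == Event::ProcWrite)              // silent upgrade
                return State::M;
            if (e == Event::BusRd)                  // another reader appeared
                return State::S;
            if (e == Event::BusRdX)
                return State::I;
            return State::E;
        case State::M:
            if (e == Event::BusRd)                  // flush dirty data
                return State::S;
            if (e == Event::BusRdX)                 // flush, then invalidate
                return State::I;
            return State::M;                        // local hits
        }
        return s;
    }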
----------------------------------------

The Dragon protocol is really 5-state.  State I is needed only during
cold start, however, and doesn't appear in the picture in Dubois et
al. -- we never downgrade to I, because it's an update-only protocol.
As in MOESI, the "5th" state is used to distinguish between
shared-and-I'm-the-owner (responsible for responding to bus reads,
and for write-back on replacement) and
shared-and-somebody-else-is-the-owner (the other owner might be a
cache or memory).

LOTS of minor protocol variants:

- On a bus read of a line you hold exclusive, do you drop to shared
  or invalid?  (Is the data mostly read, or migratory?)  Some
  machines (incl. the Symmetry Model B) attempt to adapt.

- When upgrading from S to M, you already have the data, so BusRdX is
  wasteful: you don't need anybody to send the data back to you.  We
  can add a BusUpgrade message to handle this case.

- In response to a BusRd or BusRdX, who should supply the data if
  it's in more than one place?  Cache-to-cache transfers may or may
  not be faster than memory-to-cache transfers, depending on
  implementation technology and interconnect topology.

- In an update protocol, do you update on all bus writes, or do you
  invalidate, but then re-validate when somebody misses?

- Original Alpha protocol: on a bus write, if the data is in the L1,
  update, but leave it in the L2 only; if the data is in the L2 only,
  invalidate.

- The DEC Firefly protocol was like a 4-state Dragon -- memory is
  always up to date, so there's no notion of cache ownership.

----------------------------------------

Return to consistency -- multiple locations

Why don't processors necessarily generate memory events in order?
    compiler optimizations
    pipelining -- out-of-order completion
    write buffers
    out-of-order issue

In addition, outside the processor, we may have
    split-transaction buses
    multiple buses
    interleaved memory
    nontrivial interconnect

Note that although one can trivially ensure sequential consistency at
the level of the bus in a snooping system, this does NOT mean that
most SMPs are, or should be, sequentially consistent.  Compiler
instruction reordering, out-of-order issue, HW write buffers, and
non-broadcast networks can all mess up consistency (though not
coherence).

Two separate issues (both litmus tests are rendered in C++ right
after this list):

- Ordering.  Recall the bow-tie pattern:

      x == 0, y == 0
      A: y := 1        B: x := 1
      A: i := x        B: j := y

  Can we have i == j == 0?

  Machines differ in lots of details.  Two classes are commercially
  common:

  TSO: total store order.  Reads can bypass writes, but writes (to
  any location(s), by any core(s)) are guaranteed to be seen in
  program order by everybody.  Moreover, neither reads nor writes can
  bypass reads.  SPARC and x86 do (variants of) this.

  "Relaxed": absent fences or ordered loads and stores, all four
  kinds of bypassing can occur.  Itanium, Power, and ARM all do this.

- Write atomicity.  The problem is illustrated by the independent
  reads of independent writes (IRIW) example:

      x == 0, y == 0
      A: x := 1    B: i := x    C: k := y    D: y := 1
                      j := y       l := x

  Can we have i == k == 1 && j == l == 0?  Fences cannot prevent
  this!  On some machines, it's possible for A's write to appear
  before B's two reads but after C's two reads -- and vice versa for
  D's write.  We need the writes to be globally atomic.  On some
  machines (SPARC, x86, Itanium), all writes are globally atomic; on
  others (ARM, Power), only the synchronizing writes are atomic.
  Note that Itanium is write atomic but weakly ordered.
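Here are both litmus tests expressed with C++ atomics (a sketch; the
thread and variable names follow the examples above).  With
memory_order_relaxed, both "impossible-under-SC" outcomes are
permitted; replacing every access with the default
memory_order_seq_cst forbids them:

    #include <atomic>
    #include <thread>
    #include <cstdio>

    // Bow-tie / store-buffering test (the "ordering" issue).
    std::atomic<int> x{0}, y{0};
    int i, j;

    void A() { y.store(1, std::memory_order_relaxed);
               i = x.load(std::memory_order_relaxed); }
    void B() { x.store(1, std::memory_order_relaxed);
               j = y.load(std::memory_order_relaxed); }

    // IRIW test (the "write atomicity" issue).  Note that even with
    // release stores and acquire loads, C++ still permits
    // i2 == k2 == 1 && j2 == l2 == 0; only seq_cst on all six
    // accesses rules it out.
    std::atomic<int> x2{0}, y2{0};
    int i2, j2, k2, l2;

    void A2() { x2.store(1, std::memory_order_release); }
    void D2() { y2.store(1, std::memory_order_release); }
    void B2() { i2 = x2.load(std::memory_order_acquire);
                j2 = y2.load(std::memory_order_acquire); }
    void C2() { k2 = y2.load(std::memory_order_acquire);
                l2 = x2.load(std::memory_order_acquire); }

    int main() {
        { std::thread ta(A), tb(B); ta.join(); tb.join(); }
        std::printf("bow-tie: i=%d j=%d (0 0 is possible)\n", i, j);
        { std::thread ta(A2), tb(B2), tc(C2), td(D2);
          ta.join(); tb.join(); tc.join(); td.join(); }
        std::printf("IRIW: i=%d j=%d k=%d l=%d (1 0 1 0 is possible)\n",
                    i2, j2, k2, l2);
    }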
HW vs. programmer-level memory models.  HW-level models describe the
machine's behavior; it's your job to write programs that work
correctly given that behavior.

For example, Power has 4 kinds of synchronizing instructions, with
varying costs:

sync    Creates a memory barrier.  On a given processor, any load or
        store instructions ahead of the sync in the program sequence
        must complete their accesses to memory first, and then any
        load or store instructions after the sync can begin.

lwsync  (lightweight sync) Creates a memory barrier that provides the
        same ordering function as sync, except that a load caused by
        an instruction following the lwsync may be performed before a
        store caused by an instruction that precedes it, and the
        ordering does not apply to accesses to I/O memory
        (memory-mapped I/O).  If you use lwsync after every load and
        store, you get roughly TSO.

isync   (instruction sync) Causes the processor to discard any
        future, possibly speculatively executed, instructions, so
        they have to be fetched over again.  It is used in locking
        code (e.g., __check_lock()) to ensure that no loads following
        entry into a critical section can access data before the lock
        is acquired (because of aggressive out-of-order and
        speculative execution in the processor).

eieio   (enforce in-order execution of I/O) Creates a memory barrier
        that provides the same ordering function as sync, except that
        ordering applies only to accesses to I/O memory.  Don't use
        this for processor-processor synchronization; it's meant for
        device drivers.

Neither sync nor isync actually subsumes the other, but the stuff
that isync does that sync doesn't matters only in kernel mode.  Isync
is cheaper, but doesn't force all previously issued instructions to
complete; it just makes sure that subsequent instructions haven't
started.  (For kernel-level code, isync enforces some dependences
that the processor doesn't normally enforce, even with sync.  In
particular, if you change processor status registers [e.g., the page
table pointer], isync makes sure that subsequent instructions see the
change.)

Example: If processor A does a bunch of data stores and then sets a
flag, it needs to put a sync or lwsync (lwsync is cheaper) right
before the flag store, to make sure the data stores have globally
performed.  If processor B spins on the flag and then tries to read
the data, it needs to do a sync or isync (isync is cheaper) right
after testing the flag, to make sure subsequent loads haven't been
performed speculatively.
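The same idiom can be written portably with C++ release/acquire (a
sketch; on Power, a compiler is free to implement the release store
with an lwsync and the acquire load with a branch-plus-isync
sequence, matching the example above):

    #include <atomic>

    int data = 0;                     // ordinary (non-atomic) payload
    std::atomic<bool> flag{false};

    // Processor A: the release store plays the role of
    // "lwsync; store flag".
    void producer() {
        data = 42;                                    // data stores
        flag.store(true, std::memory_order_release);  // publish
    }

    // Processor B: the acquire load plays the role of
    // "load flag; branch; isync".
    int consumer() {
        while (!flag.load(std::memory_order_acquire))
            ;                         // spin until the flag is set
        return data;                  // guaranteed to see 42
    }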
So: got all that?  I didn't think so.

In addition to SC, TSO, and "relaxed", it's worth knowing about
several other models, because they've been conceptually influential,
or because they show up in SW systems:

    processor consistency
        Two different (incomparable) definitions in the literature,
        the first by Goodman, the second by the DASH project.  The
        basic idea in both is that operations _by a given processor_
        P are seen by others in the order P performed them.
    Dubois/Scheurich/Briggs weak ordering (cf. DRF0)
    release consistency (cf. DRF1)
        DASH, ARM v8

Arguably the best of the HW models is ARM v8, to which Hans Boehm
provided significant input.  Instead of fences, it has load-acquire
and store-release instructions.

Caveat: the kind of fence provided by a CAS or LL/SC varies from
processor to processor.  On some machines you can use lighter fences
around monitor enter or exit, depending on how those are implemented,
because the atomic op already does some of the fencing.

----------------------------------------

Adve's breakthrough was to craft program-level memory models that are
simultaneously portable and easier to think about:

    *** Data-race-free programs display sequentially consistent
        behavior. ***

(More precisely, if all of a program's sequentially consistent
executions are data-race-free, then actual behavior will be one of
those sequentially consistent executions.  The nice thing about this
more formal definition is that it avoids the need to reason about
non-sequentially-consistent executions.)

So what exactly is a data race?  It depends on the language.  The
general framework:

- program order in each thread

- distinguish between "ordinary" and "synchronizing" operations

- synchronizing operations are mutually sequentially consistent

- "sequenced before" is a subset of program order.  It may be _all_
  of program order; it may be a lot less.  E.g., every ordinary
  access is sequenced after every prior acquire in program order and
  before every subsequent release in program order.

- "synchronized before" is a subset of the global total order on
  synchronizing operations.  E.g., every acquire is synchronized
  after every prior synchronizing access in the same thread and after
  the most recent prior release of the same location in the global
  total synchronization order.

- "happens before" is the union of program order with the transitive
  closure of sequenced before and synchronized before.

A data race can be defined formally in two provably equivalent ways:

- two conflicting accesses not ordered by happens-before

- two consecutive conflicting accesses in a sequentially consistent
  history

*** SMS gets this wrong: it doesn't introduce sequenced-with order,
which is needed to capture either Java or C++ precisely.  We want
x <_hb y iff x <_po y, or x <_sw y, or there's a chain of po and/or
sw arcs between x and y that includes at least one sw arc.  You can
do that without explicitly introducing sequenced-with, but it's ugly,
and I didn't get it right.

Specific models:

    DRF0 uses a synchronizes-with that encompasses all of
    synchronization order.

    DRF1 is based on release consistency: synchronizing accesses are
    either acquires or releases; each release synchronizes with the
    next acquire of the same location in synchronization order.

    PLpc (Properly Labeled [processor consistent]; from Kourosh's
    thesis work) requires only processor consistency, not sequential
    consistency, among synchronizing accesses, and distinguishes
    between polling and final accesses in a spin loop.  The advantage
    over DRF1 is that the early reads in a spin loop are not
    acquires, and can migrate above a preceding (in program order)
    release.  This enables compiler and HW optimizations not
    officially allowed under DRF1.

Java and C++ have basically DRF1-based models; they differ in that

(1) Java tries (not entirely successfully) to give semantics to
    programs with data races.  A load is permitted to return (a) the
    value from some most recent store to the same location on some
    backward happens-before path, or (b) a store that is (or might
    be, in some execution) unordered wrt the load by happens-before.
    C++ says any program with a data race has undefined behavior.

(2) C++ allows the program to explicitly relax both sequenced-with
    and synchronized-with.  The language allows sequentially
    consistent, fully relaxed, acquire, release, acquire-release, and
    "consume" orders (enumerated in the sketch below).  "Consume"
    means that sequenced-with is determined by (local) data
    dependences.  Many people think consume is too confusing, and
    should have been left out.  Even with it left out, the C++
    committee has failed to formalize the "no out-of-thin-air reads"
    rule.  Fully formal memory models that enable all desired
    compiler and hardware optimizations are still an open problem.
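For reference, the C++ orders look like this in code (a sketch; the
variable and its uses are invented for illustration):

    #include <atomic>

    std::atomic<int> n{0};

    void orders() {
        n.store(1);                                 // seq_cst (the default)
        n.store(2, std::memory_order_relaxed);      // fully relaxed
        int a = n.load(std::memory_order_acquire);  // acquire
        n.store(a, std::memory_order_release);      // release
        n.fetch_add(1, std::memory_order_acq_rel);  // acquire-release (RMW)
        int b = n.load(std::memory_order_consume);  // consume (in practice,
                                                    // compilers strengthen
                                                    // this to acquire)
        (void)b;
    }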
[ Some comments on Adve and Gharachorloo
[
[ Figures 8 & 9 (TR version -- Tables 1 & 2, IEEEC version) are
[ important but dated.  A nice update can be found in Doug Lea's Java
[ cookbook (http://gee.cs.oswego.edu/dl/jmm/cookbook.html).
[
[ Definition of cache coherence (not universally agreed upon) (p. 9):
[   (1) writes are eventually visible everywhere
[   (2) writes to the same location are serialized the same way
[       everywhere (i.e., if everybody stops writing, then everybody
[       eventually agrees on the final value of every location)
[
[ Definition of sequential consistency (p. 2, p. 5):
[   (1) (appearance of a) total global order for all memory operations
[   (2) every processor's program order consistent with that global
[       order
[
[ Sequential consistency may some day make a comeback.  Work at
[ Wisconsin c. 2003 (Milo Martin's thesis work under Mark Hill, who
[ was also Sarita Adve's advisor) suggests that it may be possible to
[ implement efficiently after all.
[
[ But note that if we want SC, the compiler has to avoid classical
[ optimizations on shared locations.  If the language doesn't
[ distinguish between shared and private, performance may suffer a
[ lot.
[
[ Fig. 11 of A&G (not in the IEEEC version) categorizes refs as
[
[   shared
[     ordinary
[     special
[       nsync
[       sync
[         release
[         acquire
[           spinning      \  dynamic
[           non-spinning  /  distinction
[
[ ---------------
[ The Java (5) Memory Model
[
[ See resources on course web page.
[
[ Can the compiler reorder the following?
[
[                                          2nd op
[
[                         normal load     volatile load   volatile store
[                         normal store    monitor enter   monitor exit
[                         (ordinary op)   (acquire)       (release)
[
[       normal load
[       normal store      yes             yes             no
[       (ordinary op)
[
[  1st  volatile load
[  op   monitor enter     no              no              no
[       (acquire)
[
[       volatile store
[       monitor exit      yes             no              no
[       (release)
[
[ What fences do you need to prevent the HW from reordering?
[
[                                          2nd op
[
[                         normal     normal      volatile load   volatile store
[                         load       store       monitor enter   monitor exit
[
[       normal load                                               LoadStore
[
[       normal store                                              StoreStore
[
[  1st  volatile load
[  op   monitor enter     LoadLoad   LoadStore   LoadLoad        LoadStore
[
[       volatile store
[       monitor exit                             StoreLoad       StoreStore
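As a rough analogy only (this is C++11, not the JMM itself): an
acquire fence covers the LoadLoad and LoadStore entries of the
"volatile load / monitor enter" row, a release fence covers the
LoadStore and StoreStore entries leading into a "volatile store /
monitor exit", and only a seq_cst fence gives you StoreLoad:

    #include <atomic>

    int payload = 0;
    std::atomic<int> flag{0};

    int acquire_side() {
        int f = flag.load(std::memory_order_relaxed);
        // acquire fence: the load above is ordered before all loads
        // and stores below (LoadLoad + LoadStore)
        std::atomic_thread_fence(std::memory_order_acquire);
        return f ? payload : -1;
    }

    void release_side() {
        payload = 42;
        // release fence: loads and stores above are ordered before
        // the store below (LoadStore + StoreStore)
        std::atomic_thread_fence(std::memory_order_release);
        flag.store(1, std::memory_order_relaxed);
    }

    int store_then_load() {
        flag.store(1, std::memory_order_relaxed);
        // only a seq_cst fence orders an earlier store before a
        // later load (StoreLoad)
        std::atomic_thread_fence(std::memory_order_seq_cst);
        return flag.load(std::memory_order_relaxed);
    }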