CSC 2/458: Parallel and Distributed Systems
Feb. 2ff, 2026

Second assignment due Sunday, Feb. 17.

Coherence, Consistency, and Memory Models
---------------
Reading: Dubois, Annavaram, & Stenström 1e, Section 5.4
         SMS Chapter 2

Basic multicore memory architecture.
    Typical modern multicore machine is NUMA (see picture).

Distinction between coherence and consistency:

-- Coherence insists that values written to a given location become
   visible to everyone in the same order.  Example:

        x == 0
        A: x := 1       B: x := 2

   It is not acceptable for C to read x == 1 and then read x == 2,
   while D reads x == 2 and then reads x == 1.

-- Consistency insists that values written to *different* locations
   become visible to everyone in the same order.  "Butterfly" example:

        x == 0, y == 0
        A: y := 1       B: x := 1
        A: a := x       B: b := y

   Is it possible for a and b both to be zero afterward?  Sequential
   consistency says no.  Some models say yes.

In real machines, coherence and consistency are maintained by a cache
coherence protocol.  We'll start by considering just coherence.

The simplest protocols are for machines with a broadcast-capable
interconnect (e.g., a bus) on which cache controllers can "snoop".
Typical modern machine has 1 or 2 levels of private cache, with the
coherence protocol attached to the 2nd or 3rd level.  Upper, private
levels are usually (but not always) _inclusive_ and write-through, so
anything they need to know about, the level below is guaranteed to
notice.

Protocols are described by state machines, executed independently by
each cache for each line.

Terminology:
    a line is a place in the cache
    a block is data that can reside in a line
    a blocking protocol stalls the HW context until the request
        completes
    a write-back protocol doesn't update memory until a dirty block's
        line is needed for another block (cf: write-through)
    a write-allocate protocol fetches a to-be-written block on a
        miss, then updates the appropriate word(s) inside

NB: bus "transactions" are very different from TM transactions.

Simple protocols:
    VI      works ok for write-through caches
            e.g., Sequent Symmetry Model A
    MSI     allows multiple concurrent readers
            (write-back instead of write-through)
    MESI    avoids upgrade transactions for private data
            [Papamarcos & Patel '84 (Illinois)]
    MOESI   avoids going to memory when adding a sharer.
            A cache that drops out of M or E on a bus read becomes
            the OWNER, responsible for supplying data on a bus read,
            and for write-back on eviction.  IEEE standard.
            [Sweazey & Smith '86]

Need (something equivalent to) a "shared" signal line on the bus for
MESI and MOESI.

On a multicore machine with a unified inclusive L3, MESI is probably
all you need: the L3 can prob. respond as fast as another L2 can.

Significance of write-no-allocate for a 2-state protocol: writes to
an invalid line leave you in the invalid state.

Many more complicated protocols:
    split transactions -- introduce transient states
    update (e.g., Dragon)
    migration (possibly when reading a line that is in M elsewhere)
    non-broadcast (directory-based -- more on this later, maybe)

<< GO OVER MESI STATE MACHINE IN DETAIL >>
(One possible rendering appears as the code sketch below, after the
Dragon notes.)

----------------------------------------
Dragon protocol is really 5-state.  State 'I' is only needed during
cold start, however, and doesn't appear in the picture in Dubois et
al. -- we never downgrade to I because it's an update-only protocol.
As in MOESI, the "5th" state is used to distinguish between
shared-and-I'm-the-owner (responsible for responding to bus reads,
and for write-back on replacement) and
shared-and-somebody-else-is-the-owner (the other owner might be a
cache or memory).
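For concreteness, here's one way to render the per-line MESI state
machine as code -- a sketch in C++; the event names and the
shared-signal parameter are my own, not from the readings.  Each
cache runs one of these machines for every line.

    // Sketch of the per-line MESI state machine (names are
    // illustrative, not from the readings).
    #include <cassert>

    enum class State { M, E, S, I };
    enum class Event {
        PrRd,    // our processor reads this line
        PrWr,    // our processor writes this line
        BusRd,   // snooped: another cache read-misses on this block
        BusRdX   // snooped: another cache wants exclusive access
    };

    // 'shared' is the bus's shared signal, sampled on our own
    // read misses to choose between E and S.
    State mesiNext(State s, Event e, bool shared) {
        switch (s) {
        case State::I:
            if (e == Event::PrRd) return shared ? State::S : State::E;
            if (e == Event::PrWr) return State::M;  // issue BusRdX first
            return State::I;                        // snoops: nothing to do
        case State::S:
            if (e == Event::PrWr)   return State::M;  // issue BusRdX
            if (e == Event::BusRdX) return State::I;
            return State::S;                          // PrRd, BusRd: unchanged
        case State::E:
            if (e == Event::PrWr)   return State::M;  // silent: no bus traffic
            if (e == Event::BusRd)  return State::S;
            if (e == Event::BusRdX) return State::I;
            return State::E;
        case State::M:
            if (e == Event::BusRd)  return State::S;  // supply data; write back
            if (e == Event::BusRdX) return State::I;  // supply data; write back
            return State::M;
        }
        assert(false);
        return State::I;
    }

Note the silent E -> M transition: that's exactly how MESI avoids
upgrade transactions for private data.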
LOTS of minor protocol variants:

- On a bus read of a line you have exclusive, do you drop to shared
  or invalid?  (Is data mostly read, or migratory?)  Some machines
  (inc. the Symmetry Model B) attempt to adapt.

- When upgrading from S to M, you already have the data, so BusRdX is
  wasteful: you don't need anybody to send the data back to you.  We
  can add a BusUpgrade message to handle this case.

- In response to a BusRd or BusRdX, who should supply the data if
  it's in more than one place?  Cache-to-cache transfers may or may
  not be faster than memory-to-cache transfers, depending on
  implementation technology and interconnect topology.  MOESI
  facilitates cache-to-cache transfers.

- In an update protocol, do you update on all bus writes, or do you
  invalidate, but then re-validate when somebody misses?

- Original Alpha protocol: on a bus write, if the data is in L1,
  update, but leave it in L2 only; if the data is in L2 only,
  invalidate.

- DEC Firefly protocol was like 4-state Dragon -- memory is always
  up to date, so there's no notion of cache ownership.

----------------------------------------
Return to consistency -- multiple locations

Why don't processors necessarily generate memory events in order?
    compiler optimizations
    pipelining -- out-of-order completion
    write buffers
    out-of-order issue
In addition, outside the processor we may have
    split-transaction buses
    multiple buses
    interleaved memory
    nontrivial interconnect

Note that although one can trivially ensure sequential consistency at
the level of the bus in a snooping system, this does NOT mean that
most SMPs are, or should be, sequentially consistent.  Compiler
instruction reordering, out-of-order issue, HW write buffers, and
non-broadcast networks can all mess up consistency (though not
coherence).

Two separate issues:

- Ordering.  Recall the bow-tie pattern:

        x == 0, y == 0
        A: y := 1       B: x := 1
        A: i := x       B: j := y

  Can we have i == j == 0?  Machines differ in lots of details.  Two
  classes are commercially common:

  TSO: total store order.  Reads can bypass writes, but writes (to
  any location(s), by any core(s)) are guaranteed to be seen in
  program order by everybody.  Moreover, neither reads nor writes can
  bypass reads.  SPARC and x86 do (variants of) this.

  "Relaxed": absent fences or ordered loads and stores, all four
  kinds of bypassing can occur.  Itanium, Power, and ARM all do this.

- Write atomicity.  Problem illustrated by the independent reads of
  independent writes (IRIW) example:

        x == 0, y == 0
        A: x := 1     B: i := x     C: k := y     D: y := 1
                         j := y        l := x

  Can we have i == k == 1 && j == l == 0??  Fences cannot prevent
  this!  On some machines, it's possible for A's write to appear
  before B's two reads and after C's two reads -- and vice versa for
  D's write.  We need the writes to be globally atomic.  On some
  machines (SPARC, x86, Itanium), all writes are globally atomic; on
  others (ARM, Power), only the synchronizing writes are atomic.
  Note that Itanium is write atomic but weakly ordered.
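For concreteness, the IRIW example can be written with C++ atomics (a
sketch; the thread structure is my rendering).  With sequentially
consistent atomics, the bad outcome is forbidden; relabel the
operations acquire/release and C++ permits it, mirroring hardware
that is not write-atomic.

    // IRIW litmus test.  Under memory_order_seq_cst, the outcome
    // i == k == 1 && j == l == 0 cannot occur; under
    // acquire/release labeling it is allowed.
    #include <atomic>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int i, j, k, l;

    int main() {
        std::thread A([] { x.store(1, std::memory_order_seq_cst); });
        std::thread D([] { y.store(1, std::memory_order_seq_cst); });
        std::thread B([] {
            i = x.load(std::memory_order_seq_cst);
            j = y.load(std::memory_order_seq_cst);
        });
        std::thread C([] {
            k = y.load(std::memory_order_seq_cst);
            l = x.load(std::memory_order_seq_cst);
        });
        A.join(); B.join(); C.join(); D.join();
        // forbidden here: i == k == 1 && j == l == 0
        return 0;
    }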
HW v. programmer-level memory models

HW-level models describe the machine's behavior; it's your job to
write programs that work correctly given that behavior.  For example,
Power has 4 kinds of synchronizing instructions, with varying costs:

sync        Creates a memory barrier.  On a given processor, any load
            or store instructions ahead of the sync instruction in
            the program sequence must complete their accesses to
            memory first, and then any load or store instructions
            after the sync can begin.

lwsync      (lightweight sync)  Creates a memory barrier that
            provides the same ordering function as the sync
            instruction, except that a load caused by an instruction
            following the lwsync may be performed before a store
            caused by an instruction that precedes the lwsync, and
            the ordering does not apply to accesses to I/O memory
            (memory-mapped I/O).  If you use lwsync after every load
            or store, you get roughly TSO.

isync       (instruction sync)  Causes the processor to discard any
            future, possibly speculatively executed instructions, so
            they have to be fetched over again.  It is used in
            locking or condition-checking code to ensure that no
            following loads (e.g., in a critical section) can access
            data before the lock is acquired or the condition is
            detected (a risk because of aggressive out-of-order and
            speculative execution in the processor).

eieio       (enforce in-order execution of I/O)  Creates a memory
            barrier that provides the same ordering function as the
            sync instruction, except that the ordering applies only
            to accesses to I/O memory.  Don't use this for
            processor-processor synchronization; it's meant for
            device drivers.

Neither sync nor isync actually subsumes the other, but the stuff
that isync does that sync doesn't matters only in kernel mode.  Isync
is cheaper, but doesn't force all previously issued instructions to
complete.  It just makes sure that subsequent instructions haven't
started.  (For kernel-level code, isync enforces some dependences
that the processor doesn't normally enforce, even with sync.  In
particular, if you change processor status registers [e.g., the page
table pointer], isync makes sure that subsequent instructions see the
change.)

Example: If processor A does a bunch of data stores and then sets a
flag, it needs to put a sync or lwsync (lwsync is cheaper) right
before the flag store, to make sure the data stores have globally
performed.  If processor B spins on the flag and then tries to read
the data, it needs to do a sync or isync (isync is cheaper) right
after testing the flag, to make sure subsequent loads haven't been
performed speculatively.  (See the language-level sketch below.)

So: got all that?  I didn't think so.

In addition to SC, TSO, and "relaxed", it's worth knowing about
several other models, because they've been conceptually influential,
or because they show up in SW systems:

    processor consistency
        Two different (incomparable) definitions in the literature,
        the first by Goodman, the second by the DASH project.  Basic
        idea in both is that operations _by a given processor_ P are
        seen by others in the order P did them.
    weak ordering (cf. DRF0)
        Dubois/Scheurich/Briggs
    release consistency (cf. DRF1)
        DASH
    ARM v8
        Arguably the best of the HW models; Hans Boehm provided
        significant input.  Instead of fences, it has load-acquire
        and store-release instructions.

Caveat: the kind of fence provided by a CAS or LL/SC varies from
processor to processor.  On some machines you can use lighter fences
around monitor enter or exit, depending on how those are implemented,
because the atomic op already does some of it.
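In portable C++ terms (a sketch, my rendering), the flag example
above becomes a release store and an acquire load -- essentially what
ARM v8's store-release and load-acquire give you directly, and what
lwsync-before-the-store and isync-after-the-spin buy you on Power.

    // Message passing via a flag: release/acquire sketch.
    #include <atomic>
    #include <cassert>
    #include <thread>

    int data[3];                    // ordinary stores
    std::atomic<bool> flag{false};  // synchronizing location

    void producer() {               // "processor A"
        data[0] = 1; data[1] = 2; data[2] = 3;
        flag.store(true, std::memory_order_release);  // ~ lwsync; st flag
    }

    void consumer() {               // "processor B"
        while (!flag.load(std::memory_order_acquire))  // ~ ld flag; isync
            ;                       // spin
        assert(data[0] == 1 && data[1] == 2 && data[2] == 3);  // guaranteed
    }

    int main() {
        std::thread a(producer), b(consumer);
        a.join(); b.join();
    }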
----------------------------------------
What humans really want is a language-level memory model that
determines the writes that a read is allowed to see in a parallel
program.

We can think of the memory model as part of the language semantics,
which gives us a mapping from programs to executions of some abstract
machine (specified by the language definition).  A language
implementation (compiler, run-time system, OS, HW) maps that same
program and input to a set of realizable _concrete_ executions
(hardware behaviors).  What we need is a guarantee that every
realizable concrete execution corresponds to (has the same output as)
some allowable abstract execution.  (We do not necessarily require
that there be a realizable concrete execution for every allowable
abstract execution.)

    source program  ------------------------->  (set of) allowable
    and input                                   abstract executions
         |                                               ^
         |  language                          required   |
         |  implementation                correspondence |
         |                                               |
         `----------->  (set of) realizable  ------------'
                        concrete executions

----------------------------------------
A major breakthrough, in Sarita Adve's 1990 PhD thesis work w/ Mark
Hill at Wisconsin, was to craft program-level memory models that are
simultaneously portable and easier to think about.

    *** data-race-free programs display sequentially consistent
        behavior ***

(More precisely, if all of a program's sequentially consistent
executions are data-race-free, then actual behavior will be one of
those sequentially consistent executions.  The nice thing about this
more formal definition is that it avoids the need to reason about
non-sequentially-consistent executions.)
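For example (a sketch, not from the notes), here is the butterfly
example made data-race-free with a lock.  Every SC execution is
race-free, so by the theorem real behavior must be one of the SC
interleavings, and i == j == 0 cannot occur.

    // Butterfly example, made DRF with a mutex.
    #include <mutex>
    #include <thread>

    int x = 0, y = 0;   // ordinary variables, always accessed under m
    int i, j;
    std::mutex m;

    void threadA() {
        { std::lock_guard<std::mutex> g(m); y = 1; }
        { std::lock_guard<std::mutex> g(m); i = x; }
    }
    void threadB() {
        { std::lock_guard<std::mutex> g(m); x = 1; }
        { std::lock_guard<std::mutex> g(m); j = y; }
    }

    int main() {
        std::thread a(threadA), b(threadB);
        a.join(); b.join();
        // possible (i,j): (0,1), (1,0), (1,1) -- never (0,0)
    }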
So what exactly is a data race?  Details depend on the language
memory model.  A general framework is described in SMS2e Secs. 2.2
[rules of our pseudocode] and 3.4.1 [memory models].

- Every memory access is either _ordinary_ or _synchronizing_
  (essentially, labeled).  In most language models, the type system
  distinguishes between these.

- Every individual memory location is _coherent_: all accesses
  (ordinary and synchronizing) to that location appear in some global
  total order.

- A read instruction is allowed to return the value written by the
  most recent write (to the same location) that _happens before_ the
  read.  It may also, in some cases, return the value written by an
  unrelated write.

We build happens-before incrementally:

- PROGRAM ORDER is the union of a collection of disjoint total
  orders, each of which captures the steps performed by one of the
  program's threads.  Each thread's steps must be allowable under the
  language's sequential semantics, given the values returned by read
  operations (values we're trying to nail down).  [NB: C++ actually
  uses a _partial_ order within each thread, called "sequenced
  before."  It avoids specifying an order for function and operator
  argument evaluation.]

  Note that program order does not necessarily correspond to physical
  reality in the implementation: both the compiler and the hardware
  may reorder assembly-level instructions if they are able to prove
  that a sequential program cannot tell the difference.  Moreover,
  since ordinary accesses of different threads are ordered only
  transitively via synchronizes-with order, a thread is generally
  unable to observe the program order of its peers.

- SYNCHRONIZATION ORDER is a total order, across all threads, on all
  synchronizing steps.  This order is not specified by the source
  program, but it has to exist, and it has to be consistent with
  program order in each thread.  It must also explain the values read
  and written by the synchronizing steps (this will ensure, for
  example, that acquire and release operations on any given lock
  occur in alternating order).

- SYNCHRONIZES-WITH ORDER is a subset of synchronization order
  induced by language semantics.  In a language based on
  transactional memory, the subset may be trivial: all transactions
  are globally ordered.  In a language based on locks, each release
  operation may synchronize with the next acquire of the same lock in
  synchronization order, but may be unordered with respect to other
  synchronizing steps.

- HAPPENS-BEFORE ORDER is the transitive closure of program order and
  synchronizes-with order.  It captures all the ordering the language
  guarantees.

An execution is valid only if there exists a synchronization order
that leads, as described above, to a writes-seen relation that
explains the values read by both ordinary and synchronizing steps.
(NB: a non-SC execution _can_ be valid.)  A language implementation
is valid if all of its executions are valid.

A DATA RACE can be defined formally in two provably equivalent ways:
- two conflicting accesses not ordered by happens-before
- two consecutive conflicting accesses in a sequentially consistent
  history

Important theoretical models (from Adve's work):

    DRF0 uses a synchronizes-with that encompasses all of
    synchronization order.

    DRF1 is based on release consistency: synchronizing accesses are
    either acquires or releases; each release synchronizes with the
    next acquire of the same location in synchronization order.

  [ PLpc (Properly Labeled [processor consistent]; from Kourosh
  [ Gharachorloo's thesis work): requires only processor consistency,
  [ not sequential consistency, among synchronizing accesses, and
  [ distinguishes between polling and final accesses in a spin loop.
  [ Advantage over DRF1 is that the early reads in a spin loop are
  [ not acquires, and can migrate above a preceding (in program
  [ order) release.  This enables compiler and HW optimizations not
  [ officially allowed under DRF1.

Java and C++ are basically DRF1-based models; they differ in that

(1) Java tries (not entirely successfully) to give semantics to
    programs with data races -- i.e., to characterize all valid
    executions (SC and non-SC) of such programs.  (It needs these
    semantics to guarantee that racy programs can never harm a system
    in which the JVM is embedded.)  A load is permitted to return
    (a) the value from some most recent store to the same location on
    some backward happens-before path, or (b) the value of a store
    that is (or might be, in some execution) unordered wrt the load
    by happens-before.  C++ says any program with a data race has
    undefined behavior.

(2) Where synchronizes-with in Java is all about acquires and
    releases, C++ allows for richer subsets of synchronization order.
    Specifically, synchronizing accesses are sequentially consistent
    (SC) (ordered wrt all other synchronizing accesses) by default,
    but they can also be labeled as acquire, release, acquire-
    release, fully relaxed, or "consume".  ("Consume" means that the
    induced ordering is determined by (local) data dependences.  Many
    people think consume is too confusing, and should have been left
    out.  "Sequentially consistent" is equivalent to "acquire-
    release" plus write atomicity.)

    *** By using explicitly relaxed atomics, C++ programmers can
        write programs that admit non-SC executions -- with circular
        "causality" -- without officially having data races. ***

Unfortunately, even without consume, the C++ committee has failed to
formalize the "no out-of-thin-air reads" rule.  Fully formal memory
models that enable all desired compiler and hardware optimizations
are still an open problem.  (In the "butterfly" example near the
beginning of these notes, out-of-thin-air reads would allow
a == b == 42 at the end of the program.  C++ says an implementation
isn't allowed to do that, but doesn't say exactly what "that" is.)
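To illustrate the starred claim above (a sketch, not from the notes):
the butterfly example written with memory_order_relaxed has no data
race -- x and y are atomic -- yet still admits the non-SC outcome.

    // Butterfly with explicitly relaxed atomics: race-free but
    // non-SC; (i,j) == (0,0) is a permitted result.
    #include <atomic>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int i, j;

    int main() {
        std::thread a([] {
            y.store(1, std::memory_order_relaxed);
            i = x.load(std::memory_order_relaxed);
        });
        std::thread b([] {
            x.store(1, std::memory_order_relaxed);
            j = y.load(std::memory_order_relaxed);
        });
        a.join(); b.join();
        // (0,0) allowed here; with memory_order_seq_cst it wouldn't be
        return 0;
    }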
NB: Ignoring "consume" ordering, one can restate the above ordering
rules by eliminating synchronizes-with (so all synchronizing accesses
are totally ordered) and creating a subset of program order instead.
That's what the pseudocode in SMS (Sec. 2.2) does.  It calls the
program-order subset "local order" and synchronization order "global
order".  Synchronizing accesses can say that they're ordered, for HB
purposes, wrt only some other accesses in their own thread:
    WR|| == release;  ||RW == acquire
Trevor and I found it easier to reason about algorithms this way.
It's also closer to how hardware-centric models work.

Consider:

        atomic x == 0; ordinary w == 0
        T1: w := 1              T2: x := 1   // release
            x == 0 // acquire       w == 0   // possible?

Note that I've drawn this with an acquire-release pair, not the usual
release-acquire.  Both formulations allow the bad w == 0 outcome, and
label the accesses to w a data race.  In the Java and C++ models,
there is no ordering across the antidependence on x.  In the "local
order" formulation of SMS, happens-before is the union of program
order with the transitive closure of local order and synchronization
order.  The antidependence on x is a happens-before arc, but the
program-order arcs between the internal accesses in T1 and T2 aren't
part of local order, and don't contribute transitively to HB.
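Here's the same example rendered in C++ (my sketch).  The acquire
load and release store do not synchronize across the antidependence
on x, so the accesses to w are a data race and w == 0 is possible.
(As written, the program has undefined behavior in C++ precisely
because of that race; it's shown only to make the arcs concrete.)

    // Acquire-release pair across an antidependence: no HB between
    // the two accesses to w.
    #include <atomic>
    #include <thread>

    std::atomic<int> x{0};   // synchronizing location
    int w = 0;               // ordinary location

    int main() {
        std::thread t1([] {
            w = 1;                                     // ordinary write
            (void) x.load(std::memory_order_acquire);  // may read 0
        });
        std::thread t2([] {
            x.store(1, std::memory_order_release);
            int local = w;                             // may read 0: race!
            (void) local;
        });
        t1.join(); t2.join();
        return 0;
    }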