Notes for CSC 2/458, 10 Apr 2019

Software Distributed Shared Memory

S-DSM -- hot research topic in the 1990s
    Do coherence in software at the granularity of pages
    Use page faults as the moral equivalent of cache misses
    Stemming from the thesis work of Kai Li at Yale -- the Ivy system --
        and subsequent work at Princeton -- Shiva (for the Intel Hypercube)
    MESI protocol using page faults as "hooks" for misses
        invalid : no mapping
        exclusive or shared : read-only
        modified : writable
        (sketched in code after the release-consistency recap below)
    Cf. NUMA memory management (Platinum, ACE, DUnX):
        where should pages be placed in distributed shared memory,
        to reduce the cost of memory accesses
        (or, on a cache-coherent machine, cache misses)?

issues
    message overhead, esp. the kernel protocol stack
    false sharing -- address via
        reduced granularity (cache lines, "objects", words)
        multiple writers
        relaxed coherence/consistency
        lazy coherence/consistency

spectrum of system architectures
    VM-based
        Munin (Carter, Bennett, Zwaenepoel)
            introduced multiple writers, diffs
            release consistency -- don't send diffs until release
        TreadMarks (Zwaenepoel, Cox, Dwarkadas, Amza, Keleher, Rajamony, Yu)
            lazy release consistency -- track happens-before;
                don't send diffs until a subsequent miss
            more on this below
        Cashmere (Stets, Dwarkadas, Scott, Kontothanassis, Rencuzogullari,
                Parthasarathy, Hardavellas, Hunt)
            exploits SMP nodes, fast messages
            more on this below
        HLRC (Samanta, Bilas, Iftode, Singh)
        many others
    software-based
        explicit association between locks and regions of memory
            Midway/Entry Consistency (Bershad and Zekauskas)
            Shared Regions (Sandhu, Gamsa, and Zhou)
            CRL (Johnson, Kaashoek, and Wallach)
        binary instrumentation
            Blizzard-S (Schoinas, Falsafi, Lebeck, Reinhardt, Larus, and Wood)
            Shasta (Scales and Gharachorloo)
    hybrid HW/SW systems
        Shrimp/AURC (Li, Blumrich, Iftode, Felten, Dubnicki)
        S-COMA (Saulsbury, Wilkinson, Carter, Landin, and Haridi)
            main memory serves as a tertiary cache
        hardware does cache-line-level access control to avoid false sharing
            Blizzard, Tempest/Typhoon

Recall release consistency for HW cache coherence
    (introduced by the Stanford DASH project; adopted by Itanium and ARMv8):
    synchronizes-with is defined in terms of release (outgoing) & acquire
        (incoming) cross-thread arcs
    happens-before is the transitive closure of the union of
        (1) program order (sequenced-before) and (2) synchronizes-with
    typically assume that invalidations propagate "soon"
        but why?  basically because it's convenient in HW  (why not?)
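
To make the page-fault mechanism above concrete, here is a minimal sketch of
VM-based access control using mprotect() and a SIGSEGV handler.  It is
illustrative only, not Ivy's actual code; fetch_page_from_owner(),
request_write_ownership(), and the per-page state table are hypothetical
stand-ins for the real protocol messages and metadata.

    /* Minimal sketch of VM-based access control for S-DSM (illustrative,
     * not Ivy's actual code).  Pages move through the moral equivalents of
     * the MESI states:
     *   INVALID  -> PROT_NONE               (any access faults)
     *   SHARED   -> PROT_READ               (writes fault)
     *   MODIFIED -> PROT_READ | PROT_WRITE  (no faults)                    */
    #include <signal.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    #define PAGE_BYTES 4096
    #define N_PAGES    1024

    extern char shared_base[];                 /* the shared region          */
    enum pstate { INVALID, SHARED, MODIFIED };
    static enum pstate state[N_PAGES];         /* per-page "cache" state     */

    void fetch_page_from_owner(void *page);    /* hypothetical protocol      */
    void request_write_ownership(void *page);  /* actions                    */

    static void dsm_fault_handler(int sig, siginfo_t *si, void *ctx)
    {
        (void) sig; (void) ctx;
        uintptr_t addr = (uintptr_t) si->si_addr;
        void *page = (void *) (addr & ~(uintptr_t)(PAGE_BYTES - 1));
        size_t pno = (addr - (uintptr_t) shared_base) / PAGE_BYTES;

        if (state[pno] == INVALID) {
            /* read or write miss: fetch a copy and allow reads; a write
             * will fault again and take the branch below */
            fetch_page_from_owner(page);
            mprotect(page, PAGE_BYTES, PROT_READ);
            state[pno] = SHARED;
        } else {
            /* write fault on a read-only page: get ownership, allow writes */
            request_write_ownership(page);
            mprotect(page, PAGE_BYTES, PROT_READ | PROT_WRITE);
            state[pno] = MODIFIED;
        }
    }

    void dsm_init(void)
    {
        /* start with no access, so the first touch of every page faults */
        mprotect(shared_base, (size_t) N_PAGES * PAGE_BYTES, PROT_NONE);

        struct sigaction sa = {0};
        sa.sa_sigaction = dsm_fault_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);
    }
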
What coherence protocol to use for S-DSM?

In a distributed setting, arguably the best is (some variant of) LRC.

Key differences from ordinary (HW) RC:
    invalidations are not sent until absolutely necessary
    concurrent writers (and readers concurrent w/ writers) are allowed,
        so long as they use different parts of a block
    assume that the program is DRF, so the _ability_ to access a page
        concurrently implies that any conflicts are false

Rationale:
    communication is expensive
        if you wait, you can batch it
    with page-size blocks you get a LOT of false sharing
        you want to avoid unnecessary invalidations
        you want to allow conflicting accesses to different parts of a page

----------------------------------------
TreadMarks

Assume the program is DRF.

Track happens-before using vector clocks:
    assign a vector timestamp to each inter-message interval of time
        (intervals are separated by sends and receives)
    keep track of which pages we write in each interval
        AND WHICH PAGES HAVE BEEN MODIFIED IN INTERVALS IN OUR LOGICAL PAST
        (a fully detailed list)
        (yes, this is a lot of metadata)

No notion of home nodes for pages (except for bootstrapping),
but there are home nodes for locks.

Releasing a lock is a local operation -- no messages and no significant
changes to local state.  For the moment, though, it's easiest to imagine
that we also downgrade all shared pages to read-only (more on this in a
minute).

(Locks use the usual distributed protocol: there's a home node.  To
acquire the lock, ask the home.  If the lock is free, the home will send
us ownership.  Otherwise it will send our name to the current owner, who
will send us ownership directly upon release.  If we own a lock and
haven't heard from the home since acquiring it, we release by sending
ownership back to the home (there's not much harm in delaying this a
bit).  If a release and a request cross in the mail, the releaser
typically assumes responsibility, to minimize hot spots.)

When acquiring a lock, send our old vector clock in the request.  The
releaser will send back not only ownership but (a) its vector clock
(which we max into ours) and (b) a fully detailed list of _write
notices_ we haven't yet seen.  We invalidate any pages that have been
written in intervals that are now in our logical past but didn't use to
be.  (This step is sketched in code below.)

On a read fault, query the nodes that did the writing that caused us to
invalidate, and obtain and apply (lazily generated) DIFFS that describe
the updates performed in the corresponding intervals.  This allows
multiple concurrent writers to the same page (correctness depends on all
concurrent writes to the same page being to different words).

On a write fault, create a TWIN of the page so we can tell, later, which
words we've modified.  (Twins and diffs are also sketched below.)

Optimization
    We don't actually have to downgrade everything to read-only on every
    release, because it's ok to send bigger write notices than necessary
    (i.e., to tell a thread about recent writes that are not yet in its
    history).  Downgrading would require creating new twins, which is
    wasteful.

    You might worry that we'd end interval i with a release, skip
    downgrading, end up with writes in both i and i+1, and, when
    subsequently asked for the writes from interval i+1, end up telling a
    thread about old writes in i that would overwrite values it cares
    about.  That can't happen in a DRF program: if we're asked about
    (just) i+1, then i must already be in the asker's history, meaning it
    must already have (directly or indirectly) asked us about it, so we'll
    have already created a diff and a new twin.

    Bottom line: acquires and releases define intervals, but diffs can
    have different boundaries.  That's ok, because any "extra" (outside
    the interval) writes will either be not yet in the asker's past -- and
    thus harmless -- or already in the asker's past -- and thus either not
    of interest [due perhaps to false sharing] or redundant [already seen]
    and thus not re-sent.
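
To make the vector-clock bookkeeping concrete, here is a hedged sketch of
what the acquiring side of a TreadMarks-style lock transfer might do: max the
releaser's vector timestamp into our own, and invalidate (with mprotect)
every page named in a write notice from an interval that has just entered
our logical past.  The names and structures (vclock_t, write_notice_t, etc.)
are illustrative, not TreadMarks' actual code.

    /* Sketch of LRC bookkeeping on lock acquire (illustrative).  An
     * interval is identified by (node, local time); a write notice names a
     * page written during some interval. */
    #include <stdint.h>
    #include <sys/mman.h>

    #define NNODES     16
    #define PAGE_BYTES 4096

    typedef struct { uint32_t t[NNODES]; } vclock_t;   /* vector timestamp */

    typedef struct {
        int      node;       /* who did the writing            */
        uint32_t interval;   /* the writer's local clock value */
        void    *page;       /* page that was written          */
    } write_notice_t;

    /* An interval is "new to us" if our vector clock doesn't cover it yet. */
    static int interval_is_new(const vclock_t *ours, const write_notice_t *wn)
    {
        return wn->interval > ours->t[wn->node];
    }

    /* On lock acquire: invalidate every page written in an interval that is
     * about to enter our logical past, then merge the releaser's clock into
     * ours (component-wise max).  A later read fault will fetch and apply
     * the corresponding diffs. */
    void on_lock_acquire(vclock_t *ours, const vclock_t *releasers,
                         const write_notice_t *notices, int n_notices)
    {
        for (int i = 0; i < n_notices; i++)
            if (interval_is_new(ours, &notices[i]))
                mprotect(notices[i].page, PAGE_BYTES, PROT_NONE);

        for (int n = 0; n < NNODES; n++)
            if (releasers->t[n] > ours->t[n])
                ours->t[n] = releasers->t[n];
    }
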
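The twin/diff machinery can likewise be sketched in a few lines: on the first
write fault we copy the page to a twin; when another node later asks for our
modifications, we compare the page to the twin word by word and send only the
words that differ.  The encoding below (a flat array of offset/value pairs)
is just one plausible format, not the one TreadMarks actually uses.

    /* Sketch of twin and diff handling for multi-writer LRC (illustrative
     * encoding, not TreadMarks' wire format). */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    #define PAGE_BYTES 4096
    #define PAGE_WORDS (PAGE_BYTES / sizeof(uint32_t))

    typedef struct { uint32_t offset, value; } diff_entry_t;

    /* On the first write fault for a page: save a pristine copy (the twin)
     * and make the page writable so local stores can proceed. */
    void *make_twin(void *page)
    {
        void *twin = malloc(PAGE_BYTES);
        memcpy(twin, page, PAGE_BYTES);
        mprotect(page, PAGE_BYTES, PROT_READ | PROT_WRITE);
        return twin;
    }

    /* Later, when asked for our modifications: compare the page to its twin
     * and record only the words we changed.  Because the program is DRF,
     * concurrent writers to the same page touch disjoint words, so diffs
     * from different writers can be applied in any order. */
    int make_diff(const void *page, const void *twin, diff_entry_t *out)
    {
        const uint32_t *p = page, *t = twin;
        int n = 0;
        for (uint32_t w = 0; w < PAGE_WORDS; w++)
            if (p[w] != t[w])
                out[n++] = (diff_entry_t){ w, p[w] };
        return n;                     /* number of modified words */
    }

    /* Applying a diff at a faulting reader (on a page that has been made
     * writable) just replays the recorded words. */
    void apply_diff(void *page, const diff_entry_t *d, int n)
    {
        uint32_t *p = page;
        for (int i = 0; i < n; i++)
            p[d[i].offset] = d[i].value;
    }
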
Complications
    diff and twin management
    barriers (complicated, but a good time to GC twins and diffs)
    write notices passed to nodes that don't care, and never will

----------------------------------------
Cashmere  (HLRC is similar)

"Moderately lazy": send write notices on release; on acquire, invalidate
everything written in the past (not just the causal past).

Uses home nodes for pages
    master copy of the page
        local threads use this directly
        updated by remote users on release
        used as the source of updates by remote users on a post-acquire
            access fault
    we could have put directory information (the sharer set) for each page
        at its home node, but MC doesn't support remote reads (just remote
        stores), so we kept copies of the directory at all nodes, updated
        via remote-write broadcast

local data structures
    copy of the sharing directory (for all pages)
    dirty list (pages written locally and not yet written back to home)
    incoming list of write notices
        organized as peer-to-peer, remote-writable buffers

acquire
    invalidate all pages for which write notices have been received
    update the directory
    clear the write-notice set

release
    for each page on the dirty list:
        create a diff; send it to / apply it at the home node
        send write notices to sharers
        remove the page from the dirty list
        downgrade it to read-only; update the directory

read fault
    get an up-to-date copy of the page from the home node
    update the directory
    create and apply an _incoming diff_ if necessary
        (subject of my first patent :-)
        the idea is to avoid writing stale values on top of falsely shared
            portions of the page that have been updated by local peers
        compare the home node's copy to the local twin; identify any words
            that differ and update them in both the local copy and the twin
        (sketched in code at the end of these notes)

write fault
    update the directory
    create a twin
    add the page to the local dirty list

lots of potential optimizations
    exclusive mode (no twins, diffs)
    broadcast (for directories, locks & barriers)
    home node migration (message-only version)

performance on a 32-processor (4x8) AlphaServer system with Memory Channel:
    speedups ranging from a low of 8 (Barnes-Hut) to a high of 34 (TSP)
    median across 11 apps: 19
    generally lower for the SPLASH apps (designed for CC-NUMAs); higher for
        apps with coarser grain (WaterNSq, TVD, Gauss, SOR, TSP)

----------------------------------------
InterWeave

Shared memory for geographically distributed apps.  Parallel speedup is
not the goal.  Instead:
    make it easy to write apps that share state
    make them faster than they would be with all but the most carefully
        tuned, hand-written, application-specific coherence protocol

Innovations
    very relaxed coherence
        reader-writer locks
        "recent enough" predicate
            strict, full, temporal, delta, diff, none, others
        optional consistency
    heterogeneity
        IDL
        pointer swizzling

Other features
    URL-based naming (simple)
    security
    optional fault tolerance (replication)

Further optimizations
    adaptive notify/poll
    whole-segment transfer
    views
    equivalent types
    run coalescing
    others
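
Returning to the Cashmere read-fault path above: the "incoming diff" can be
sketched as a word-by-word, three-way comparison.  Words where the fetched
home copy differs from our twin were updated remotely, so we copy them into
both the local working copy and the twin; words where only the local copy
differs were written by peers on the same SMP node and are left alone.  The
names and layout are illustrative, not Cashmere's actual code.

    /* Sketch of Cashmere's incoming diff on a read fault (illustrative).
     *   local = our node's working copy, possibly already modified by peer
     *           processors on the same SMP node
     *   twin  = pristine copy from the last fetch
     *   home  = copy just fetched from the page's home node               */
    #include <stdint.h>

    #define PAGE_BYTES 4096
    #define PAGE_WORDS (PAGE_BYTES / sizeof(uint32_t))

    void apply_incoming_diff(uint32_t *local, uint32_t *twin,
                             const uint32_t *home)
    {
        for (uint32_t w = 0; w < PAGE_WORDS; w++) {
            if (home[w] != twin[w]) {
                /* This word changed at the home node since our last fetch:
                 * take the remote value, and remember it in the twin so a
                 * later outgoing diff won't re-send it as a local write. */
                local[w] = home[w];
                twin[w]  = home[w];
            }
            /* Otherwise leave local[w] alone: if it differs from the twin,
             * the difference is a not-yet-flushed write by a local peer,
             * and overwriting it with the stale home value would lose it. */
        }
    }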