Notes for CSC 2/458, Wed 30 Apr 2008

Software Distributed Shared Memory

Original Ivy system (Li and Hudak, 1986), Shiva
    MESI protocol using page faults as "hooks" for misses
    (see sketch 1 at the end of these notes)
        invalid : no mapping
        exclusive or shared : read-only
        modified : writable

NUMA memory management (Platinum, ACE)
    where should pages be placed in distributed shared memory,
    to reduce the cost of misses?

Issues
    message overhead, esp. the kernel protocol stack
    false sharing; addressed via
        reduced granularity (cache lines, "objects", words)
        multiple writers
    relaxed coherence/consistency
    lazy coherence/consistency

Spectrum of system architectures
    VM-based
        Munin (Carter, Bennett, Zwaenepoel)
            introduced multiple writers, diffs (see sketch 2)
            release consistency -- don't send diffs until release
        Treadmarks (Zwaenepoel, Cox, Dwarkadas, Amza, Keleher, Rajamony, Yu)
            lazy release consistency -- track happens-before
            don't send diffs until a subsequent miss
        Cashmere (Stets, Dwarkadas, Scott, Kontothanassis, Rencuzogullari,
                  Parthasarathy, Hardavellas, Hunt)
            exploit SMP nodes, fast messages
            two-way diffs
        HLRC (Samanta, Bilas, Iftode, Singh)
        many others
    software-based
        explicit association between locks and regions of memory
        Midway / Entry Consistency (Bershad and Zekauskas)
        Shared Regions (Sandhu, Gamsa, and Zhou)
        CRL (Johnson, Kaashoek, and Wallach)
    binary instrumentation
        Blizzard-S (Schoinas, Falsafi, Lebeck, Reinhardt, Larus, and Wood)
        Shasta (Scales and Gharachorloo)
    hybrid HW/SW systems
        Shrimp / AURC (Li, Blumrich, Iftode, Felten, Dubnicki)
        S-COMA (Saulsbury, Wilkinson, Carter, Landin, and Haridi)
            main memory serves as a tertiary cache
        hardware does cache-line-level access control to avoid false sharing
            Blizzard, Tempest/Typhoon

State-of-the-art systems

Treadmarks
    note: no notion of home node, except for bootstrapping
    acquire
        get write notices from the lock releaser, and apply them
        (see sketch 3)
    release
        nothing, really
    read fault
        gather and apply diffs from the appropriate nodes,
        which generate them lazily
    write fault
        create a twin
    complications
        diff and twin management
        barriers (complicated, but a good time for GC of twins and diffs)
        write notices passed to nodes that don't care, and never will

Cashmere (HLRC is similar)
    note: uses home nodes
    acquire
        apply write notices
    release
        create diff; send it to / apply it at the home node
        send write notices to sharers
    read fault
        get a clean page from the home node
        update the directory
        create and apply an incoming diff if necessary
    write fault
        update the directory
        create a twin
    lots of potential optimizations
        exclusive mode (no twins, no diffs)
        broadcast
        home node migration (message-only version)

Cashmere performance
    on a 32-processor (4x8) AlphaServer system with Memory Channel
    speedups range from a low of 8 (Barnes-Hut) to a high of 34 (TSP);
        median across 11 apps: 19
    generally lower for SPLASH apps (designed for CC-NUMA machines);
        higher for apps with coarser grain (WaterNSq, TVD, Gauss, SOR, TSP)

InterWeave
    Shared memory for geographically distributed apps.
    Parallel speedup is not the goal.  Instead:
        make it easy to write apps that share state
        make them faster than they would be with anything but the most
            carefully tuned, hand-written, application-specific coherence
            protocol
    Innovations
        very relaxed coherence
            reader-writer locks
            "recent enough" predicate (see sketch 4)
                strict, full, temporal, delta, diff, none, others
        optional consistency
        heterogeneity
            IDL
            pointer swizzling
    Other features
        URL-based naming (simple)
        security
        optional fault tolerance (replication)
    Further optimizations
        adaptive notify/poll
        whole-segment transfer
        views
        equivalent types
        run coalescing
        others
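
Sketch 1.  A minimal, illustrative example (not code from Ivy or any of the
systems above) of using VM page faults as the "hooks" for coherence misses.
The SIGSEGV handler maps the faulting address to a page, then either fetches
a copy (miss on an invalid page) or invalidates other copies and upgrades the
protection (write fault on a read-only copy).  fetch_page_from_owner() and
send_invalidations() are hypothetical placeholders for the runtime's
messaging layer.

#include <signal.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES 1024
enum pstate { INVALID, SHARED, MODIFIED };   /* MESI-like states, minus E */
enum pstate state[NPAGES];
char *shared_base;                           /* base of the DSM region */
long  page_size;

/* hypothetical runtime calls (messaging layer not shown) */
extern void fetch_page_from_owner(int page, void *dst);
extern void send_invalidations(int page);

static void dsm_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    char *page_addr = (char *)((uintptr_t)si->si_addr
                               & ~(uintptr_t)(page_size - 1));
    int   page      = (int)((page_addr - shared_base) / page_size);

    if (state[page] == INVALID) {
        /* miss: fetch a copy and map it read-only; a write will fault
           again and take the upgrade path below */
        mprotect(page_addr, page_size, PROT_READ | PROT_WRITE);
        fetch_page_from_owner(page, page_addr);
        mprotect(page_addr, page_size, PROT_READ);
        state[page] = SHARED;
    } else {
        /* write fault on a read-only copy: invalidate other copies,
           then make the page writable */
        send_invalidations(page);
        mprotect(page_addr, page_size, PROT_READ | PROT_WRITE);
        state[page] = MODIFIED;
    }
}

void dsm_init(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    page_size   = sysconf(_SC_PAGESIZE);
    shared_base = mmap(NULL, NPAGES * page_size, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    sa.sa_sigaction = dsm_fault;
    sa.sa_flags     = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);
}

Everything runs at user level; each miss costs a kernel trap plus messages,
which is the "message overhead" issue noted at the top of these notes.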
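
Sketch 2.  An assumption-laden illustration (not Munin or Treadmarks source)
of the twin/diff mechanism that supports multiple concurrent writers of a
page: the first write fault saves a pristine copy of the page (the twin);
later the runtime word-compares the twin against the working copy and ships
only the changed words.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_WORDS (4096 / sizeof(uint32_t))

struct diff_entry { uint32_t offset; uint32_t value; };
struct diff       { size_t nentries; struct diff_entry e[PAGE_WORDS]; };

/* called from the write-fault handler: remember the page's old contents */
uint32_t *make_twin(const uint32_t *page)
{
    uint32_t *twin = malloc(PAGE_WORDS * sizeof(uint32_t));
    memcpy(twin, page, PAGE_WORDS * sizeof(uint32_t));
    return twin;
}

/* called at release (Munin, Cashmere) or on a later miss (Treadmarks):
   encode the words this node changed since the twin was taken */
void make_diff(const uint32_t *page, const uint32_t *twin, struct diff *d)
{
    d->nentries = 0;
    for (uint32_t i = 0; i < PAGE_WORDS; i++)
        if (page[i] != twin[i]) {
            d->e[d->nentries].offset = i;
            d->e[d->nentries].value  = page[i];
            d->nentries++;
        }
}

/* applied at the home node (Cashmere/HLRC) or at the faulting node
   (Treadmarks): merge one writer's changes into a copy of the page */
void apply_diff(uint32_t *page, const struct diff *d)
{
    for (size_t i = 0; i < d->nentries; i++)
        page[d->e[i].offset] = d->e[i].value;
}

Because each diff contains only the words its writer changed, diffs from
writers of disjoint parts of a page can be merged in any order; this is how
false sharing is tolerated at page granularity.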
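
Sketch 3.  A simplified, hypothetical rendering of applying write notices at
lock acquire under lazy release consistency.  Each notice names a page
modified in an interval that happens-before the acquire; the acquirer merely
invalidates its copy, and diffs are not fetched until a later access faults
(Treadmarks) or are already merged at the home node (Cashmere/HLRC).
record_pending_diff() is a hypothetical bookkeeping call.

#include <sys/mman.h>

struct write_notice { int page; int node; int interval; };

/* provided elsewhere (see sketch 1) */
enum pstate { INVALID, SHARED, MODIFIED };
extern enum pstate state[];
extern char *shared_base;
extern long  page_size;

/* hypothetical bookkeeping: which (node, interval) diffs to request
   when this page is next touched */
extern void record_pending_diff(int page, int node, int interval);

/* run by the acquiring node on the notices piggybacked on the
   lock-grant message from the most recent releaser */
void apply_write_notices(const struct write_notice *wn, int n)
{
    for (int i = 0; i < n; i++) {
        char *addr = shared_base + (long)wn[i].page * page_size;
        /* remove access so the next touch faults and fetches diffs lazily */
        mprotect(addr, page_size, PROT_NONE);
        state[wn[i].page] = INVALID;
        record_pending_diff(wn[i].page, wn[i].node, wn[i].interval);
    }
}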
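
Sketch 4.  A hypothetical interface (not InterWeave's actual library API)
for a reader-lock acquire that refreshes the locally cached copy of a
segment only when it is no longer "recent enough" under the chosen coherence
model.  Only a few of the models listed above are shown;
server_current_version() and fetch_updates() are placeholders for the
client-server protocol.

#include <stdbool.h>
#include <time.h>

/* a few of the coherence models listed above; strict, diff, and the
   rest would follow the same pattern */
enum coherence { FULL, TEMPORAL, DELTA, NULL_COH };

struct segment {
    int    cached_version;        /* version of our cached copy   */
    time_t cached_at;             /* when we last fetched it      */
    enum coherence model;
    int    max_versions_behind;   /* parameter for DELTA          */
    int    max_seconds_stale;     /* parameter for TEMPORAL       */
};

/* hypothetical client-server protocol; a real system would avoid the
   round trip in the FULL and DELTA cases via notifications or polling
   ("adaptive notify/poll" above) */
extern int server_current_version(struct segment *s);
extern int fetch_updates(struct segment *s);   /* pull a diff; return new version */

static bool recent_enough(struct segment *s)
{
    switch (s->model) {
    case FULL:     return server_current_version(s) == s->cached_version;
    case DELTA:    return server_current_version(s) - s->cached_version
                          <= s->max_versions_behind;
    case TEMPORAL: return difftime(time(NULL), s->cached_at)
                          <= s->max_seconds_stale;
    case NULL_COH: return true;                 /* never refetch */
    }
    return false;
}

/* reader lock acquire: refresh the cached copy only if the predicate fails */
void reader_acquire(struct segment *s)
{
    if (!recent_enough(s)) {
        s->cached_version = fetch_updates(s);
        s->cached_at      = time(NULL);
    }
    /* ...then take the local read lock on the cached copy (not shown) */
}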