InterWeave: Object Caching Meets Software Distributed Shared Memory

Michael L. Scott, Sandhya Dwarkadas, Srinivasan Parthasarathy,
Rajeev Balasubramonian, DeQing Chen, Grigorios Magklis,
Athanasios Papathanasiou, Eduardo Pinheiro, Umit Rencuzogullari, Chunqiang Tang

Computer Science Department
University of Rochester

Work-in-Progress Talk, SOSP '99, Charleston, SC, Dec. 1999.

OS designers have long been aware of the duality between shared memory and message passing. They first remarked upon it with respect to kernel structure, but it also shows up in file systems (memory-mapped files v. read-write access) and in parallel programs on shared-memory multiprocessors. Over the course of the past 15 years, various groups have managed to extend the duality to local-area distributed computing as well, allowing medium and coarse-grain parallel applications to use software-distributed shared memory (S-DSM) instead of message passing on homogeneous clusters of machines.

But what about wide-area applications on heterogeneous machines? Does it make sense to talk about shared memory when the pieces of an application may be written in different languages, compiled to different machine architectures, and separated by multi-millisecond latencies?

Our answer to this question is: yes! At Rochester we are working with colleagues in data mining, scientific visualization, and distributed intelligent environments, all of whom want a compute-intensive parallel application on a multiprocessor or cluster to interact with one or more (sometimes many more) "satellite" machines. We could connect the satellites to the compute engine(s) with application-specific message or RPC protocols, but this approach is less than satisfying when the code for the compute engine(s) is already written in a shared-memory style. It's also extremely tedious and error-prone for programs in which we need to cache data at the satellite for performance reasons, and must then worry about consistency.
As an alternative to message passing, Rochester's InterWeave system, currently under development, allows the programmer to map shared segments into program components spread across heterogeneous, distributed machines. Each segment evolves through a series of consistent versions. When beginning a read-only critical section on a given segment, we use a program-specific predicate to determine whether the currently cached version, if any, is "recent enough" to use. To avoid causality loops, we invalidate mutually inconsistent versions of other segments, using a novel hashing mechanism that captures the history of each segment in a bounded amount of space. S-DSM-like twins and diffs allow us to update stale segments economically.
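
The C sketch below illustrates the kind of acquire/release interface this implies. The IW_* names, the URL-style segment name, and the signature of the recency predicate are all assumptions made for illustration; they are not InterWeave's actual API.

    /* A minimal, self-contained sketch of the acquire/release pattern
     * described above.  The IW_* names and the predicate hook are
     * illustrative assumptions, not InterWeave's real interface. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        const char *name;
        int cached_version;   /* version of our local copy           */
        int latest_version;   /* newest version known at the server  */
    } IW_segment;

    /* Application-supplied recency predicate: is the cached copy
     * "recent enough" to use without fetching an update? */
    typedef bool (*IW_recent_enough)(int cached, int latest);

    static void IW_rl_acquire(IW_segment *seg, IW_recent_enough pred) {
        if (!pred(seg->cached_version, seg->latest_version)) {
            /* Stale: a real system would fetch diffs and patch the copy. */
            seg->cached_version = seg->latest_version;
            printf("refreshed %s to version %d\n", seg->name, seg->cached_version);
        }
    }

    static void IW_rl_release(IW_segment *seg) { (void)seg; }

    /* Example predicate: tolerate a copy at most two versions stale. */
    static bool within_two(int cached, int latest) { return latest - cached <= 2; }

    int main(void) {
        IW_segment seg = { "iw://server/sim/state", 3, 7 };
        IW_rl_acquire(&seg, within_two);  /* 7 - 3 > 2, so the copy is refreshed */
        /* ... read-only critical section over the segment's data ... */
        IW_rl_release(&seg);
        return 0;
    }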
InterWeave's type system is based on CORBA IDL, but we do not require that programmers adhere to an object-oriented programming style. We simply ensure that the version of a segment cached by a given program component is appropriate to the component's language and machine architecture. When transmitting data between machines, we convert to and from a standard wire format. We also swizzle pointers, so that references to data currently cached on the local machine are represented as machine addresses. We even allow programs to organize dynamically allocated data within a segment in different ways on different machines, for the sake of spatial locality.
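
The C sketch below illustrates swizzling in isolation: an intra-segment pointer travels on the wire as a byte offset within the segment (with -1 encoding NULL) and is converted back to a machine address on receipt. The offset-based wire representation and the helper names are assumptions for illustration, not InterWeave's actual wire format.

    /* An illustrative round trip for one intra-segment pointer.  The
     * offset-based wire encoding and the helper names are assumptions,
     * not InterWeave's actual wire format. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct Node {
        int32_t      value;
        struct Node *next;   /* a machine address once swizzled locally */
    } Node;

    /* Unswizzle: machine address -> byte offset within the segment. */
    static int32_t to_offset(const void *base, const Node *p) {
        return p ? (int32_t)((const char *)p - (const char *)base) : -1;
    }

    /* Swizzle: byte offset within the segment -> machine address. */
    static Node *to_address(void *base, int32_t off) {
        return off >= 0 ? (Node *)((char *)base + off) : NULL;
    }

    int main(void) {
        Node seg[2] = { { 10, &seg[1] }, { 20, NULL } }; /* a cached segment   */
        int32_t wire = to_offset(seg, seg[0].next);      /* unswizzle to send  */
        printf("on the wire: offset %d\n", wire);
        Node *back = to_address(seg, wire);              /* swizzle on receipt */
        printf("swizzled back: value %d\n", back->value); /* prints 20 */
        return 0;
    }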
InterWeave represents a merger and extension of our previous Cashmere and InterAct projects. It is a "three-level" S-DSM system, smoothly combining hardware coherence within SMP machines, Cashmere-style lazy release consistency within tightly coupled clusters, and version-based consistency for distributed shared segments. A preliminary implementation (missing many of the features just described) is currently running on our AlphaServer cluster.
