
A Lazy Protocol for Hardware-Supported Coherence

 

Our lazy protocol for hardware-coherent multiprocessors resembles the software-based protocol described in an earlier paper [15], but has been modified significantly, both to exploit the ability to overlap coherence management with computation and to deal with the fact that coherence blocks can now be evicted from a processor's cache due to capacity or conflict misses. The basic idea behind the protocol is to allow processors to continue referencing cache blocks that have been written by other processors. Although write notices are sent as soon as a processor writes a shared block, invalidations occur only at acquire operations; this is sufficient to ensure that true sharing dependencies are observed.

The protocol employs a distributed directory to maintain caching information about cache blocks. The directory entry for a block resides at the block's home node---the node whose main memory contains the block's page. The directory entry contains a set of status bits that describe the state of the block. This state can be one of the following.

Uncached -- No processor has a copy of this block. This is the initial state of all cache blocks.
Shared -- One or more processors are caching this block but none has attempted to write it.
Dirty -- A single processor is caching this block and is also writing it.
Weak -- Two or more processors are caching this block and at least one of them is writing it.

In addition to the block's status bits, the directory entry contains a list of pointers to the processors that are sharing the block. Each pointer is augmented with two additional bits: one to indicate whether the processor is also writing the block, and the other to indicate whether the processor has been notified that the block has entered the weak state. To simplify directory operations, two additional counters are maintained in each directory entry: the number of processors sharing the block, and the number of processors writing it. Figure 1 shows the directory state transition diagram for this original version of the protocol. Text in italics indicates additional operations that accompany each transition.
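For concreteness, a directory entry with this information might be laid out as in the following C sketch. The field names, field widths, and the MAX_SHARERS bound are our own assumptions; the paper does not specify an encoding.

    #include <stdint.h>

    #define MAX_SHARERS 64          /* assumed bound on the pointer list */

    typedef enum { UNCACHED, SHARED, DIRTY, WEAK } block_state_t;

    /* One pointer per sharing processor, augmented with the two
       extra bits described in the text. */
    typedef struct {
        uint16_t proc_id;           /* processor caching the block        */
        uint8_t  is_writer;         /* set if this processor writes it    */
        uint8_t  notified_weak;     /* set once told the block went weak  */
    } sharer_ptr_t;

    /* Directory entry kept at the block's home node. */
    typedef struct {
        block_state_t state;                  /* status bits   */
        sharer_ptr_t  sharers[MAX_SHARERS];   /* pointer list  */
        uint16_t      num_sharers;            /* counter: processors sharing */
        uint16_t      num_writers;            /* counter: processors writing */
    } dir_entry_t;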

The state described above is a global property associated with a block, not a local property of the copy of the block in some particular processor's cache. There is also a notion of state associated with each line in a local cache, but it plays a relatively minor role in the protocol. Specifically, this local state indicates whether a line is invalid, read-only, or read-write; it allows us to detect the initial access by a processor that triggers a coherence transaction (i.e. a read or write on an invalid line, or a write on a read-only line). An additional local data structure, maintained by the protocol processor, describes the lines that should be invalidated at the next acquire operation. The size of this data structure is proportional to the number of lines in the cache; there is no need to maintain such information for lines that have been dropped from the cache.
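The corresponding per-node state might look like the following (again a sketch; CACHE_LINES and the flag-per-line representation are our assumptions):

    #include <stdint.h>

    #define CACHE_LINES 4096        /* assumed cache size, in lines */

    typedef enum { INVALID, READ_ONLY, READ_WRITE } line_state_t;

    /* Local state kept by the protocol processor: the per-line cache
       state, plus the set of lines to invalidate at the next acquire.
       One flag per line suffices, so the structure is proportional to
       the number of lines in the cache. */
    typedef struct {
        line_state_t state[CACHE_LINES];
        uint8_t      invalidate_at_acquire[CACHE_LINES];
    } local_protocol_state_t;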

 
Figure 1:   Directory state diagram for a variant of lazy release consistency

On a read miss, a node's protocol processor allocates an "outstanding transaction" data structure that records the line (block) number causing the miss; this structure is the equivalent of a RAC entry in the DASH distributed directory protocol [17]. The protocol processor then sends a message to the block's home node asking for the data. When the request reaches the home node, the protocol processor there issues a memory read for the block and starts a directory operation---reading the current state of the block and computing a new state. As soon as the memory returns the requested block, the protocol processor sends a message to the requesting node containing the data and the new state of the block. If the block has made the transition to the weak state, an additional message is sent to the current writer. It is worth noting that the protocol never requires the home node to forward a read request. If the block is not currently being written, then the memory module contains the most up-to-date version. If it is being written, then the fact that the read occurred indicates that no synchronization operation separates the write from the read. This in turn implies (in a correctly synchronized program) that true sharing is not occurring, so the most recent version of the block is not required.
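The home node's side of this exchange can be summarized as follows. This is our own C rendering, not the paper's implementation; read_memory, send_data_reply, and send_weak_notice are hypothetical primitives, stubbed out here.

    #include <stdint.h>
    #include <stddef.h>

    typedef enum { UNCACHED, SHARED, DIRTY, WEAK } block_state_t;

    typedef struct {
        block_state_t state;
        int           writer;     /* current writer, valid when DIRTY */
    } dir_entry_t;

    /* Hypothetical primitives (stubs). */
    static const void *read_memory(uint64_t block) { (void)block; return NULL; }
    static void send_data_reply(int node, const void *d, block_state_t s)
        { (void)node; (void)d; (void)s; }
    static void send_weak_notice(int node, uint64_t block)
        { (void)node; (void)block; }

    static void home_handle_read(dir_entry_t *e, uint64_t block, int requester)
    {
        /* Issue the memory read first, so that it overlaps the
           directory operation that computes the new state. */
        const void *data = read_memory(block);

        block_state_t old = e->state;
        e->state = (old == UNCACHED || old == SHARED) ? SHARED : WEAK;

        send_data_reply(requester, data, e->state);

        /* Only the dirty -> weak transition needs an extra message,
           to the current writer.  The home never forwards the read:
           memory's copy is always usable, since a concurrent write
           implies no true sharing in a correctly synchronized program. */
        if (old == DIRTY)
            send_weak_notice(e->writer, block);
    }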

Writes are placed in the write buffer, and the main processor continues execution as long as the write buffer is not full. If a buffered write targets a line that misses in the cache, the protocol processor allocates an outstanding transaction data structure and sends a write request message to the home node. If the block was not present in the processor's cache (the local line state was invalid), then the entry in the write buffer cannot be retired until the block's data is returned by the home node. If the block was read-only in the processor's cache, however, we still need to contact the home node and inform it of the write operation, but we do not need to wait for the home node's response before retiring the write buffer entry. This stems from the fact that we allow a block to have multiple concurrent writers; we do not need to use the home node as a serializing point to choose a unique processor as writer.
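The retire rule for a buffered write might be expressed as follows (our sketch; the primitive names are hypothetical):

    #include <stdint.h>
    #include <stdbool.h>

    typedef enum { INVALID, READ_ONLY, READ_WRITE } line_state_t;

    /* Hypothetical primitives (stubs). */
    static void alloc_outstanding_txn(uint64_t block) { (void)block; }
    static void send_write_request(int home, uint64_t block)
        { (void)home; (void)block; }

    /* Returns true if the write-buffer entry can retire immediately. */
    static bool handle_buffered_write(line_state_t lstate, int home,
                                      uint64_t block)
    {
        if (lstate == READ_WRITE)
            return true;                 /* already a writer: no message */

        alloc_outstanding_txn(block);    /* RAC-like entry               */
        send_write_request(home, block); /* inform the home node         */

        /* Multiple concurrent writers are allowed, so a read-only copy
           retires the write without waiting for the home's response;
           an invalid line must wait for the data to arrive. */
        return (lstate == READ_ONLY);
    }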

When the write request arrives at the home node, the home node's protocol processor consults the directory entry to decide what the new state of the block should be. If the new state does not require additional coherence messages (i.e. the block was uncached, or cached only by the requesting processor), then an acknowledgment can be sent to the requesting processor immediately. If the block is going to make a transition to the weak state, however, notification messages must be sent to the other sharing processors, and a response is sent to the requesting processor instructing it to wait for the collection of acknowledgments. Acknowledgments could be directed to, and collected by, either the requesting processor or the home node (which would then forward a single acknowledgment to the requesting node). We opted for the second approach: it is less complex, and it allows us to collect acknowledgments only once when write requests for the same block arrive from multiple processors. The home node keeps track of the write requests and acknowledges all of them once it has received the individual acknowledgments from all of the sharing processors.
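In outline, the home node's write handling looks like this (our sketch; the message primitives and the acknowledgment bookkeeping are assumed, and the already-notified bit is omitted for brevity):

    #include <stdint.h>

    #define MAX_SHARERS 64

    typedef enum { UNCACHED, SHARED, DIRTY, WEAK } block_state_t;

    typedef struct {
        block_state_t state;
        int           num_sharers;
        int           sharers[MAX_SHARERS];
        int           pending_acks;   /* weak-notice acks still awaited */
    } dir_entry_t;

    /* Hypothetical primitives (stubs). */
    static void send_ack(int node) { (void)node; }
    static void send_wait_for_acks(int node) { (void)node; }
    static void send_weak_notice(int node, uint64_t block)
        { (void)node; (void)block; }

    static void home_handle_write(dir_entry_t *e, uint64_t block, int writer)
    {
        int others = 0;
        for (int i = 0; i < e->num_sharers; i++)
            if (e->sharers[i] != writer)
                others++;

        if (others == 0) {            /* uncached, or cached only by */
            e->state = DIRTY;         /* the requester: ack at once  */
            send_ack(writer);
            return;
        }

        /* Transition to weak: notify the other sharers and collect
           their acks here at the home, so that concurrent writes to
           the same block share a single collection. */
        e->state = WEAK;
        send_wait_for_acks(writer);
        for (int i = 0; i < e->num_sharers; i++)
            if (e->sharers[i] != writer) {
                send_weak_notice(e->sharers[i], block);
                e->pending_acks++;
            }
        /* The writer is acknowledged when pending_acks drains to
           zero (bookkeeping of waiting writers elided). */
    }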

Lock releases need to make sure that all writes by the releasing processor have performed globally, i.e. that all processors with copies of written blocks have been informed of the writes, and that the written data has made its way back to main memory. We ensure this by stalling the processor until (1) its write buffer has been flushed, (2) its outstanding requests have been serviced (i.e. all outstanding transaction data structures have been deallocated), and (3) memory has acknowledged any outstanding write-backs or write-throughs (see below).
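The three stall conditions can be captured in a single predicate (a minimal sketch; the queries over local protocol-processor state are assumed names):

    #include <stdbool.h>

    /* Hypothetical predicates over local protocol-processor state. */
    static bool write_buffer_empty(void)      { return true; }
    static bool outstanding_txns_empty(void)  { return true; }
    static bool writebacks_acknowledged(void) { return true; }

    /* A release may proceed only once the releaser's writes have
       performed globally. */
    static bool release_may_proceed(void)
    {
        return write_buffer_empty()        /* (1) write buffer flushed   */
            && outstanding_txns_empty()    /* (2) requests serviced      */
            && writebacks_acknowledged();  /* (3) memory has all data    */
    }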

Lock acquires need to invalidate all lines in the acquiring processor's cache for which write notices have been received. Much of the latency of this operation can be hidden behind the latency of the lock acquisition itself. When a processor attempts to acquire a lock, its protocol processor performs invalidations for any write notices that have already been received. When it receives a message granting ownership of the lock, the protocol processor performs invalidations for any additional notices received in the intervening time. Invalidating a line involves notifying the home node that the local processor is no longer caching the block, so that the home node can update the state of the block in the directory entry appropriately. If a block no longer has any processors writing it, it reverts to the shared state; if it has no processors sharing it at all, it reverts to the uncached state. If a block is evicted from a cache due to a conflict or capacity miss, the home node must likewise be informed.
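The invalidation pass might look like the following (our sketch; the per-line notice flags and the primitives are assumptions). The same routine runs once when the lock is requested and again when the grant message arrives, so most of its cost overlaps the acquisition latency.

    #include <stdint.h>
    #include <stddef.h>

    #define CACHE_LINES 4096   /* assumed */

    /* Hypothetical state and primitives (stubs). */
    static uint8_t pending_notice[CACHE_LINES]; /* write notices received */
    static void invalidate_line(size_t line) { (void)line; }
    static void notify_home_dropped(size_t line) { (void)line; }

    static void apply_write_notices(void)
    {
        for (size_t l = 0; l < CACHE_LINES; l++)
            if (pending_notice[l]) {
                pending_notice[l] = 0;
                invalidate_line(l);
                /* Tell the home we no longer cache the block, so it
                   can revert weak -> shared -> uncached as the sets
                   of writers and sharers drain. */
                notify_home_dropped(l);
            }
    }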

One last issue that needs to be addressed is the mechanism whereby data makes its way back into main memory. With a multiple-writer protocol, a write-back cache requires the ability to merge writes to the same cache block by multiple processors. Assuming that there is no false sharing within individual words, this could be achieved by including per-word dirty bits in every cache, and by sending these bits in every write-back message. This approach complicates the design of the cache, however, and introduces potentially large delays at release operations due to cache flushes. A write-through cache solves both problems, by providing word granularity for memory updates and by overlapping memory updates with computation. For most programs, however, write-through leads to unacceptably large amounts of traffic, delaying critical operations like cache fills. A coalescing, fully associative buffer [12] placed after the write-through cache can effectively combine the best attributes of both write strategies: it provides the simple design and low release synchronization costs of the write-through cache, while maintaining data traffic levels comparable to those of a write-back cache [15].
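A minimal sketch of such a buffer's write path follows. The entry layout, WORDS_PER_BLOCK, and COALESCE_ENTRIES are our assumptions; the actual buffer of [12] is a hardware structure. Per-word dirty bits in the buffer, rather than in the cache, give the home node what it needs to merge updates from multiple writers.

    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    #define WORDS_PER_BLOCK  8
    #define COALESCE_ENTRIES 16   /* assumed; buffer is fully associative */

    typedef struct {
        uint64_t block;                   /* block (line) address         */
        uint32_t data[WORDS_PER_BLOCK];
        uint8_t  dirty[WORDS_PER_BLOCK];  /* which words to write through */
        bool     valid;
    } coalesce_entry_t;

    static coalesce_entry_t buf[COALESCE_ENTRIES];

    /* Coalesce a word write; returns false if the buffer is full and
       an entry must first be flushed to the home node. */
    static bool coalesce_write(uint64_t block, int word, uint32_t value)
    {
        int free_slot = -1;
        for (int i = 0; i < COALESCE_ENTRIES; i++) {
            if (buf[i].valid && buf[i].block == block) {
                buf[i].data[word]  = value;  /* combine with prior writes */
                buf[i].dirty[word] = 1;
                return true;
            }
            if (!buf[i].valid && free_slot < 0)
                free_slot = i;
        }
        if (free_slot < 0)
            return false;                    /* full: flush something first */
        memset(&buf[free_slot], 0, sizeof buf[free_slot]);
        buf[free_slot].valid = true;
        buf[free_slot].block = block;
        buf[free_slot].data[word]  = value;
        buf[free_slot].dirty[word] = 1;
        return true;
    }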

We also consider a lazier version of the protocol that delays the point at which write notices are sent to other processors. Under this protocol, the node's protocol processor refrains from sending a write request to a block's home node for as long as possible: notification is sent either when a written block is replaced in the processor's cache, or when the processor performs a release operation. In the interim, writes are buffered in a local data structure maintained by the protocol processor. Processing writes for replaced blocks allows us to place an upper bound on the size of this data structure (proportional to the size of the processor's cache) and to avoid complications in directory processing that would arise from having to process writes from processors that may no longer be caching a block. Delaying notices has been shown to improve the performance of software coherent systems [4,15]. In a hardware implementation, however, delayed notices do not take full advantage of the ability to overlap coherence management with computation, and can cause significant delays at synchronization operations.
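A sketch of the lazier variant's bookkeeping, under the same assumed names as before (send_write_request and home_of are hypothetical; one deferred-notice slot per cache line gives the bound mentioned above):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    #define CACHE_LINES 4096   /* bound on the deferred-notice structure */

    /* Hypothetical primitives (stubs). */
    static void send_write_request(int home, uint64_t block)
        { (void)home; (void)block; }
    static int home_of(uint64_t block) { (void)block; return 0; }

    /* Local writes are only recorded here; the write request goes out
       when the block is replaced, or at the next release. */
    static uint64_t deferred[CACHE_LINES];
    static bool     has_deferred[CACHE_LINES];

    static void record_local_write(size_t line, uint64_t block)
    {
        deferred[line] = block;
        has_deferred[line] = true;
    }

    static void on_replacement(size_t line)   /* keeps the bound tight */
    {
        if (has_deferred[line]) {
            send_write_request(home_of(deferred[line]), deferred[line]);
            has_deferred[line] = false;
        }
    }

    static void on_release(void)              /* flush all notices */
    {
        for (size_t l = 0; l < CACHE_LINES; l++)
            on_replacement(l);
    }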


