
Experimental Methodology

 

We use execution-driven simulation to model a mesh-connected multiprocessor with up to 64 nodes. Our simulator consists of two parts: a front end, Mint [23], that simulates the execution of the processors, and a back end that simulates the memory system. The front end is the same in all our experiments; it implements the MIPS II instruction set. The back end is quite detailed, with finite-size caches, full protocol emulation, distance-dependent network delays, and memory access costs (including memory contention). The simulator is capable of capturing contention within the network, but only at a substantial cost in execution time; the results reported here model network contention at the sending and receiving nodes of a message, but not at the nodes in between.

We have also simplified our simulation of the programmable protocol processor, abstracting away such details as the instruction and data cache misses that it may suffer when processing protocol requests. We believe that this inaccuracy does not detract from our conclusions: current designs for protocol processors incorporate very large caches with a negligible miss rate for all but a few pathological cases [16]. In our simulations we simply charge fixed costs for all operations. The one exception is a write request to a shared line, whose cost is the sum of the directory access and the dispatch of messages to the sharing processors. Since directory processing can in most cases be hidden behind the memory access cost, the increased directory processing cost of the lazy protocol does not affect performance. Table 1 summarizes the default parameters used in our simulations.

  
Table 1: Default values for system parameters
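To make the charging policy concrete, the following sketch shows one way the rule could be expressed. The operation names and cycle counts are hypothetical placeholders of ours, not values from the paper; the actual defaults appear in Table 1.

/* A minimal sketch of the fixed-cost charging policy described
 * above.  The operation names and cycle counts are hypothetical
 * placeholders; the real defaults appear in Table 1.  Every protocol
 * operation is charged a fixed cost, except a write request to a
 * shared line, which pays for the directory access plus the dispatch
 * of a message to each sharing processor. */

enum op { READ_REQ, WRITE_REQ_EXCLUSIVE, WRITE_REQ_SHARED, NUM_OPS };

static const int fixed_cost[NUM_OPS] = { 10, 10, 0 };  /* hypothetical */
static const int directory_access    = 10;             /* hypothetical */
static const int message_dispatch    = 5;              /* hypothetical */

int protocol_cycles(enum op o, int nsharers)
{
    if (o == WRITE_REQ_SHARED)                  /* the one exception    */
        return directory_access + nsharers * message_dispatch;
    return fixed_cost[o];                       /* fixed cost otherwise */
}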

Using these parameters and ignoring any contention effects at the network or memory modules, a cache fill incurs the cost of (a) sending the request message to the home node through the network, (b) waiting for memory to respond with the data, (c) sending the data back to the requesting node through the network, and (d) satisfying the fill through the node's local bus. Assuming a distance of 10 hops in the network, the cost of sending the request is (2+1)*10 = 30 cycles, the cost of memory is 20 + 128/2 = 84 cycles, the cost of sending the data back is (2+1)*10 + 128/2 = 94 cycles, and the cost of the local cache fill via the node's bus is 128/2 = 64 cycles. The aggregate cost of the cache fill is then a+b+c+d = 30+84+94+64 = 272 processor cycles.
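The same arithmetic can be expressed as a short C program. The parameter names and their interpretation are our assumptions: we read the per-hop cost 2+1 as wire plus switch delay, and 128/2 as a 128-byte cache line transferred at 2 bytes per cycle over the memory, network, and bus.

#include <stdio.h>

/* Sketch of the uncontended cache-fill cost computed above.  The
 * names and their interpretation are our assumptions, not the
 * simulator's. */

#define HOP_CYCLES      (2 + 1)   /* per-hop wire + switch delay */
#define MEM_STARTUP     20        /* memory access startup cost  */
#define LINE_BYTES      128       /* cache line size in bytes    */
#define BYTES_PER_CYCLE 2         /* transfer bandwidth          */

int main(void)
{
    int hops     = 10;
    int transfer = LINE_BYTES / BYTES_PER_CYCLE;  /*     64 cycles */
    int request  = HOP_CYCLES * hops;             /* (a) 30 cycles */
    int memory   = MEM_STARTUP + transfer;        /* (b) 84 cycles */
    int reply    = HOP_CYCLES * hops + transfer;  /* (c) 94 cycles */
    int fill     = transfer;                      /* (d) 64 cycles */

    printf("cache fill = %d cycles\n", request + memory + reply + fill);
    return 0;   /* prints 272 for a 10-hop distance */
}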

We report results for seven parallel programs. We ran each program on the largest input size that could be simulated in a reasonable amount of time and that provided good load balancing on a 64-processor configuration. Three of the programs are best described as computational kernels: gauss, fft, and blu. The remaining four are complete applications: barnes-hut, cholesky, locusroute, and mp3d.

Gauss performs Gaussian elimination without pivoting on a 448x448 matrix. Fft computes a one-dimensional FFT on a 65536-element array of complex numbers. Blu is an implementation of the blocked right-looking LU decomposition algorithm presented in [5], also on a 448x448 matrix. Barnes-hut is an N-body application that simulates the evolution of 4K bodies under the influence of gravitational forces for 4 time steps. Cholesky performs Cholesky factorization on a sparse matrix, using the bcsstk15 matrix as input. Locusroute is a VLSI standard cell router, run on the circuit Primary2.grin, which contains 3029 wires. Mp3d is a wind-tunnel airflow simulation of 40000 particles for 10 steps. All of these applications are part of the SPLASH suite [21]. Due to simulation constraints, our input data sizes for all programs are smaller than would be used on a real machine. As a consequence, we have also chosen smaller caches than are common on real machines, in order to capture the effect of capacity and conflict misses. Experiments with larger cache sizes overestimate the advantages of lazy release consistency by eliminating a significant fraction of the misses common to both the eager and lazy protocols.





