The first part of my proposal will focus on improving MPI communication performance. It proposes delta send-recv, an MPI extension for overlapping communication with dependent computation. Delta send-recv provides an interface for marking the computation that produces or consumes data and the communication that transfers it. Using virtual memory support, it automatically divides the computation into blocks and the communication into increments. Delta sends and recvs are chained dynamically to effect sender-receiver pipelining, which is superior to pipelining at only the sender or only the receiver side.
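To make the intended overlap concrete, the sketch below hand-pipelines a transfer in fixed-size increments using only standard MPI calls; the chunk size, array names, and loop structure are illustrative assumptions, not part of the proposed interface. Delta send-recv aims to obtain this kind of sender-receiver pipelining automatically, through virtual memory support, without the programmer restructuring the code.

/* Illustrative hand-written pipelining with plain MPI (run with 2 ranks).
 * The message is split into increments so the receiver can start its
 * dependent computation on early chunks while later chunks are produced
 * and sent. Delta send-recv would perform this chunking transparently. */
#include <mpi.h>
#include <stdio.h>

#define N     (1 << 20)      /* total elements (assumed size)     */
#define DELTA (1 << 16)      /* increment size (assumed)          */

static double a[N];

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                       /* producer */
        for (int off = 0; off < N; off += DELTA) {
            for (int i = off; i < off + DELTA; i++)
                a[i] = i * 0.5;            /* compute one increment        */
            MPI_Send(&a[off], DELTA, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        }                                  /* next increment is computed   */
                                           /* while this one is consumed   */
    } else if (rank == 1) {                /* consumer */
        double sum = 0.0;
        for (int off = 0; off < N; off += DELTA) {
            MPI_Recv(&a[off], DELTA, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            for (int i = off; i < off + DELTA; i++)
                sum += a[i];               /* dependent computation starts */
        }                                  /* before the full message ends */
        printf("sum = %f\n", sum);
    }
    MPI_Finalize();
    return 0;
}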
The second half of the proposal will discuss analysis and optimization techniques to improve the computation performance of MPI programs running on multicore-based clusters. It proposes an analytical performance model that quantifies the effect of resource sharing, based on reuse distance and footprint analysis. I will also present a novel array regrouping optimization that reorganizes arrays across MPI processes. The new transformation uses cache sharing to exploit spatial locality across processes.
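As a point of reference, the sketch below shows classic intra-process array regrouping: two arrays that are always accessed together are interleaved so that corresponding elements share a cache line. The arrays, loop, and struct layout are illustrative assumptions; the proposed transformation extends this idea across MPI processes that share a cache on a multicore node, which is not shown here.

/* Illustration only: intra-process array regrouping for spatial locality. */
#include <stdio.h>

#define N 1000000

/* Before: separate arrays -- a[i] and b[i] live on different cache lines. */
static double a[N], b[N];

double dot_separate(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        s += a[i] * b[i];          /* two streams, two lines touched per i */
    return s;
}

/* After: regrouped layout -- a[i] and b[i] share a cache line. */
struct pair { double a, b; };
static struct pair ab[N];

double dot_regrouped(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        s += ab[i].a * ab[i].b;    /* one stream, one line touched per i */
    return s;
}

int main(void) {
    for (int i = 0; i < N; i++) { a[i] = b[i] = ab[i].a = ab[i].b = 1.0; }
    printf("%f %f\n", dot_separate(), dot_regrouped());
    return 0;
}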