Parallel Sparse LU Factorization on Second-class Message Passing Platforms

Kai Shen
Department of Computer Science, University of Rochester

Abstract

Several message passing-based parallel solvers have been developed for general (non-symmetric) sparse LU factorization with partial pivoting. Due to this application's fine-grain synchronization and large inter-node communication volume, existing solvers are mostly intended to run on tightly-coupled parallel computing platforms with high message passing performance (e.g., 1--10 µs in message latency and 100--1000 Mbytes/sec in message throughput). In order to utilize platforms with slower message passing, this paper investigates techniques that can significantly reduce the application's communication needs. In particular, we propose batch pivoting to make pivot selections in groups through speculative factorization, thereby substantially coarsening the inter-processor synchronization granularity. We experimented with an MPI-based implementation on several message passing platforms. While speculative batch pivoting provides no performance benefit and even slightly weakens numerical stability on an IBM Regatta multiprocessor with fast message passing, it improves performance for our test matrices by 28--292% on an Ethernet-connected 16-node PC cluster. We also evaluated several other communication reduction techniques and showed that they are not as effective as our proposed approach.
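To give a rough sense of the batch pivoting idea described above, the sketch below factorizes a small dense matrix in column batches: each batch is first eliminated speculatively with the current diagonal entries as pivots, the chosen pivots are then checked against a threshold test, and only on failure does the code roll back and redo that batch with conventional column-by-column partial pivoting. This is a sequential toy illustration under assumed parameters (matrix size N, batch size B, acceptance threshold TAU are all made up here), not the paper's sparse MPI implementation; in the parallel setting, the fallback path is where the extra rounds of per-column pivot communication would be paid.

```c
/* Hedged sketch of speculative batch pivoting (dense, sequential).
 * N, B, TAU and the rollback policy are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

#define N   8      /* matrix dimension (toy size)                   */
#define B   4      /* columns factorized per speculative batch      */
#define TAU 0.1    /* pivot acceptance threshold (assumed value)    */

static double A[N][N];

/* Eliminate columns [k0, k1) using diagonal entries as pivots (no row
 * exchanges).  Returns nonzero if any pivot fails the threshold test
 * |a_kk| >= TAU * max_i |a_ik|, i.e., if the speculation must be redone. */
static int speculative_factor(double M[N][N], int k0, int k1)
{
    for (int k = k0; k < k1; k++) {
        double colmax = 0.0;
        for (int i = k; i < N; i++)
            if (fabs(M[i][k]) > colmax) colmax = fabs(M[i][k]);
        if (colmax == 0.0 || fabs(M[k][k]) < TAU * colmax)
            return 1;                       /* speculation failed          */
        for (int i = k + 1; i < N; i++) {
            double l = M[i][k] / M[k][k];
            M[i][k] = l;
            for (int j = k + 1; j < N; j++)
                M[i][j] -= l * M[k][j];
        }
    }
    return 0;
}

/* Fallback: conventional column-by-column partial pivoting. */
static void pivoting_factor(double M[N][N], int k0, int k1)
{
    for (int k = k0; k < k1; k++) {
        int p = k;
        for (int i = k + 1; i < N; i++)
            if (fabs(M[i][k]) > fabs(M[p][k])) p = i;
        if (p != k)                         /* swap rows k and p           */
            for (int j = 0; j < N; j++) {
                double t = M[k][j]; M[k][j] = M[p][j]; M[p][j] = t;
            }
        for (int i = k + 1; i < N; i++) {
            double l = M[i][k] / M[k][k];
            M[i][k] = l;
            for (int j = k + 1; j < N; j++)
                M[i][j] -= l * M[k][j];
        }
    }
}

int main(void)
{
    srand(1);
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            A[i][j] = (double)rand() / RAND_MAX - 0.5;

    static double saved[N][N];
    for (int k0 = 0; k0 < N; k0 += B) {
        int k1 = (k0 + B < N) ? k0 + B : N;
        memcpy(saved, A, sizeof(A));        /* checkpoint before speculating */
        if (speculative_factor(A, k0, k1)) {
            memcpy(A, saved, sizeof(A));    /* roll back the failed batch    */
            pivoting_factor(A, k0, k1);     /* redo with partial pivoting    */
        }
    }
    printf("U(0,0) = %g, U(%d,%d) = %g\n", A[0][0], N - 1, N - 1, A[N-1][N-1]);
    return 0;
}
```

The point of the sketch is the control flow, not the numerics: pivot decisions (and, in a distributed run, the associated synchronization) are made once per batch of B columns instead of once per column, and the per-column coordination cost is incurred only when a batch fails the stability check.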