A major theme in computer systems research is performance, and the usual means of performance optimization is to optimize repeated behavior. The central question we study is the repeated use of data: we first determine whether data usage in a complex program exhibits regular patterns, and if so, whether those patterns can be measured, analyzed, and ultimately improved. The goal of our research is to explore the performance limits of data usage.
With few exceptions, modern computers rely on caches for fast data access. As machine speed and parallelism increase, cache efficiency increasingly determines a system's running speed, cost, and energy use. Cache utilization depends on program locality, that is, on data reuse patterns. Applications exhibit abundant and diverse data reuse: an astronomical simulation reuses celestial-body data across time steps, a compiler reuses program expressions across optimization passes, and a chess program reuses board configurations across move computations. Locality, however, is an elusive concept. A program typically makes billions of accesses per second to millions of memory locations. The challenge of locality research is to understand and control the macroscopic patterns of data usage.
Behavior-oriented parallelization (BOP) provides a suggestion interface for a user to mark possible parallelism and run-time support to guarantee correctness and efficient execution whether the hints are correct or not. It enables program parallelization based on partial information and is useful for incrementally parallelizing a program or streamlining it for common uses.
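As an illustration of the suggestion interface, the sketch below marks each loop iteration as a possibly parallel region (PPR). The BeginPPR/EndPPR names follow the published BOP work, but the exact signatures, as well as process_all and process, are illustrative assumptions here, not the definitive API.

    /* PPR markers from the BOP suggestion interface; signatures are
       illustrative and may differ from the actual runtime. */
    extern void BeginPPR(int id);
    extern void EndPPR(int id);
    extern void process(int item);   /* placeholder for real work */

    void process_all(const int *items, int n) {
        for (int i = 0; i < n; i++) {
            BeginPPR(1);    /* hint: iterations may run in parallel */
            process(items[i]);
            EndPPR(1);      /* the runtime checks for conflicts; a wrong
                               hint falls back to sequential execution,
                               so correctness is never at risk */
        }
    }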
Chip multiprocessors have brought new challenges to MPI programs. A performance bottleneck arises when multiple cores on one chip share hardware resources such as the last-level cache and the memory link. This project models the effect of memory resource sharing on versatile MPI applications. The performance model uses reuse distance and footprint information collected from different MPI tasks to calculate the slowdown caused by resource sharing and thus to predict the scalability of MPI programs on clusters built from chip multiprocessors.
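To make the first metric concrete, the following minimal sketch computes the reuse distance of each access in a toy address trace: the number of distinct addresses referenced since the previous access to the same address. This quadratic-time version is for exposition only; a practical model would use efficient approximate analysis and would also combine per-task footprints, which the sketch omits.

    #include <stdio.h>

    #define MAXADDR 1024   /* toy traces only: addresses < MAXADDR */

    /* Reuse distance of the access at position t: the number of
       distinct addresses seen since the previous access to the same
       address, or -1 (infinite) for a first access. */
    int reuse_distance(const int *trace, int t) {
        int seen[MAXADDR] = {0};
        int distinct = 0;
        for (int i = t - 1; i >= 0; i--) {
            if (trace[i] == trace[t])
                return distinct;          /* previous access found */
            if (!seen[trace[i]]) {        /* count each address once */
                seen[trace[i]] = 1;
                distinct++;
            }
        }
        return -1;                        /* first access: infinite */
    }

    int main(void) {
        int trace[] = {1, 2, 3, 2, 1};    /* toy address trace */
        int n = sizeof trace / sizeof trace[0];
        for (int t = 0; t < n; t++)
            printf("access %d (addr %d): reuse distance %d\n",
                   t, trace[t], reuse_distance(trace, t));
        /* e.g. the second access to addr 1 has reuse distance 2,
           since addrs {2,3} were referenced in between */
        return 0;
    }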
Pipelining is necessary for efficient do-across parallelism but non-trivial for MPI programs, because it requires perfectly matched loop and message blocking in both the sender and the receiver code. Delta send-recv is a run-time system that uses virtual memory support to automatically divide computation and communication into increments and thereby effect dynamic pipelining. A programmer marks the relevant computation and communication but does not restructure the computation code or compose multiple messages at the sender or the receiver.
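A possible sender-side use is sketched below. The marker names delta_send_begin/delta_send_end and the compute helper are hypothetical, chosen only to illustrate that the runtime, not the programmer, slices the message into increments.

    #include <stddef.h>

    /* Hypothetical markers for delta send-recv (illustrative names;
       the real interface may differ).  The runtime write-protects the
       pages of buf and sends each page-sized increment as soon as the
       computation finishes it, overlapping communication with the
       rest of the computation. */
    extern void delta_send_begin(void *buf, size_t size, int dest);
    extern void delta_send_end(void);
    extern double compute(int i);    /* placeholder for real work */

    void sender(double *buf, int n, int dest) {
        delta_send_begin(buf, n * sizeof(double), dest);  /* mark message */
        for (int i = 0; i < n; i++)
            buf[i] = compute(i);     /* computation code is unchanged */
        delta_send_end();            /* flush any remaining increments */
    }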
DBOP is a BOP-compatible runtime that supports speculative execution on distributed machines. Its distributed orchestration of speculative parallel tasks consists of three types of processes.
Garbage-collected programs increasingly run on today's multi-processor, multi-core, and multi-threaded machines, where the traditional practice of manually specifying the heap size becomes increasingly problematic. Since the system load may change dynamically and unpredictably, a conservative setting may leave most of the system memory unused, while an aggressive one may lead to severe contention. Moreover, memory sharing makes the classical demand-based memory performance model unsuitable for describing multiple virtual machines with garbage collection.
Fast track is a software system that enables speculation on unsafe optimizations by leveraging the extra processors in multicore systems. It consists of a programming interface and a supporting runtime system. The programming interface lets the programmer install unsafe optimized code while leaving correctness checking to the runtime system. The following code snippet demonstrates how to program with fast track.
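The snippet is a minimal sketch assuming hypothetical markers ft_begin/ft_end (the system's actual API may differ): the fast track runs the unsafe optimized code while the normal track re-executes the original code to check it.

    #include <stdbool.h>

    /* Hypothetical fast-track markers (illustrative names only).
       ft_begin() forks the two tracks, returning true in the fast
       track and false in the normal track; ft_end() joins them and
       keeps the fast-track result only if the runtime's correctness
       check passes. */
    extern bool ft_begin(void);
    extern void ft_end(void);

    typedef struct data data_t;            /* application data (assumed) */
    extern void optimized_update(data_t *d);
    extern void reference_update(data_t *d);

    void step(data_t *d) {
        if (ft_begin()) {
            optimized_update(d);   /* unsafe but usually correct and fast */
        } else {
            reference_update(d);   /* original code, used for checking */
        }
        ft_end();                  /* runtime compares outcomes, commits
                                      the fast result or re-executes */
    }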