Compiler Research at Rochester

Overview

Two main themes of systems research are correctness and performance. Correctness requires that all aspects of a program conform to specifications; performance benefits most from the optimization of frequently recurring behavior. Our research focuses on recurrence in the use of data: determining whether a complex program has an inherent pattern of data reuse and, if so, to what degree that pattern can be modeled, measured, and modified (improved). The answers to these questions determine the fundamental limits of data caching.



Distance-based locality analysis and optimization

Caching is widely used in many computer programs and systems, and cache performance increasingly determines system speed, cost, and energy usage. The effect of caching depends on program locality, that is, the pattern of data reuse. Many applications may have a consistent recurrence pattern at the whole-program level, for example, reusing a large amount of data across the time steps of an astronomical simulation, the optimization passes of a compiler, or the moves of a game-playing program. However, locality has been an elusive concept at the program level because it requires defining, predicting, and verifying patterns across billions of accesses to millions of memory locations.

 

Past work provides mainly three ways of locality analysis: by a compiler, which analyzes loop nests but is less effective for dynamic control flow and data indirection; by frequency profiling, which analyzes a program for select inputs but does not predict the behavior change in other inputs; or by run-time analysis, which cannot afford to analyze every access to every data element. Over the past four years, we have pursued a new direction: analyzing the reuse of data, in particular the reuse distance, measured by the volume of other data accessed between two accesses of the same data element. Reuse distance is a concrete, quantitative measure: it combines the information of the frequency, the time, and the volume of data access.
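To make the definition concrete, the following is a minimal brute-force sketch, not our measurement tool: for the trace a b c a b, the second access to a has reuse distance 2 because two distinct elements (b and c) are accessed in between. The trace values and function names are illustrative only.

    // Reuse distance of each access: the number of *distinct* data elements
    // accessed since the previous access to the same element; the first
    // access to an element has infinite distance (printed as -1 here).
    #include <cstdio>
    #include <set>
    #include <unordered_map>
    #include <vector>

    // O(N*M) reference version; the algorithms described below are far faster.
    std::vector<long> reuseDistances(const std::vector<int>& trace) {
        std::unordered_map<int, size_t> lastUse;   // element -> index of last access
        std::vector<long> dist;
        for (size_t i = 0; i < trace.size(); ++i) {
            auto it = lastUse.find(trace[i]);
            if (it == lastUse.end()) {
                dist.push_back(-1);                // first access: infinite distance
            } else {
                std::set<int> distinct(trace.begin() + it->second + 1,
                                       trace.begin() + i);
                dist.push_back((long)distinct.size());
            }
            lastUse[trace[i]] = i;
        }
        return dist;
    }

    int main() {
        std::vector<int> trace = {1, 2, 3, 1, 2};  // the trace a b c a b
        for (long d : reuseDistances(trace)) std::printf("%ld ", d);
        std::printf("\n");                         // prints: -1 -1 -1 2 2
        return 0;
    }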

 

Reuse distances reveal invariance in program behavior. Most control flow perturbs only short access sequences but not the cumulative distance over millions of data elements. Long reuse distances suggest important data and signal major phases of a program. In addition, reuse distance allows direct comparison of data behavior in different program runs. Distance-based correlation does not require two executions to have the same data or execute the same functions. It can identify consistent patterns in the presence of dynamic data allocation and input-dependent control flow. Next, we describe our key findings and future research plans.

Near linear time distance measurement (PLDI'03)

In 1970, Mattson et al. defined the concept of stack distance and enabled one-pass simulation of multiple cache configurations. The measurement method was critical for Denning and others to develop solutions for virtual memory management, and for many more researchers to study cache design in the following decades. Reuse distance is the same as LRU stack distance, that is, stack distance under the LRU (Least Recently Used) replacement policy. The stack algorithm by Mattson et al. requires O(N M) time and O(M) space, where N is the length of the trace and M the size of data. The past 30 years have seen a steady stream of work on improving the efficiency of measuring reuse distance. By organizing the stack as a search tree, Olken reduced the time complexity to O(N log M) in 1981. In 1994, Sugumar and Abraham created the then fastest implementation using a splay tree. Their simulator, Cheetah, has been widely used and distributed.

In 2003, we gave an approximation algorithm, which guarantees that the measured distance falls between, for example, 99% and 100% of the actual distance; the lower bound can be arbitrarily close to 100%. For the first time, we reduced the space complexity to O(log M) and the time complexity to O(N log log M), which is practically linear in the length of the trace. On an Intel PC, the 99% approximation analysis maintains an almost constant speed of 1.2 million references per second for up to 100 billion accesses with average reuse distances up to 1 billion data elements. Consider an analogy in physical distance: while the old methods look back in miles, the new method looks back in light years. Its unprecedented power enables us to analyze large, data-intensive programs.
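For readers unfamiliar with the tree-based algorithms, the sketch below shows the counting step in the spirit of Olken's O(N log M) method: a Fenwick (binary indexed) tree marks the most recent access position of each element, and a reuse distance is the number of marks strictly between two accesses of the same element. This is an illustration under those assumptions, not the splay-tree implementation in Cheetah or our approximate algorithm.

    #include <cstdio>
    #include <unordered_map>
    #include <vector>

    // Fenwick tree over trace positions: bit counts how many "last access"
    // marks lie in a prefix of the trace.
    struct Fenwick {
        std::vector<int> bit;
        explicit Fenwick(int n) : bit(n + 1, 0) {}
        void add(int i, int v) { for (++i; i < (int)bit.size(); i += i & -i) bit[i] += v; }
        int prefix(int i) const { int s = 0; for (++i; i > 0; i -= i & -i) s += bit[i]; return s; }
    };

    int main() {
        std::vector<int> trace = {1, 2, 3, 1, 2};       // the trace a b c a b
        Fenwick marks((int)trace.size());
        std::unordered_map<int, int> lastTime;          // element -> last access index
        for (int t = 0; t < (int)trace.size(); ++t) {
            auto it = lastTime.find(trace[t]);
            if (it == lastTime.end()) {
                std::printf("inf ");                    // first access
            } else {
                int p = it->second;
                // distinct elements accessed strictly between positions p and t
                std::printf("%d ", marks.prefix(t - 1) - marks.prefix(p));
                marks.add(p, -1);                       // no longer "last seen" at p
            }
            marks.add(t, +1);                           // now last seen at t
            lastTime[trace[t]] = t;
        }
        std::printf("\n");                              // prints: inf inf inf 2 2
        return 0;
    }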

Whole-program locality prediction (PLDI'03, LACSI'03, PACT'03)

By profiling a few inputs of a program, we extract its regular patterns and predict their changes in other data inputs, as an astronomer predicts the orbit of a star by observing part of its trail. Reuse distance is a powerful instrument because of its unique properties. As mentioned before, long reuse distances reveal consistent program behavior and allow pattern correlation across program runs, even when different runs do not execute the same function or have the same data. In addition, reuse distance largely determines the effect of caching, because it determines whether a memory access incurs a cache capacity miss. Finally, reuse distance can be no more than the program data size; therefore, the change in distance is at most a linear function of the input size, reducing the space of possible patterns. Our experiments show that the prediction is over 94% accurate for a wide range of programs, even for data inputs whose data size, access frequency, and average reuse distance are hundreds of times larger than those of the profiled runs.

 

With whole-program locality, we can accurately predict the miss rate of many programs for all inputs on all cache sizes. Based on a few training runs, we construct a parameterized model that predicts the miss rate of a fully associative cache with a given cache-block size. By supplying the range of all input sizes as the parameter, we can predict miss rates for all data inputs on all sizes of fully associative caches. For a given cache size, the model also predicts the input size where the miss rate exhibits marked changes. Our experiments show that the prediction accuracy is 99% for fully associative caches and better than 98% for caches of limited associativity, excluding compulsory misses. In addition, the predicted miss rate is either very close to or proportional to the miss rate of a direct-mapped or set-associative cache.
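The step from a reuse-distance histogram to a miss rate is direct: in a fully associative LRU cache of C blocks, an access hits exactly when its block-level reuse distance is smaller than C. The sketch below shows only this step, with a made-up histogram; it is not the parameterized model that predicts how the histogram changes with input size.

    #include <cstdio>
    #include <map>

    int main() {
        // reuse distance (in cache blocks) -> number of accesses; -1 = first access
        std::map<long, long> histogram = {
            {-1, 1000}, {8, 50000}, {1024, 30000}, {1 << 20, 19000}
        };
        long total = 0;
        for (auto& [d, n] : histogram) total += n;

        for (long cacheBlocks : {512L, 4096L, 1L << 21}) {
            long misses = 0;
            for (auto& [d, n] : histogram)
                if (d < 0 || d >= cacheBlocks) misses += n;   // cold or capacity miss
            std::printf("cache of %ld blocks: predicted miss rate %.2f%%\n",
                        cacheBlocks, 100.0 * misses / total);
        }
        return 0;
    }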

 

The miss-rate prediction is displayed through an interactive 3D visualization available on a web page, which has been downloaded hundreds of times by researchers from industry and academia. Mellor-Crummey's group at Rice found similar prediction accuracy in a later, independent study, reported in a full paper at SIGMETRICS 2004. Using our tool, Carr's group at Michigan Tech recently reported in a workshop paper that locality can be accurately predicted on a per-instruction basis.

 

We next divide the whole-program locality pattern in time and in space. The temporal analysis identifies locality phases and their (hierarchical) structure. The spatial analysis identifies locality relations among program data.

Locality phase hierarchy (ASPLOS'04)

As the computer memory hierarchy becomes adaptive, its performance increasingly depends on forecasting dynamic program locality. This paper presents a method that predicts the locality phases of a program by a combination of locality profiling and run-time prediction. The locality profiling sifts through all accesses to all data elements. Viewing the reuse-distance trace as a composite signal, we use a wavelet transform to filter the sub-signal of each data element and recombine the results to locate the boundaries of locality phases and their hierarchical structure (through context-free grammar compression). We then identify phase markers in the program code. When the instrumented program runs, we use the first few executions of a phase to predict its later executions. The phase analysis is suited for programs that have consistent phase behavior; common examples are dense-matrix or N-body systems used in physical, mechanical, and biological simulation. Although their run-time behavior is input dependent, it either repeats or changes slowly. Compared with existing methods, our method predicts with orders of magnitude larger granularity and orders of magnitude better accuracy. We have shown its benefits in adaptive cache resizing and memory remapping; it also outperformed manual phase marking.
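The sketch below is a much-simplified stand-in for the boundary detection described above: instead of a per-element wavelet analysis, it slides two windows over a synthetic average-reuse-distance signal and reports the position of the largest mean shift, which is where a phase boundary would be marked. The signal, window size, and argmax rule are all illustrative assumptions.

    #include <cstdio>
    #include <cmath>
    #include <vector>

    int main() {
        // Synthetic average-reuse-distance signal with two flat "phases".
        std::vector<double> signal;
        for (int i = 0; i < 50; ++i) signal.push_back(100.0);
        for (int i = 0; i < 50; ++i) signal.push_back(1.0e6);

        const int w = 10;                       // comparison window on each side
        size_t best = 0;
        double bestShift = 0.0;
        for (size_t t = w; t + w <= signal.size(); ++t) {
            double left = 0.0, right = 0.0;
            for (int k = 0; k < w; ++k) {
                left  += signal[t - w + k];     // window just before position t
                right += signal[t + k];         // window starting at position t
            }
            double shift = std::fabs(right - left) / w;
            if (shift > bestShift) { bestShift = shift; best = t; }
        }
        std::printf("largest mean shift at position %zu (candidate phase boundary)\n", best);
        return 0;                               // prints position 50 for this signal
    }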

Locality data hierarchy (PLDI'04, TDM'04, TR-845)

While the memory of most machines is organized as a hierarchy, program data are laid out in a uniform address space. We think of data placement as a mapping problem: the domain is the set of programs, viewed as the power set of the set of all sequences of data accesses; the image is the set of all data decompositions.

 

We have defined a model of reference affinity, which measures how close a group of data are accessed together in a reference trace. We have proved that the model gives a hierarchical partition of program data. At the top is the set of all data with the weakest affinity. At the bottom is each data element with the strongest affinity. Based on the theoretical model, we developed practical tests for the hierarchical affinity of source-level or run-time program data.

Programs often have a large number of homogeneous data objects such as molecules in a simulated space or nodes in a search tree. Each object has a set of attributes. In Fortran 77 programs, attributes of an object are stored separately in arrays. In C programs, the attributes are stored together in a structure. Neither scheme is sensitive to the access pattern of a program. A better way is to group attributes based on their reference affinity. For arrays, the transformation is array regrouping. For structures, it is structure splitting. We have shown that the new method consistently outperformed array and structure layouts given by the programmer, compiler analysis, frequency profiling, and statistical clustering on machines from all major vendors.
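As an illustration of affinity-guided structure splitting (the field names and the hot/cold split below are hypothetical), attributes that profiling shows to be accessed together are regrouped into one structure, while rarely used attributes move to a separate, parallel array:

    #include <cstdio>
    #include <string>
    #include <vector>

    // Original layout: all attributes of an object stored together (C-style struct).
    struct Particle {
        double x, y, z;        // position  (accessed every time step)
        double vx, vy, vz;     // velocity  (accessed every time step)
        int    color;          // rendering (rarely accessed)
        std::string name;      // debugging (rarely accessed)
    };

    // After splitting: attributes with strong reference affinity share a structure;
    // cold attributes no longer occupy the cache lines touched by the hot loop.
    struct HotPart  { double x, y, z, vx, vy, vz; };
    struct ColdPart { int color; std::string name; };

    int main() {
        const size_t n = 1000;
        std::vector<HotPart>  hot(n);
        std::vector<ColdPart> cold(n);

        // The time-step loop streams through only the hot attributes.
        for (size_t i = 0; i < n; ++i) {
            hot[i].x += hot[i].vx; hot[i].y += hot[i].vy; hot[i].z += hot[i].vz;
        }
        std::printf("hot record: %zu bytes vs. original record: %zu bytes\n",
                    sizeof(HotPart), sizeof(Particle));
        return 0;
    }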

 

We have recently further explored the theoretical properties of reference affinity and developed a general method for hierarchical data placement. Previous data placement methods are specialized: the Morton layout for matrices, the Hilbert curve for particles, and the van Emde Boas layout for search trees. We show that these diverse methods share a common basis. The general method automatically gives the desirable data hierarchy not only for the diverse cases reported by past studies (matrix multiplication, Cholesky factorization, wavelet transform, N-body simulation, sparse meshes, and search trees) but also for important new cases (random accesses and random walks). Through a sequence of theorems, we have established the link between the organization of the computation and the relation of data, between temporal sequences and spatial structure, and between time and space.
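For concreteness, here is one of the specialized layouts named above, sketched under its usual definition: the Morton (Z-order) layout interleaves the bits of the row and column indices, so every 2^k x 2^k sub-block of a matrix is stored contiguously at every level at once. This illustrates what the specialized layouts achieve; it is not our general placement method.

    #include <cstdint>
    #include <cstdio>

    // Interleave the low 16 bits of the row and column indices (bit by bit).
    static uint32_t mortonIndex(uint32_t row, uint32_t col) {
        uint32_t z = 0;
        for (int b = 0; b < 16; ++b) {
            z |= ((row >> b) & 1u) << (2 * b + 1);
            z |= ((col >> b) & 1u) << (2 * b);
        }
        return z;
    }

    int main() {
        // Every 2x2 block maps to four consecutive indices, every 4x4 block to
        // sixteen consecutive indices, and so on: recursive blocking for free.
        for (uint32_t r = 0; r < 4; ++r) {
            for (uint32_t c = 0; c < 4; ++c)
                std::printf("%4u", (unsigned)mortonIndex(r, c));
            std::printf("\n");
        }
        return 0;
    }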

 

Programming for the complex memory hierarchy has become increasingly untenable. Automatic data placement in general is NP-hard and poorly approximable. Still, past solutions have successfully handled important sub-classes of the problem, such as the dependence theory for regular loop nests. Our research has moved to the scope of the whole program. The distance-based tools, methods, and models have put whole-program locality analysis on a scientific foundation.


Compiler-assisted data adaptation

Dynamic programs have extensible data structures whose content and access pattern remain unknown until run time and may change during execution. Examples include physics simulation, sparse matrix calculation, and search trees and hash tables. For these programs, we are developing a system that can manage data at fine granularity and at run time.

Selective program monitoring (MSP'02)

A compiler selects important data and marks their accesses, and then a monitor traces references to the selected data at run time. The system reduces its cost by limiting the coverage of monitoring and by using compiler knowledge to reduce the workload of the monitor. We have developed a tool that can analyze type-safe C programs. Preliminary data show that selective monitoring can reduce the monitoring cost by three orders of magnitude; in a pointer-intensive program, monitoring a data field adds no more than 9% overhead in our current system. Our earlier work used selective analysis for array data in Fortran-style programs.
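A sketch of the run-time side of selective monitoring is shown below, written in C++ for brevity (the actual tool targets type-safe C): only accesses that the compiler has marked go through a recording wrapper, and all other accesses run untouched. The wrapper, trace format, and the choice of monitored field are illustrative assumptions.

    #include <cstdio>
    #include <vector>

    struct Event { const void* addr; unsigned long time; };
    static std::vector<Event> trace;
    static unsigned long logicalTime = 0;

    // The compiler would rewrite accesses to the selected field, e.g. p->weight,
    // into monitored(p->weight); unselected fields are accessed directly.
    template <typename T>
    T& monitored(T& ref) {
        trace.push_back({&ref, logicalTime++});   // record address and logical time
        return ref;
    }

    struct Node { double weight; Node* next; };

    int main() {
        Node a{1.0, nullptr}, b{2.0, &a};
        // Selected field: weight. Unselected field: next (no monitoring cost).
        double sum = 0;
        for (Node* p = &b; p != nullptr; p = p->next)
            sum += monitored(p->weight);
        std::printf("sum = %.1f, recorded %zu accesses\n", sum, trace.size());
        return 0;
    }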

Dynamic data packing (PLDI'99)

We have studied dynamic array reorganization in Fortran-style programs. A compiler analyzes array indirection, selects candidate arrays for monitoring and reorganization, and inserts calls to a run-time library so that these accesses are analyzed during execution. The system uses a run-time map from each datum's initial address to its current location. Accesses to transformed data are remapped at run time, so correctness is guaranteed regardless of the type and frequency of data transformation. The system reduces the cost of run-time remapping by reusing the map for multiple data and by updating data references.
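The sketch below illustrates the run-time indirection with a simplified, index-based map (an assumption for readability; the system maps addresses): after a repacking, every indirect access is translated through the map, so it remains correct no matter how often the data are reorganized.

    #include <cstdio>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<double> data = {10, 11, 12, 13, 14};  // original layout
        std::vector<size_t> loc(data.size());             // original index -> current index
        std::iota(loc.begin(), loc.end(), 0);

        // Repack: move elements into the order in which they are first accessed
        // (a fixed example order here), and record the new locations in the map.
        std::vector<size_t> accessOrder = {3, 0, 4, 1, 2};
        std::vector<double> packed(data.size());
        for (size_t newIdx = 0; newIdx < accessOrder.size(); ++newIdx) {
            size_t origIdx = accessOrder[newIdx];
            packed[newIdx] = data[origIdx];
            loc[origIdx] = newIdx;
        }

        // Every later indirect access goes through the map and stays correct.
        size_t origIdx = 3;
        std::printf("element originally at %zu is now packed[%zu] = %.0f\n",
                    origIdx, loc[origIdx], packed[loc[origIdx]]);
        return 0;
    }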


Compiler enhancement of global cache reuse (JPDC'04, IPDPS'01, IPDPS'00, LCPC'99, Ding Dissertation)

We are developing a two-step strategy. The first step fuses computation on the same data so that once the data are loaded into cache, we finish their use as much as possible before they are evicted. Since the first step may bring together a large amount of related computation, it may cause poor cache spatial locality due to a large data working set. The second step remedies this problem by grouping data used by the same computation so that related data are located in nearby memory cells. We have developed a set of compiler techniques that can reorder the whole program and reorganize the entire data layout. A paper describing these techniques won the best paper award at IPDPS'01.
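A toy example of the first step, computation fusion, is shown below: two loops that traverse the same array are merged so that each element serves both computations while it is still in cache. The real transformation is performed by the compiler across much larger program regions; the arrays and loop bodies here are illustrative.

    #include <cstdio>
    #include <vector>

    int main() {
        const size_t n = 1 << 20;
        std::vector<double> a(n, 1.0), b(n), c(n);

        // Before fusion: two separate traversals of a[]; by the time the second
        // loop runs, the front of a[] has long been evicted from cache.
        //   for (i) b[i] = a[i] * 2;
        //   for (i) c[i] = a[i] + 1;

        // After fusion: one traversal, so each element of a[] is loaded once
        // and used by both computations before eviction.
        for (size_t i = 0; i < n; ++i) {
            b[i] = a[i] * 2;
            c[i] = a[i] + 1;
        }
        std::printf("b[0]=%.0f c[0]=%.0f\n", b[0], c[0]);
        return 0;
    }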

 

A technique similar to the reuse-based fusion was implemented by the Intel Itanium compiler group and helped obtain a 12% performance improvement for the SPECfp2K benchmarks on the Intel Itanium 2 (see Ng et al., PACT'03).


The limit of program locality (SC'04)

An effective strategy for improving locality is to group computations on the same data so that once the data are loaded into cache, the program performs all their operations before the data are evicted. However, computation regrouping is difficult to automate for programs with complex data and control structures. We have studied the potential of locality improvement through trace-driven computation regrouping. First, the study shows that maximizing locality is different from maximizing parallelism or maximizing cache utilization; the problem is NP-hard even without considering data dependences and cache organization. We have developed a tool that performs constrained computation regrouping on program traces. The new tool is unique because it measures the exact control dependences and applies complete memory renaming and re-allocation.


Adaptive data partitioning for sorting using probability distribution (ICPP'04)

Many computing problems benefit from dynamic partitioning of data into smaller chunks with better parallelism and locality. However, it is difficult to partition all types of inputs with the same high efficiency. This paper presents a new partition method for sorting based on the probability distribution of the keys, an idea first studied by Janus and Lamagna in the early 1980s on a mainframe computer. The new technique makes three improvements. The first is a rigorous sampling technique that ensures an accurate estimate of the probability distribution. The second is an efficient implementation on modern, cache-based machines. The last is the use of the probability distribution in parallel sorting. Experiments show 10-30% improvement in partition balance and 20-70% reduction in partition overhead, compared to two commonly used techniques. The new method reduces the parallel sorting time by 33-50% and outperforms the previous fastest sequential sorting technique by up to 30%.
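The sketch below shows the basic idea with a simple sample-and-splitter scheme, which approximates, but is not identical to, the distribution estimation in the paper: a random sample of the keys estimates the cumulative distribution, bucket boundaries are placed at evenly spaced sample quantiles, and even a skewed input then partitions into nearly equal buckets. The sampling rate, bucket count, and key distribution are arbitrary choices for illustration.

    #include <algorithm>
    #include <cstdio>
    #include <iterator>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 gen(42);
        std::lognormal_distribution<double> skewed(0.0, 1.0);  // deliberately skewed keys
        std::vector<double> keys(1 << 20);
        for (double& k : keys) k = skewed(gen);

        // 1. Sample the keys and sort the sample to estimate the distribution.
        std::vector<double> sample;
        std::sample(keys.begin(), keys.end(), std::back_inserter(sample), 4096, gen);
        std::sort(sample.begin(), sample.end());

        // 2. Pick bucket boundaries at evenly spaced quantiles of the sample.
        const int buckets = 8;
        std::vector<double> splitters;
        for (int b = 1; b < buckets; ++b)
            splitters.push_back(sample[b * sample.size() / buckets]);

        // 3. Partition: each key goes to the bucket covering its quantile range.
        std::vector<size_t> counts(buckets, 0);
        for (double k : keys) {
            int b = std::upper_bound(splitters.begin(), splitters.end(), k) - splitters.begin();
            ++counts[b];
        }
        for (int b = 0; b < buckets; ++b)
            std::printf("bucket %d: %zu keys\n", b, counts[b]);  // roughly 1/8 each
        return 0;
    }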


Program balancing for saving power (PACT'04, LCPC'01)

A computer processor consists of a number of functional units, each carrying out a separate set of instructions.  Different programs or different parts of the same program may have a varied number of instructions for each functional unit to execute.  Recent studies have exploited this imbalance to reduce program power consumption.  The idea is to slow the functional units that have a lower load so that the machine saves energy by eliminating idle cycles.  Another possibility, however, is to alter the instruction balance and consequently adjust the program demand for functional units.  In this project, we explore further energy reduction by reorganizing both the program and the machine.  We have proved that a program with a constant instruction balance always consumes less power than the same program with an uneven instruction balance.  Furthermore, we have shown that energy reduction requires a stronger optimization than traditional compiler techniques that are aimed at improving program performance.

 

We then evaluate the benefits of fusion empirically on synthetic and real-world benchmarks, using our existing loop-fusing compiler and a heavily modified version of the SimpleScalar/Wattch simulator. For the real-world benchmarks, we demonstrate energy savings ranging from 7% to 40%, with run-time changes ranging from a 1% slowdown to a 17% speedup. In addition to validating our theoretical model, the simulation results allow us to "tease apart" the factors that contribute to fusion-induced time and energy savings.


Performance tuning and prediction for memory hierarchy (PDCS'99)

As the speed gap widens between CPU and memory, memory hierarchy performance has become the bottleneck for most applications. This is due in part to the difficulty of fully utilizing the deep and complex memory hierarchies found on most modern machines. In the past, various tools for performance tuning and prediction have been developed to improve machine utilization. However, these tools are not adequate because they either do not consider the memory hierarchy or do so with expensive and machine-specific program simulations. In an earlier study, we demonstrated that application performance is now primarily limited by the bandwidth of the different levels of the memory hierarchy, especially memory bandwidth. With this observation, we are developing a new approach based on estimating and monitoring the bandwidth consumption of programs. For program tuning, we identify program segments where machine bandwidth is under-utilized. When machine bandwidth is well utilized, we predict program running time by dividing the total amount of data transferred by the data bandwidth of the machine. The PDCS'99 paper gives a preliminary evaluation of bandwidth-based performance tuning and prediction.
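A minimal sketch of the prediction step, with made-up traffic and bandwidth numbers: when a program segment is bandwidth bound, its running time is estimated as the total data transferred divided by the machine's sustainable memory bandwidth.

    #include <cstdio>

    int main() {
        // Hypothetical numbers: 24 GB of total memory traffic estimated for a
        // program segment, and 2 GB/s of sustainable memory bandwidth measured
        // on the target machine.
        const double bytesTransferred = 24.0e9;
        const double memoryBandwidth  = 2.0e9;   // bytes per second

        // When the segment is bandwidth bound, time ~ traffic / bandwidth.
        std::printf("predicted running time: %.1f seconds\n",
                    bytesTransferred / memoryBandwidth);
        return 0;
    }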


cding@cs.rochester.edu
Last modified: Thu Dec 22 13:22:15 EST 2005