CDP 2014 Program

10am-11:30am, Paper Session 1

Chair: Tarek Abdelrahman


Modern analytics with the IBM Dash Compiler (slides)

Ettore Tiotto, Bob Blainey, John Keenleyside - Hardware Acceleration Lab, IBM Software Group 
J. Nelson Amaral - University of Alberta, IBM Center for Advanced Studies
Taylor Lloyd - Hardware Acceleration Lab, IBM Software Group and University of Alberta 
 
Programming languages are important tools for programmers to encode solutions to computing problems efficiently. The role of the compiler is to translate the intention of programmers into efficient sequences of instructions to be executed by a machine. Tremendous strides have been made in the expressiveness of programming languages and in the resourcefulness of compilers in finding good encodings. On the other hand, the propensity of language designers and programmers to create new abstractions has generally outstripped the ability of compilers to analyze and transform code well enough to overcome the cost of such abstractions. Furthermore, the pressing need to exploit parallelism has put pressure on algorithm, programming-language and compiler designers to revisit the ways in which the intention of programmers can be translated effectively into efficient code.
 
The IBM Dash programming language aims to take up the mantle of creating a mathematical notation that can be used to effectively express modern analytic algorithms and can be compiled for high performance using many forms of parallel execution. The design of the IBM Dash compiler prototype draws lessons from the experience of both functional and mathematical languages but also recognizes the practical need to closely integrate with mainstream programming environments such as C/C++ and to minimize the semantic and syntactic gaps for programmers trained in these environments.
 
Dash is being used as the source language for a capstone compiler design class at the University of Alberta, with the goal of acquainting the students with the issues involved in the design of a contemporary language. The students will, in turn, contribute to the analysis of the language specification by searching for ways to improve it.
 

Performance Effects of Lock Fallback in Best Effort HTM Systems (slides)

Matthew Gaudet, J. Nelson Amaral, Guido Araujo
 
In Best-effort Hardware Transactional Memory (BE-HTM) systems a fallback path must be provided for transactions that cannot complete in hardware, typically through a global lock.  Using IBM's Blue Gene/Q (BG/Q) implementation of BE-HTM, we show that tuning lock fallback policies has a dramatic effect on application performance. Furthermore, we use the configurability of the BG/Q BE-HTM system to show that fallback-policy tuning can be sensitive to even small perturbations in the characteristics of the underlying TM system, meaning that policy tuning will not be portable to future TM systems.  This presentation summarizes a very extensive experimental evaluation of the effects of tuning fallback policies in both the short-running and long-running modes on a compute node of the IBM BG/Q supercomputer.
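
As background for the policies being tuned, here is a minimal C++ sketch of the retry-then-lock fallback pattern in a BE-HTM system. The tx_begin/tx_end names are hypothetical stand-ins for BG/Q's hardware TM interface; the dummies below always abort, so the sketch compiles and exercises the lock path.

    #include <atomic>

    // Hypothetical stand-ins for the hardware-TM primitives (the real
    // BG/Q interface differs); these dummies always abort.
    static bool tx_begin() { return false; }  // real HW: start tx, report abort
    static void tx_end()   {}                 // real HW: commit

    static std::atomic<bool> fallback_lock{false};  // the global fallback lock

    // One tunable policy: retry up to max_retries times in hardware,
    // then serialize on the global lock.
    template <typename Fn>
    void run_transaction(Fn&& body, int max_retries) {
        for (int attempt = 0; attempt < max_retries; ++attempt) {
            if (tx_begin()) {
                // "Lock subscription": reading the lock inside the
                // transaction makes a concurrent lock holder abort us.
                if (fallback_lock.load()) { tx_end(); continue; }
                body();
                tx_end();
                return;                       // committed in hardware
            }
        }
        bool expected = false;                // spin-acquire the global lock
        while (!fallback_lock.compare_exchange_weak(expected, true))
            expected = false;
        body();
        fallback_lock.store(false);
    }

How quickly a transaction gives up and takes the lock (max_retries above) is exactly the kind of policy knob whose tuning the talk evaluates.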
 
 

Profiling through Runtime Instrumentation Facility in IBM Hardware (slides)
 
Irwin D'Souza, Iris Baron, Younes Manton, Marius Pirvu, Joran Siu and Julian Wang, IBM Toronto Software Lab
 
Profile-Guided Optimization (PGO) is a well-known technique that is often used to boost the performance of applications. It is especially powerful in the context of a Just-In-Time (JIT) compiler, which can make optimization decisions at runtime based on observed program behavior. The main drawback is that profiling introduces non-negligible runtime overhead which, at least temporarily, affects the performance of the running application. To overcome this problem, IBM introduced the Runtime Instrumentation Facility, which offers cheap hardware profiling support.
 
This presentation will describe the implementation of the Runtime Instrumentation Facility in the IBM zEnterprise EC12 and IBM POWER8 processors. We will share our initial experience with this feature, showcasing one of the many ways to take advantage of it and detailing various sources of overhead as well as other hurdles we encountered along the way. We hope that this will open new avenues of research in the field of PGO.
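
For context on the overhead being avoided, here is a hypothetical C++ sketch (not the zEC12/POWER8 interface) of the software instrumentation a JIT might otherwise plant: a per-method counter that triggers recompilation once the method proves hot. The names and threshold are illustrative only.

    #include <atomic>
    #include <cstdio>

    // Hypothetical per-method software profiling of the kind a JIT plants
    // when no hardware support exists.
    struct MethodProfile {
        std::atomic<long> invocations{0};
        int opt_level = 0;
    };

    constexpr long kHotThreshold = 10000;   // assumed recompilation trigger

    void on_method_entry(MethodProfile& p) {
        // This increment runs on every call: the profiling overhead that
        // the Runtime Instrumentation Facility moves into hardware.
        if (p.invocations.fetch_add(1) + 1 == kHotThreshold) {
            p.opt_level = 2;   // stand-in for queuing a hot recompilation
            std::printf("recompiling at opt level %d\n", p.opt_level);
        }
    }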
 

11:45am - 1pm, Paper Session 2

Chair: Roch Archambault


PORPLE: An Extensible Optimizer for Portable Data Placement on GPU (slides)

Guoyang Chen, North Carolina State University
 
GPUs have complex memory systems. Where data is placed is important
for the performance of a GPU program. However, the decision is
difficult for a programmer to make because of architecture complexity
and the sensitivity of suitable data placements to input and
architecture changes.
 
This paper presents PORPLE, a portable data placement engine that
enables a new way to solve the data placement problem. PORPLE consists
of a mini specification language, a source-to-source compiler, and a
runtime data placer. The language allows an easy description of a
memory system; the compiler transforms a GPU program into a form
amenable to runtime profiling and data placement; the placer, based on
the memory description and data access patterns, computes on the fly
the appropriate placement schemes for the data and places them
accordingly. PORPLE is distinctive in being adaptive to program inputs
and architecture changes, being transparent to programmers (in most
cases), and being extensible to new memory architectures. Our
experiments on three types of GPU systems show that PORPLE is able to
consistently find optimal or near-optimal placement despite the large
differences among GPU architectures and program inputs, yielding up to 2.08X (1.59X on average)
speedups on a set of regular and irregular GPU benchmarks.
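
To make the runtime placer's role concrete, here is a toy C++ sketch of placement selection. The memory-space fields and the cost model are assumptions for illustration, not PORPLE's actual specification language or placement algorithm.

    #include <iostream>
    #include <string>
    #include <vector>

    // Toy description of the memory spaces (the role of PORPLE's mini
    // specification language) and of a profiled array's access pattern.
    struct MemorySpace {
        std::string name;
        double latency;      // assumed cycles per access
        double capacity;     // bytes available
        bool cached;
    };

    struct ArrayProfile {
        std::string name;
        double footprint;    // bytes
        double reuse;        // average accesses per element
    };

    std::string place(const ArrayProfile& a,
                      const std::vector<MemorySpace>& spaces) {
        const MemorySpace* best = nullptr;
        double best_cost = 1e300;
        for (const auto& s : spaces) {
            if (a.footprint > s.capacity) continue;   // does not fit
            double cost = (s.cached && a.reuse > 1) ? s.latency / a.reuse
                                                    : s.latency;
            if (cost < best_cost) { best_cost = cost; best = &s; }
        }
        return best ? best->name : "global";          // global as default
    }

    int main() {
        std::vector<MemorySpace> gpu = {
            {"shared",   30, 48 * 1024, false},
            {"constant", 40, 64 * 1024, true},
            {"global",  400, 4e9,       true},
        };
        // Prints "constant": high reuse favors the cached constant memory.
        std::cout << place({"coeffs", 16 * 1024, 32.0}, gpu) << "\n";
    }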

 

OpenPOWER Linux ABI changes to improve performance (slides)
 
Ian McIntosh, IBM Toronto Software Lab

Many of the differences between the OpenPOWER Linux ABI and the earlier AIX and Linux on PowerPC ABIs are designed to improve the performance of calls, parameter passing, result returning and static memory access, and to reduce memory usage.  Some of the approaches are novel.

 

Mincer: a distributed automated problem determination tool (slides)
 
Andrew Craik, Patrick Doyle, Christopher Black - IBM Canada
 
Developers of dynamic optimizing compilers face unique debugging challenges.  Traditionally, compiler developers have exploited the optional nature of each optimization to narrow down the scope of the bug search by running experiments with various optimizations disabled.  However, in a dynamic compilation environment, the complex dynamic control heuristics guiding the optimization process can cause the failures to occur only intermittently, making it difficult to tell whether a particular test run passed because the faulty optimization was disabled, or merely because the run-time conditions were different.  Many test runs are required to gain confidence that a failure never occurs for a particular experiment.  A natural way to accelerate this debugging process is to run multiple experiments in parallel, but naive parallel experimentation can waste significant machine resources on redundant experiments: if 100 identical copies of an experiment are run in parallel, and one of them fails, the other 99 provide no additional information, and those machines could have been used more profitably by running other experiments.
 
In this talk we present Mincer: a distributed automated problem determination tool that uses Bayesian inference and Shannon Information Theory to coordinate experimental parameter selection across a machine farm to maximize the information generated per experiment, greatly reducing the time and resources consumed to isolate a particular compiler bug.
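
A minimal sketch of the selection criterion (my paraphrase of the idea, not Mincer's code): given a posterior over which optimization is faulty, the most informative next experiment is the one whose pass/fail outcome has maximum entropy.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Binary entropy: the expected information (in bits) of an outcome
    // that fails with probability p.
    double entropy(double p) {
        if (p <= 0.0 || p >= 1.0) return 0.0;
        return -p * std::log2(p) - (1 - p) * std::log2(1 - p);
    }

    // posterior[i]: probability that optimization i is the faulty one.
    // An experiment is a bitmask of enabled optimizations; it can fail
    // only if the faulty optimization is enabled (intermittency ignored
    // here for brevity).
    int best_experiment(const std::vector<double>& posterior,
                        const std::vector<unsigned>& candidates) {
        int best = 0;
        double best_gain = -1.0;
        for (size_t c = 0; c < candidates.size(); ++c) {
            double p_fail = 0.0;
            for (size_t i = 0; i < posterior.size(); ++i)
                if (candidates[c] & (1u << i)) p_fail += posterior[i];
            double gain = entropy(p_fail);  // expected information of outcome
            if (gain > best_gain) { best_gain = gain; best = (int)c; }
        }
        return best;
    }

    int main() {
        std::vector<double> posterior = {0.25, 0.25, 0.25, 0.25};  // 4 suspects
        std::vector<unsigned> candidates = {0b0001, 0b0011, 0b0111};
        // Picks 0b0011: it covers half the probability mass, so its
        // outcome yields one full bit of information.
        std::printf("run experiment %d\n", best_experiment(posterior, candidates));
    }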
 

High-Level Abstraction, Safety and Code Generation in Coconut (slides)
 
Christopher Kumar Anand, Jessica L.M. Pavlin, Maryam Moghadas, Yuriy Toporovskyy, Michal Dobrogost, and Wolfram Kahl, McMaster University
 
The Coconut (COde CONstructing User Tool) Project aims to provide each domain expert collaborating on high-performance, safety-critical scientific software their own interface allowing for the separate specification of all ingredients from mathematical models down to efficient instruction scheduling, with all transformations providing proofs of correctness and safety.  Initial work---presented at previous CDPWs---focussed on instruction selection and scheduling, introducing a declarative assembly language.  This talk will provide (1) a description of the highest level of abstraction, the Coconut Expression Library (CEL), in which domain experts can transparently specify mathematical models and regularizers, independent of implementation considerations, and of subsequent optimized code generation---including algebraic simplifications, symbolic differentiation, common subexpression elimination (CSE), and parallelization---via term graph transformation rules; (2) a sketch of our abstraction for parallel programs which affords linear-time safety proofs; and (3) a prototype type system for CEL which would allow the compiler to flag mathematical modelling errors, eliminating whole classes of errors from scientific programs.
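
To illustrate the style of term-graph transformation involved (a toy example of mine, not Coconut's CEL), the sketch below treats symbolic differentiation as a structural rewrite over a small expression graph; algebraic simplification and CSE would be further rewrite rules over the same representation.

    #include <iostream>
    #include <memory>

    struct Expr {
        enum Kind { Const, Var, Add, Mul } kind;
        double value;                      // used by Const
        std::shared_ptr<Expr> lhs, rhs;    // used by Add/Mul
    };
    using E = std::shared_ptr<Expr>;

    E cst(double v)  { return std::make_shared<Expr>(Expr{Expr::Const, v, nullptr, nullptr}); }
    E var()          { return std::make_shared<Expr>(Expr{Expr::Var, 0, nullptr, nullptr}); }
    E add(E a, E b)  { return std::make_shared<Expr>(Expr{Expr::Add, 0, a, b}); }
    E mul(E a, E b)  { return std::make_shared<Expr>(Expr{Expr::Mul, 0, a, b}); }

    // d/dx as a rewrite: constant, variable, sum and product rules.
    E deriv(const E& e) {
        switch (e->kind) {
            case Expr::Const: return cst(0);
            case Expr::Var:   return cst(1);
            case Expr::Add:   return add(deriv(e->lhs), deriv(e->rhs));
            case Expr::Mul:   return add(mul(deriv(e->lhs), e->rhs),
                                         mul(e->lhs, deriv(e->rhs)));
        }
        return nullptr;
    }

    double eval(const E& e, double x) {
        switch (e->kind) {
            case Expr::Const: return e->value;
            case Expr::Var:   return x;
            case Expr::Add:   return eval(e->lhs, x) + eval(e->rhs, x);
            case Expr::Mul:   return eval(e->lhs, x) * eval(e->rhs, x);
        }
        return 0;
    }

    int main() {
        E f = mul(var(), var());                   // f(x) = x*x
        std::cout << eval(deriv(f), 3.0) << "\n";  // f'(3) = 6
    }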
 

2pm-2:30pm, J. Gregory Steffan Memorial Session

Chair: Chen Ding


Professor Steffan, formerly of the University of Toronto, passed away unexpectedly on July 24, 2014.  He was a long-time attendee of and contributor to the workshop.  The memorial session celebrates his life and the collective memory of him as a brilliant and inspiring researcher, teacher and friend.


2:45pm-3:45pm, Paper Session 3

Chair: Christopher Kumar Anand


Reducing Memory Buffering Overhead in Software Thread-Level Speculation (slides)

Zhen Cao, McGill University; Clark Verbrugge, McGill University
 
Software-based, automatic parallelization through Thread-Level Speculation (TLS) has significant practical potential, but also high overhead costs. "Lazy" buffering mechanisms that enable strong isolation of speculative threads imply large memory overheads, while "eager" mechanisms improve scalability but are more sensitive to data dependencies and have higher rollback costs. We propose three main techniques to address this problem: (1) we reduce overhead on the critical path of lazy buffering by parallelizing the validation and commit process itself; (2) we apply a low-overhead, shared address-owner buffering design to enable arbitrarily large-granularity speculation, and so fully leverage the higher scalability of eager version-management buffering; (3) we describe a runtime mechanism to automatically identify read-only and independent variables and apply the most appropriate buffering/checking optimization. We implement our approaches in MUTLS, a software-TLS system within the popular LLVM compiler framework. Results show we can reach 54%-100% of the speed of fully unbuffered approaches, and we also observe significant validation/commit-time reduction and performance boosts in memory-intensive benchmarks, while preserving speedups in compute-intensive ones. Application of these optimizations is thus a useful part of the optimization stack needed for effective and practical software TLS.
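
For readers unfamiliar with the buffering trade-off, here is a generic C++ sketch of a "lazy" speculative buffer (an illustration of the concept, not MUTLS's implementation); technique (1) above amounts to parallelizing the validate-and-commit loop below.

    #include <unordered_map>
    #include <utility>
    #include <vector>

    // A speculative thread writes to a private buffer and logs the values
    // it reads; at join time the reads are validated against main memory
    // before the buffered writes are published.
    struct SpeculativeBuffer {
        std::unordered_map<int*, int> writes;        // addr -> buffered value
        std::vector<std::pair<int*, int>> reads;     // addr, value observed

        void store(int* addr, int v) { writes[addr] = v; }

        int load(int* addr) {
            auto it = writes.find(addr);             // read-own-write first
            if (it != writes.end()) return it->second;
            int v = *addr;
            reads.emplace_back(addr, v);             // log for validation
            return v;
        }

        // Returns false (rollback: discard the buffer) if any logged read
        // has since changed -- a dependence on a less-speculative thread.
        bool validate_and_commit() {
            for (auto& r : reads)
                if (*r.first != r.second) return false;
            for (auto& w : writes) *w.first = w.second;  // publish writes
            return true;
        }
    };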
 

Type Specialization in Dynamic Weakly Typed Languages (slides: pdf, ppt)
 
Dr David Siegwart, Testarossa Compiler Development, IBM Hursley UK
Presented by Ian Gartley, IBM Canada
 
Interpreted dynamic weakly typed languages such as PHP, Ruby and Python are often accelerated by compiling them dynamically into bytecode, thus allowing a JIT compiler to compile the bytecode into native code.
 
One drawback of this approach is that the JIT compiler does not understand the source language's dynamic types, and hence many optimizations, such as loop-invariance and induction-variable analysis and arithmetic optimizations, are blocked. But in many cases such variables are not actually dynamic, and their type can be determined.
 
This paper discusses how type specialization may be carried out prior to bytecode compilation, so that the dynamic type of a variable may in some cases be specialized to, say, an int or a double, allowing the generated bytecode to be consumable by the JIT compiler.
 
On a PHP engine (IBM's P8) running on IBM's J9 virtual machine with the Testarossa JIT compiler, we were able to show approximately a factor-of-ten speedup on a simple but uneliminable loop whose induction variable could then be deduced by the JIT compiler.
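
A before/after flavor of the transformation, written in C++ for illustration (the representation is mine, not the P8 engine's): once analysis proves the accumulator is always an int, the tag dispatch disappears and the loop becomes amenable to induction-variable analysis.

    #include <cstdio>

    struct DynValue {                    // generic weakly typed value
        enum Tag { Int, Double } tag;
        union { long i; double d; };
    };

    long sum_dynamic(int n) {            // before: every iteration checks the tag
        DynValue s;
        s.tag = DynValue::Int;
        s.i = 0;
        for (int k = 0; k < n; ++k) {
            if (s.tag == DynValue::Int) s.i += k;
            else                        s.d += k;   // dynamically never taken
        }
        return s.i;
    }

    long sum_specialized(int n) {        // after: type proven to be int, so
        long s = 0;                      // the tag checks vanish and the JIT
        for (int k = 0; k < n; ++k)      // can analyze the induction variable
            s += k;
        return s;
    }

    int main() {
        std::printf("%ld %ld\n", sum_dynamic(1000), sum_specialized(1000));
    }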
 

4pm-5pm, Paper Session 4

Chair: Clark Verbrugge


VeloCty: An optimizing static compiler for Matlab and Python (slides)
 
Sameer Jagdale, McGill University

The rising popularity of multi-core systems has renewed interest in the development of parallel algorithms. Research is also being carried out on compiler tools to port existing systems to parallel architectures. Moreover, high-level scientific languages such as Matlab and Python (with its NumPy library) are gaining popularity among scientists and mathematicians. These languages provide features such as dynamic typing and functions like eval for runtime code evaluation, which allow easy prototyping. However, these same features inhibit the performance of the code.
 
We present VeloCty, an optimizing static compiler for Matlab and Python, as a solution to the problem of enhancing the performance of programs written in these languages. In most programs, a large portion of the time is spent executing a small part of the code. Moreover, these sections can often be compiled ahead of time, and improved performance can be achieved by optimizing only these 'hot' sections of the code. VeloCty takes as input functions written in Matlab and Python that the user identifies as computationally intensive and generates an equivalent C++ version. VeloCty also generates glue code to interface with Matlab and Python. The generated code can then be compiled and packaged as a shared library that can be linked to any program written in Matlab or Python. VeloCty also supports parallelism through OpenMP.
 
VeloCty uses the Velociraptor toolkit. It consists of a C++ backend for the Velociraptor intermediate representation, VRIR, and language-specific runtimes for Matlab and Python. We have also implemented language-specific frontends for Matlab and Python which compile to VRIR. The Matlab frontend is implemented using the McLab framework. VeloCty was evaluated using 16 Matlab benchmarks. The benchmark versions using the C++ library were between 1.3 and 400 times faster than MathWorks' Matlab 2013a. Experiments for the Python benchmarks were in progress at the time of writing of this abstract.
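
As a flavor of the approach (a hypothetical example, not VeloCty's actual output), a hot elementwise function lowered to C++ might look like the following, with OpenMP supplying the parallelism; the generated glue code would marshal Matlab or NumPy arrays into the C++ vectors and back.

    #include <cstddef>
    #include <vector>

    // Hypothetical shape of generated code for a hot elementwise
    // function: out = a .* x + y
    void generated_saxpy(double a, const std::vector<double>& x,
                         const std::vector<double>& y,
                         std::vector<double>& out) {
        out.resize(x.size());
        #pragma omp parallel for
        for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)x.size(); ++i)
            out[i] = a * x[i] + y[i];   // elementwise kernel, parallelized
    }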
 

McNumJS: Enabling Fast Numeric Computations for JavaScript (slides)
 
Sujay Kathrotia, School of Computer Science, McGill University
 
Although initially developed as a browser-agnostic scripting language, JavaScript has evolved beyond the desktop to areas such as mobile and server-side web applications. Due to its portability and the availability of sophisticated virtual machines and JIT compilers, JavaScript has grown from simple scripts to more complex applications like 3D games, image editing, signal processing and data visualization. Various JavaScript features such as typed arrays and web workers, and technologies like asm.js, WebGL and WebCL, have been developed to improve the performance of JavaScript for these compute-intensive applications.
 
However, for developers and scientists it is non-trivial to work with these technologies, so some stick to classical programming languages, which provide good performance and easy-to-use APIs. We aim to solve this problem by creating a library that uses these technologies and exposes a familiar numerical API, like NumPy's, to developers and compiler writers, who can use it to develop and distribute numerical applications, either by writing JavaScript applications directly or by using JavaScript as a target in compilers for languages such as MATLAB or R.
 
Our library uses typed arrays instead of regular arrays to represent matrices and arrays. However, typed arrays only provide one-dimensional indexing, so we augmented the typed-array constructor with properties like shape and stride to provide multi-dimensional indexing. Furthermore, our library makes use of the asm.js type system to improve performance. This can serve as a guide on coercing type information onto variables for compiler writers targeting JavaScript and for JavaScript developers writing compute-intensive applications. We ran some micro-benchmarks using our library and compared it with NumericJS, a popular JavaScript library for numerical computation. Our results show that McNumJS is between 2 and 11 times faster than NumericJS.
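
The shape/stride scheme is easy to see in miniature. This C++ analogue (McNumJS itself augments JavaScript typed arrays) shows how a flat buffer plus per-dimension strides yields constant-time multi-dimensional indexing.

    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>

    // A flat one-dimensional buffer plus row-major strides, analogous to
    // a typed array augmented with shape and stride properties.
    struct NdArray {
        std::vector<double> data;            // flat storage, like a typed array
        std::vector<std::size_t> shape, stride;

        explicit NdArray(std::vector<std::size_t> s) : shape(std::move(s)) {
            stride.assign(shape.size(), 1);
            for (int d = (int)shape.size() - 2; d >= 0; --d)
                stride[d] = stride[d + 1] * shape[d + 1];   // row-major
            std::size_t n = 1;
            for (std::size_t dim : shape) n *= dim;
            data.assign(n, 0.0);
        }
        double& at(std::size_t i, std::size_t j) {          // 2-D indexing
            return data[i * stride[0] + j * stride[1]];
        }
    };

    int main() {
        NdArray m({3, 4});                        // 3x4 matrix, 12 flat elements
        m.at(2, 1) = 5.0;
        std::cout << m.data[2 * 4 + 1] << "\n";   // 5: same element, flat index
    }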
 

Implementing Intel SSE-compatible functions on PowerPC to simplify application porting (slides)

Ian McIntosh, IBM Toronto Software Lab

Some Intel SSE/AVX-compatible functions were implemented on PowerPC to allow easier porting and more SIMD parallelism in ported programs.  Trying to maximize their performance led to finding missed compiler-optimization opportunities, and also to some changes in programming techniques.  Another result was discovering which aspects of Intel's little-endian SSE/AVX SIMD are hard to emulate efficiently on PowerPC's VMX (AltiVec)/VSX SIMD, which was designed for big-endian operation.
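
A sketch of what such a compatibility layer can look like (my illustration, not IBM's actual header): lane-insensitive intrinsics map onto single VMX builtins, while the hard cases the abstract alludes to are lane-sensitive operations such as shuffles and scalar-in-vector moves, where SSE's little-endian element numbering must be remapped onto big-endian VMX/VSX lanes.

    #include <altivec.h>

    // SSE's __m128 maps onto VMX's vector float; lane-insensitive
    // intrinsics become one-line wrappers over VMX builtins.
    typedef __vector float __m128;

    static inline __m128 _mm_add_ps(__m128 a, __m128 b) {
        return vec_add(a, b);     // element order is irrelevant for add
    }

    static inline __m128 _mm_max_ps(__m128 a, __m128 b) {
        return vec_max(a, b);     // likewise lane-insensitive
    }

    static inline __m128 _mm_set1_ps(float f) {
        return vec_splats(f);     // broadcast one scalar to all four lanes
    }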

 