Computer Science @ Rochester
Thursday, April 28, 2011
1:00 AM
Computer Science Bldg. Room 601
Ph.D. Thesis Proposal
Konstantinos Menychtas
University of Rochester
Dynamic Scheduling for Accelerating, Heterogeneous Architectures
From smartphones to HPC clusters, computer systems are embracing heterogeneity by employing accelerators (ACCs): specialized hardware units built for high performance/power ratios on specific tasks. The CPU/GPU conglomerate is a prominent incarnation of such architectures, as the General-purpose Programmable GPU offloads (mostly) data-parallel tasks (graphics, media, etc.) from the CPU. Modern programming models and language extensions abstract away details of the increasingly flexible hardware, allowing the development of applications with acceleration-amenable computational kernels that can run on any type of device (CPU, GPU, or other ACC). Still, the current approach to using heterogeneous resources requires the developer to pre-select the platform on which a kernel will run. This can realistically lead to situations such as the following: on a laptop with a multi-threaded CPU and two GPUs (embedded and discrete), the discrete GPU becomes a point of contention for concurrent kernel execution requests (e.g. from a compositing window manager, a CAD application, an HTML5 renderer, and a media en/decoder), while the CPU threads and the second GPU sit idle, asleep, or turned off.
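To make the pre-selection problem concrete, the sketch below shows the kind of OpenCL host code this implies. The helper name pick_device is hypothetical, but the API calls are standard OpenCL 1.x; the point is that the target device type is fixed by the developer when the code is written, not chosen by the system at run time.

    /* Minimal sketch (hypothetical helper): current OpenCL host code fixes
     * the target device up front, binding every kernel call to that device
     * regardless of contention or idle hardware elsewhere in the system. */
    #include <CL/cl.h>

    cl_device_id pick_device(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        /* Developer-chosen, development-time decision: always the GPU,
         * even if it is busy and the CPU or a second GPU is idle. */
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
        return device;
    }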

We argue that it is essential that the decision of where (which compute platform) and when (dependencies permitting) to schedule acceleration-amenable kernels for execution be left to an informed system scheduler. We suggest the individual kernel call as the granularity for such decisions, for reasons of minimal state-preservation requirements and clarity of resource needs. We expect the development of such scheduling techniques to open up currently unrealized power, performance, and interactivity (quality-of-service) optimizations for systems with heterogeneous processing resources.
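As a rough illustration of what a per-kernel-call decision might look like, the following sketch picks, once a call's dependencies are satisfied, the device with the lowest expected completion time. The names, device set, and cost model are ours for illustration only, not a committed design.

    /* Hypothetical sketch: per kernel call, choose the device with the
     * lowest expected completion time (estimated run time plus the current
     * queueing delay on that device). */
    enum acc_kind { ACC_CPU = 0, ACC_GPU_EMBEDDED = 1,
                    ACC_GPU_DISCRETE = 2, ACC_COUNT = 3 };

    /* est_runtime[d]: predicted run time of this kernel on device d
     * (e.g. looked up in a profiling database);
     * queue_delay[d]: current backlog of work queued on device d. */
    enum acc_kind sched_pick(const double est_runtime[ACC_COUNT],
                             const double queue_delay[ACC_COUNT])
    {
        enum acc_kind best = ACC_CPU;
        double best_cost = est_runtime[ACC_CPU] + queue_delay[ACC_CPU];
        for (int d = ACC_GPU_EMBEDDED; d < ACC_COUNT; d++) {
            double cost = est_runtime[d] + queue_delay[d];
            if (cost < best_cost) {
                best_cost = cost;
                best = (enum acc_kind)d;
            }
        }
        return best;
    }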

As part of our ongoing effort to evaluate this proposal, we benchmark OpenCL applications of common computational kernels on a modern workstation with standard, consumer-grade hardware. We build a database from the profiling information; the performance evaluation already suggests that the optimal target device is not always the best GPU. We stress-test the system to identify limitations of the existing software/hardware interface and build a simulation model of it, simple enough to allow the evaluation of different kernel scheduling policies. Simulations driven by the collected real data suggest that there exist promising, non-trivial scheduling policies that can outperform the prevalent "always schedule on the best GPU" approach.
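The intuition behind such results can be shown with a toy example; the numbers below are made up for illustration and are not measurements from our database. When two kernel calls contend for the discrete GPU, serializing them there can be slower than overlapping them across both GPUs, even though the embedded GPU is slower in isolation.

    /* Toy illustration with made-up timings: two queued kernel calls.
     * "Always schedule on the best GPU" serializes them on the discrete GPU;
     * a contention-aware split overlaps them across both GPUs. */
    #include <stdio.h>

    int main(void)
    {
        double t_discrete = 10.0;   /* ms per call on the discrete GPU */
        double t_embedded = 18.0;   /* ms per call on the embedded GPU */

        /* Policy A: both calls contend for the discrete GPU. */
        double makespan_best_gpu = 2.0 * t_discrete;             /* 20 ms */

        /* Policy B: one call per GPU, running concurrently. */
        double makespan_split =
            (t_discrete > t_embedded) ? t_discrete : t_embedded; /* 18 ms */

        printf("always-best-GPU: %.1f ms, contention-aware: %.1f ms\n",
               makespan_best_gpu, makespan_split);
        return 0;
    }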

We plan to implement our scheduling policies on real systems and to validate them using realistic (user-captured) workloads, both regular (optimized) and irregular (custom-built, not perfectly suited for acceleration). We also intend to examine the effect of different scheduling policies on alternative objectives, primarily power consumption and the satisfaction of soft and hard real-time constraints, especially for embedded devices.