We argue that it is essential that the decision of where (compute platform) and when (dependencies permitting) to schedule acceleration-amenable kernels for execution be left to an informed system scheduler. We propose the individual kernel call as the target granularity for such decisions, since individual calls require minimal state preservation and have clearly identifiable resource needs. We expect the development of such scheduling techniques to open up currently unrealized power, performance, and interactivity (quality-of-service) optimizations for systems with heterogeneous processing resources.
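To make the proposed granularity concrete, the following sketch shows what a placement decision made once per kernel call might look like: given profiled runtime estimates and the occupancy of each device's queue, the scheduler picks a target for one individual call. This is a minimal illustration only; the Device and KernelCall structures, the device names, and the profile table are assumptions made for the example, not an interface from our prototype.

```cpp
// Illustrative sketch only: a per-kernel-call placement decision driven by a
// hypothetical profile database. Names and numbers are placeholders.
#include <algorithm>
#include <iostream>
#include <limits>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Device {
    std::string name;   // e.g. "CPU", "iGPU", "dGPU"
    double busy_until;  // completion time of work already queued on this device
};

struct KernelCall {
    std::string kernel;  // identifier used to look up profiling data
    double scale;        // problem-size factor applied to the profiled baseline
};

// Hypothetical profile database: baseline runtime (ms) per (kernel, device).
const std::map<std::pair<std::string, std::string>, double> profile = {
    {{"matmul", "CPU"}, 40.0}, {{"matmul", "iGPU"}, 12.0}, {{"matmul", "dGPU"}, 4.0},
    {{"fft", "CPU"}, 8.0},     {{"fft", "iGPU"}, 6.0},     {{"fft", "dGPU"}, 5.0},
};

double estimate(const KernelCall& k, const Device& d) {
    return profile.at({k.kernel, d.name}) * k.scale;
}

// The decision argued for above, made once per kernel call: pick the device
// where this particular call would finish earliest ("where"), starting it as
// soon as the chosen device and the call's dependencies allow ("when").
Device& choose_device(const KernelCall& k, std::vector<Device>& devices, double now) {
    Device* best = &devices.front();
    double best_finish = std::numeric_limits<double>::max();
    for (Device& d : devices) {
        double finish = std::max(now, d.busy_until) + estimate(k, d);
        if (finish < best_finish) { best_finish = finish; best = &d; }
    }
    return *best;
}

int main() {
    std::vector<Device> devices = {{"CPU", 0.0}, {"iGPU", 0.0}, {"dGPU", 25.0}};
    KernelCall call{"fft", 1.0};
    // With the discrete GPU busy for another 25 ms, the integrated GPU wins.
    std::cout << "dispatch to " << choose_device(call, devices, 0.0).name << "\n";
}
```

A real decision would additionally have to weigh data placement and transfer costs, which this sketch omits.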
As part of our ongoing effort to evaluate this proposal, we benchmark OpenCL applications built around common computational kernels on a modern workstation with standard, consumer-grade hardware. We build a database from the profiling information; our performance evaluation already suggests that the optimal execution target is not always the most capable GPU. We stress-test the system to identify limitations of the existing software/hardware interface, and we build a simulation model of it that is simple enough to allow the evaluation of different kernel scheduling policies. Simulations driven by the real, collected data suggest that there exist promising, non-trivial scheduling policies which can outperform the prevalent "always schedule on the best GPU" approach.
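As a rough sketch of the kind of simulation described above, the example below replays a short trace of kernel calls under two policies, the prevalent "always on the discrete GPU" and a simple earliest-estimated-finish alternative, and compares their makespans. The profile numbers, device set, and trace are invented for illustration and are not our collected data.

```cpp
// Illustrative simulation sketch: replay a trace of kernel calls under two
// scheduling policies using a hypothetical profile table (not measured data).
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Profile = std::map<std::string, std::map<std::string, double>>;  // kernel -> device -> ms

struct Policy {
    std::string name;
    // Chooses the device for one kernel call, given current queue occupancy.
    std::string (*choose)(const std::string& kernel, const Profile& prof,
                          const std::map<std::string, double>& busy_until);
};

// Policy 1: the prevalent approach -- always dispatch to the discrete GPU.
std::string always_dgpu(const std::string&, const Profile&,
                        const std::map<std::string, double>&) {
    return "dGPU";
}

// Policy 2: dispatch to the device with the earliest estimated finish time.
std::string earliest_finish(const std::string& kernel, const Profile& prof,
                            const std::map<std::string, double>& busy_until) {
    std::string best;
    double best_t = 1e300;
    for (const auto& [dev, t_busy] : busy_until) {
        double finish = t_busy + prof.at(kernel).at(dev);
        if (finish < best_t) { best_t = finish; best = dev; }
    }
    return best;
}

// Simplified replay: all calls are ready at time 0 and each occupies its
// chosen device for the profiled duration; dependencies are ignored here.
double simulate(const std::vector<std::string>& trace, const Profile& prof, const Policy& p) {
    std::map<std::string, double> busy_until = {{"CPU", 0}, {"iGPU", 0}, {"dGPU", 0}};
    for (const auto& kernel : trace) {
        const std::string dev = p.choose(kernel, prof, busy_until);
        busy_until[dev] += prof.at(kernel).at(dev);
    }
    double makespan = 0;
    for (const auto& [dev, t] : busy_until) makespan = std::max(makespan, t);
    return makespan;
}

int main() {
    Profile prof = {
        {"matmul", {{"CPU", 40}, {"iGPU", 12}, {"dGPU", 4}}},
        {"fft",    {{"CPU",  8}, {"iGPU",  6}, {"dGPU", 5}}},
    };
    std::vector<std::string> trace = {"matmul", "fft", "fft", "matmul", "fft", "fft"};
    for (const Policy& p : {Policy{"always-dGPU", always_dgpu},
                            Policy{"earliest-finish", earliest_finish}}) {
        std::cout << p.name << ": makespan " << simulate(trace, prof, p) << " ms\n";
    }
}
```

Any advantage the second policy shows here stems purely from contention on the single "best" device; realistic policies would also need to account for data-transfer costs, per-device energy efficiency, and dependencies between calls.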
We plan to implement our scheduling policies on real systems and to validate them using realistic (user-captured) workloads, both regular (optimized) and irregular (custom-built, not perfectly suited to acceleration). We also intend to examine the effect of different scheduling policies on alternative objectives, primarily power requirements and the satisfaction of soft/hard real-time constraints, especially for embedded devices.