Konstantinos Menychtas

626 Computer Studies Bldg
160 Trustee Rd
University of Rochester
Rochester, NY 14627
(585) 275-8479
kmenycht@cs.rochester.edu

About

I am a graduate student in the Computer Science department at the University of Rochester, co-advised by professors Michael L. Scott and Kai Shen. Before coming to the States, I attended the University of Patras in my home country, Greece. I am broadly interested in systems research, particularly in operating systems and the challenges that arise at the interface between software and hardware. For the past few years, I have been working with fast computational accelerators, such as the GPU (Graphics Processing Unit), understanding and exploiting their interface and developing techniques for their fair, protected, low-overhead management.


Enabling fair, protected access to fast accelerators

Widespread deployment, tight system integration, and rising utilization of fast computational accelerators (e.g., GPUs) make their management by the OS a matter of utmost importance. Fair, safe scheduling is a particular challenge for such resources, as direct device access from user space is commonly employed to avoid kernel-crossing overheads, especially for short, frequent acceleration requests. We propose a disengaged scheduling strategy in which the kernel intercedes between applications and the accelerator only infrequently, to monitor the use of accelerator cycles and to determine which applications should be granted access over the next time interval. We employ this strategy to develop kernel-level schedulers that explore the trade-off between protection and efficiency. We implement and test these schedulers on the latest Nvidia GPUs, demonstrating fair sharing with low overhead (~4% on average compared to direct device access).

ASPLOS 2014 [PDF]
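
To make the idea concrete, here is a minimal, self-contained C sketch of the disengaged pattern: tasks run with direct access for a whole interval, and the scheduler engages only at interval boundaries to sample cycle usage and decide who runs next. The task names, cycle counts, and the least-used-first policy are invented for illustration; they are not taken from the schedulers in the paper.

/*
 * A minimal, user-space sketch of the disengaged pattern. The scheduler
 * stays off the fast path: tasks with access run unimpeded for a whole
 * interval, and the kernel "engages" only at interval boundaries to
 * sample usage and pick who keeps direct access next. All names,
 * numbers, and the least-used-first policy are illustrative only.
 */
#include <stdio.h>

#define NTASKS    3
#define INTERVALS 5

struct task {
    const char *name;
    unsigned long cycles_used;  /* accelerator cycles observed so far */
    int has_access;             /* may submit directly this interval  */
};

/* Engage briefly: sample usage, then grant the next interval to the
 * task that has consumed the fewest cycles so far (simple fairness). */
static void engage(struct task *tasks, int n)
{
    int next = 0;
    for (int i = 1; i < n; i++)
        if (tasks[i].cycles_used < tasks[next].cycles_used)
            next = i;
    for (int i = 0; i < n; i++)
        tasks[i].has_access = (i == next);
    printf("engage: %s gets the next interval\n", tasks[next].name);
}

int main(void)
{
    struct task tasks[NTASKS] = {
        { "heavy", 0, 1 }, { "medium", 0, 1 }, { "light", 0, 1 },
    };
    const unsigned long demand[NTASKS] = { 900, 300, 100 };

    for (int t = 0; t < INTERVALS; t++) {
        /* Disengaged phase: no kernel involvement while tasks run;
         * only the resulting cycle totals are visible afterwards. */
        for (int i = 0; i < NTASKS; i++)
            if (tasks[i].has_access)
                tasks[i].cycles_used += demand[i];
        engage(tasks, NTASKS);
    }
    for (int i = 0; i < NTASKS; i++)
        printf("%-6s used %lu cycles\n", tasks[i].name, tasks[i].cycles_used);
    return 0;
}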

Uncovering interactions in the black-box GPU stack

Fast accelerators like the GPU are complex software/hardware stacks, composed of user-level libraries, drivers, and the hardware itself. The mechanisms through which accelerator resources (cycles, memory, and bandwidth) are managed are commonly hidden behind binaries, loosely documented components, and a kernel-bypassing, memory-mapped interface. Working with three generations of Nvidia GPUs, we have systematized a methodology to uncover and understand the resource-management interactions across the stack of black boxes that make up the GPU. By building a state machine that captures the relevant interactions, we enable the OS kernel to intercept and act upon events that signify the submission or completion of individual acceleration requests, effectively enabling OS-level management of black-box GPUs.

USENIX ATC 2013 [PDF]
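
The sketch below illustrates, in plain C, the kind of state machine this methodology produces: intercepted events on the memory-mapped interface drive transitions that flag the submission and completion of a request. The event names (ring-buffer write, doorbell write, reference-counter update) and the transitions are simplified stand-ins, not the actual protocol of any particular GPU generation.

/*
 * Illustrative sketch only: a tiny state machine of the kind the paper
 * describes, driven by intercepted accesses to the GPU's memory-mapped
 * interface. Events and transitions are invented for illustration.
 */
#include <stdio.h>

enum state { IDLE, REQUEST_PREPARED, SUBMITTED };

enum event {
    EV_RING_WRITE,     /* library writes commands into a ring buffer */
    EV_DOORBELL_WRITE, /* write to a doorbell register: submission   */
    EV_REFCNT_UPDATE,  /* reference-counter update: completion       */
};

static enum state step(enum state s, enum event e)
{
    switch (s) {
    case IDLE:
        if (e == EV_RING_WRITE)     return REQUEST_PREPARED;
        break;
    case REQUEST_PREPARED:
        if (e == EV_RING_WRITE)     return REQUEST_PREPARED;
        if (e == EV_DOORBELL_WRITE) { puts("request submitted"); return SUBMITTED; }
        break;
    case SUBMITTED:
        if (e == EV_REFCNT_UPDATE)  { puts("request completed"); return IDLE; }
        break;
    }
    return s; /* ignore events that are irrelevant in this state */
}

int main(void)
{
    const enum event trace[] = {
        EV_RING_WRITE, EV_RING_WRITE, EV_DOORBELL_WRITE, EV_REFCNT_UPDATE,
    };
    enum state s = IDLE;
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        s = step(s, trace[i]);
    return 0;
}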

NEON

NEON is a thin software layer that sits between the GPU hardware and the driver, enabling the OS to control access to the GPU with minimal overhead. No changes to applications or libraries are required. It is implemented as a Linux kernel module, requiring only small, easily updatable patches to the Linux kernel and to Nvidia's binary-driver interface. It has been built and thoroughly tested on the Nvidia GTX670, GTX275, and NVS295, and it should be "easy" to port to other GPUs. It works with CUDA, OpenCL, and OpenGL, and should be "easy" to extend to any other GPU-accelerating library.

Go to the repository
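
As a rough, user-space analogue of how a layer like NEON can interpose on direct device access, the stand-alone C program below protects a mapped page so that the next write traps, observes the access in the fault handler, and then re-grants access. A kernel-level layer can apply the same page-protection trick to the GPU's memory-mapped registers; this demo instead uses an anonymous mapping and SIGSEGV purely for illustration.

/*
 * User-space analogue of page-protection interception: protect a page
 * so the next write traps, observe it in the fault handler, re-grant
 * access, and let the faulting write retry transparently.
 */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static char *page;

static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)si; (void)ctx;
    /* An accelerator scheduler would account for, or delay, the
     * intercepted access here before letting it proceed. */
    static const char msg[] = "intercepted access\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    mprotect(page, getpagesize(), PROT_READ | PROT_WRITE); /* re-grant */
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    page = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    page[0] = 1;                              /* fast path: direct access */
    mprotect(page, getpagesize(), PROT_NONE); /* engage: revoke access    */
    page[1] = 2;                              /* traps; handler re-grants */
    printf("resumed after interception\n");
    return 0;
}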