Associate Professor, Computer Science
Affiliate Faculty in the Goergen Institute for Data Science and Artificial Intelligence
3409 Wegmans Hall
Department of Computer Science,
University of Rochester
sree [at] cs rochester edu
585 276 2391 (please e-mail instead)
Nullius in verba ("take nobody's word for it")
I am an experimental computer systems researcher interested in the performance of computer programs. To that end, I work in compilers, computer architecture, and the implementation of programming languages for parallel computing. My research aims to make it easier to write high-performance programs on increasingly complex machines.
My work and my graduate students are generously supported by the National Science Foundation.
If you're a student wanting to work with me, please read this note for prospective students.
I joined Rochester in 2017. Before Rochester, I was a Postdoctoral Research Fellow at The University of Texas at Austin. I obtained my PhD at the Indian Institute of Science.
Additionally, I'm a member of the TriForce Center for Multiphysics Modeling, where I contribute (primarily) GPU expertise to the codes being developed for use at the Laboratory for Laser Energetics.
Upward Bound Turtle Workshop (Summer 2024)
For a complete list, please check my publication archive.
Yumeng He, Chandrakana Nandi, and Sreepathi Pai, Formalizing Linear Motion G-code for Invariant Checking and Differential Testing of Fabrication Tools, OOPSLA 2025, to appear [pre-print]
Benjamin Valpey, Xinyi Li, Sreepathi Pai, and Ganesh Gopalakrishnan, An SMT Formalization of Mixed-Precision Matrix Multiplication, NASA Formal Methods 2025, June 2025 [pre-print] [Springer Link] (Best Paper)
Rongcui Dong and Sreepathi Pai, Modeling Utilization to Identify Shared-Memory Atomic Bottlenecks, GPGPU 2025 (in conjunction with PPoPP 2025), February 2025 [pre-print] [ACM DL]
Shoham Shitrit and Sreepathi Pai, Registered Report: Generating Test Suites for GPU Instruction Sets through Mutation and Equivalence Checking, FUZZING 2022 (in conjunction with NDSS 2022), April 2022 [Camera Ready]
"Microbenchmarking Unified Memory in CUDA 6.0" looks at CUDA Unified Memory performance on the Kepler K20Xm.
"How the Fermi Thread Block Scheduler Works (Illustrated)", if you've ever wondered.