Woods Library, Canandaigua (October 2011, thanks to Peter Keng)

2004 Portrait by Yawen Ding

Chen Ding
Professor and Chair (2024 letter, CRA highlights 2024)
Computer Science Department (why UR)
University of Rochester
Rochester, New York
Ph.D. Rice 2000, M.S. MTU 1996, B.S. Beijing U. 1994

Email: cding@cs.rochester.edu


Short Bio

Chen Ding's research focuses on the scientific foundation of computer memory, in particular locality theory and optimization and their use in programming with hardware or software cache systems to minimize data movement, which is the main bottleneck limiting the performance and power efficiency of modern systems, from handhelds to GPUs to supercomputers. His work received the NSF CAREER Award (2003) and the inaugural Young Investigator Award from the DOE (2002). He was a Visiting Researcher at Microsoft Research and a Visiting Associate Professor at MIT EECS (2007). He teaches compilers, programming languages, parallel and distributed systems, collaborative software design, logic foundations, and computer organization.

 

Research (my ORCiD page with lists of grants (NSF) and publications, prior work, funding sources)

Computer memory is not uniform but hierarchical, and the fast memory is dynamically managed (often as caches) and shared. The field of locality research is concerned with the analysis and optimization of the memory hierarchy. It is most often applied to the following three problems:

  • Data movement: Computer architects design on-chip caches to minimize the number of cache misses.
  • Data reuse: Programmers and algorithm designers write code that maximizes data reuse in local memory.
  • Working set: Since a modern computer or device runs many tasks simultaneously, a runtime system allocates main memory in proportion to each task's needs.

Our work in locality theory (see the papers listed below) has shown that the measures of data movement, data reuse, and working set are mathematically related and can be mutually converted. They are not separate phenomena but manifestations of the same underlying behavior.
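As a minimal illustration of one such conversion (a sketch for exposition, not code from any of the papers below): given the reuse distances of an access trace, the LRU miss ratio at every cache size follows directly, so a reuse measure converts into a data-movement measure. The trace, function names, and cache sizes are hypothetical, and the quadratic measurement is the textbook method rather than the faster algorithms used in practice.

    def reuse_distances(trace):
        # For each access, the number of distinct items referenced since the
        # previous access to the same item (its LRU stack distance);
        # None marks a cold (first-time) access.
        last_seen = {}
        distances = []
        for i, x in enumerate(trace):
            if x in last_seen:
                distances.append(len(set(trace[last_seen[x] + 1 : i])))
            else:
                distances.append(None)
            last_seen[x] = i
        return distances

    def lru_miss_ratio(trace, cache_size):
        # An access misses in a fully associative LRU cache of cache_size
        # blocks iff it is cold or its reuse distance is >= cache_size.
        ds = reuse_distances(trace)
        misses = sum(1 for d in ds if d is None or d >= cache_size)
        return misses / len(trace)

    # hypothetical trace: each letter stands for one cache block
    trace = list("abcabdabcabd")
    for c in (1, 2, 3, 4):
        print(c, round(lru_miss_ratio(trace, c), 2))

The papers under "Locality theory" below develop such conversions in full, relating reuse, footprint (working set), and miss ratio.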

 

Locality theory

Parallel program locality [PACT'24, ISMM'18, PPoPP'17]

Data movement complexity (DMC) [ICS'22, HIPS'21, CnC'21], monotonicity and worst case [MEMSYS'25a]

Relational theory of locality (RTL), a.k.a. a higher order theory of locality (HOTL) [ISMM'21, TACO'19, TOS'18, ATC'16, JCST'14, tool download, ASPLOS'13 (slides), MSPC'12]

Reference affinity [CPM'21, POPL'06, PLDI'04]

Whole-program locality (and reuse-distance measurements) [POPL'07, TOC'07, PLDI'03, PACT'03, LACSI'03, TR 875]

Programmable caches

Lease cache [MEMSYS'25b, MEMSYS'24a, TACO'23, MEMSYS'23, LCTES'22, MEMSYS'20]

Optimal cache programming [TACO'22]

Collaborative caching [ISMM'12, ISMM'11, LCPC'08]

Cache hints [ISMM'13]

CPU/GPU cache modeling

Benchmarking [MEMSYS'24b, MEMSYS'18]

Symmetric locality [MEMO'24]

Multi-level cache exclusivity [TACO'17]

Associativity, granularity, sub-block cache [MEMSYS'16]

Cache sharing

Program symbiosis in cache [CCGrid'15, CCGrid'12a, PACT'11, PPoPP'11]

Optimal cache partition sharing [IJPP'17, ICPP'15]

Peer-aware optimization: code [ICPP'14], data [CGO'13 (slides), Bao dissertation]

Key-value memory caching

Continuous-time modeling of Zipfian workloads [TOMPECS'25]

Locality-aware memory allocation (LAMA) [TC'17, ATC'15]

Write locality and optimization

Write locality theory and measurement [MEMSYS'21, IPDPS'17, MEMSYS'16]

Reducing persistent memory writebacks [IPDPS'17]

Program locality and optimization

Code layout [CC'19, CC'18]

Compiler optimization: global cache reuse [ICS'05, JPDC'04, IPDPS'01, IPDPS'00, LCPC'99], dynamic cache reuse [MSP'02, PLDI'99]

A component model of spatial locality [ISMM'09]

Locality phase hierarchy [JPDC'07, ExpCS'07, MSP'05, ASPLOS'04, LCPC'04]

Hardness of data packing [POPL'16 by my student Lavaee]

Memory management

A higher order theory of memory demand (HOTM) [ISMM'16 (Li et al.), ISMM'14]

Parallel memory allocation [ISMM'19]

Resource-based memory management [ISMM'11b, ISMM'06], adapted as Poor Richard's memory manager in Haskell

Parallel programming

BOP: Parallel programming by hints [OOPSLA'11, PPoPP'11 poster, TR952, TR948, PPoPP'10 poster, PLDI'07]

GPU race checking [TACO'17]

Message passing support: Delta send/recv [CCGrid'12b], Multiphysics AMR [CoRR'11]

FastTrack: Suggestible program optimization [CGO'09]

Other studies [ICPP'04, PACT'04, SC'04, EuroPar'97, HICSS'96]

Related link

Reuse-distance-based SLO (suggestions of locality optimizations) tool by Kristof Beyls and Erik D'Hollander.

 

Teaching (previously taught courses)

See roclocality.org (posts tagged teaching and course number) for course web pages with basic information and learn.rochester.edu for course content including announcements, handouts, and assignments.


cding@cs.rochester.edu
Last modified: August 16, 2025