CSC 2/458

Parallel and Distributed Systems

Spring 2019

Possible semester projects

The following are listed in no particular order.  Also note that this is in no way an exhaustive list; suggestions here are just ideas to get you started.  Feel free to suggest a project of your own! 

Parallelize something cool
Are you passionate about machine learning?  Computational linguistics?  On-line gaming?  Medical informatics?  Arguably the most compelling projects are those that involve parallelizing some application in which you have a strong personal interest.  Depending on your interests and the characteristics of the program, you might use MPI, OpenMP, Chapel, CUDA, or various other options. 
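
If the hot loop's iterations are independent, an OpenMP annotation is often the quickest way to get started.  The fragment below is a minimal sketch (the per-element work is just a stand-in), compiled with -fopenmp:

    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<double> result(n);
        // split the independent iterations of the hot loop across threads
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < n; i++)
            result[i] = i * 0.001;        // stand-in for real per-element work
        printf("%f\n", result[n - 1]);
        return 0;
    }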

Explore graph runtimes
Recent years have seen the development of a variety of run-time systems for large-scale graph computations.  Example systems include Pregel, GraphLab, Grappa, Giraph, and Hive.  The article by McCune et al. provides a good survey of these and others.  Use some subset of these to build implementations of a few standard graph algorithms (shortest paths, betweenness centrality, connected components, PageRank, etc.) and compare their functionality and performance. 
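
As a concrete example of what one of these kernels looks like in its simplest serial form, here is connected components by label propagation in C++.  The frameworks above express essentially this loop as a vertex program; the project is to see how each framework's version of it behaves at scale.

    #include <cstdio>
    #include <numeric>
    #include <vector>

    // propagate the smallest reachable label until nothing changes;
    // vertices with equal labels end up in the same component
    std::vector<int> components(const std::vector<std::vector<int>> &adj) {
        int n = adj.size();
        std::vector<int> label(n);
        std::iota(label.begin(), label.end(), 0);   // every vertex starts alone
        bool changed = true;
        while (changed) {
            changed = false;
            for (int v = 0; v < n; v++)
                for (int u : adj[v])
                    if (label[u] < label[v]) { label[v] = label[u]; changed = true; }
        }
        return label;
    }

    int main() {
        std::vector<std::vector<int>> adj = {{1}, {0, 2}, {1}, {4}, {3}};  // two components
        for (int l : components(adj)) printf("%d ", l);
        printf("\n");
        return 0;
    }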

Subgraph Isomorphism (sponsor: Prof. Sreepathi Pai)
Although this problem is NP-complete, its practical importance has led to many implementations.  Read papers on the subject and implement at least two parallel algorithms for subgraph isomorphism, comparing their performance.
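
For orientation, the brute-force serial baseline looks like the sketch below (adjacency-matrix graphs, non-induced matching): map pattern vertices to host vertices one at a time, pruning whenever an already-mapped pattern edge has no counterpart in the host.  The parallel algorithms in the literature parallelize and prune this search tree far more aggressively.

    #include <cstdio>
    #include <vector>

    using Graph = std::vector<std::vector<bool>>;   // adjacency matrix

    bool extend(const Graph &pat, const Graph &host,
                std::vector<int> &map, std::vector<bool> &used) {
        size_t v = map.size();                      // next pattern vertex to place
        if (v == pat.size()) return true;           // all pattern vertices mapped
        for (size_t h = 0; h < host.size(); h++) {
            if (used[h]) continue;
            bool ok = true;
            for (size_t u = 0; u < v && ok; u++)    // check edges to mapped vertices
                if (pat[v][u] && !host[h][map[u]]) ok = false;
            if (!ok) continue;
            map.push_back(h); used[h] = true;
            if (extend(pat, host, map, used)) return true;
            map.pop_back(); used[h] = false;        // backtrack
        }
        return false;
    }

    int main() {
        // pattern: a triangle; host: a 4-cycle plus one chord (contains a triangle)
        Graph pat(3, std::vector<bool>(3, false));
        pat[0][1] = pat[1][0] = pat[1][2] = pat[2][1] = pat[0][2] = pat[2][0] = true;
        Graph host(4, std::vector<bool>(4, false));
        int edges[][2] = {{0,1},{1,2},{2,3},{3,0},{0,2}};
        for (auto &e : edges) host[e[0]][e[1]] = host[e[1]][e[0]] = true;
        std::vector<int> map; std::vector<bool> used(4, false);
        printf("triangle found: %s\n", extend(pat, host, map, used) ? "yes" : "no");
        return 0;
    }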

Array-mapped trie (sponsor: Prof. Chen Ding)
Build a concurrent version of an array-mapped trie (Prof. Ding suggests Rust) and optimize its memory performance, building on a 458 project from last year.
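
For reference, the core array-mapped-trie trick is sketched below, sequentially and in C++ for brevity (the project itself would likely be in Rust): each node keeps a bitmap of which of its 32 possible children exist, stores only those children, and uses a popcount to convert a child's index into its position in the packed array.

    #include <cstdint>
    #include <vector>

    struct AMTNode {
        uint32_t bitmap = 0;             // bit i set => child for 5-bit chunk i exists
        std::vector<AMTNode *> kids;     // packed: only the existing children, in chunk order

        AMTNode *child(unsigned chunk) const {        // chunk in [0, 32)
            uint32_t bit = 1u << chunk;
            if (!(bitmap & bit)) return nullptr;      // no such child
            unsigned pos = __builtin_popcount(bitmap & (bit - 1));
            return kids[pos];                         // position = number of earlier children
        }
    };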

Machine-checked proofs (sponsor: Prof. Chen Ding)
Build a machine-checked proof of correctness (Prof. Ding suggests using Coq) for your favorite concurrent data structure.
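
Just to show the flavor of a machine-checked argument, here are two toy theorems in Lean (the project itself would likely use Coq, per Prof. Ding's suggestion, and would state and prove real invariants of a concurrent data structure); the point is simply that the checker verifies every step mechanically.

    -- the first holds by definition of addition on Nat, so `rfl` suffices;
    -- the second appeals to an existing library lemma
    theorem add_zero_nat (n : Nat) : n + 0 = n := rfl

    theorem add_comm_nat (a b : Nat) : a + b = b + a := Nat.add_comm a b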

Compiler parallelization (sponsor: Prof. Sreepathi Pai)
Compilers consume significant amounts of time and are mostly serial.  Characterize the performance behavior of a modern compiler framework (GCC, LLVM), identify opportunities for parallelism, implement one or more of them, and compare performance.

Transactional Memory (TM)
While this area is not as “hot” as it was a few years ago, interesting work continues to be done and published.  Possible projects include:

Hybrid transactional / non-blocking data structures
One of the key advantages of nonblocking data structures is increased concurrency. A major disadvantage is complexity. Can we get (most of) the concurrency without (most of) the complexity by using transactions for sub-parts of each operation? Consider, for example, a B-tree or 2-3 tree: can we capture rebalancing as a series of transactions, each of which transforms the tree from a consistent but sub-optimal state to a “better” consistent state?
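
As a concrete illustration of the idea (a sketch only, using GCC's transactional-memory extension and compiled with -fgnu-tm; the node layout is invented), here is one rebalancing-style step, a rotation in a toy binary tree, expressed as a single transaction, so that concurrent transactional readers see either the old consistent shape or the new one, never an intermediate state.

    struct Node {
        int   key;
        Node *left, *right;
    };

    // rotate the subtree rooted at *root to the left, as one atomic step
    void rotate_left(Node **root) {
        __transaction_atomic {
            Node *x = *root;
            Node *y = x->right;
            if (y != nullptr) {
                x->right = y->left;
                y->left  = x;
                *root    = y;
            }
        }
    }

A B-tree or 2-3 tree version would chain several such steps, each leaving the tree consistent.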

Dual data structures
Bill Scherer gained considerable fame a decade ago by rewriting classic queues, stacks, synchronous queues, and exchangers as lock-free dual data structures, in which an operation that has to wait for a precondition leaves an explicit reservation in the data structure.  More recently, Joe Izraelevitz showed how to build dual versions of the LCRQ (FAI-based concurrent queue) of Morrison & Afek.  What else can be built in this style?  In particular, you might consider priority queues or skip lists. 
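
To make the reservation idea concrete, here is a deliberately tiny sketch, for one producer and one consumer only, and not any of the published algorithms: a consumer that arrives before any datum installs an explicit RESERVED marker in the slot, and the producer later fulfills that reservation in place.

    #include <atomic>
    #include <cstdint>

    class DualSlotSPSC {                          // one producer, one consumer
        static constexpr intptr_t EMPTY = 0, RESERVED = -1;
        std::atomic<intptr_t> slot{EMPTY};        // EMPTY, RESERVED, or a datum > 0

    public:
        void put(intptr_t datum) {                // datum must be positive
            for (;;) {
                intptr_t s = slot.load(std::memory_order_acquire);
                if (s == EMPTY || s == RESERVED)  // fill the slot or the reservation
                    if (slot.compare_exchange_weak(s, datum,
                                                   std::memory_order_release))
                        return;
                // otherwise the previous datum is still unconsumed; keep waiting
            }
        }

        intptr_t get() {
            for (;;) {
                intptr_t s = slot.load(std::memory_order_acquire);
                if (s > 0) {                      // datum present: take it
                    slot.store(EMPTY, std::memory_order_release);
                    return s;
                }
                if (s == EMPTY)                   // arrived first: leave a reservation
                    slot.compare_exchange_weak(s, RESERVED,
                                               std::memory_order_release);
                // if s == RESERVED we have already registered; keep polling
            }
        }
    };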

Cluster-level shared memory
Cashmere was a locally-developed project to emulate shared memory across clusters of multiprocessors connected by a fast system-area network with support for remote reads and writes—a predecessor to today’s InfiniBand.  HLRC was a similar project developed at Princeton.  We now have an InfiniBand cluster here in the department.  Rebuild Cashmere or HLRC to use this more modern network (and modern processors), and re-evaluate the conclusions of the earlier project. 
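
The kind of remote read/write primitive such a rebuild would sit on is illustrated by MPI's one-sided operations (sketch below; run with at least two ranks).  A Cashmere/HLRC-style system would layer its page-coherence protocol on something like this, or on the underlying InfiniBand verbs directly.

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int page = 0;                         // stand-in for one shared "page"
        MPI_Win win;
        MPI_Win_create(&page, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        if (rank == 1) {                      // remote write into rank 0's page
            int val = 42;
            MPI_Put(&val, 1, MPI_INT, /*target=*/0, /*disp=*/0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);                // put is complete after the fence

        if (rank == 0) printf("page = %d\n", page);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }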

Sparse linear solver
In some years, one of the whole-class projects has been to parallelize Gaussian elimination.  The algorithm isn’t very efficient when the coefficient matrix is sparse (mostly zeros).  Create a better version.  (Kai Shen, formerly a professor in our department, built a very good version of this as a graduate student.  He notes:  “Sparse matrix processing is the computational kernel in applications ranging from scientific simulations to Google's PageRank calculation.  There is a little bit of numerical analysis involved in this project, but your main efforts will be centering around fine-grain synchronization, work stealing/balancing, and cache-efficient algorithms.”)
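
For orientation, the sketch below shows the usual compressed-sparse-row (CSR) layout together with a trivially parallel matrix-vector product.  The project itself is the much harder sparse factorization, where the interesting problems are the ones Kai mentions: dependence tracking, fill-in, load balance, and cache behavior.

    #include <vector>

    struct CSR {
        int n;                       // number of rows
        std::vector<int>    rowptr;  // size n+1: start of each row in col/val
        std::vector<int>    col;     // column index of each nonzero
        std::vector<double> val;     // value of each nonzero
    };

    // y = A * x; rows are independent, so this parallelizes directly
    std::vector<double> spmv(const CSR &A, const std::vector<double> &x) {
        std::vector<double> y(A.n, 0.0);
        #pragma omp parallel for schedule(dynamic, 64)
        for (int i = 0; i < A.n; i++)
            for (int k = A.rowptr[i]; k < A.rowptr[i + 1]; k++)
                y[i] += A.val[k] * x[A.col[k]];
        return y;
    }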

Safe parallelization of loop-based applications
Over the past decade, the research group of Prof. Chen Ding has developed a technique they call “Behavior-Oriented Parallelization” (BOP), which automatically parallelizes applications based on programmer-provided hints while preserving sequential semantics.  Versions of BOP are currently available for C/C++ and Ruby.  There are several possible directions for further work; one possibility would be to integrate BOP with gcc’s OpenMP implementation, to allow safe parallelization of loops whose iterations the programmer thinks are probably—but not provably—independent.  If you’re interested, Prof. Ding would be more than happy to talk. 
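
The kind of loop in question looks like the sketch below: its iterations are independent unless the index array ever repeats a value.  Plain OpenMP (shown) simply trusts the programmer's assertion and races silently if it is wrong; a BOP-style hint (the pragma spelling in the comment is made up for illustration) would instead speculate and roll back when a conflict is detected.

    // iterations are independent only if idx[] never repeats a value
    void scale(double *a, const int *idx, int n) {
        // plain OpenMP: the programmer asserts independence and races if wrong;
        // a BOP-integrated version might instead accept something like
        //     #pragma bop possibly_parallel        (name invented here)
        // and speculate, rolling back on a detected conflict
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            a[idx[i]] *= 2.0;
    }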

Deterministic Parallel Ruby (DPR)
Li Lu’s thesis work explored a set of extensions to the Ruby programming language for simple, deterministic parallel execution.  The extensions are safe only when code sections marked by the programmer are mutually independent.  Li developed a set of virtual machine extensions (called TARDIS) to check this independence at run-time, but the overhead was fairly high (about a 10x slowdown—acceptable for testing but certainly not ideal).  We believe that TARDIS could be extended to move much of the checking overhead off the critical path when extra cores are available.  Implement this idea and benchmark its performance. 

Alternatively, working from the same infrastructure, explore the possibility of using DPR as a teaching language. This project would entail a detailed assessment of the implementation, identification of the steps needed to make it “first-year proof,” and implementation of as many steps as possible toward that goal. 

Sharing-Aware Mapping on Multicore Systems (sponsor: Prof. Sandhya Dwarkadas)
Multicore systems share resources such as on-chip interconnects, last-level caches, and off-chip bandwidth, and incur non-uniform latencies of access.  Users of parallel applications written for these environments often have to understand the topology to get the best stand-alone performance.  In the presence of multiprogramming, which dynamically changes the resources allocated to the parallel application, extracting the best performance available from the system is next to impossible.  In a recent kernel-level implementation and corresponding publication, we have developed a Sharing-Aware Mapper, which identifies and reacts to both sharing and resource contention.  Your task will be to develop both microbenchmarks and real applications to stress test the system. 
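
A microbenchmark of the needed sort can be very small.  The sketch below makes two threads hammer counters that either sit on separate cache lines or, if the alignment is reduced, share one; this is exactly the kind of sharing behavior the mapper must detect and react to when placing threads.

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <functional>
    #include <thread>

    struct Counters {
        alignas(64) std::atomic<long> a{0};   // change 64 -> 8 to force both
        alignas(64) std::atomic<long> b{0};   // counters onto one cache line
    };

    int main() {
        Counters c;
        auto worker = [](std::atomic<long> &x) {
            for (long i = 0; i < 100'000'000; i++)
                x.fetch_add(1, std::memory_order_relaxed);
        };
        auto start = std::chrono::steady_clock::now();
        std::thread t1(worker, std::ref(c.a)), t2(worker, std::ref(c.b));
        t1.join(); t2.join();
        std::chrono::duration<double> secs =
            std::chrono::steady_clock::now() - start;
        printf("%.2f s\n", secs.count());     // much slower when the line is shared
        return 0;
    }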

Parallel architecture simulation and evaluation (sponsor: Prof. Sandhya Dwarkadas)
Implement a simulation model of your own or an existing shared-memory multiprocessor design (e.g., Timestamp Snooping) and evaluate its performance on a set of available benchmarks.  Possible simulation tools include SimpleScalar (extended to handle multithreaded applications), Simics (a full-system simulator), and gem5. 

Memory Hierarchy Design for Multi-Core Processors (sponsor: Prof. Sandhya Dwarkadas)
Prof. Dwarkadas’s group is exploring novel communication and synchronization mechanisms, as well as examining ways in which on-chip state resources can be partitioned/shared so as to improve both single- and multi-threaded performance.  They have working simulation designs for various communication mechanisms and cache designs.  As part of this project, you could use the simulator to examine/evaluate the proposed cache designs, suggest improvements of your own, or find and experiment with new ways to utilize the proposed communication mechanisms.  For more information, take a look at the CoSyn project.

Scalable try-locks for in-core databases
Fifteen years ago, Bill Scherer and I developed a family of scalable queue-based locks in which a process can “time out” and stop waiting for the lock.  Chabbi et al. have recently published a variant on one of these schemes.  Bill and I conjectured that such locks would be particularly useful for in-core user-level database systems.  It would be interesting to engineer them into MySQL or memcached and measure the performance of the resulting code. 
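
The interface in question is sketched below with a simple test-and-set lock plus a deadline; the published queue-based (MCS/CLH-style) timeout locks provide the same interface while avoiding the global spinning shown here, which is what makes them scalable.

    #include <atomic>
    #include <chrono>

    class TASTryLock {
        std::atomic<bool> held{false};
    public:
        // returns false if the lock could not be acquired within `patience`;
        // a database worker that times out can, e.g., abort or retry its transaction
        bool try_acquire(std::chrono::nanoseconds patience) {
            auto deadline = std::chrono::steady_clock::now() + patience;
            while (held.exchange(true, std::memory_order_acquire)) {
                if (std::chrono::steady_clock::now() > deadline)
                    return false;
            }
            return true;
        }
        void release() { held.store(false, std::memory_order_release); }
    };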

Replication for availability
Consider a data repository—e.g., a key-value store—in which lookups are much more common than updates.  For the sake of availability, such a repository may be replicated in multiple locations, with replicas kept up to date via multicast updates.  Using memcached as the foundation, build a replicated repository and experiment with alternative implementations of ordered (consistent) multicast.  Compare the performance of your implementations to that of a (nonserializable) alternative that does not enforce consistent orders. 
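
One simple way to impose a consistent order is a sequencer: every update is stamped with a global sequence number, and each replica applies updates in stamp order, buffering any that arrive early.  The sketch below shows the idea with in-process data structures standing in for the network; the real project would do this over sockets against memcached replicas and compare it with other ordered-multicast schemes.

    #include <atomic>
    #include <map>
    #include <string>
    #include <utility>

    struct Update { long seq; std::string key, value; };

    std::atomic<long> next_seq{0};

    // at (or via) the sequencer: assign each update a global position
    Update sequenced(std::string key, std::string value) {
        return Update{next_seq.fetch_add(1), std::move(key), std::move(value)};
    }

    // at each replica: buffer out-of-order arrivals, apply in sequence order
    struct Replica {
        std::map<std::string, std::string> store;
        std::map<long, Update> pending;
        long next_to_apply = 0;

        void deliver(Update u) {
            long s = u.seq;
            pending.emplace(s, std::move(u));
            while (!pending.empty() && pending.begin()->first == next_to_apply) {
                Update &ready = pending.begin()->second;
                store[ready.key] = ready.value;
                pending.erase(pending.begin());
                ++next_to_apply;
            }
        }
    };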

Automatic hardware benchmarking
As we have seen, performance on modern machines can depend on a wide variety of system characteristics, including the number of threads per core, cores per socket, and sockets per machine; the number of levels of cache, their sizes, associativity, and sharing across threads and cores; the interconnection network topology; and the distribution of main memory.  Build a tool that automatically explores the underlying hardware and produces a detailed report of these and other characteristics. 
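
One classic probe such a tool would include is a pointer chase: follow a random cycle through arrays of increasing size and watch the latency per access jump at each cache-capacity boundary.  A sketch:

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    // build a single random cycle over n elements, starting at index 0
    std::vector<size_t> make_chain(size_t n) {
        std::vector<size_t> order(n), next(n);
        std::iota(order.begin(), order.end(), 0);
        std::shuffle(order.begin() + 1, order.end(), std::mt19937(42));
        for (size_t i = 0; i < n; i++)
            next[order[i]] = order[(i + 1) % n];
        return next;
    }

    double ns_per_access(size_t n) {
        std::vector<size_t> next = make_chain(n);
        volatile size_t cur = 0;                    // dependent loads defeat prefetching
        const size_t steps = 10'000'000;
        auto start = std::chrono::steady_clock::now();
        for (size_t i = 0; i < steps; i++) cur = next[cur];
        std::chrono::duration<double, std::nano> d =
            std::chrono::steady_clock::now() - start;
        return d.count() / steps;
    }

    int main() {
        for (size_t kb = 16; kb <= 64 * 1024; kb *= 2)   // 16 KB ... 64 MB
            printf("%7zu KB: %5.1f ns/access\n",
                   kb, ns_per_access(kb * 1024 / sizeof(size_t)));
        return 0;
    }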

Parallel sorting
This is a perennial favorite.  Andrea and Remzi Arpaci-Dusseau, now at UW-Madison, made a big splash 20 years ago by improving on the best known techniques while they were graduate students at UC Berkeley.  A good place to start would be to read up on their work, re-implement it, explore more recent improvements, and, if possible, implement improvements of your own. 

Parallel game tree search
Another perennial favorite.  Alpha-beta search with pruning is central to computerized versions of many classic games, including chess, checkers, reversi (Othello), and go.  How many plies can you search per second using a cluster-based combination of OpenMP and MPI? 
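
For reference, serial negamax alpha-beta is only a few lines (sketch below, with toy stand-ins for the game-specific pieces); the parallel versions, root splitting across MPI ranks, OpenMP tasks within a node, and so on, all wrap some variant of this routine.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Position {
        int score;                            // static value, from side-to-move's view
        std::vector<Position> children;       // legal successor positions
    };
    std::vector<Position> moves(const Position &p) { return p.children; }
    int evaluate(const Position &p) { return p.score; }

    int alphabeta(const Position &p, int depth, int alpha, int beta) {
        std::vector<Position> succ = moves(p);
        if (depth == 0 || succ.empty())
            return evaluate(p);
        for (const Position &child : succ) {
            // score the child from the opponent's perspective, then negate
            alpha = std::max(alpha, -alphabeta(child, depth - 1, -beta, -alpha));
            if (alpha >= beta) break;         // beta cutoff: prune the rest
        }
        return alpha;
    }

    int main() {
        Position leaf1{+3, {}}, leaf2{-5, {}};
        Position root{0, {leaf1, leaf2}};
        printf("root value: %d\n", alphabeta(root, 2, -1000, 1000));  // prints 5
        return 0;
    }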

Shared memory v. message passing
Create a shared memory version of some existing MPI application, and perform a detailed performance comparison. 
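
To fix ideas, here is the flavor of the comparison on a toy kernel: a dot product written once with shared memory (OpenMP) and once with message passing (MPI, where each rank holds only its own slice of the vectors).  A real project would make the same comparison on a full application.

    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    double dot_openmp(const std::vector<double> &a, const std::vector<double> &b) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+ : sum)
        for (long i = 0; i < (long)a.size(); i++)
            sum += a[i] * b[i];
        return sum;
    }

    // each rank holds only its own slice of a and b
    double dot_mpi(const std::vector<double> &a, const std::vector<double> &b) {
        double local = 0.0, global = 0.0;
        for (size_t i = 0; i < a.size(); i++)
            local += a[i] * b[i];
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        return global;
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        std::vector<double> a(1000, 1.0), b(1000, 2.0);   // this rank's slice
        printf("mpi dot (over all ranks' slices): %f\n", dot_mpi(a, b));
        printf("openmp dot (local vector only):   %f\n", dot_openmp(a, b));
        MPI_Finalize();
        return 0;
    }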

Data parallel programming
Create a GPU version of some existing data parallel application, and perform a detailed performance comparison.  Alternatively, experiment with machine learning apps in TensorFlow.

Last Change: 01 May 2019