Parallel and Distributed Systems
With the explosion of the Internet over the past 25 years, and with the
proliferation of PC clusters in the server/data center marketplace,
distributed computing has become central to most of computer
science. It also remains the dominant computing paradigm in very
high-end scientific computation.
With the end of Dennard scaling and the rise of multicore some 15 years
ago, shared memory parallelism has become similarly ubiquitous in the
desktop/laptop/cell phone market.
Almost every nontrivial program today is multithreaded.
CSC 2/458 is a loosely structured course devoted to all aspects of
parallel and distributed systems.
Core topics to be covered include:
Implementation of threads.
Parallelization strategies: speedup, efficiency, Amdahl’s law (a worked
example appears after this list).
Synchronization: hardware primitives, clocks, mutual exclusion,
transactions, nonblocking data structures (a spinlock sketch appears
after this list).
Parallel machine architectures: multicore and multithreaded chips;
large-scale multiprocessors (with and without coherence); clusters.
Coherence and consistency: hardware-level memory models, cache coherence.
Parallel programming models and interfaces: language threads,
pthreads, MPI, OpenMP, Cilk, TBB, sockets, remote procedure call
(RPC), transactional memory (TM), determinism (an OpenMP sketch
appears after this list).
Parallel semantics: memory models; consensus; the consensus hierarchy;
safety (linearizability, serializability, etc.); liveness.
Fault tolerance and reliability: fail-stop versus Byzantine failure
models; the FLP theorem; two- and three-phase commits; Paxos and
Raft; reliable group communication; checkpointing; message logging.
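
To give a taste of the parallelization-strategies material, consider
Amdahl’s law, which bounds the speedup achievable when only a fraction
f of a program’s work can be parallelized across P processors (the
numbers below are illustrative, not taken from the course):

    S(P) = \frac{1}{(1 - f) + f/P}

With f = 0.95 and P = 64, S ≈ 15.4; even with arbitrarily many
processors, the speedup can never exceed 1 / (1 − f) = 20. This is why
the course pairs speedup with efficiency: at P = 64 the parallel
efficiency here is only about 24%.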
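
For a concrete flavor of the synchronization and consistency topics,
here is a minimal sketch (not course code) of a test-and-set spinlock
built from C11 atomics and exercised with pthreads; the acquire and
release orderings are exactly the sort of memory-model detail the
coherence-and-consistency unit examines. Compile with, e.g., gcc -pthread.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Test-and-set spinlock: the atomic flag is the hardware primitive;
       acquire/release orderings keep the critical section from being
       reordered outside the lock. */
    static atomic_flag lock = ATOMIC_FLAG_INIT;
    static long counter = 0;

    static void spin_lock(void) {
        /* Spin until the flag was previously clear. */
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
            ;
    }

    static void spin_unlock(void) {
        atomic_flag_clear_explicit(&lock, memory_order_release);
    }

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            spin_lock();
            counter++;              /* critical section */
            spin_unlock();
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("counter = %ld\n", counter);  /* 4000000 iff mutual exclusion held */
        return 0;
    }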
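
And as a glimpse of the programming-model interfaces listed above, a
tiny OpenMP sketch (again illustrative rather than course-provided):
the reduction clause lets the runtime parallelize the loop without a
data race on sum. Compile with, e.g., gcc -fopenmp.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        enum { N = 1 << 20 };
        static double a[N];
        for (int i = 0; i < N; i++) a[i] = 1.0;

        double sum = 0.0;
        /* Each thread accumulates a private partial sum; the runtime
           combines the partial sums at the end of the loop. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.0f with up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }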
In keeping with the multicore revolution and the
instructor’s current interests, the course this semester will be
weighted somewhat toward shared memory parallelism.
Prerequisites: CSC 2/454 and 2/456, or equivalent.
This will be a time-intensive and discussion-heavy class. We will
be drawing on a wide range of written material, including multiple
journal and conference papers.
Reading is mandatory and must be completed in advance of each class.
Barring illness and similar issues, class attendance is also mandatory.
If you can’t commit to being present and
prepared for each class session, please don’t take the course.
Additional topics will depend to some degree on the interests of
participants. Possibilities include:
Parallel program optimization techniques: synchronization
granularity, dependences, scheduling, load balancing.
Distributed file systems: NFS, xFS, Coda, etc.
Supercomputers and supercomputing clusters; vector and GPGPU computing.
Data-parallel languages: HPF; C*; Split-C; co-array Fortran, UPC,
and Titanium; Fortress, Chapel, and X10.
Race detection and deterministic execution (a racy example appears
after this list).
Parallel functional languages: Concurrent Haskell, Erlang, etc.
RDMA networks: Infiniband, etc.
Software distributed shared memory.
Component models: CORBA, .NET, JavaBeans.
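
For a flavor of the race-detection topic above, the following
deliberately racy sketch (illustrative, not course material) performs
unsynchronized read-modify-write updates on a shared counter; building
with gcc or clang’s -fsanitize=thread and running it makes
ThreadSanitizer report the race.

    #include <pthread.h>
    #include <stdio.h>

    static long shared = 0;     /* updated by both threads with no lock */

    static void *bump(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            shared++;           /* data race: unsynchronized read-modify-write */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump, NULL);
        pthread_create(&t2, NULL, bump, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Often prints less than 200000: lost updates from the race. */
        printf("shared = %ld\n", shared);
        return 0;
    }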
Last updated: 06 January 2019.