CSC 258/458 - Parallel & Distributed Systems (Spring 2011)

Class time: Monday, Wednesday 3:25pm-4:40pm.
Class location: CSB 632.

Instructor: Kai Shen (kshen@cs.rochester.edu)
Office hours: Monday, Wednesday 4:40pm-5:40pm at CSB 714; walk-ins are welcome, no appointment needed.

Overview:
Did you know that processing a single Google search requires the collaborative effort of a large number of distributed machines, often working in parallel? Or that Google's web page ranking algorithm (appropriately named PageRank) is based on sparse matrix computations that make parallel computing desirable yet challenging?
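
For a concrete taste of what "sparse matrix computation" means here, below is a minimal sequential sketch of PageRank's core loop (power iteration) in C. The four-page link graph, damping factor, and fixed iteration count are illustrative assumptions, and a dense array stands in for the sparse structure a real implementation would use; distributing these inner products across many machines is exactly where the parallel-computing challenge lies.

    /* Minimal sequential sketch of PageRank power iteration.
     * The tiny 4-page link graph below is made up for illustration;
     * it is not Google's data or code. */
    #include <stdio.h>

    #define N 4          /* number of pages (hypothetical) */
    #define ITERS 50     /* fixed iteration count for simplicity */

    int main(void) {
        /* links[i][j] = 1 if page j links to page i (made-up graph) */
        int links[N][N] = {{0,1,1,0},{1,0,0,1},{1,1,0,1},{0,0,1,0}};
        int outdeg[N]   = {2,2,2,2};   /* out-degree of each page */
        double d = 0.85;               /* conventional damping factor */
        double rank[N], next[N];

        for (int i = 0; i < N; i++) rank[i] = 1.0 / N;

        for (int it = 0; it < ITERS; it++) {
            for (int i = 0; i < N; i++) {
                double sum = 0.0;
                for (int j = 0; j < N; j++)
                    if (links[i][j]) sum += rank[j] / outdeg[j];
                next[i] = (1.0 - d) / N + d * sum;
            }
            for (int i = 0; i < N; i++) rank[i] = next[i];
        }
        for (int i = 0; i < N; i++) printf("page %d: %.4f\n", i, rank[i]);
        return 0;
    }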

In parallel computing, multiple processors compute sub-tasks simultaneously so that work can be completed faster. Early parallel computing focused on solving large computation-intensive problems such as scientific simulations. With the increasing availability of commodity multiprocessors (such as multicores), parallel processing has penetrated many areas of general computing. In distributed computing, a set of autonomous computers is coordinated to achieve unified goals such as performance, reliability, and scalability. The ubiquity of computer networks and popular Internet services ties almost every aspect of the digital world to distributed computing.

This course explores the paradigms of parallel and distributed computing, their applications, and the systems/architectures supporting them. We will discuss the fundamental design and engineering trade-offs in parallel and distributed systems at every level. We will study not only what these systems are and how they work today, but also why they are designed the way they are and how they are likely to evolve in the future. Throughout, we will draw examples from real-world parallel and distributed systems.

Topics covered:
The first stage of the course will be devoted to parallel programming models and language support, including identifying parallelism in applications, designing and implementing parallel algorithms, and programming in parallel languages (shared-memory threads and distributed-memory programming).
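
For a flavor of the shared-memory threading model covered in this stage, here is a minimal pthreads sketch (illustrative, not course-provided code) that sums an array by splitting the work across two threads; names such as partial_sum are assumptions made for the example.

    /* Sum an array with two pthreads; each thread handles half the data
     * and writes its result to a private slot, so no lock is needed.
     * Compile with: cc sum.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    #define NTHREADS 2

    static int  data[N];
    static long partial_sum[NTHREADS];

    static void *worker(void *arg) {
        long id = (long)arg;
        long lo = id * (N / NTHREADS), hi = lo + N / NTHREADS;
        long s = 0;
        for (long i = lo; i < hi; i++) s += data[i];
        partial_sum[id] = s;   /* each thread owns its own slot */
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        for (long i = 0; i < N; i++) data[i] = 1;
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, worker, (void *)t);
        long total = 0;
        for (long t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += partial_sum[t];
        }
        printf("total = %ld\n", total);   /* prints 1000000 */
        return 0;
    }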

In the second stage, we will study the system and architectural support for shared memory parallel computing, including support for synchronization, coherence, and consistency. At this stage we will also examine multiprocessor-based servers for Internet services, whose workloads are often called "embarrassingly parallel". We will conclude the second stage with a study of operating system support for efficient and fair use of cache-sharing multicore processors.
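
As a tiny illustration of why synchronization support matters, the following toy C program has two threads increment a shared counter under a pthreads mutex; removing the lock/unlock pair exposes a data race, since the read-modify-write in counter++ is not atomic. This is a sketch of the general problem, not material from the course.

    /* Two threads increment a shared counter. The mutex makes the
     * increments atomic; without it the final count is typically wrong. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *inc(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* remove this line and the   */
            counter++;                    /* unlock below to observe    */
            pthread_mutex_unlock(&lock);  /* the data race              */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, inc, NULL);
        pthread_create(&b, NULL, inc, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expect 2000000)\n", counter);
        return 0;
    }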

In the third stage, we will study the system support for distributed memory parallel computing, focusing on the dominant MPI-based parallel systems. We will look into the design and implementation of the MPI runtime system, in particular its support for point-to-point and group (collective) communications. We will also cover system support for parallel I/O and its integration with MPI.
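
To make the point-to-point versus group distinction concrete, here is a minimal MPI program that performs one MPI_Send/MPI_Recv exchange and one MPI_Bcast collective. The payload values are made up; it assumes an MPI installation and at least two processes (e.g., mpirun -np 2).

    /* One point-to-point exchange plus one collective broadcast.
     * Compile with mpicc; run with: mpirun -np 2 ./a.out */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;   /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* point-to-point */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        /* group (collective) communication: rank 0 broadcasts to everyone */
        int root_data = (rank == 0) ? 7 : 0;
        MPI_Bcast(&root_data, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("rank %d sees broadcast value %d\n", rank, root_data);

        MPI_Finalize();
        return 0;
    }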

In the last stage of the course, we will look into the foundations of distributed computing, including distributed consensus, fault tolerance, and reliability. We will then perform case studies of practical distributed systems, including distributed file systems and peer-to-peer systems. We will also devote significant attention to cluster-based server systems in large Internet data centers (e.g., those running Google services), and examine the parallel computing models customized to such systems (such as MapReduce).
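
As a preview of the MapReduce model, below is a minimal sequential C sketch of word count in the MapReduce style: map() emits (word, 1) pairs and reduce_all() groups and sums them. The in-memory grouping stands in for the distributed shuffle a real system performs, and the function names are illustrative assumptions, not Google's API.

    /* Word count in the MapReduce style, simulated sequentially. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_PAIRS 128
    #define MAX_WORD  32

    struct pair { char key[MAX_WORD]; int value; };
    static struct pair emitted[MAX_PAIRS];
    static int npairs = 0;

    /* map: emit (word, 1) for every word in one input record */
    static void map(const char *record) {
        char buf[256], *tok;
        strncpy(buf, record, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
        for (tok = strtok(buf, " "); tok; tok = strtok(NULL, " ")) {
            strncpy(emitted[npairs].key, tok, MAX_WORD - 1);
            emitted[npairs].key[MAX_WORD - 1] = '\0';
            emitted[npairs].value = 1;
            npairs++;
        }
    }

    /* reduce: sum all values for each key; the scan over emitted pairs
     * simulates the shuffle (grouping by key) of a real system */
    static void reduce_all(void) {
        int done[MAX_PAIRS] = {0};
        for (int i = 0; i < npairs; i++) {
            if (done[i]) continue;
            int sum = 0;
            for (int j = i; j < npairs; j++)
                if (!done[j] && strcmp(emitted[i].key, emitted[j].key) == 0) {
                    sum += emitted[j].value;
                    done[j] = 1;
                }
            printf("%s: %d\n", emitted[i].key, sum);
        }
    }

    int main(void) {
        /* two input "splits", as if stored on different machines */
        map("the quick brown fox");
        map("the lazy dog and the fox");
        reduce_all();
        return 0;
    }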

Text and references:
This course will not follow a formal textbook. Most of the reading materials will be technical papers and general information available on the web. A good reference text on parallel computing is "Parallel Computer Architecture: A Hardware/Software Approach" by Culler and Singh. A good reference text on distributed systems is "Distributed Systems" by Tanenbaum and Van Steen. You may also need reference materials to help with the programming assignments; many of these are available on the web.

Assignments and grading:
There will be three programming assignments, a written assignment, and a term project. The project will be partially evaluated through a presentation at the end of the semester. Part of the grade will also be determined by each student's participation in class discussions. A tentative grade division is listed below.
       15% - prog. assignment #1
       15% - prog. assignment #2
       15% - prog. assignment #3
       10% - written assignment
       35% - term project
       10% - participation in class discussions
Each student may also choose to give an optional class presentation on a topic of his/her choice (which must be related to parallel and distributed systems). A successful presentation will earn extra credit of up to 10%.