CSC 258/458 - Parallel & Distributed Systems (Spring 2014)

Class time: Tuesday, Thursday 3:25pm-4:40pm.
Class location: CSB 601.

Instructor: Kai Shen (kshen@cs.rochester.edu)
Office hours: Tuesday, Thursday 4:40pm-5:40pm at CSB 714.

TA: Kostas Menychtas (kmenycht@cs.rochester.edu)
Office hours: Monday, Wednesday 4:00-5:00pm at CSB 626.

Overview:

Did you know that processing a single Google search requires the collaborative effort of a large number of distributed machines, often working in parallel? Or that Google's web page ranking algorithm (appropriately named PageRank) is based on large-scale sparse matrix computations that make parallel computing necessary yet challenging?

In parallel computing, multiple processors compute sub-tasks simultaneously so that work can be completed faster. Early parallel computing focused on solving large computation-intensive problems like scientific simulations. With the increasing availability of commodity multiprocessors (such as multicores), parallel processing has spread into many areas of general computing. In distributed computing, a set of autonomous computers is coordinated to achieve unified goals like performance, reliability, and scalability. The ubiquity of computer networks and popular Internet services ties almost every aspect of the digital world to distributed computing.

This course explores the paradigms of parallel and distributed computing, their applications, and the systems/architectures supporting them. We will discuss the fundamental design and engineering trade-offs in parallel and distributed systems at every level. We will study not only what these systems are and how they work today, but also why they are designed the way they are and how they are likely to evolve in the future. Throughout the course, we will draw examples from real-world parallel and distributed systems.

Topics covered:

We will study parallel programming models and language support, including the identification of parallelism in applications, the design and implementation of parallel algorithms, and the use of parallel languages in programming (shared memory threads, distributed memory programming, and MapReduce/Hadoop parallel data processing).
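As a small taste of the shared-memory thread model, here is a minimal sketch in C with POSIX threads (purely illustrative; the data values and thread count are arbitrary choices, not part of any assignment). It splits a sum over an array across four threads, each writing to its own slot of a partial-results array so no locking is needed:

    /* Sum an array with four POSIX threads; compile with: gcc -pthread */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    #define NTHREADS 4

    static int data[N];
    static long partial[NTHREADS];

    static void *worker(void *arg) {
        long id = (long)arg;
        long lo = id * (N / NTHREADS), hi = lo + N / NTHREADS;
        long sum = 0;
        for (long i = lo; i < hi; i++)
            sum += data[i];
        partial[id] = sum;  /* each thread writes its own slot: no race */
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        for (long i = 0; i < N; i++)
            data[i] = 1;
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);
        long total = 0;
        for (long i = 0; i < NTHREADS; i++) {
            pthread_join(tid[i], NULL);
            total += partial[i];
        }
        printf("total = %ld (expect %d)\n", total, N);
        return 0;
    }

The same decomposition carries over to the other models: in distributed memory each process would own its slice of the array outright, and in MapReduce the per-slice sums would be map outputs combined by a reduce step.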

We will also study the system and architectural support for shared memory parallel computing, including support for synchronization, coherence, and consistency. We will examine multiprocessor-based servers for Internet services, whose workloads are often described as "embarrassingly parallel." We will further study operating system support for efficient and fair use of cache-sharing multicore processors.
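To make the synchronization problem concrete, here is a minimal sketch of the classic lost-update race (C with POSIX threads; the iteration count is an arbitrary illustration). Two threads increment a shared counter, and the mutex is what makes the result deterministic:

    /* Two threads increment a shared counter; compile with: gcc -pthread */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* remove the lock/unlock pair and    */
            counter++;                    /* the read-modify-write interleaves: */
            pthread_mutex_unlock(&lock);  /* the final count usually falls short */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expect 2000000)\n", counter);
        return 0;
    }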

We will study the system support for distributed memory parallel computing, focusing on the dominant MPI-based parallel systems. We will look into the design and implementation of the MPI runtime system, and particularly its support for point-to-point and group communications. We will also cover the system support for parallel I/O and its integration with MPI.
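For a first look at what MPI code looks like, the sketch below (a toy program for illustration, not course material) computes the same sum of ranks twice: once with point-to-point MPI_Send/MPI_Recv, and once with the collective MPI_Reduce. The collective version is shorter and leaves the runtime free to choose an efficient communication pattern:

    /* Compile with mpicc; run with, e.g.: mpirun -np 4 ./a.out */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* point-to-point: every rank > 0 sends its rank number to rank 0 */
        if (rank == 0) {
            int sum = 0, msg;
            for (int src = 1; src < size; src++) {
                MPI_Recv(&msg, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                sum += msg;
            }
            printf("point-to-point sum = %d\n", sum);
        } else {
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        /* the same computation as a single group (collective) operation */
        int total;
        MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("collective sum     = %d\n", total);

        MPI_Finalize();
        return 0;
    }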

We will look into the foundations of distributed computing, including distributed consensus, fault tolerance, and reliability. We will then perform case studies of practical distributed systems, such as distributed file systems. We will also devote significant attention to cluster-based server systems in large Internet data centers and cloud computing facilities (e.g., those run by Google and Amazon).
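As one small, concrete piece of the fault-tolerance story, here is a back-of-the-envelope sketch (simple arithmetic, not a full protocol) of the majority-quorum reasoning behind crash-tolerant replication: with n replicas, any two majorities must share at least one member, so a system of n = 2f+1 replicas can keep serving consistent answers while up to f replicas are down:

    #include <stdio.h>

    int main(void) {
        for (int n = 1; n <= 9; n += 2) {
            int quorum = n / 2 + 1;  /* majority size: any two quorums overlap */
            int f = (n - 1) / 2;     /* crash failures tolerated */
            printf("n = %d replicas: quorum = %d, tolerates f = %d failures\n",
                   n, quorum, f);
        }
        return 0;
    }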

You will gain parallel programming experience through several programming assignments, covering thread-based parallel programming, MPI, and MapReduce/Hadoop parallel data processing. You will also play a large role in selecting the topic for your term project.

Text and references:

This course will not follow a formal textbook. Most of the reading materials will be technical papers and general information available on the web. A good reference text on parallel computing is "Parallel Computer Architecture: A Hardware/Software Approach" by Culler and Singh. A good reference text on distributed systems is "Distributed Systems" by Tanenbaum and Van Steen. You do not need to purchase either book in advance. We will discuss how these books will be useful for this course and your future study/work in the first class. Generally speaking, you probably do not want to purchase them if you foresee no use for them beyond this course.

You may also need some reference materials to help you with the programming assignments. Many of these materials should be available on the web.

Assignments and grading:

There will be about three programming assignments, two written assignments, and a term project. The project will be evaluated through a demo at the end of the semester. You may also earn extra credit through particularly strong participation in class discussions and/or a class presentation. A tentative grade division is listed below.
       5% - assignment #1 (written)
       18% - assignment #2 (programming)
       5% - assignment #3 (written)
       18% - assignment #4 (programming)
       18% - assignment #5 (programming)
       36% - term project
       up to 5% extra credit - strong participation in class discussions
       up to 5% extra credit - successful delivery of an optional class presentation on a topic of your choice (which must be related to parallel and distributed systems)