Class time: Tuesday, Thursday 3:25pm-4:40pm.
Class location: CSB 601.
Instructor: Kai Shen
(kshen@cs.rochester.edu)
Office hours: Tuesday, Thursday 4:40pm-5:40pm at CSB 714.
TA: Kostas Menychtas (kmenycht@cs.rochester.edu)
Office hours: Monday, Wednesday 4:00pm-5:00pm at CSB 626.
In parallel computing, multiple processors compute sub-tasks simultaneously so that work can be completed faster. Early parallel computing focused on solving large, computation-intensive problems such as scientific simulations. With the increasing availability of commodity multiprocessors (such as multicores), parallel processing has penetrated many areas of general computing. In distributed computing, a set of autonomous computers is coordinated to achieve unified goals such as performance, reliability, and scalability. The ubiquity of computer networks and popular Internet services ties almost every aspect of the digital world to distributed computing.
This course explores the paradigms of parallel and distributed computing, their applications, and the systems/architectures supporting them. We will discuss the fundamental design and engineering trade-offs in parallel and distributed systems at every level. We will study not only what these systems are and how they work today, but also why they are designed the way they are and how they are likely to evolve in the future. We will draw examples from real-world parallel and distributed systems in this course.
We will also study the system and architectural support for shared memory parallel computing, including support for synchronization, coherence, and consistency. We will examine multiprocessor-based servers for Internet services, whose workloads are often described as "embarrassingly parallel." We will further study operating system support for efficient and fair use of cache-sharing multicore processors.
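To give a flavor of this programming style, below is a minimal sketch (not a course assignment) of thread-based parallelism with lock-based synchronization, written in C with POSIX threads; the shared counter and the thread/iteration counts are chosen purely for illustration:

    /* Minimal sketch: several threads increment a shared counter,
       serialized by a mutex.  Build with: cc -pthread counter.c */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define NITERS   100000

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;                       /* unused */
        for (int i = 0; i < NITERS; i++) {
            pthread_mutex_lock(&lock);   /* serialize the shared update */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);
        printf("counter = %ld (expected %d)\n", counter, NTHREADS * NITERS);
        return 0;
    }

Without the mutex, the concurrent increments would race and the final count would generally fall short of the expected value; this is exactly the kind of correctness issue that the synchronization, coherence, and consistency material addresses.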
We will study the system support for distributed memory parallel computing, focusing on the dominant MPI-based parallel systems. We will look into the design and implementation of the MPI runtime system, particularly its support for point-to-point and group communication. We will also cover system support for parallel I/O and its integration with MPI.
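For a similar flavor of distributed memory programming, here is a minimal sketch of MPI point-to-point and group (collective) communication in C; the message values are illustrative only, and the point-to-point exchange assumes the program is launched with at least two processes (e.g., mpirun -np 2):

    /* Minimal sketch: point-to-point send/receive plus a collective
       reduction.  Build with: mpicc hello_mpi.c */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, value, sum;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Point-to-point: rank 0 sends an integer to rank 1. */
        if (size >= 2) {
            if (rank == 0) {
                value = 42;
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("rank 1 received %d from rank 0\n", value);
            }
        }

        /* Group (collective) communication: sum the rank ids at rank 0. */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum of all ranks = %d\n", sum);

        MPI_Finalize();
        return 0;
    }

Point-to-point operations such as MPI_Send/MPI_Recv and collectives such as MPI_Reduce are the primitives whose runtime implementation we will examine in this part of the course.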
We will look into the foundations of distributed computing, including distributed consensus, fault tolerance, and reliability. We will then perform case studies of practical distributed systems, such as distributed file systems. We will also devote significant attention to cluster-based server systems in large Internet data centers and cloud computing facilities (e.g., those run by Google and Amazon).
You will gain parallel programming experience through several programming assignments, including thread-based parallel programming, MPI, and MapReduce/Hadoop parallel data processing. You will also play a large role in selecting the topic for your term project.
You may also need some reference materials to help you with the programming assignments. Many of these materials should be available on the web.