CSC 2/458
Parallel and Distributed Systems
Spring 2013
Both parallel and distributed systems can be described as collections of
processing elements that communicate and cooperate to achieve a common goal.
Advances in processor technology have made parallelism pervasive at every
level of today's computer systems: within each CPU, by executing multiple
instructions from the same thread of control simultaneously (superscalar
architectures/instruction-level parallelism); by executing multiple
instructions from different threads of control simultaneously (simultaneous
multithreading); by placing multiple cores on a single chip (chip
multiprocessors); by combining multiple chips to form multiprocessors; and by
networking multiple nodes to form clusters.
At the same time, advances in networking technology have created
an explosion of distributed applications, making distributed computing
an integral part of our day-to-day lives.
This course will focus on the principles of parallel and distributed systems
and the implementation and performance issues associated with them.
We will examine programming models/interfaces to parallel and
distributed computing,
interprocess communication, synchronization and consistency models,
fault tolerance and reliability, distributed process management,
parallel machine architectures, parallel program optimization,
and the interaction of the compiler, run-time, and machine architecture.
Class time: 3:25-4:40 p.m., Tuesdays and Thursdays.
Class location: CSB 601.
Instructor:
Sandhya Dwarkadas
e-mail: sandhya at cs
Office: CSB 717, 275-5647
Office hours by appointment, but don't be afraid to drop by. If
you want to be sure I'm in my office, use e-mail to contact me.
TA:
Brandon Shroyer
e-mail: bshroyer at cs
Office: CSB 624, 275-1348
Office hours: Tuesdays and Wednesdays, 2-3 p.m.
Questions and Answers
E-mail is best. Please use the class
discussion board to post
questions or information of general interest. When appropriate,
I will use a class e-mail list to disseminate information/instructions.
Prerequisites:
CSC 252 or equivalent, and C/C++ programming experience
under Unix. CSC 254 and CSC 256 are also recommended.
Material we will use:
There is no required text for this course. Please see the
class schedule for pointers to slides and readings (it currently points
to slides and readings from 2009). The exact content covered during the
semester will depend to some extent on the interests of the students.
In addition to papers covering the state of the art,
we will draw material for the course from several sources, the main ones
being the following books, which have been placed on 2-hour reserve at Carlson:
Distributed Systems: Principles and Paradigms, 2002 Edition: Andrew S. Tanenbaum and Maarten van Steen
Parallel Computer Architecture: A Hardware/Software Approach, 1999 Edition:
David E. Culler, Jaswinder Pal Singh, and Anoop Gupta
High Performance Compilers for Parallel Computing, 1996 Edition:
Michael Wolfe
Optimizing Compilers for Modern Architectures, 2002 Edition:
Randy Allen and Ken Kennedy
Foundations of Multithreaded, Parallel, and Distributed Programming, 2000 Edition: Gregory R. Andrews (Addison-Wesley)
Introduction to Parallel Computing: Ananth Grama, George Karypis,
Vipin Kumar, and Anshul Gupta (Addison-Wesley)
(Some) Topics Covered:
Basics of parallelization and parallelization strategies
Parallel/distributed programming models and interfaces -
shared memory vs. message
passing vs. remote procedure call (RPC) vs. global address space
languages: e.g., pthreads, MPI, MapReduce, OpenMP, HPF, UPC,
language-level threads (e.g., Java); brief pthreads and MPI sketches
appear after this list and after the documentation links below
Parallel machine architectures - shared and distributed memory machines,
multicore and multithreaded chips, interconnection networks
Parallel program optimization techniques - synchronization granularity,
dependences, scheduling, load
balancing
Synchronization - hardware primitives, logical and physical clocks,
mutual exclusion, distributed transactions, transactional memory
Consistency and coherence - data-centric versus client-centric consistency
models, cache coherence protocols
Fault tolerance and reliability - fail-stop versus Byzantine failure models,
two- and three-phase commits, reliable group communication, checkpointing,
message logging
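
As a concrete taste of the shared-memory model and of mutual exclusion (both listed above), here is a minimal sketch, not course-provided code: several pthreads increment a shared counter, with a mutex serializing the updates. The thread and iteration counts are arbitrary illustration values; compile with cc -pthread.

  /* Sketch: shared-memory parallelism with pthreads. */
  #include <pthread.h>
  #include <stdio.h>

  #define NTHREADS 4        /* arbitrary illustration values */
  #define NITERS   100000

  static long counter = 0;
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void *worker(void *arg)
  {
      for (int i = 0; i < NITERS; i++) {
          pthread_mutex_lock(&lock);     /* enter critical section */
          counter++;                     /* update shared state */
          pthread_mutex_unlock(&lock);   /* leave critical section */
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t t[NTHREADS];
      for (int i = 0; i < NTHREADS; i++)
          pthread_create(&t[i], NULL, worker, NULL);
      for (int i = 0; i < NTHREADS; i++)
          pthread_join(t[i], NULL);
      printf("counter = %ld (expected %d)\n", counter, NTHREADS * NITERS);
      return 0;
  }

Without the lock, the increments race and the final count typically falls short of the expected value; making such synchronization both correct and scalable is a recurring theme of the course.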
Assignments and Grading
There will be two or three small programming assignments, a couple of
written homework assignments, a term project, and
potentially one exam. There may also be occasional spot quizzes.
The course will consist of a combination of lectures
and student presentations. Grading will be based on the
assignments, the project, presentations,
quizzes and the exam (if any), and class participation and attendance.
The tentative grading scheme is as follows:
- 30% class participation and attendance, class presentation(s), and seminar summaries
- 20% small projects
- 30% large project
- 20% homeworks/quizzes/exams (the latter if any)
The guidelines for the small projects will be available below.
Honor Code Policy:
For homeworks and programming assignments,
students are encouraged to consult each other, the TA, the
instructor, or anyone else for that matter. However, the assistance
offered or accepted should not go beyond a discussion of the problem
and a sketch of a solution. You can use the following guideline:
when it comes time for you to write your program or your homework
paper, do not use any written material from the discussion. If you
can reconstruct the discussion and complete the solution on your own,
then you have learned the material (and that is the objective of this
course!). For team projects, you should make sure
to identify the division of labor in your README.
While projects will generally be graded as a team rather than separately
for each individual, corrective action could be taken if contributions
are clearly unequal.
Links to Relevant Documentation
Programming POSIX Threads - a tutorial from Lawrence Livermore National Lab
MPI - The Message Passing Interface (MPI) standard and tutorials (a minimal send/receive sketch follows these links)
Cashmere
- Overview and Documentation
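
For contrast with the pthreads sketch above, here is a minimal message-passing sketch written against the MPI standard linked above (any conforming implementation should do): rank 0 sends an integer to rank 1. Compile with mpicc and run with at least two processes, e.g., mpirun -np 2 ./a.out.

  /* Sketch: explicit message passing with MPI. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int rank, value;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          value = 42;                    /* arbitrary payload */
          MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
          printf("rank 1 received %d from rank 0\n", value);
      }

      MPI_Finalize();
      return 0;
  }

Note that, unlike the pthreads version, the processes share no memory: every interaction is an explicit message.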
Paper (and Chapter) Reading List
Section 8.6,
“Coroutines”, and Chapter 12 (Section 12.2),
“Concurrency”, from
Programming Language Pragmatics, by Michael L. Scott.
Morgan Kaufmann Publishers, 2000.
Chapter 1 of Culler, Singh, and Gupta, as well as Tanenbaum and van Steen.
Chapters 2 and 3 of Culler, Singh, and Gupta
The Performance Implications of Thread Management Alternatives for
Shared-Memory Multiprocessors, T. E. Anderson, E. D. Lazowska, and H. M. Levy, IEEE Transactions on Computers, 38(12), December 1989.
Algorithms for Scalable Synchronization on Shared Memory Multiprocessors,
John Mellor-Crummey and Michael L. Scott,
ACM Transactions on Computer Systems, 9(1):21-65, February 1991.
Capriccio: Scalable Threads for Internet Services, R. von Behren, J. Condit, F. Zhou, G. C. Necula, and E. Brewer, Symposium on Operating Systems
Principles, October 2003.
Shared Memory Consistency Models: A Tutorial,
Sarita Adve and Kourosh Gharachorloo, Rice TR 9512, also appeared in
IEEE Computer, December 1996.
Architecture and Design of AlphaServer GS320,
Kourosh Gharachorloo, Madhu Sharma, Simon Steely, and Stephen Van Doren,
International Conference on
Architectural Support for Programming Languages and Operating Systems, 2000.
POWER4 System Microarchitecture, white paper, IBM Server Group,
October 2001.
The Sun Fireplane System Interconnect,
Alan Charlesworth, Supercomputing Conference, November 2001.
TreadMarks: Shared Memory Computing on Networks of Workstations,
C. Amza, A. L. Cox, S. Dwarkadas, P. Keleher, H. Lu, R. Rajamony, W. Yu, and W. Zwaenepoel,
IEEE Computer, February 1996.
Cashmere-2L: Software Coherent Shared Memory on a Clustered Remote-Write Network,
R. Stets, S. Dwarkadas, N. Hardavellas, G. Hunt, L. Kontothanassis,
S. Parthasarathy, and M. L. Scott,
Symposium on Operating Systems Principles,
October 1997.
Implementing Remote Procedure Calls, A.D. Birrell and B.J. Nelson,
ACM Transactions on Computer Systems,
Vol. 2, No. 1, pp. 39-59, February 1984.
Spinglass: Secure and Scalable Communications Tools for
Mission-Critical Computing, Kenneth P. Birman, Robbert
van Renesse, and Werner Vogels, International
Survivability Conference and Exposition, DARPA
DISCEX-2001, Anaheim, California, June 2001.
A Survey of Rollback-Recovery Protocols in Message-Passing Systems,
E. N. Elnozahy, L. Alvisi, Y. Wang, and D. B. Johnson,
ACM Computing Surveys,
34:3, pp. 375-408, September 2002.
IO-Lite: A Unified I/O Buffering and Caching System,
V. Pai, P. Druschel, and W. Zwaenepoel,
Proceedings of the Third Symposium on Operating Systems Design and
Implementation (OSDI), pp. 15-28, February 1999.
The Costs and Limits of Availability for Replicated Services,
Haifeng Yu and Amin Vahdat,
Proceedings of the Eighteenth ACM Symposium on Operating
Systems Principles (SOSP), October 2001.
The Horus and Ensemble Projects: Accomplishments and Limitations,
Ken Birman, Robert Constable, Mark Hayden, Christopher Kreitz, Ohad Rodeh,
Robbert van Renesse, and Werner Vogels,
Proc. of the DARPA Information Survivability
Conference & Exposition (DISCEX '00), January 25-27, 2000, Hilton Head,
South Carolina.
Fundamental Challenges in Mobile Computing, M. Satyanarayanan,
ACM Symposium on Principles of Distributed Computing, 1995.
Locality-Aware Request Distribution in Cluster-based Network Servers,
Vivek Pai, Mohit Aron, Gaurav Banga, Michael Svendsen, Peter Druschel,
Willy Zwaenepoel, and Eric Nahum,
8th International Conference on Architectural Support for
Programming Languages and Operating Systems, October 1998.
Parallelization of General Linkage Analysis Problems,
S. Dwarkadas, A.A. Schaffer, R.W. Cottingham, A.L. Cox, P. Keleher,
and W. Zwaenepoel,
Human Heredity, Vol. 44, pp. 127-141, July 1994.
A Survey of Synchronization Methods for Parallel Computers,
Anne Dinning, IEEE Computer, July 1989.