Assignment #4 - Hadoop Parallel Data Processing
Due by 11:59pm, Thursday, April 7.
In this assignment, you will program two applications on the Hadoop parallel data processing platform. This will allow you to gain practical experience with MapReduce programming and learn the performance implications of parallel data processing.
Hadoop System Structure:
A Hadoop cluster has two parts: the Hadoop Distributed File System (HDFS) and the MapReduce framework. Each of these systems performs a distinct task, but they work together to process large amounts of data. HDFS stores files across the cluster, spreading them out so it can hold files that are larger than what any individual node could store. MapReduce takes Java programs and handles both running them near where the data is stored and scheduling tasks to not overload the nodes.
HDFS is made of a single NameNode and multiple DataNodes. The NameNode tracks the location of files on the DataNodes and is responsible for coordinating them to ensure the consistency of data. When a user wishes to read data, the client asks the NameNode which DataNodes hold the data and then contacts those DataNodes directly to fetch it. Similarly, when writing data, the client asks the NameNode where to store the data and then uploads it directly to the DataNodes. To handle large files, HDFS splits files into blocks of uniform size. DataNodes hold blocks instead of files, and the blocks of a single file may be spread across multiple DataNodes. If you are interested in more detail about the architecture of HDFS, you can read the HDFS Architecture Guide.
MapReduce is made of a single JobTracker and multiple TaskTrackers. Users write MapReduce jobs, which consist of two types of tasks: maps and reduces. The JobTracker takes a description of these tasks and schedules them across the TaskTrackers. One TaskTracker runs on each DataNode, and the JobTracker ensures that tasks run close to the nodes that have the data for them.
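Your programs rarely need to touch HDFS by hand, since the framework feeds your mappers directly, but as an illustration of the client's view, reading a file from HDFS through the Java API looks roughly like this (the file path is illustrative):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
// The NameNode lookup and the DataNode transfers both happen behind
// fs.open(); client code never deals with individual blocks.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
BufferedReader in = new BufferedReader(
    new InputStreamReader(fs.open(new Path("/input/gutenberg/some-file.txt"))));
String firstLine = in.readLine();
in.close();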
The MapReduce Tutorial
provides an excellent starting point for understanding the general structure of a
MapReduce program and a template from which to start your own programs.
Please consider the following as you search the web for useful examples:
try to find simple algorithms that, in principle, follow the mechanism
you are trying to implement for your program. For example, since you will be
implementing a matrix-vector multiplication, a matrix transpose
operation (which is simpler) may serve as a good guide.
The Hadoop API is
the place to check for detailed information on which functions to call.
Some of the more useful classes are the Mapper and Reducer interfaces and the
MapReduceBase class; extending MapReduceBase
will give you empty versions of the functions that not all map or reduce tasks need.
All of the official documentation can be found at the
main documentation index.
A wealth of examples can be found on the web, which should guide you in building
your own programs. When you use online resources, please make sure that
they are up-to-date;
the Hadoop API has undergone a few revisions, and some functionality and API
references you find may be out of date. And of course, treat such
examples as you would treat any work by others: if you build upon
them, add appropriate references to your code and your report.
You are asked to implement the following applications in this assignment:
- Word Count
This is the warm-up part, to get you acquainted with Hadoop.
This application can be built using the MapReduce tutorial alone,
so make sure to read it carefully and understand it.
You should count the number of files each word appears in. Your word
identification should be case-insensitive and should ignore anything that is not
a letter (see the tokenization sketch below). For correctness tests, you can find
37 text files from Project Gutenberg in the
/input/gutenberg HDFS directory on the cluster.
Some words used in all of
the files are: for, one, thousand, and years. Some words used in only a
single file are: academics, faints, tag, and yams.
We provide another dataset in the
directory for performance testing.
Possibly helpful hint: here is the code for a mapper to obtain the name of the file it is processing (reporter is the Reporter argument of map()):
// Recover the file behind this mapper's input split.
FileSplit fileSplit = (FileSplit) reporter.getInputSplit();
String filename = fileSplit.getPath().getName();
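And here is one way to normalize words inside map(), assuming value holds one line of text (a minimal sketch; adapt it to your own design):
// Lower-case the line and split on runs of non-letter characters.
String[] words = value.toString().toLowerCase().split("[^a-z]+");
for (String w : words) {
    if (!w.isEmpty()) {   // split() can yield a leading empty string
        // emit (word, filename) or whatever pair your design requires
    }
}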
- Jacobi Method
Please first read this simple Wikipedia page
about the Jacobi method to solve a system of linear equations!
It is an iterative solver, and in this sense it resembles in principle the mechanism
employed to solve problems such as PageRank.
To solve a system of linear equations "Ax = b" with the Jacobi method, we start by
pre-assigning a set of values to vector x (e.g. all zeros) and repeatedly update these
values over a number of rounds of the method.
When the values of vector x do not change between two rounds, or change by less
than a certain pre-defined threshold of a few significant digits (let's call
this change the "divergence"), the algorithm has reached a solution (has converged
according to the defined threshold). The exact number of required iterations will depend on the
problem (matrix A and the right-hand-side vector b), the initial values of vector x,
and how accurate you want the solution to be (a few significant digits).
To guarantee convergence, we will be working with artificially built matrices
that are guaranteed to be strictly diagonally dominant (a sufficient condition for convergence).
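Concretely, each round computes every component of the new x from the previous round's values. As a point of reference, one sequential sweep over a dense in-memory matrix might look like this (a sketch only; your MapReduce version will distribute this work):
// x_new[i] = (b[i] - sum over j != i of A[i][j] * x[j]) / A[i][i]
static double[] jacobiSweep(double[][] A, double[] b, double[] x) {
    int n = b.length;
    double[] xNew = new double[n];
    for (int i = 0; i < n; i++) {
        double sigma = 0.0;
        for (int j = 0; j < n; j++) {
            if (j != i) sigma += A[i][j] * x[j];
        }
        xNew[i] = (b[i] - sigma) / A[i][i];
    }
    return xNew;
}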
The sequential version of the algorithm can be reached from the class account,
at the NFS directory
/u/cs(2 or 4)58/apps/jacobi/seq.
The input for this program will be a text file which describes a matrix A[NxN] and
a right-hand-side vector b[N]. This file contains both A and b as a single matrix
Ab[N x (N+1)], where A makes the first N columns, and b makes the last column.
Every line of this file contains one entry of the matrix Ab as a space-separated
tuple: row (int), column (int), and value (double); a parsing sketch appears below.
You will find such inputs at the HDFS directory
/input/jacobi/Number.dat, where Number is the value N.
Use smaller values of N to test for correctness --- use the larger matrices
for performance evaluation.
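Parsing one such line inside a mapper might look like this (a sketch; error handling omitted, and value is assumed to be the mapper's Text input value):
// Each input line is "row column value"; columns 0..N-1 hold A, column N holds b.
String[] parts = value.toString().trim().split("\\s+");
int row = Integer.parseInt(parts[0]);
int col = Integer.parseInt(parts[1]);
double val = Double.parseDouble(parts[2]);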
In our experience, Hadoop needs truly massive files to offer good
performance. Depending on your feedback, we might provide you with larger
and different inputs.
As opposed to the previous application, Jacobi requires your "Hadoop driver"
(the main program that will be used to launch mappers and reducers) to go
through a number of iterations. There are a few challenges here:
Though not the best approach (as it maintains state directly, which is conceptually
against the map-reduce paradigm),
feel free to take the following (or any convenient approach you find)
to ease your task of communicating the iteration value to mappers/reducers:
- You will need to maintain an iteration value and will likely want
to create different output directories for every iteration;
you can do this at your main program, the "Hadoop job driver".
- You might want to let your mappers/reducers know which iteration
they are working on; a mechanism for this is to update the Hadoop job
configuration on every iteration (see the hint below).
- After every iteration, you might want to "merge" the outputs of the
reducers (that could be parts of vector x) to a single file, such that
every reducer will read a single file as vector x; you can do this
at your main program, the "Hadoop job driver". Note that such merging
is unlikely to be needed for this assignment, so don't worry about it
in the first steps of your development; consider it only for completeness.
- You might want your reducers to "communicate" the current progress towards
convergence to your main program; you can use a counter
(see the MapReduce tutorial, and the sketch after this list)
to track the number of values in vector x that do not yet meet the threshold.
For our tests, convergence with 10 significant digits and vector x initialized
to 0 would be achieved in less than 100 iterations for all inputs.
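As an illustration of the counter mechanism mentioned in the list above, a minimal old-API sketch follows; the enum name, variable names, and threshold handling are illustrative, not required structure:
// A counter shared by the reducers and the driver (illustrative name).
public enum Convergence { NOT_CONVERGED }

// In the reducer: bump the counter for every component of x that still
// moved by more than the threshold in this iteration.
if (Math.abs(newValue - oldValue) > threshold) {
    reporter.incrCounter(Convergence.NOT_CONVERGED, 1);
}

// In the driver, after the job finishes:
RunningJob running = JobClient.runJob(conf);
long notConverged = running.getCounters().getCounter(Convergence.NOT_CONVERGED);
if (notConverged == 0) {
    // every component met the threshold; stop iterating
}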
Hint: call JobConf.setJarByClass with your job's main class so that you can
reuse the same JobConf across iterations, and save and update an iteration counter
i on every iteration with a mechanism like the one sketched below.
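For example (old API; the property name "jacobi.iteration" is an arbitrary choice):
// In the driver, before launching iteration i:
conf.setInt("jacobi.iteration", i);   // record which iteration this job runs
FileOutputFormat.setOutputPath(conf, new Path(outputBase + "-" + i));

// In a mapper or reducer that extends MapReduceBase, read it back:
private int iteration;
public void configure(JobConf job) {
    iteration = job.getInt("jacobi.iteration", 0);
}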
How to Use the CS or CSUG Hadoop Cluster Setup:
In order to test your code and ensure that others can test their code, you need to
pay attention to what jobs are running on the cluster. You can do this by checking
the cluster status page (see details below). If someone
is running a job already, please wait until it is complete to start your own.
Conversely, please make sure to introduce a delay between your job
submissions so others have a chance to run.
If your code has a bug in it that causes it to run indefinitely, then you can
stop it with the command
~cs(2 or 4)58/hadoop/bin/hadoop job -kill <job_ID>
where job_ID is the ID given in the log at the start of your job,
shown on the cluster status page, or listed in the output of the job -list command described below.
Please note that simply killing the program running on your own computer is
not enough to stop the job on the cluster.
If you suspect that someone else's program has been left hanging in the system,
please e-mail them and ask them to kill the task, and if the problem is
not resolved let the TA know.
On the CSUG cluster, the NameNode and JobTracker run on cycle2.csug.rochester.edu.
DataNodes are on cycle1, cycle2, cycle3, and node2x14a.
From the university network, you can check the status of the HDFS cluster at the NameNode's web interface,
which shows the status of the DataNodes and allows you to view the files in the cluster.
The JobTracker's web interface shows the status of the TaskTrackers and the jobs that will run, are running, and have run on the MapReduce cluster. There is a link for every job which will display the current status, number of tasks, and various statistics for the run.
Alternatively, the command
~cs258/hadoop/bin/hadoop job -list will provide the same information.
On the CS cluster, the NameNode and JobTracker run on cycle3.cs.rochester.edu.
DataNodes are on cycle1, cycle2, cycle3, node17, node18, node20, node21, and node23.
From the university network, you can check the status of the HDFS cluster at the NameNode's web interface,
which shows the status of the DataNodes and allows you to view the files in the cluster.
The JobTracker's web interface shows the status of the TaskTrackers and the jobs that will run, are running, and have run on the MapReduce cluster. There is a link for every job which will display the current status, number of tasks, and various statistics for the run. Alternatively, the command
~cs458/hadoop/bin/hadoop job -list will provide the same information.
Build/Run Your Programs:
The TA recommends using ant to build your
programs. The ant utility is already installed on the department machines. The TA
has saved a
build.xml file under
/u/cs(2 or 4)58/apps/jacobi/hadoop
that should build your code and already includes the paths to the Hadoop libraries.
Download the build.xml file and put all of your Java source files under a subdirectory
src/. Running
ant will then create a jar file,
build/Hadoop.jar. Don't forget that you need to have your JAVA_HOME
environment variable set; we recommend that you use the latest JVM installed on the
department machines.
You can run a class from that jar file with the following command:
~cs458/hadoop/bin/hadoop jar build/Hadoop.jar <class> <arguments>
or, on csug
~cs258/hadoop/bin/hadoop jar build/Hadoop.jar <class> <arguments>
where class is the name of your main class and the
arguments are passed to your program. The hadoop command
will automatically read the configuration to run your program on the cluster.
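For example, if your main class were named WordCount (an illustrative name; the output path is also illustrative), an invocation might look like:
~cs458/hadoop/bin/hadoop jar build/Hadoop.jar WordCount /input/gutenberg /user/<username>/wc-out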
It will output a log file containing any errors and a simple progress meter.
At the end it will output a small pile of counters. You may find the following
useful when attempting to debug your programs:
* Map input records
* Map output records
* Combine input records
* Combine output records
* Reduce input records
* Reduce output records
MapReduce tasks generally expect their input and output to be in the HDFS cluster.
You can upload a file with the command
~cs458/hadoop/bin/hadoop fs -put <file> .
which will upload
<file> into your home directory on the HDFS cluster.
Since the input files for your program are read-only, you don't have to copy
them to your HDFS home directories --- you can just instruct your applications to
get input from the
/input/ HDFS directory.
We have already created your home directories in the HDFS cluster; they can
be found at /user/<username>, where
<username> is your cs login.
Similarly, files can be downloaded via
~cs(2 or 4)58/hadoop/bin/hadoop fs -get <file> ., which will download
the file from HDFS to your current directory.
You will be shown useful help info when you invoke a utility without
parameters; for example,
~cs458/hadoop/bin/hadoop fs will give you
the set of commands that you can apply to HDFS.
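For example, you can list the correctness-test inputs for Word Count with:
~cs458/hadoop/bin/hadoop fs -ls /input/gutenberg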
Each MapReduce job must have an output path, which must not exist before running
the program and which will be created as a directory in HDFS. That directory will
contain the logs of your program. The output from the reduce tasks will be saved as
part-00000 (large outputs will create
part-00001 and so
on, but that is unlikely to happen for this assignment). If your output is a
text file, you can read it without downloading it with the command
~cs458/hadoop/bin/hadoop fs -cat <output_directory>/part-00000
(where output_directory is the path set by your program). If you
used a SequenceFileOutputFormat, you can get a text version with
~cs458/hadoop/bin/hadoop fs -text <output_directory>/part-00000.
In order to measure the performance of your code, you need to test it with varying
levels of parallelism. You can control the number of map tasks run by your code
by varying the number of blocks in your input data or suggesting min/max split sizes
for your InputFormat in your Hadoop job driver (main program).
You can check the log of your program for a "Launched map tasks" counter, which
will tell you how many map tasks were launched to handle your data.
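As an illustration, with the old JobConf API you can suggest a minimum split size (and a map-task count hint); the property name below is the Hadoop 1.x one, and the values are illustrative:
// Suggest splits of at least 64 MB, i.e. fewer, larger map tasks.
conf.setLong("mapred.min.split.size", 64L * 1024 * 1024);
// A hint to the framework, not a hard limit.
conf.setNumMapTasks(8);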
You are asked to turn in your source files and a
README file. The README file should be in plain text format. It should
contain a description of your design, what is and what is not realized
in your implementation. If your program requires any special compilation
flag to build, you need to specify the full build command in the README
file. The README
file should also include any additional information needed to run your
programs. You should electronically turn in the required files.
Instructions for electronic turn-in can be found on the course Web site.
By turning in early, you help relieve the contention on the shared cluster,
and we therefore strongly encourage it.
Grading:
- 10%: correct implementation of WordCount.
- 50%: correct implementation of Jacobi Method.
- 30%: performance of your solution. This is equally split
across the two applications.
- 10%: a clear README file, clarity of your source code and completeness
of your comments.
- 10% bonus for those who make their last turn-in at least three days before the deadline.
This assignment is adapted from one created by Prof. Kai Shen.