Due by 11:59pm, Monday, April 7.
You earn a 10% bonus if you make your last (and therefore final) turn-in by 11:59pm, Friday, April 4.
Please direct your questions about this assignment to the TA.
In this assignment, you will program three applications on the Hadoop parallel data processing platform. This will give you practical experience with MapReduce programming and the performance implications of parallel data processing.
HDFS consists of a single NameNode and multiple DataNodes. The NameNode tracks the location of files on the DataNodes and coordinates them to ensure the consistency of data. When a user wishes to read data, it asks the NameNode which DataNodes hold the data and then contacts those DataNodes directly to retrieve it. Similarly, when writing data, the user asks the NameNode where to store the data and then uploads it directly to the DataNodes. To handle large files, HDFS splits files into blocks of uniform size. DataNodes hold blocks rather than whole files, and the blocks of a single file may be spread across multiple DataNodes.
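For example, a small client program along the following lines (a hypothetical sketch; the class name and argument handling are ours, not part of the assignment) reads a file out of HDFS. Under the hood, the FileSystem client asks the NameNode for the block locations and then streams the data from the DataNodes:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: print a text file stored in HDFS to standard output.
public class HdfsCat {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up the cluster configuration
        FileSystem fs = FileSystem.get(conf);       // client handle; talks to the NameNode
        Path path = new Path(args[0]);              // e.g. a file under /input/
        BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(path)));
        for (String line = in.readLine(); line != null; line = in.readLine()) {
            System.out.println(line);               // the bytes stream from the DataNodes
        }
        in.close();
    }
}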
MapReduce consists of a single JobTracker and multiple TaskTrackers. Users write MapReduce jobs, which consist of two types of tasks: maps and reduces. The JobTracker takes a description of these tasks and schedules them across the TaskTrackers. One TaskTracker runs on each DataNode, and the JobTracker attempts to run each task close to the nodes that hold its data.
In our cluster, the NameNode and JobTracker both run on node17.cs.rochester.edu. On larger clusters, these servers may run on separate computers. A DataNode and a TaskTracker run on each of node18, node20, node21, node23, node24, node25, node26, node27, and node28. From the university network, you can check the status of the HDFS cluster by visiting http://node17.cs.rochester.edu:50070/, which displays the status of the DataNodes and lets you view the files in the cluster. Similarly, http://node17.cs.rochester.edu:50030/ shows the status of the TaskTrackers and the jobs that will run, are running, and have run on the MapReduce cluster. Each job has a link displaying its current status, number of tasks, and various statistics for the run.
You do not need to log into any of these machines to run your programs. Instead, you run your program on another machine on the research network, such as cycle1, cycle2, or cycle3, and it will connect to the NameNode and JobTracker as needed. Specifics on how to do this and how to read the status of the MapReduce cluster are provided below. If you are interested in more detail about the architecture of HDFS, you can read the HDFS Architecture Guide.
The Hadoop API is the place to check for detailed information on which functions to call. Some of the more useful classes are JobConf and the InputFormat subclasses. You will have to implement the Tool, Mapper, and Reducer interfaces. Extending MapReduceBase gives you empty implementations of the methods that not every map or reduce task needs.
All of the official documentation can be found at the main documentation index. A wealth of examples can be found on the web, which should guide you in building your own programs. When you use online resources, please make sure they are up to date; the Hadoop API has undergone several revisions, and some functionality and API references you find may be out of date. And of course, treat such examples as you would any other work by others: if you build upon them, add appropriate references to your code and your report.
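For orientation, here is a minimal word-count skeleton against the old (org.apache.hadoop.mapred) API that Hadoop 1.2.1 uses. Treat it as a hypothetical sketch, not a template you must follow; the class and job names are placeholders:

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical sketch: count how often each word appears in the input.
public class WordCount extends Configured implements Tool {

    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            // Emit (word, 1) for every whitespace-separated token in the line.
            for (String token : value.toString().split("\\s+")) {
                if (token.isEmpty()) continue;
                word.set(token);
                output.collect(word, ONE);
            }
        }
    }

    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            // Sum the partial counts for this word.
            int sum = 0;
            while (values.hasNext()) sum += values.next().get();
            output.collect(key, new IntWritable(sum));
        }
    }

    public int run(String[] args) throws Exception {
        JobConf conf = new JobConf(getConf(), WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);   // combiner reuses the reducer
        conf.setReducerClass(Reduce.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new WordCount(), args));
    }
}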
The input files are in the /input/gutenberg HDFS directory in the cluster. Some words used in all of the files are: for, one, thousand, and years. Some words used in only a single file are: academics, faints, tag, and yams. We provide another dataset in the /input/webster HDFS directory for performance testing.
Inside a map task, you can find out which input file the current record came from:

FileSplit fileSplit = (FileSplit) reporter.getInputSplit();
String filename = fileSplit.getPath().getName();
matrix.dat:
1 1 1.1
1 2 2.2
2 1 3.3
2 2 4.4

vector.dat:
1 2

The vector and matrix files can be found in HDFS at /input/vector.dat and /input/matrix.dat. The first entry in the result should be 10240.0.
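As an illustration only, parsing one "row col value" line of this triplet format might look like the following; the helper class is hypothetical, and the overall multiplication strategy is left to you:

// Hypothetical helper for the "row col value" triplet format of matrix.dat.
public class MatrixEntry {
    public final int row, col;
    public final double value;

    public MatrixEntry(int row, int col, double value) {
        this.row = row; this.col = col; this.value = value;
    }

    // Parses one whitespace-separated "row col value" line.
    public static MatrixEntry parse(String line) {
        String[] f = line.trim().split("\\s+");
        return new MatrixEntry(Integer.parseInt(f[0]),
                               Integer.parseInt(f[1]),
                               Double.parseDouble(f[2]));
    }

    public static void main(String[] args) {
        MatrixEntry e = MatrixEntry.parse("1 2 2.2");
        System.out.println(e.row + " " + e.col + " " + e.value);  // prints: 1 2 2.2
    }
}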
A sequential implementation is provided under /u/cs458/apps/jacobi/seq. The input matrices are in the /input/jacobi HDFS directory, named /input/jacobi/Number.dat, where Number is the value N.
Use smaller values of N to test for correctness; use the larger matrices for performance evaluation.
In our experience, Hadoop needs truly massive files to deliver good performance. Depending on your feedback, we may provide larger and different inputs.
Use JobConf.setJarByClass with the job's class to allow you to reuse a JobConf across iterations, and save and update an iteration variable i on every iteration with a mechanism like job.getConfiguration().setInt("iteration", i).
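As a hypothetical sketch of that mechanism (the class name and what the map task does with the value are made up), a task can read the iteration number back in configure():

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical sketch: a mapper that recovers the driver's iteration counter.
public class IterationAwareMap extends MapReduceBase
        implements Mapper<LongWritable, Text, LongWritable, Text> {
    private int iteration;

    @Override
    public void configure(JobConf job) {
        // The driver stored this with setInt("iteration", i) before submitting the job.
        iteration = job.getInt("iteration", 0);
    }

    public void map(LongWritable key, Text value,
                    OutputCollector<LongWritable, Text> output, Reporter reporter)
            throws IOException {
        // Use `iteration` however your algorithm requires; here we merely tag the record.
        output.collect(key, new Text(iteration + ":" + value.toString()));
    }
}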
We provide a build.xml file under /u/cs458/apps/jacobi/hadoop that should build your code and already includes the paths to the Hadoop libraries. Download the build.xml file and put all of your Java source files under a src/ subdirectory. Running ant will then create a jar file, build/Hadoop.jar. Don't forget that you need to have your JAVA_HOME environment variable set; we recommend using the latest JVM installed on the department machines (e.g., export JAVA_HOME="/usr/lib/jvm/java-1.7.0/").
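Assuming you work in a fresh directory (the directory name below is just an example), the build steps look like:

mkdir jacobi-hadoop && cd jacobi-hadoop
cp /u/cs458/apps/jacobi/hadoop/build.xml .
mkdir src                                     # put all of your .java files in here
export JAVA_HOME="/usr/lib/jvm/java-1.7.0/"
ant                                           # produces build/Hadoop.jar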
You can run a class from that jar file with the following command:
~cs458/hadoop-1.2.1/bin/hadoop jar build/Hadoop.jar <class> <arguments>

where class is the name of your main class and the arguments are passed to your program. The hadoop program will automatically read the configuration to run your program on the cluster.
It will output a log containing any errors and a simple progress meter. At the end, it prints a collection of counters. You may find the following counters useful when debugging your programs:
* Map input records
* Map output records
* Combine input records
* Combine output records
* Reduce input records
* Reduce output records

MapReduce tasks generally expect their input and output to be in the HDFS cluster. You can upload a file with the command
~cs458/hadoop-1.2.1/bin/hadoop fs -put <file> .

which will upload <file> into your home directory on the HDFS cluster.
Since the input files for your program are read-only, you don't have to copy them to your HDFS home directory; you can simply instruct your applications to read their input from the /input/ HDFS directory. We have already created your home directories in the HDFS cluster; they can be found at /user/<username>, where <username> is your cs login.
Similarly, files can be downloaded via

~cs458/hadoop-1.2.1/bin/hadoop fs -get <file> .

which will download the file from HDFS to your current directory. Running a utility without parameters prints helpful usage information; for example, ~cs458/hadoop-1.2.1/bin/hadoop fs lists the set of commands that you can apply to HDFS.
Each MapReduce job must have an output path, which must not exist before the program runs; it will be created as a directory in HDFS. That directory will contain the logs of your program. The output from the reduce tasks will be saved as part-00000 (large outputs will create part-00001 and so on, but that is unlikely to happen for this assignment). If your output is a text file, you can read it without downloading it with the command

~cs458/hadoop-1.2.1/bin/hadoop fs -cat <output_directory>/part-00000

(where output_directory is the path set by your program). If you used a SequenceFileOutputFormat, you can get a text version with

~cs458/hadoop-1.2.1/bin/hadoop fs -text <output_directory>/part-00000
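One consequence is that if you rerun a job with the same output path (for instance, across iterations), you must remove the old directory first. A small helper along these lines (hypothetical; the class and method names are ours) does this from Java:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: recursively delete a stale output directory, since
// Hadoop refuses to write to an output path that already exists.
public class CleanOutput {
    public static void deleteIfExists(Configuration conf, String dir) throws Exception {
        FileSystem fs = FileSystem.get(conf);
        Path out = new Path(dir);
        if (fs.exists(out)) {
            fs.delete(out, true);   // recursive delete; double-check the path first
        }
    }
}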
You can check the log of your program for the "Launched map tasks" counter, which tells you how many map tasks were launched to process your data.
You can see the jobs currently on the cluster with

~cs458/hadoop-1.2.1/bin/hadoop job -list

If someone is already running a job, please wait until it completes before starting your own. Conversely, please leave some space between your jobs so that others have a chance to run.
If your code has a bug that causes it to run indefinitely, you can stop it with the command

~cs458/hadoop-1.2.1/bin/hadoop job -kill <job_ID>

where job_ID is the ID given in the log at the start of your job, on the cluster status page, or in the output of the job list command above. Please note that simply killing the program running on your own computer is not enough to stop the job on the cluster. If you suspect that someone else's program has been left hanging in the system, please e-mail them and ask them to kill the task; if the problem is not resolved, let the TA know.
You should electronically turn in the required files. Instructions for electronic turn-in can be found on the course Web site.