Due by Wednesday, April 22.
In this assignment, you will develop an application using the Hadoop parallel data processing platform. This will allow you to gain practical experience using the MapReduce programming model and to learn to parallelize for performance when processing large-scale data.
HDFS is made of a single NameNode and multiple DataNodes. The NameNode tracks the location of the files in the DataNodes and is responsible for coordinating them to ensure data consistency. When a user wishes to read data, it asks the NameNode which DataNodes hold the data and then contacts those DataNodes to get the data. Similarly, when writing data, the user asks the NameNode where to store the data and then uploads the data directly to the DataNodes. To handle large files, HDFS splits files into blocks of uniform size. DataNodes hold blocks instead of files, and the blocks of a single file may be spread across multiple DataNodes. If you are interested in more detail about the architecture of HDFS, you can read the HDFS Architecture Guide.
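As an illustration of this read path (not something you need for the assignment itself), a client program can open an HDFS file through Hadoop's FileSystem API, which hides the NameNode lookup and the direct transfers from the DataNodes. The file name below is only a placeholder:

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadExample {
  public static void main(String[] args) throws Exception {
    // Picks up the cluster configuration (NameNode address, etc.) from the environment.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Opening a file asks the NameNode for block locations; the stream then
    // reads the blocks directly from the DataNodes that hold them.
    Path path = new Path("/users/input/gutenberg/example.txt");  // placeholder file name
    BufferedReader reader = new BufferedReader(new InputStreamReader(fs.open(path)));
    String line;
    while ((line = reader.readLine()) != null) {
      System.out.println(line);
    }
    reader.close();
  }
}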
MapReduce is made of a single JobTracker and multiple TaskTrackers. Users write MapReduce jobs, which consist of two types of tasks: maps and reduces. The JobTracker takes a description of these tasks and schedules them across the TaskTrackers. One TaskTracker runs on each DataNode, and the JobTracker ensures that tasks run close to the nodes that have the data for them.
The Hadoop API is the place to check for detailed information on which functions to call. Some of the more useful classes are JobConf and the InputFormat subclasses. You will have to implement subclasses of Tool, Mapper, and Reducer. Extending MapReduceBase will give you empty implementations of the functions that not every map or reduce task needs.
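For orientation, here is a minimal sketch of how these classes typically fit together in the older org.apache.hadoop.mapred API (the API the classes above belong to). It is the classic word-count pattern and is meant only as a starting point, not as the required solution:

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

  // Map task: emit (word, 1) for every word in the input line.
  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {
      StringTokenizer tokens = new StringTokenizer(value.toString());
      while (tokens.hasMoreTokens()) {
        word.set(tokens.nextToken());
        output.collect(word, ONE);
      }
    }
  }

  // Reduce task: sum the counts emitted for each word.
  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output,
                       Reporter reporter) throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  // Implementing Tool lets ToolRunner parse the standard Hadoop options for you.
  public int run(String[] args) throws Exception {
    JobConf conf = new JobConf(getConf(), WordCount.class);
    conf.setJobName("wordcount");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);   // combiner is optional but reduces shuffle traffic
    conf.setReducerClass(Reduce.class);
    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new WordCount(), args));
  }
}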
All of the official documentation can be found at the main documentation index. A wealth of examples can be found on the web, which should guide you in building your own programs. When you use online resources, please make sure that they are up to date; the Hadoop API has undergone a few revisions, and some functionality and API references you find may be out of date. And of course, treat such examples as you would any other work by others: if you build upon them, add appropriate references to your code and your report.
The input text files are stored in the /users/input/gutenberg HDFS directory in the cluster. Some words used in all of the files are: for, one, thousand, and years. Some words used in only a single file are: academics, faints, tag, and yams.
We provide another dataset in the /users/input/websters HDFS directory for performance testing.
FileSplit fileSplit = (FileSplit) reporter.getInputSplit();
String filename = fileSplit.getPath().getName();
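For context, here is a sketch of how that snippet might be used inside a map function (it assumes the old org.apache.hadoop.mapred API, including its FileSplit class, and the word-count style Mapper sketched earlier). Tagging each word with its filename in the key is just one possible approach, not a required format:

// Inside a Mapper implementation (old org.apache.hadoop.mapred API):
public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output,
                Reporter reporter) throws IOException {
  // Find out which input file this record came from.
  FileSplit fileSplit = (FileSplit) reporter.getInputSplit();
  String filename = fileSplit.getPath().getName();

  StringTokenizer tokens = new StringTokenizer(value.toString());
  while (tokens.hasMoreTokens()) {
    // One option: make the filename part of the key so the reducer
    // produces a separate count for each (word, file) pair.
    Text wordInFile = new Text(tokens.nextToken() + "\t" + filename);
    output.collect(wordInFile, new IntWritable(1));
  }
}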
A sequential version of the Jacobi code is provided in /u/cs258/apps/jacobi/seq. The input matrices are stored in the /users/input/jacobi HDFS directory, named /users/input/jacobi/Number.dat, where Number is the value N. Use smaller values of N to test for correctness; use the larger matrices for performance evaluation. In our experience, Hadoop needs truly massive files to offer good performance.
Hadoop can be used from any .csug host (for example, cycle1.csug.rochester.edu). With the following commands, you can set up your environment variables for Hadoop:
export JAVA_HOME=/usr/lib/jvm/java
export HADOOP_HOME=/u/hduser/hadoop
To use the Hadoop Distributed File System (HDFS), you need a user working directory: the place on HDFS where you can store your files. (The example below is written for user cs258.)
Make a working directory:
$HADOOP_HOME/bin/hdfs dfs -ls /users
$HADOOP_HOME/bin/hdfs dfs -mkdir /users/cs258
Please note that in the above commands, after dfs you can use any distributed file system command (e.g., -ls, -mkdir, -put, etc.). The list of possible commands can be found here.
All user directories are under /users/, where you should make your own directory. Please make sure that you set proper access permissions on your working directory using the chmod command.
NOTE: Please make sure to remove your directory from the HDFS after you are done with the assignment.
If your code has a bug that causes it to run indefinitely, you can stop it with the command
$HADOOP_HOME/bin/hadoop job -kill <job_ID>
where job_ID is the ID given in the log at the start of your job, on the cluster status page, or in the output of the job list command below. Please note that simply killing the program running on your own computer is not enough to stop the job on the cluster.
If you suspect that someone else's program has been left hanging in the system, please e-mail them and ask them to kill the task, and if the problem is not resolved let the TA know.
$HADOOP_HOME/bin/hadoop job -list
will provide the same information.
First, add the Java compiler (tools.jar) to the Hadoop classpath:
export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar
Next, you can use the following commands to compile your WordCount.java source and package it into a jar file:
$HADOOP_HOME/bin/hadoop com.sun.tools.javac.Main WordCount.java
jar cf WordCount.jar WordCount*.class
You can run a class from that jar file with the following command:
$HADOOP_HOME/bin/hadoop jar WordCount.jar <class> <arguments>
where class is the name of your main class (i.e., WordCount in this example) and the arguments are passed to your program (e.g., the directories for input and output). For this example we have:
$HADOOP_HOME/bin/hadoop jar WordCount.jar WordCount [path to input file] [path to output file]
The hadoop program will automatically read the configuration to run your program on the cluster.
It will output a log file containing any errors and a simple progress meter.
At the end it will output a small pile of counters. You may find the following
useful when attempting to debug your programs:
* Map input records
* Map output records
* Combine input records
* Combine output records
* Reduce input records
* Reduce output records
MapReduce tasks generally expect their input and output to be in the HDFS cluster. You need to create your home directory in the HDFS cluster, named /users/username as described above. You can upload a file with
$HADOOP_HOME/bin/hdfs dfs -put <localfile> /users/username/.
which will upload localfile into your home directory on the HDFS cluster.
Since the input files for your program are read-only, you don't have to copy them to your HDFS home directory; you can just instruct your applications to get input from the /users/input/ HDFS directory.
Similarly, files can be downloaded via
$HADOOP_HOME/bin/hdfs dfs -get <file> .
which will download the file from HDFS to your current directory.
You will get useful help information when you run a utility without parameters; for example, $HADOOP_HOME/bin/hdfs dfs will list the set of commands that you can apply to HDFS.
Each MapReduce job must have an output path that does not exist before the program runs; it will be created as a directory in HDFS. That directory will contain logs of your program. The output from the reduce tasks will be saved as part-00000 (jobs with more than one reduce task will also create part-00001 and so on, but that is unlikely to happen for this assignment). If your output is a text file, you can read it without downloading it with the command
$HADOOP_HOME/bin/hdfs dfs -cat <output_directory>/part-00000
(where output_directory is the path set by your program). If you used a SequenceFileOutputFormat, you can get a text version with
$HADOOP_HOME/bin/hdfs dfs -text <output_directory>/part-00000
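If you do choose sequence-file output, the job configuration change in the old API would look roughly like this (a sketch, assuming the JobConf setup shown in the word-count skeleton above):

// In run(), instead of TextOutputFormat:
conf.setOutputFormat(org.apache.hadoop.mapred.SequenceFileOutputFormat.class);
FileOutputFormat.setOutputPath(conf, new Path(args[1]));  // this directory must not already exist in HDFS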
You can check the log of your program for a "Launched map tasks" counter, which will tell you how many map tasks were launched to process your data.
You should electronically turn in the required files. Instructions for electronic turn-in can be found on the course Web site.