Assignment #2 - Hadoop Parallel Data Processing

Due by 11:59pm, Friday, October 11.

You earn a 5% bonus if you make your last (and therefore final) turn-in by 11:59pm, Wednesday, October 9. You earn a 10% bonus if you make your last turn-in by 11:59pm, Monday, October 7.

Please direct your questions about this assignment to the TA.

In this assignment, you will program three applications on the Hadoop parallel data processing platform. This will allow you to gain practical experience with MapReduce programming and learn the performance implications of parallel data processing.

Hadoop System Structure:

A Hadoop cluster is made of two parallel systems: the Hadoop Distributed File System (HDFS) and the MapReduce framework. Each of these systems performs a distinct task, but they work together to process large amounts of data. HDFS stores files across the cluster, spreading them out so it can hold files that are larger than any individual node could store. MapReduce takes Java programs and handles both running them near where the data is stored and scheduling tasks so that no node is overloaded.

HDFS is made of a single NameNode and multiple DataNodes. The NameNode tracks the location of the files in the DataNodes and is responsible for coordinating them to ensure consistency of data. When a client wishes to read data, it asks the NameNode which DataNodes hold the data and then contacts those DataNodes directly to get the data. Similarly, when writing data the client asks the NameNode where to store the data and then uploads the data directly to the DataNodes. To handle large files, HDFS splits files into blocks of uniform size. DataNodes hold blocks instead of files, and blocks from a single file may be spread across multiple DataNodes.

MapReduce is made of a single JobTracker and multiple TaskTrackers. Users write MapReduce jobs, which consist of two types of tasks: maps and reduces. The JobTracker takes a description of these tasks and schedules them across the TaskTrackers. One TaskTracker runs on each DataNode, and the JobTracker ensures that tasks run close to the nodes that have the data for them.

In our cluster, the NameNode and JobTracker both run on node17.cs.rochester.edu. On larger clusters, these servers run on separate computers. A DataNode and a TaskTracker run on node18, node20, node21, node23, node24, node25, node26, node27, and node28. From the university network, you can check the status of the HDFS cluster by visiting http://node17.cs.rochester.edu:50070/, which displays the status of the DataNodes and lets you view the files in the cluster. The status of the MapReduce cluster is at http://node17.cs.rochester.edu:50030/, which shows the status of the TaskTrackers and the jobs that are queued, running, and completed. Every job has a link that displays its current status, number of tasks, and various statistics for the run.

You do not need to log into any of these machines to run your programs. You instead run your program on another machine on the research network such as cycle1, cycle2, or cycle3 and the program will connect to the NameNode and JobTracker as needed. More details on how to do this and how to read the status of the MapReduce cluster are below. If you are interested in more detail about the architecture of HDFS, you can read the HDFS Architecture Guide.

Hadoop Programming:

The MapReduce Tutorial provides an excellent starting point for understanding the general structure of a MapReduce program and serves as a template for your own programs. Note that the "WordCount v2.0" code near the bottom demonstrates several useful features, including implementing the Tool interface (which enables your program to understand several useful common options) and the DistributedCache (which is useful for distributing the centers for k-means and the vector for matrix-vector multiplication).
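
As a rough illustration of those two features, here is a minimal driver sketch, not a complete solution: the class name, argument order, and cached-file argument are hypothetical, and the mapper and reducer classes are left for you to fill in. It uses the old org.apache.hadoop.mapred API that the rest of this page assumes.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Hypothetical driver: expects <input> <output> <file to cache> as arguments.
    public class ExampleDriver extends Configured implements Tool {

        public int run(String[] args) throws Exception {
            JobConf conf = new JobConf(getConf(), ExampleDriver.class);
            conf.setJobName("example");

            // Make a file (the vector, or the k-means centers) available to every task.
            DistributedCache.addCacheFile(new URI(args[2]), conf);

            // Placeholder types and classes; substitute your own.
            conf.setOutputKeyClass(LongWritable.class);
            conf.setOutputValueClass(DoubleWritable.class);
            // conf.setMapperClass(...);
            // conf.setReducerClass(...);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);
            return 0;
        }

        public static void main(String[] args) throws Exception {
            System.exit(ToolRunner.run(new Configuration(), new ExampleDriver(), args));
        }
    }

Because ToolRunner parses the common options first, the args passed to run() contain only your program's own arguments.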

The Hadoop API is the place to check for detailed information on which functions to call. Some of the more useful classes are JobConf and the InputFormat subclasses. You will have to implement the Tool, Mapper, and Reducer interfaces. Extending MapReduceBase gives you empty implementations of the methods that not every map or reduce task needs.
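
For orientation, a bare-bones mapper/reducer pair in the old org.apache.hadoop.mapred API might look like the following sketch; the key and value types here are arbitrary placeholders rather than the types your applications will necessarily need.

    import java.io.IOException;
    import java.util.Iterator;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    // Hypothetical skeleton; MapReduceBase supplies empty configure() and close().
    public class Skeleton {

        public static class Map extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> output,
                            Reporter reporter) throws IOException {
                // Emit intermediate (key, value) pairs here.
                output.collect(new Text(value.toString()), new IntWritable(1));
            }
        }

        public static class Reduce extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterator<IntWritable> values,
                               OutputCollector<Text, IntWritable> output,
                               Reporter reporter) throws IOException {
                // Combine the values for one key; here, a simple sum.
                int sum = 0;
                while (values.hasNext()) {
                    sum += values.next().get();
                }
                output.collect(key, new IntWritable(sum));
            }
        }
    }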

All of the official documentation can be found at the main documentation index.

Applications:

You are asked to implement the following three applications in this assignment:
  1. Word Count
    The input to your word count program will be one or more text files. You should count the number of files each word appears in. Your word identification should be case-insensitive and ignore anything that isn't a letter. For correctness tests, you can find 37 text files from Project Gutenberg in the /input/gutenberg directory in the cluster. Some words used in all of the files are: for, one, thousand, and years. Some words used in only a single file: academics, faints, tag, and yams. We provide another dataset in /input/webster.1 for performance testing.

    Possibly helpful hint: Here is the code for a mapper to get a file name for its input:
        FileSplit fileSplit = (FileSplit) reporter.getInputSplit();
        String filename = fileSplit.getPath().getName();
    	  
  2. Matrix-Vector Multiplication
    Matrix-vector multiplication takes a file describing a matrix as its input and must also load a file containing a vector. The matrix will be split across map tasks, but every task will need to load the entire vector. Both files are text files with one entry per line. Each line of the matrix file has an integer row, a space, an integer column, a space, and a floating point value for that location in the matrix. The rows and columns are numbered starting from 1, not 0. The vector file is simply one floating point value per line. For example:
        matrix.dat
            1 1 1.1
            1 2 2.2
            2 1 3.3
            2 2 4.4
    
        vector.dat
            1
            2
    	  
    The vector and matrix files can be found in HDFS at /input/vector.dat and /input/matrix.1. The first entry in the result vector should be 10240.0. A sketch of one possible mapper for this application appears after this list.

  3. K-means
    The data file for k-means is a text file with one point per line. The X and Y coordinates are separated by a tab. (Note that this makes it suitable for reading by KeyValueTextInputFormat.) The coordinates are both integers and range from 0 to 1048576 (2^20). You may find it useful to use a counter (see the MapReduce tutorial) to track the change in the centers during the reduce step. The TA's program was designed to stop after the centers moved less than 10 units each, or 50 units total. A set of points to use as input can be found at /input/points.1. It was generated at random in 5 clusters. The TA's code stopped after 7 iterations and was able to find one of the clusters near 665539, 159651. Due to the way k-means works, your code may have slightly different coordinates than that.

    Possibly helpful hint: a JobConf can be reused across iterations, but only if you call JobConf.setJarByClass with the job's class; without that call it cannot be reused for multiple iterations. A sketch of a driver loop that reuses a JobConf and checks a counter appears after this list.
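
For matrix-vector multiplication (application 2), one possible approach is to have the driver place the vector file in the DistributedCache, read the vector in each map task's configure(), and emit one partial product per matrix entry for a reducer to sum. The sketch below follows that approach; the class name and other details are hypothetical.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;

    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Hypothetical mapper: emits (row, partial product); a reducer sums per row.
    public class MatVecMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, LongWritable, DoubleWritable> {

        private final ArrayList<Double> vector = new ArrayList<Double>();

        @Override
        public void configure(JobConf conf) {
            try {
                // Assumes the driver called DistributedCache.addCacheFile with the
                // vector file; getLocalCacheFiles returns the local copies.
                Path[] cached = DistributedCache.getLocalCacheFiles(conf);
                BufferedReader in = new BufferedReader(new FileReader(cached[0].toString()));
                String line;
                while ((line = in.readLine()) != null) {
                    vector.add(Double.parseDouble(line.trim()));
                }
                in.close();
            } catch (IOException e) {
                throw new RuntimeException("could not read the cached vector", e);
            }
        }

        public void map(LongWritable key, Text value,
                        OutputCollector<LongWritable, DoubleWritable> output,
                        Reporter reporter) throws IOException {
            // Each input line is "row col value"; rows and columns start at 1.
            String[] parts = value.toString().trim().split("\\s+");
            long row = Long.parseLong(parts[0]);
            int col = Integer.parseInt(parts[1]);
            double entry = Double.parseDouble(parts[2]);
            output.collect(new LongWritable(row),
                           new DoubleWritable(entry * vector.get(col - 1)));
        }
    }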
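
For k-means (application 3), the driver typically runs the job in a loop and reads a counter to decide when to stop. The sketch below assumes reducers that increment a TOTAL_MOVEMENT counter with how far the centers moved (via reporter.incrCounter); the counter name, stopping threshold, and output-path handling are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.KeyValueTextInputFormat;
    import org.apache.hadoop.mapred.RunningJob;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Hypothetical k-means driver: expects <input> <output prefix> as arguments.
    public class KMeansDriver extends Configured implements Tool {

        // Reducers call reporter.incrCounter(KMeansCounters.TOTAL_MOVEMENT, moved).
        public static enum KMeansCounters { TOTAL_MOVEMENT }

        public int run(String[] args) throws Exception {
            JobConf conf = new JobConf(getConf(), KMeansDriver.class);
            conf.setJobName("k-means");
            conf.setJarByClass(KMeansDriver.class);  // needed if the JobConf is reused
            conf.setInputFormat(KeyValueTextInputFormat.class);
            // conf.setMapperClass(...); conf.setReducerClass(...);  // your classes here
            FileInputFormat.setInputPaths(conf, new Path(args[0]));

            long moved = Long.MAX_VALUE;
            int iteration = 0;
            while (moved > 50) {
                // Each iteration needs a fresh, not-yet-existing output directory.
                FileOutputFormat.setOutputPath(conf, new Path(args[1] + "-" + iteration));
                // Make the current centers available to the tasks here
                // (for example, via the DistributedCache).
                RunningJob job = JobClient.runJob(conf);
                moved = job.getCounters().getCounter(KMeansCounters.TOTAL_MOVEMENT);
                iteration++;
            }
            return 0;
        }

        public static void main(String[] args) throws Exception {
            System.exit(ToolRunner.run(new Configuration(), new KMeansDriver(), args));
        }
    }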

Build/Run Your Programs:

The TA recommends using ant to build your programs. The ant utility is already installed on the department machines. The TA has created a build.xml file that will build your code and already includes the paths to the Hadoop libraries. Download the build.xml file and put all of your Java source files under a src/ subdirectory. Running ant will then create a jar file at build/Hadoop.jar.

You can run a class from that jar file with the following command:

    ~cs576/hadoop-1.2.1/bin/hadoop jar build/Hadoop.jar <class> <arguments>
      
Here, <class> is the name of your main class and the remaining arguments are passed to your program. The hadoop program will automatically read the configuration needed to run your program on the cluster. It will print a log containing any errors and a simple progress meter, and at the end it will print a set of counters. You may find the following counters useful when debugging your programs:
    * Map input records
    * Map output records
    * Combine input records
    * Combine output records
    * Reduce input records
    * Reduce output records
      
MapReduce jobs generally expect their input and output to be in the HDFS cluster. You can upload a file with the command ~cs576/hadoop-1.2.1/bin/hadoop fs -put <file> ., which will upload the file into your home directory on the HDFS cluster. If this fails, you may need to create your home directory with the command ~cs576/hadoop-1.2.1/bin/hadoop fs -mkdir /user/<username>, where username is your username on the department computers. Similarly, files can be downloaded via ~cs576/hadoop-1.2.1/bin/hadoop fs -get <file> ., which will download the file from HDFS to your current directory.

Each MapReduce job must have an output path, which must not already exist; it will be created as a directory in HDFS. That directory will also contain logs of your program. The output from the reduce tasks will be saved as part-00000 (a job with more than one reduce task will also create part-00001 and so on, but that should not be needed for this assignment). If your output is a text file, you can read it without downloading it with the command ~cs576/hadoop-1.2.1/bin/hadoop fs -cat <output_directory>/part-00000 (where output_directory is the path set by your program). If you used a SequenceFileOutputFormat, you can get a text version with ~cs576/hadoop-1.2.1/bin/hadoop fs -text <output_directory>/part-00000.

Measuring Parallelism:

In order to measure the performance of your code, you need to test it with varying levels of parallelism. You can control the number of map tasks run by your code by varying the number of blocks in your input data. For the sample data provided above, all of the data is stored in a single block. You can get versions that have 2, 4, and 8 blocks by changing the `.1` in the file name to `.2`, `.4`, or `.8`.

You can check the log of your program for a "Launched map tasks" counter, which will tell you how many map tasks were launched to process your data.

Cluster Courtesy:

In order to test your code and ensure that others can test their code, you need to pay attention to what jobs are running on the cluster. You can do this by checking the cluster status page, or by the command ~cs576/hadoop-1.2.1/bin/hadoop job -list. If someone is running a job already, please wait until it is complete to start your own. Conversely, please give some space between your jobs so others have a chance to run.

If your code has a bug that causes it to run indefinitely, you can stop it with the command ~cs576/hadoop-1.2.1/bin/hadoop job -kill <job_ID>, where job_ID is the ID given in the log at the start of your job, on the cluster status page, or in the output of the job list command above. Please note that simply killing the program running on your own computer is not enough to stop the job on the cluster. If you suspect that someone else's program is left hanging in the system, please let the TA know.

Turn-in:

You are asked to turn in your source files and a README file. The README file should be in plain text format. It should contain a description of your design and state what is and what is not realized in your implementation. If your program requires any special compilation flag to build, you need to specify the full build command in the README file. The README file should also include any additional information needed to run your code.

You should electronically turn in the required files. Instructions for electronic turn-in can be found on the course Web site.

Grading guideline:

Turning in early helps relieve congestion on the shared cluster, so we strongly encourage it.

Late turn-in policy:

Late turn-ins will be accepted for up to three days, with a 10% penalty for each late day. No turn-in more than three days late will be accepted.