CSC 2/458: Build Notes

Makefiles

Please use Makefiles. Samples are provided in the apps directory.
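
If you would rather start from scratch than from a sample, a minimal Makefile might look like the sketch below; the program name, compiler, and flags are placeholders, so adapt them to your project (or simply copy one of the provided samples).

# Minimal example Makefile; names and flags are placeholders.
CC     = gcc
CFLAGS = -O2 -Wall

hello: hello.c
	$(CC) $(CFLAGS) -o hello hello.c

clean:
	rm -f hello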


Machines

You may use any of the following machines on the instructional csug network.

Turn In

USING THE TURN-IN SCRIPT

The turn-in script recursively copies all files and subdirectories in your current directory to a location in the course's account. Please avoid running it in your home directory or any other directory that contains files irrelevant to the current project. If you run into any problems while using it, please contact the TA by e-mail (including the script's output will help). You may turn in your project as many times as you need; however, only the MOST RECENT submission will be graded. DO NOT FORGET to submit a README file (text or PDF format) with your code, and include your name and e-mail address in the README. Check the appropriate assignment page for information on what the README should contain.


INSTRUCTIONS:
1) Go to the directory that contains ALL the files that are related to your current project.
2) Run the script on the undergrad/instructional network (i.e., csug.rochester.edu):

	/u/cs258/bin/turnin

Upon successful completion you will get output of the following form:
 
Submitting `/home/vax8/u50/<your netid>/sample_submission`
Submission successful!

You will also receive an e-mail message with the subject "Submission from <your netid>", which will contain a listing of all the files you submitted along with their last modification dates and sizes.


Assignment #2 Build Notes

Below are instructions for compiling and running programs under two programming models: (1) MPI and (2) OpenMP. Besides the difference between message passing and shared memory, MPI is a library interface, whereas OpenMP consists of a set of directives that require special compiler support. Although both are language independent, we will focus on C/C++.

MPI

MPI is a message-passing library in which the programmer explicitly specifies the communication and transfer of data between executing processes. The communication can be implemented using a number of underlying protocols, such as shmem (shared memory) or TCP/IP (messaging across a network).

Setup

First you need to load MPI with the following command:

$ module load mpi
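
To confirm that the module loaded correctly, you can check that the MPI compiler wrapper is now on your PATH:

$ which mpicc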

How to compile

Like gcc, mpicc performs both compilation and linking. A sample Makefile and program can be found at /u/cs458/apps/sor/mpi; you may use that directory as a template and plug in your own programs.
You can compile your code with one of the following commands:

$ mpicc <your source> -o output
$ mpic++ <your source> -o output

How to program

Take a look at the MPI tutorial from Lawrence Livermore National Lab.
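
As a quick sanity check of the toolchain (not part of any assignment), here is a minimal MPI hello-world in C; the file name mpi_hello.c is just an example:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down cleanly */
    return 0;
}

Compile and run it, for example, with:

$ mpicc mpi_hello.c -o mpi_hello
$ mpirun -np 4 ./mpi_hello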

How to run

To run your compiled binary use the following command:

$ mpirun [ options ] <program> [ <args> ]

Some useful options:

 -np <n>:   number of processes to be spawned
 -ppn <n>:  number of processes to launch on each node

 NOTE: If this option is not specified, processes are assigned to the physical cores on the first node; if the number of cores is exceeded, the next node is used.

 -H, -host, --host <host1,host2,...,hostN>: list of hosts on which to invoke processes
 -hostfile, --hostfile <hostfile>:          provide a hostfile to use
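
For example, the following launches four processes across node01 and node02, assuming your binary was compiled as output above:

$ mpirun -np 4 -host node01,node02 ./output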

Please see the following links for more information:
https://www.open-mpi.org/faq/?category=mpi-apps
https://www.open-mpi.org/doc/v4.0/man1/mpirun.1.php

To use Node01 - Node06:

NOTE: You need passwordless SSH login to use these nodes; they will serve as the hosts for your experiments. Passwordless login can be set up as follows:

$ ssh-keygen -t rsa
$ for i in 1 2 3 4 5 6; do ssh-copy-id "node0$i"; done;
$ for i in 1 2 3 4 5 6; do ssh-copy-id "node0$i.csug.rochester.edu"; done;
$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

After running the above commands, you can test that passwordless SSH is working correctly as follows:

$ ssh -v -l [username] -i ~/.ssh/id_rsa node02

OpenMP

Setup

Current versions of gcc should support OpenMP by default.

How to Compile

With gcc 4.2 or higher, all you need to do is add the -fopenmp command-line option:
$ gcc -fopenmp <your source> -o output
$ g++ -fopenmp <your source> -o output

How to program

For a programming guide you may refer to this tutorial. We have also included a simple hello world program at /u/cs258/apps/test_omp.
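
If you want a self-contained snippet to test with, the following is a minimal OpenMP hello-world in C (the file name omp_hello.c is just an example; the program under /u/cs258/apps/test_omp is the provided sample):

#include <stdio.h>
#include <omp.h>

int main(void) {
    /* Every thread in the parallel region prints its own id. */
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();       /* this thread's id */
        int nthreads = omp_get_num_threads(); /* threads in the team */
        printf("Hello from thread %d of %d\n", tid, nthreads);
    }
    return 0;
}

Compile it with $ gcc -fopenmp omp_hello.c -o omp_hello; the number of threads can be set at run time, e.g., $ OMP_NUM_THREADS=4 ./omp_hello.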

Hadoop

Hadoop System Structure:

A Hadoop cluster has two parts: the Hadoop Distributed File System (HDFS) and the MapReduce framework. Each performs a distinct task, but they work together to process large amounts of data. HDFS stores files across the cluster, spreading them out so that it can hold files larger than any individual node could store. MapReduce takes Java programs and handles both running them near where their data is stored and scheduling tasks so as not to overload the nodes.

HDFS is made of a single NameNode and multiple DataNodes. The NameNode tracks the location of the files in the DataNodes and is responsible for coordinating them to ensure consistency of data. When a client wishes to read data, it asks the NameNode which DataNodes hold the data and then contacts those DataNodes directly. Similarly, when writing data, the client asks the NameNode where to store the data and then uploads it directly to the DataNodes. To handle large files, HDFS splits files into blocks of uniform size. DataNodes hold blocks instead of files, and blocks from a single file may be spread across multiple DataNodes. If you are interested in more detail about the architecture of HDFS, you can read the HDFS Architecture Guide.

MapReduce is made of a single JobTracker and multiple TaskTrackers. Users write MapReduce jobs, which consist of two types of tasks: maps and reduces. The JobTracker takes a description of these tasks and schedules them across the TaskTrackers. One TaskTracker runs on each DataNode, and the JobTracker ensures that tasks run close to the nodes that have the data for them.

Hadoop Programming:

The MapReduce Tutorial provides an excellent starting point for understanding the general structure of a MapReduce program and a template for your own programs. As you search the web for useful examples, try to find simple algorithms that, in principle, follow the mechanism you are trying to implement. For example, if you are implementing a matrix-vector multiplication, a matrix transpose operation (which is simpler) may serve as a good guide.

The Hadoop API is the place to check for detailed information on what function to call. Some of the more useful classes are JobConf and the InputFormat subclasses. You will have to implement subclasses of Tool, Mapper, and Reducer. Extending MapReduceBase will give you empty versions of functions not all map or reduce tasks need.

All of the official documentation can be found at the main documentation index. A wealth of examples can be found on the web, which should guide you in building your own programs. When you use online resources, please make sure that they are up to date; the Hadoop API has undergone a few revisions, and some functionality and API references you might find may be out of date. And of course, treat such examples as you would any other work by others: if you build upon them, add appropriate references to your code and your report.

How to Use the CSUG Hadoop Cluster Setup:

Hadoop can be used from any csug host (for example, cycle1.csug.rochester.edu).

Setting up the "ENVIRONMENT VARIABLES"

With the following commands, you can set up your environment variables for Hadoop.

	 export JAVA_HOME=/usr/lib/jvm/java 
	 export HADOOP_HOME=/u/hduser/hadoop 

Setting up the "Working Directory"

To use the Hadoop Distributed File System (HDFS), you need a user working directory. This working directory is the place on HDFS where you can store your files.

(The example below is written for user cs258.)

List the existing user directories and make your own working directory:

	 $HADOOP_HOME/bin/hdfs dfs -ls /users 
	 $HADOOP_HOME/bin/hdfs dfs -mkdir /users/cs258 

Please note that in the above commands, anything after dfs can be any of the distributed file system commands (e.g., -ls, -mkdir, -put, etc.). The list of possible commands can be found here.

All users' directories are under /users/, which is where you should make your own directory. Please make sure to set proper access permissions on your working directory using the chmod command.

NOTE: Please make sure to remove your directory from the HDFS after you are done with the assignment.
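
For example (using the cs258 directory from above; substitute your own username), you could restrict the directory to your own account and later remove it when you are done:

	 $HADOOP_HOME/bin/hdfs dfs -chmod 700 /users/cs258
	 $HADOOP_HOME/bin/hdfs dfs -rm -r /users/cs258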

Cluster Courtesy:

In order to test your code and ensure that others can test theirs, you need to pay attention to what jobs are running on the cluster. You can do this by checking the cluster status page (see details below). If someone is already running a job, please wait until it is complete before starting your own. Likewise, please introduce a delay between your own job submissions so others have a chance to run.

If your code has a bug that causes it to run indefinitely, you can stop it with the command $HADOOP_HOME/bin/hadoop job -kill <job_ID>, where job_ID is the ID given in the log at the start of your job, on the cluster status page, or in the output of the hadoop job -list command described below. Please note that simply killing the program running on your own computer is not enough to stop the job on the cluster. If you suspect that someone else's program has been left hanging in the system, please e-mail them and ask them to kill the task; if the problem is not resolved, let the TA know.
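
For example (the job ID below is made up; use the one reported for your own job):

	 $HADOOP_HOME/bin/hadoop job -list
	 $HADOOP_HOME/bin/hadoop job -kill job_201903041024_0007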

On csug:

From the university network, you can check the status of the HDFS cluster at http://node01.csug.rochester.edu:50070/
using a web browser or the lynx tool from a terminal; this page shows the status of the DataNodes and lets you view the files in the cluster. Similarly, http://node01.csug.rochester.edu:8088/ shows the status of the TaskTrackers and the jobs that will run, are running, or have run on the MapReduce cluster. There is a link for every job, which displays its current status, number of tasks, and various statistics for the run. Alternatively, the command $HADOOP_HOME/bin/hadoop job -list will provide the same information.

Build/Run Your Programs:

To compile your code (e.g., WordCount.java), first set the environment variable for Hadoop's Java classpath (use setenv if you are using csh):
	export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar
Next, use the following commands to compile your WordCount.java source and package it into a jar file:
	 $HADOOP_HOME/bin/hadoop com.sun.tools.javac.Main WordCount.java
	 jar cf WordCount.jar WordCount*.class 

You can run a class from that jar file with the following command:

      $HADOOP_HOME/bin/hadoop jar WordCount.jar <class> <arguments>
where <class> is the name of your main class (i.e., WordCount in this example) and the arguments are passed on to your program (e.g., the directories for input and output). For this example we have:
	 $HADOOP_HOME/bin/hadoop jar WordCount.jar WordCount [path to input file] [path to output file] 
The hadoop program will automatically read the cluster configuration and run your program on the cluster. It will print a log containing any errors and a simple progress meter, and at the end it will print a set of counters. You may find the following counters useful when debugging your programs:
    * Map input records
    * Map output records
    * Combine input records
    * Combine output records
    * Reduce input records
    * Reduce output records
MapReduce tasks generally expect their input and output to be in the HDFS cluster. You need to create your home directory in the HDFS cluster, named /users/<username>, where <username> is your csug login. You can upload a file from the csug machines to HDFS with the command $HADOOP_HOME/bin/hdfs dfs -put <localfile> /users/<username>/., which will upload localfile into your home directory on the HDFS cluster. Since the input files for your program are read-only, you don't have to copy them to your HDFS home directory --- you can just instruct your applications to get input from the /users/input/ HDFS directory. Similarly, files can be downloaded with $HADOOP_HOME/bin/hdfs dfs -get <file> ., which will download the file from HDFS to your current directory. Running a utility without parameters prints helpful usage information; for example, $HADOOP_HOME/bin/hdfs dfs by itself lists the commands that you can apply to HDFS.

Each MapReduce job must have an output path that does not already exist; it will be created as a directory in HDFS and will contain the logs of your program. The output from the reduce tasks will be saved as part-00000 (large outputs will create part-00001 and so on, but that is unlikely to happen for this assignment). If your output is a text file, you can read it without downloading it using the command $HADOOP_HOME/bin/hdfs dfs -cat <output_directory>/part-00000 (where output_directory is the path set by your program). If you used a SequenceFileOutputFormat, you can get a text version with $HADOOP_HOME/bin/hdfs dfs -text <output_directory>/part-00000.
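
Putting these pieces together, a typical WordCount run might look like the following, where input.txt and wc_out are placeholder names and <username> is your csug login:

	 $HADOOP_HOME/bin/hdfs dfs -put input.txt /users/<username>/input.txt
	 $HADOOP_HOME/bin/hadoop jar WordCount.jar WordCount /users/<username>/input.txt /users/<username>/wc_out
	 $HADOOP_HOME/bin/hdfs dfs -cat /users/<username>/wc_out/part-00000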