How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs on a cluster. The Torque software is installed on node33 through node72. This is a quick guide to getting started; for complete information, see:

man qsub - the basic command for submitting jobs

man qstat - show the status of queued and running jobs

man pbsnodes - pbsnodes -a gives the current status of all nodes

These commands are in /usr/bin on nodes 33 through 72. Jobs should be launched from node64.
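For example, a quick first look at the cluster, run from node64 (this assumes the usual Torque output format, where pbsnodes prints a "state = ..." line for each node):

% pbsnodes -a | grep -c "state = free"    # count the nodes that are currently free
% qstat -a                                # list all queued and running jobs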

Node Usage Policy

The node usage policy can be summarized in two rules:

  • Any job using nodes 33 and above must be scheduled with qsub from node64, so that it does not disrupt time-sensitive programs running on other nodes. Nodes 32 and below are not mediated by Torque, so you are free to log in directly to those machines to start your programs.

  • Limit your cluster usage at any given time to 32 processors (roughly a quarter of the cluster's processing power); a quick way to check your current usage is shown below.

All the nodes are dual-processor machines, but nodes 33 and above are newer and a bit faster.
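To see what you currently have running before submitting more work, qstat can filter by user (standard Torque options; substitute your own login name):

% qstat -n -u your_username

The -u flag restricts the listing to one user's jobs, and -n shows the nodes each job occupies.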

To Use a Node Interactively

First, ssh to node64 and then type

% qsub -I

This reserves a node, gives you a shell on it, and releases the node when you log out.
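If you want both cpus on the node to yourself, add a resource request to the interactive flag (standard Torque node-spec syntax):

% qsub -I -l nodes=1:ppn=2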

To Use a Specific Node

% qsub -I -l nodes=node50

You are free to ask for any node, but, per the node usage policy, be sure to make the request from node64.
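The same resource list can combine a specific node with a processor count; for example, to reserve both cpus on node50:

% qsub -I -l nodes=node50:ppn=2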

To Run a Batch Job

% qsub your_script.sh

Standard out and standard error from the job will be copied to files in your current directory named your_script.sh.oXX and your_script.sh.eXX, where XX is the job number. For the files to be copied successfully, you must have ssh authentication configured and have a host key for the machine you start the job from, recorded under its fully qualified domain name (e.g., node64.cs.rochester.edu rather than just node64). If you can log into a machine without typing your password by using ssh with the full name, e.g.

% ssh node64.cs.rochester.edu

before launching a job, the copying of output files should work.
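If passwordless login is not yet set up, the usual key-based arrangement is sketched below (your account may already have this, and filenames vary with key type). Assuming home directories are shared across the cluster, as the multi-node example below relies on, authorizing your own key once covers every node:

% ssh-keygen -t rsa                        # accept the defaults; an empty passphrase allows non-interactive logins
% cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
% chmod 600 ~/.ssh/authorized_keys
% ssh node64.cs.rochester.edu hostname     # answer "yes" to record the host key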

To Run a Job on Several Nodes

The batch software creates a file that tells your job which nodes it has reserved. Here is a simple example of a job that figures out which nodes it has and does something on them:

% cat > ~/test.sh
#!/bin/bash

mkdir -p "$HOME/tmp"

# $PBS_NODEFILE lists one line per cpu that the batch system reserved
for i in $(cat "$PBS_NODEFILE") ; do
  # record each node's cpu info in a shared directory, in the background
  ssh "$i" "cat /proc/cpuinfo > $HOME/tmp/$i.out; sleep 60" &
done

# wait for the ssh'ed jobs to finish before releasing the nodes
wait
^D
% qsub -l nodes=4 ~/test.sh

Each node has two cpus; by default, if you request 4 nodes you will get one cpu on each of four different nodes (and other jobs may be scheduled on the other cpu). To request both processors on each of four nodes (eight cpus total), use:

% qsub -l nodes=4:ppn=2
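Note that with ppn=2, each node name appears in $PBS_NODEFILE once per reserved cpu, so a loop like the one above would visit each node twice. From inside a job you can inspect exactly what was reserved:

% sort $PBS_NODEFILE | uniq -c    # each reserved node, with its cpu count

To run one command per node rather than per cpu, iterate over the output of sort -u $PBS_NODEFILE instead.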

Handy Scripts

/usr/grads/bin/qwho-cpu - see who is currently using how many cpus on the cluster.

/usr/grads/bin/qsh - ssh'es into node64 and runs qsub. Run "qsh -I" from any machine in the dept. to get a node for interactive use, or "qsh script.sh" to run a batch script.

-- DanGildea - 14 Dec 2005

Problems

You may find that your cluster jobs quit unexpectedly and result in an email like the following:

Job Name:   train.pl
An error has occurred processing your job, see below.
Post job file processing error; job 5785.node64.cs.rochester.edu on host node37/1+node36/0+node35/0+node34/0+node33/1

Unable to copy file /usr/var/lib/torque/spool/5785.node64.OU to post@node64.cs.rochester.edu:/scratch/post/run/train.pl.o5785
>>> error from copy
>>> end error output
Output retained on that host in: /usr/var/lib/torque/undelivered/5785.node64.OU

This happens because Torque uses ssh to start jobs and copy output between machines, which requires that the host key be in your ~/.ssh/known_hosts file. To add it, ssh to the machine once using its fully-qualified domain name (e.g., node64.cs.rochester.edu instead of just node64).
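To confirm that the full name is recorded, ssh-keygen can search known_hosts for you (this works even when the entries are hashed):

% ssh-keygen -F node64.cs.rochester.edu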

If /tmp on a cluster node is full, you may get empty (zero-length) files for the stdout and stderr from your job.
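If you suspect a full /tmp, df can confirm it from a shell on the node in question (for example, inside an interactive qsub session):

% df -h /tmp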
