How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs on a cluster. The Torque software is installed on node33 through node66. This is a quick guide to getting started; for complete information, see:

man qsub - the basic command for submitting and running jobs

man qstat - shows currently running jobs

man pbsnodes - node status; pbsnodes -a gives the current status of all nodes

These commands are in /usr/bin on nodes 33 through 66. You can launch a job from any of these machines (but not directly from other machines in the department).

Node Usage Policy

Torque is installed on nodes 33 through 66, and any jobs run on those machines should be scheduled through Torque.

All the node machines (0 through 66) are dual-processor machines. Please limit your cluster usage at any given time to 32 processors, which is roughly a quarter of the total computing power of the cluster.

To Use a Node Interactively

type

% qsub -I

This reserves a node, gives you a shell on it, and releases the node when you log out.
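
You can also request resources for the interactive session; for example (walltime is a standard Torque/PBS resource, and the one-hour limit here is only an illustration):

% qsub -I -l walltime=1:00:00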

To Use a Specific Node

% qsub -I -l nodes=node50
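
You can also request several specific nodes at once by joining their names with a plus sign (standard Torque nodespec syntax; node50 and node51 here are just examples):

% qsub -I -l nodes=node50+node51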

To Run a Batch Job

% qsub your_script.sh

Standard output and standard error from the job will be copied to files in your current directory named your_script.sh.oXX and your_script.sh.eXX, where XX is the job number. For the files to be copied successfully, you must have passwordless ssh authentication configured, and have a host key for the machine you start the job from, using the fully qualified domain name (e.g., node64.cs.rochester.edu rather than just node64). If you can log into a machine without typing your password by using ssh with the full name, e.g.,

% ssh node64.cs.rochester.edu

before launching a job, the copying of output files should work.
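
For reference, here is a minimal batch script sketch. The #PBS lines are standard Torque directives read by qsub; the job name and the program my_program are hypothetical placeholders:

#!/bin/bash
#PBS -N myjob        # job name, as shown by qstat
#PBS -l nodes=1      # request one processor on one node
#PBS -j oe           # merge standard error into standard out (optional)

# qsub starts jobs in your home directory; change to the submit directory
cd $PBS_O_WORKDIR
./my_program > my_program.log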

To Run a Job on Several Nodes

The batch software creates a file (named by the $PBS_NODEFILE environment variable) that tells your job which nodes it has reserved. Here is a simple example of a job that figures out which nodes it has and does something on them:

% cat > ~/test.sh
#!/bin/bash

mkdir -p "$HOME/tmp"

# $PBS_NODEFILE is the file listing the nodes reserved for this job
for i in $(cat "$PBS_NODEFILE") ; do
  ssh "$i" "cat /proc/cpuinfo > $HOME/tmp/$i.out; sleep 60" &
done

# wait for the ssh'ed jobs to finish before releasing the nodes
wait
^D
% qsub -l nodes=4 ~/test.sh

The nodes have two CPUs; by default, if you request 4 nodes, you will get one CPU on each of four different nodes (and other jobs may be scheduled on the other CPU). To request both processors on each of four nodes (eight CPUs total), use:

% qsub -l nodes=4:ppn=2 ~/test.sh
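
Note that $PBS_NODEFILE contains one line per reserved processor, so with ppn=2 each node is listed twice and the loop in test.sh above would ssh to each machine twice. If you want one task per machine, a minimal variant (assuming the usual node file format of one hostname per line) is:

# keep one entry per machine rather than one per reserved processor
for i in $(sort -u "$PBS_NODEFILE") ; do
  ssh "$i" "hostname" &
done
wait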

-- DanGildea - 14 Dec 2005

Problems

You may find that your cluster jobs quit unexpectedly and result in an email like the following:

Job Name:   train.pl
An error has occurred processing your job, see below.
Post job file processing error; job 5785.node64.cs.rochester.edu on host node37/1+node36/0+node35/0+node34/0+node33/1

Unable to copy file /usr/var/lib/torque/spool/5785.node64.OU to post@node64.cs.rochester.edu:/scratch/post/run/train.pl.o5785
>>> error from copy
>>> end error output
Output retained on that host in: /usr/var/lib/torque/undelivered/5785.node64.OU

This happens because Torque uses ssh to copy files and start jobs on multiple machines, which requires that each machine's host key be in your ~/.ssh/known_hosts file. To fix this, ssh once to every host using the fully qualified domain name (e.g., node64.cs.rochester.edu instead of just node64) and accept the host key when prompted.
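
One way to collect all of the host keys at once is with ssh-keyscan, which appends a known_hosts entry for each node without an interactive login (a sketch, assuming the node33 through node66 naming above):

% for n in $(seq 33 66); do ssh-keyscan node$n.cs.rochester.edu >> ~/.ssh/known_hosts; done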

-- MattPost - 05 Jul 2006
