How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs on a cluster. The Torque software is installed on node33 through node66. This is a quick guide to getting started; for complete information, try:

man qsub - the basic command for running jobs

man qstat - show currently running jobs

man pbsnodes - pbsnodes -a gives the current status of all nodes

These commands are in /usr/bin on nodes 33 through 66. Jobs should be launched from node64.

Node Usage Policy

The node usage policy can be summarized in two rules:

  • Any job using nodes 33 through 66 must be scheduled with qsub from node64. This keeps batch jobs from disrupting time-sensitive programs running on other nodes. Because nodes 32 and below are not mediated by Torque, you are free to log in directly to those machines and start your programs there.

  • Limit your cluster usage at any given time to 32 processors (roughly a quarter of the cluster's processing power); one way to check your current usage is sketched below.

All the nodes (0 through 66) are dual-processor machines, but nodes 33 and above are newer and a bit faster.
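
To check how many processors your jobs are currently occupying, qstat can list your jobs together with the nodes assigned to them (a sketch; -n and -u are standard PBS/Torque options):

% qstat -n -u $USER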

To Use a Node Interactively

First, ssh to node64, and then type

% qsub -I

This reserves a node, gives you a shell on it, and releases the node when you log out.
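
If you want both processors on the reserved node for your interactive session, you can add a resource request to the interactive flag (a sketch using standard Torque resource syntax):

% qsub -I -l nodes=1:ppn=2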

To Use a Specific Node

% qsub -I -l nodes=node50

You are free to ask for any node but, per the node usage policy above, be sure to make the request from node64.
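
A specific node can also be combined with a processors-per-node request, e.g. to reserve both CPUs on node50 (a sketch; standard Torque node-spec syntax):

% qsub -I -l nodes=node50:ppn=2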

To Run a Batch Job

% qsub your_script.sh

Standard out and standard error from the job will be copied to files in your current directory named your_script.sh.oXX and your_script.sh.eXX, where XX is the job number. For the files to be copied successfully, you must have ssh authentication configured and have a host key for the machine you start the job from, stored under its fully qualified domain name, e.g. node64.cs.rochester.edu rather than just node64. If you can log in to a machine without typing your password by using ssh with the full name, e.g.

% ssh node64.cs.rochester.edu

before launching a job, the copying of output files should work.
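
As a concrete example, here is a minimal batch script (a sketch; hello.sh is just an illustrative name):

% cat > ~/hello.sh
#!/bin/bash
echo "running on `hostname`"
^D
% qsub ~/hello.sh

When the job finishes, its output lands in hello.sh.oXX and hello.sh.eXX in the directory you submitted from, provided the host-key setup described above is in place.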

To Run a Job on Several Nodes

The batch software creates a file that tells your job which nodes it has reserved. Here is a simple example of a job that figures out which nodes it has and does something on them:

% cat > ~/test.sh
#!/bin/bash

mkdir -p $HOME/tmp

for i in `cat $PBS_NODEFILE` ; do
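  # $i and $HOME are expanded locally before ssh runs, so each node
  # writes its /proc/cpuinfo to a file named after itself in your
  # (shared) home directory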
  ssh $i "cat /proc/cpuinfo > $HOME/tmp/$i.out; sleep 60" &
done

# wait for ssh'ed jobs to finish before releasing node
wait
^D
% qsub -l nodes=4 ~/test.sh

The nodes have two CPUs; by default, if you request 4 nodes, you will get one CPU on each of four different nodes (and other jobs may be scheduled on the other CPU). To request both processors on each of four nodes (eight CPUs total), use:

% qsub -l nodes=4:ppn=2
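
With ppn=2, each node name appears twice in $PBS_NODEFILE (once per reserved processor), so a loop like the one in test.sh would start two copies of the command on each node. From inside a job (for example an interactive qsub -I session) you can check exactly what you were given (a sketch):

% sort $PBS_NODEFILE | uniq -c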

-- DanGildea - 14 Dec 2005

Problems

You may find that your cluster jobs quit unexpectedly and result in an email like the following:

Job Name:   train.pl
An error has occurred processing your job, see below.
Post job file processing error; job 5785.node64.cs.rochester.edu on host node37/1+node36/0+node35/0+node34/0+node33/1

Unable to copy file /usr/var/lib/torque/spool/5785.node64.OU to post@node64.cs.rochester.edu:/scratch/post/run/train.pl.o5785
>>> error from copy
>>> end error output
Output retained on that host in: /usr/var/lib/torque/undelivered/5785.node64.OU

This happens because Torque uses ssh to start jobs and copy output files between machines, which requires that each machine's host key be in your ~/.ssh/known_hosts file. To add the keys, ssh to every host once using the fully qualified domain name (e.g., node64.cs.rochester.edu instead of just node64).
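
One way to collect all the host keys in a single pass is a short loop like the following (a sketch; it assumes the Torque nodes are node33 through node66 as described above, and that ssh-keyscan is installed):

% for n in `seq 33 66`; do ssh-keyscan -t rsa node$n.cs.rochester.edu >> ~/.ssh/known_hosts ; done

Alternatively, just ssh to each node once by its full name and accept the key when prompted.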

-- MattPost - 05 Jul 2006

In my case, the problem was using just node64 instead of node64.cs.rochester.edu. Another problem I have is that "qsub -l nodes=# ~/test.sh" does not work when # is no more than 2. It produces the two output files (*.eXXX and *.oXXX), but tmp/ has nothing in it, so something goes wrong during execution. Here test.sh is the example Dan gave above. Yes, "qsub -l nodes=4 ~/test.sh" works, but "qsub -l nodes=2 ~/test.sh" does not. Does anyone have the same problem?

OK, adding -S path_list solves it: "qsub -S /bin/bash -l nodes=2 ~/test.sh". It seems that Torque may ignore the first line (#!/bin/bash) of my script, in which case the job runs under my login shell on the execution host. But I still don't know why it happens only when nodes=2. Anyway, special thanks to Hao!

-- XiaoZhang - 25 Jan 2007
