Difference: ClusterJobs (1 vs. 26)

Revision 26 - 2008-03-04 - MattPost

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 20 to 20
  The node usage policy can be summarized in two rules:
Changed:
<
<
  • Any job using nodes 33 through 72 must be scheduled with qsub from node64. This is so as not to disrupt time-sensitive programs running on other nodes. Because nodes 32 and below are not mediated by torque, you are free to login directly to those machines to start your programs.
>
>
  • Any job using nodes 33 and above must be scheduled with qsub from node64. This is so as not to disrupt time-sensitive programs running on other nodes. Because nodes 32 and below are not mediated by torque, you are free to login directly to those machines to start your programs.
 
  • Limit your cluster usage at any given time to 32 processors [roughly a quarter of the processing power].
Changed:
<
<
All the nodes (0 through 72) are dual-processor machines, but nodes 33 and above are newer and a bit faster.
>
>
All the nodes are dual-processor machines, but nodes 33 and above are newer and a bit faster.
 

To Use a Node Interactively

Revision 25 - 2007-11-16 - MattPost

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs on a cluster.

Changed:
<
<
The torque software is installed on node33 through node66.
>
>
The torque software is installed on node33 through node72.
 This is a quick guide to getting started; for complete information try:
Line: 13 to 13
  man pbsnodes - pbsnodes -a gives the current status of all nodes
Changed:
<
<
These commands are in /usr/bin on nodes 33 through 66.
>
>
These commands are in /usr/bin on nodes 33 through 72.
 Jobs should be launched from node64.
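A quick sanity check before scheduling anything (a sketch of a typical session; my_job.sh is a placeholder script name):

% ssh node64
% pbsnodes -a | less          # current status of every torque-managed node
% qstat                       # jobs currently queued or running
% qsub my_job.sh              # submit a batch script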

Node Usage Policy

The node usage policy can be summarized in two rules:

Changed:
<
<
  • Any job using nodes 33 through 66 must be scheduled with qsub from node64. This is so as not to disrupt time-sensitive programs running on other nodes. Because nodes 32 and below are not mediated by torque, you are free to login directly to those machines to start your programs.
>
>
  • Any job using nodes 33 through 72 must be scheduled with qsub from node64. This is so as not to disrupt time-sensitive programs running on other nodes. Because nodes 32 and below are not mediated by torque, you are free to login directly to those machines to start your programs.
 
  • Limit your cluster usage at any given time to 32 processors [roughly a quarter of the processing power].
Changed:
<
<
All the nodes (0 through 66) are dual-processor machines, but nodes 33 and above are newer and a bit faster.
>
>
All the nodes (0 through 72) are dual-processor machines, but nodes 33 and above are newer and a bit faster.
 

To Use a Node Interactively

Revision 24 - 2007-09-10 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 114 to 114
 Output retained on that host in: /usr/var/lib/torque/undelivered/5785.node64.OU
Changed:
<
<
This happens because torque uses ssh to start jobs on multiple machines, which requires that the hostkey is in your ~/.ssh/known_hosts file. To do this, you need to ssh to every host using the fully-qualified domain name (e.g., node64.cs.rochester.edu instead of just node64).

-- MattPost - 05 Jul 2006

>
>
This happens because torque uses ssh to start jobs on multiple machines, which requires that the hostkey is in your ~/.ssh/known_hosts file. To do this, you need to ssh to node64.cs.rochester.edu using the fully-qualified domain name (e.g., node64.cs.rochester.edu instead of just node64).
  If /tmp on a cluster node is full, you may get empty (zero-length) files for the stdout and stderr from your job.
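One quick way to check for this condition (a sketch; node37 is just an example of a node your job ran on):

% ssh node37.cs.rochester.edu df -h /tmp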
Changed:
<
<
-- DanGildea - 30 Jan 2007
>
>

Revision 23 - 2007-06-07 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 20 to 20
  The node usage policy can be summarized in two rules:
Changed:
<
<
  • Any job using nodes 33 through 66 must scheduled with qsub from node64. This is so as not to disrupt time-sensitive programs running on other nodes. Because nodes 32 and below are not mediated by torque, you are free to login directly to those machines to start your programs.
>
>
  • Any job using nodes 33 through 66 must be scheduled with qsub from node64. This is so as not to disrupt time-sensitive programs running on other nodes. Because nodes 32 and below are not mediated by torque, you are free to login directly to those machines to start your programs.
 
  • Limit your cluster usage at any given time to 32 processors [roughly a quarter of the processing power].

Revision 22 - 2007-01-30 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 90 to 90
  % qsub -l nodes=4:ppn=2
Added:
>
>

Handy Scripts

/usr/grads/bin/qwho-cpu - see who is currently using how many cpus on the cluster.

/usr/grads/bin/qsh - ssh'es into node64 and runs qsub. Run "qsh -I" from any machine in the dept. to get a node for interactive use, or "qsh script.sh" to run a batch script.
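For example (a sketch, assuming the scripts behave as described above; my_script.sh is a placeholder):

% /usr/grads/bin/qwho-cpu              # check per-user cpu usage before grabbing nodes
% /usr/grads/bin/qsh -I                # interactive shell on a cluster node, via node64
% /usr/grads/bin/qsh my_script.sh      # submit my_script.sh as a batch job, via node64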

 -- DanGildea - 14 Dec 2005

Problems

Line: 111 to 118
  -- MattPost - 05 Jul 2006
Changed:
<
<
My case is using just node64 instead of node64.cs.rochester.edu Another problem I have is that "qsub -l nodes=# ~/test.sh" does not work with # no more than 2. It outputs two files(*.eXXX and *.oXXX) but tmp/ has nothing, so something wrong with this execution. Here test.sh is the example Dan gave above. Yes, "qsub -l nodes=4 ~/test.sh" works but "qsub -l nodes=2 ~/test.sh" does not. Does anyone have same problem?

OK. Add -S path_list solves it. "qsub -S /bin/bash -l nodes=2 ~/test.sh". It seems that torque may ignor the first line(#!/bin/bash) of my script and the execution will use my login shell on the execution host. But I still dont know why it happens only when nodes=2. Anyway, special thanks to Hao!

>
>
If /tmp on a cluster node is full, you may get empty (zero-length) files for the stdout and stderr from your job.
 
Changed:
<
<
-- XiaoZhang - 25 Jan 2007
>
>
-- DanGildea - 30 Jan 2007

Revision 21 - 2007-01-25 - XiaoZhang

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 116 to 116
 so something wrong with this execution. Here test.sh is the example Dan gave above. Yes, "qsub -l nodes=4 ~/test.sh" works but "qsub -l nodes=2 ~/test.sh" does not. Does anyone have same problem?
Added:
>
>
OK. Add -S path_list solves it. "qsub -S /bin/bash -l nodes=2 ~/test.sh". It seems that torque may ignor the first line(#!/bin/bash) of my script and the execution will use my login shell on the execution host. But I still dont know why it happens only when nodes=2. Anyway, special thanks to Hao!
 -- XiaoZhang - 25 Jan 2007

Revision 20 - 2007-01-25 - XiaoZhang

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 110 to 110
 This happens because torque uses ssh to start jobs on multiple machines, which requires that the hostkey is in your ~/.ssh/known_hosts file. To do this, you need to ssh to every host using the fully-qualified domain name (e.g., node64.cs.rochester.edu instead of just node64).

-- MattPost - 05 Jul 2006

Added:
>
>
My case is using just node64 instead of node64.cs.rochester.edu Another problem I have is that "qsub -l nodes=# ~/test.sh" does not work with # no more than 2. It outputs two files(*.eXXX and *.oXXX) but tmp/ has nothing, so something wrong with this execution. Here test.sh is the example Dan gave above. Yes, "qsub -l nodes=4 ~/test.sh" works but "qsub -l nodes=2 ~/test.sh" does not. Does anyone have same problem?

-- XiaoZhang - 25 Jan 2007

Revision 19 - 2006-10-30 - MattPost

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 14 to 14
 man pbsnodes - pbsnodes -a gives the current status of all nodes

These commands are in /usr/bin on nodes 33 through 66.

Changed:
<
<
You can launch a job from any of these machines (but not directly from other machines in the dept).
>
>
Jobs should be launched from node64.
 

Node Usage Policy

Changed:
<
<
Torque is installed on nodes 33 through 66, and any jobs run on those machines should be scheduled through torque.
>
>
The node usage policy can be summarized in two rules:
 
Changed:
<
<
All the node machines (0 through 66) are dual-processor machines. Please limit your cluster usage at any given time to 32 processors, which is roughly a quarter of the total computing power of the cluster.
>
>
  • Any job using nodes 33 through 66 must scheduled with qsub from node64. This is so as not to disrupt time-sensitive programs running on other nodes. Because nodes 32 and below are not mediated by torque, you are free to login directly to those machines to start your programs.

  • Limit your cluster usage at any given time to 32 processors [roughly a quarter of the processing power].

All the nodes (0 through 66) are dual-processor machines, but nodes 33 and above are newer and a bit faster.

 

To Use a Node Interactively

Changed:
<
<
type
>
>
first, ssh to node64, and then type
  % qsub -I
Line: 37 to 40
  % qsub -I -l nodes=node50
Added:
>
>
You are free to ask for any node, but, according to node policy, make sure that you make your request from node64.
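A minimal interactive session might look like this (a sketch; the node number, job id, and program name are illustrative):

% ssh node64
% qsub -I -l nodes=node50
qsub: waiting for job 1234.node64.cs.rochester.edu to start
qsub: job 1234.node64.cs.rochester.edu ready
node50% ./my_experiment
node50% exit                  # logging out releases the node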
 

To Run a Batch Job

Revision 18 - 2006-08-08 - MattPost

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 17 to 17
 You can launch a job from any of these machines (but not directly from other machines in the dept).
Changed:
<
<

Problems

You may find that your cluster jobs quit unexpectedly and result in an email like the following:

Job Name:   train.pl
An error has occurred processing your job, see below.
Post job file processing error; job 5785.node64.cs.rochester.edu on host node37/1+node36/0+node35/0+node34/0+node33/1

Unable to copy file /usr/var/lib/torque/spool/5785.node64.OU to post@node64.cs.rochester.edu:/scratch/post/run/train.pl.o5785
>>> error from copy
>>> end error output
Output retained on that host in: /usr/var/lib/torque/undelivered/5785.node64.OU
>
>

Node Usage Policy

 
Changed:
<
<
This happens because torque uses ssh to start jobs on multiple machines, which requires that the hostkey is in your ~/.ssh/known_hosts file. To do this, you need to ssh to every host using the fully-qualified domain name (e.g., node64.cs.rochester.edu instead of just node64).
>
>
Torque is installed on nodes 33 through 66, and any jobs run on those machines should be scheduled through torque.
 
Changed:
<
<
-- MattPost - 05 Jul 2006
>
>
All the node machines (0 through 66) are dual-processor machines. Please limit your cluster usage at any given time to 32 processors, which is roughly a quarter of the total computing power of the cluster.
 

To Use a Node Interactively

Line: 100 to 87
 % qsub -l nodes=4:ppn=2

-- DanGildea - 14 Dec 2005

Added:
>
>

Problems

You may find that your cluster jobs quit unexpectedly and result in an email like the following:

Job Name:   train.pl
An error has occurred processing your job, see below.
Post job file processing error; job 5785.node64.cs.rochester.edu on host node37/1+node36/0+node35/0+node34/0+node33/1

Unable to copy file /usr/var/lib/torque/spool/5785.node64.OU to post@node64.cs.rochester.edu:/scratch/post/run/train.pl.o5785
>>> error from copy
>>> end error output
Output retained on that host in: /usr/var/lib/torque/undelivered/5785.node64.OU

This happens because torque uses ssh to start jobs on multiple machines, which requires that the hostkey is in your ~/.ssh/known_hosts file. To do this, you need to ssh to every host using the fully-qualified domain name (e.g., node64.cs.rochester.edu instead of just node64).
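One way to collect the host keys in a single pass is a short shell loop (a sketch; it assumes the torque nodes at the time are node33 through node66 and that you can ssh to each of them):

% for n in `seq 33 66`; do ssh node$n.cs.rochester.edu true; done    # answer "yes" at each new-key prompt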

-- MattPost - 05 Jul 2006

Revision 17 - 2006-07-05 - MattPost

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 17 to 17
 You can launch a job from any of these machines (but not directly from other machines in the dept).
Added:
>
>

Problems

You may find that your cluster jobs quit unexpectedly and result in an email like the following:

Job Name:   train.pl
An error has occurred processing your job, see below.
Post job file processing error; job 5785.node64.cs.rochester.edu on host node37/1+node36/0+node35/0+node34/0+node33/1

Unable to copy file /usr/var/lib/torque/spool/5785.node64.OU to post@node64.cs.rochester.edu:/scratch/post/run/train.pl.o5785
>>> error from copy
>>> end error output
Output retained on that host in: /usr/var/lib/torque/undelivered/5785.node64.OU

This happens because torque uses ssh to start jobs on multiple machines, which requires that the hostkey is in your ~/.ssh/known_hosts file. To do this, you need to ssh to every host using the fully-qualified domain name (e.g., node64.cs.rochester.edu instead of just node64).

-- MattPost - 05 Jul 2006

 

To Use a Node Interactively

type

Revision 16 - 2006-03-06 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 13 to 13
  man pbsnodes - pbsnodes -a gives the current status of all nodes
Changed:
<
<
These commands are in /usr/bin.
>
>
These commands are in /usr/bin on nodes 33 through 66. You can launch a job from any of these machines (but not directly from other machines in the dept).
 

To Use a Node Interactively

Line: 35 to 37
 % qsub your_script.sh

Standard out and standard error from the job will be copied to

Changed:
<
<
a file in your current directory. For the files to be copied successfully, you must have ssh authentication configured, meaning you should be able to run
>
>
files in your current directory named your_script.sh.oXX and your_script.sh.eXX, where XX is the job number. For the files to be copied successfully, you must have ssh authentication configured, and have a host key for the machine you start the job from, using the fully qualified domain name, e.g. node64.cs.rochester.edu rather than just node64. If you can log into a machine without typing your password by using ssh with the full name, e.g.
 
Changed:
<
<
% ssh node64.cs.rochester.edu hostname
>
>
% ssh node64.cs.rochester.edu
 
Changed:
<
<
without typing your password.
>
>
before launching a job, the copying of output files should work.
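Setting that up usually amounts to a shared-key configuration (a sketch, assuming your home directory is shared across the cluster nodes):

% ssh-keygen -t rsa                                  # accept the defaults; an empty passphrase gives prompt-free logins
% cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
% chmod 600 ~/.ssh/authorized_keys
% ssh node64.cs.rochester.edu                        # should now log in without asking for a password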
 

To Run a Job on Several Nodes

Revision 15 - 2006-02-16 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs on a cluster.

Changed:
<
<
The torque software is installed on node50 through node59.
>
>
The torque software is installed on node33 through node66.
 This is a quick guide to getting started; for complete information try:
Line: 13 to 13
  man pbsnodes - pbsnodes -a gives the current status of all nodes
Changed:
<
<
These commands are in /usr/local/bin.
>
>
These commands are in /usr/bin.
 

To Use a Node Interactively

Line: 35 to 35
 % qsub your_script.sh

Standard out and standard error from the job will be copied to

Changed:
<
<
a file in your current directory.
>
>
a file in your current directory. For the files to be copied successfully, you must have ssh authentication configured, meaning you should be able to run

% ssh node64.cs.rochester.edu hostname

without typing your password.

 

To Run a Job on Several Nodes

Revision 14 - 2005-12-15 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 60 to 60
 % qsub -l nodes=4 ~/test.sh
Added:
>
>
The nodes have two cpus; by default if you request 4 nodes, you will get one cpu on each of four different nodes (and other jobs may be scheduled for the other cpu). To request both processors on each of four nodes (eight cpus total) use:
 
Added:
>
>
% qsub -l nodes=4:ppn=2
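Each reserved cpu gets its own line in $PBS_NODEFILE, so with nodes=4:ppn=2 every node name appears twice; a script can see exactly what it was given (a sketch in the style of the test.sh example elsewhere on this page):

#!/bin/bash
echo "cpu slots: `wc -l < $PBS_NODEFILE`"     # prints 8 for nodes=4:ppn=2
sort -u $PBS_NODEFILE                         # the 4 distinct node names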
  -- DanGildea - 14 Dec 2005

Revision 13 - 2005-12-14 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 54 to 54
  ssh $i "cat /proc/cpuinfo > $HOME/tmp/$i.out; sleep 60" &
done
Added:
>
>
# wait for ssh'ed jobs to finish before releasing node
wait
^D
% qsub -l nodes=4 ~/test.sh

Revision 12 - 2005-12-14 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

Torque is a system for reserving nodes and running jobs

Line: 7 to 7
 This is a quick guide to getting started; for complete information try:
Changed:
<
<
man qsub - the basic script for running jobs
>
>
man qsub - the basic command for running jobs
  man qstat - show currently running jobs

Revision 11 - 2005-12-14 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

Added:
>
>
Torque is a system for reserving nodes and running jobs on a cluster.
 The torque software is installed on node50 through node59. This is a quick guide to getting started; for complete information try:

Revision 10 - 2005-12-14 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

The torque software is installed on node50 through node59.

Line: 19 to 19
  % qsub -I
Changed:
<
<
This reserves a node, give you a shell on it,
>
>
This reserves a node, gives you a shell on it,
 and releases the node when you log out.

Revision 9 - 2005-12-14 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

The torque software is installed on node50 through node59.

Line: 17 to 17
  type
Changed:
<
<
qsub -I
>
>
% qsub -I
  This reserves a node, give you a shell on it, and releases the node when you log out.
Line: 25 to 25
 

To Use a Specific Node

Changed:
<
<
qsub -I -l nodes=node50
>
>
% qsub -I -l nodes=node50
 

To Run a Batch Job

Changed:
<
<
qsub your_script.sh
>
>
% qsub your_script.sh
  Standard out and standard error from the job will be copied to a file in your current directory.

Revision 8 - 2005-12-14 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

The torque software is installed on node50 through node59.

Line: 9 to 9
  man qstat - show currently running jobs
Changed:
<
<
man pbsnodes - pbsnodes -a gives the current status of all nodes
>
>
man pbsnodes - pbsnodes -a gives the current status of all nodes
  These commands are in /usr/local/bin.

Revision 7 - 2005-12-14 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

The torque software is installed on node50 through node59.

Line: 11 to 11
  man pbsnodes - pbsnodes -a gives the current status of all nodes
Added:
>
>
These commands are in /usr/local/bin.
 

To Use a Node Interactively

type

Revision 6 - 2005-12-14 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

The torque software is installed on node50 through node59. This is a quick guide to getting started; for complete

Changed:
<
<
information try man qsub, man qstat, man pbsnodes.
>
>
information try:

man qsub - the basic script for running jobs

man qstat - show currently running jobs

man pbsnodes - pbsnodes -a gives the current status of all nodes

 

To Use a Node Interactively

Revision 5 - 2005-12-14 - DanGildea

Line: 1 to 1
 

How to Use the Cluster Manager

The torque software is installed on node50 through node59.

Line: 6 to 6
 information try man qsub, man qstat,
Changed:
<
<
man pbsnode.
>
>
man pbsnodes.
 

To Use a Node Interactively

Revision 4 - 2005-12-14 - DanGildea

Line: 1 to 1
Added:
>
>

How to Use the Cluster Manager

 The torque software is installed on node50 through node59. This is a quick guide to getting started; for complete information try
Line: 6 to 8
 man qstat, man pbsnode.
Changed:
<
<

To Use a Node Interactively

>
>

To Use a Node Interactively

  type
Line: 16 to 18
 and releases the node when you log out.
Changed:
<
<

To Use a Specific Node

>
>

To Use a Specific Node

  qsub -I -l nodes=node50
Changed:
<
<

To Run a Batch Job

>
>

To Run a Batch Job

  qsub your_script.sh

Standard out and standard error from the job will be copied to a file in your current directory.

Changed:
<
<

To Run a Job on Several Nodes

>
>

To Run a Job on Several Nodes

  The batch software creates a file that tells your job which nodes it has reserved. Here is a simple example of a

Revision 3 - 2005-12-14 - DanGildea

Line: 1 to 1
 The torque software is installed on node50 through node59. This is a quick guide to getting started; for complete information try

Revision 2 - 2005-12-14 - DanGildea

Line: 1 to 1
 The torque software is installed on node50 through node59. This is a quick guide to getting started; for complete information try
Line: 12 to 12
  qsub -I
Changed:
<
<
This reserve a node, give you a shel on it, and release the node when you log out.
>
>
This reserves a node, give you a shell on it, and releases the node when you log out.
 

To Use a Specific Node

Line: 26 to 26
 qsub your_script.sh

Standard out and standard error from the job will be copied to

Changed:
<
<
a file in the current directory.
>
>
a file in your current directory.
 

To Run a Job on Several Nodes

Changed:
<
<
The batch software creates a file that tells your jobs which
>
>
The batch software creates a file that tells your job which
 nodes it has reserved. Here is a simple example of a job that figures out which nodes it has and does something on them:

Revision 1 - 2005-12-14 - DanGildea

Line: 1 to 1
Added:
>
>
The torque software is installed on node50 through node59. This is a quick guide to getting started; for complete information try man qsub, man qstat, man pbsnode.

To Use a Node Interactively

type

qsub -I

This reserve a node, give you a shel on it, and release the node when you log out.

To Use a Specific Node

qsub -I -l nodes=node50

To Run a Batch Job

qsub your_script.sh

Standard out and standard error from the job will be copied to a file in the current directory.

To Run a Job on Several Nodes

The batch software creates a file that tells your jobs which nodes it has reserved. Here is a simple example of a job that figures out which nodes it has and does something on them:

% cat > ~/test.sh
#!/bin/bash

mkdir -p $HOME/tmp

for i in `cat $PBS_NODEFILE` ; do
  ssh $i "cat /proc/cpuinfo > $HOME/tmp/$i.out; sleep 60" &
done

wait
^D
% qsub -l nodes=4 ~/test.sh
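Once the job finishes, each reserved node should have left a file behind (a sketch of what you might see; the node names are illustrative):

% ls ~/tmp
node50.out  node51.out  node52.out  node53.out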

-- DanGildea - 14 Dec 2005

 