
06 June 2012

176. Weaning people onto SGE, one script at a time

On a five node (1 front + 4 exec; each node has 8 cores and 8 GB RAM) cluster that I know and hang out with, people have been submitting jobs one by one. As in, doing it manually, without a queue manager.

I got one of the users to start using my Very Simple Python Queue Manager to prevent too much idle time, but not everyone is using it yet.

Another downside when people aren't using queue managers is that they use top and kill to manage jobs, and that has a way of screwing things up for everyone. SGE is a much better solution in every possible sense.

To make it easier for the users to switch to qsub, i.e. to make the change as undisruptive as possible, I wrote a little bash function and set up some standard qsub files.

The user navigates to the directory where their .in file is (e.g. test.in) and runs
presub test
which opens test.in and creates test.qsub

The user then submits by doing
qsub test.qsub


It's easy enough to customize the function and the output files (e.g. using .com, .g03in, etc.). This script obviously only does g09, but I'll post a more general script later.




The .bashrc function:
presub () {
    # stitch together the qsub header, the gaussian input ($1.in) and the qsub tail;
    # paste -s guarantees a newline between the parts even if a file lacks a trailing newline
    paste -s -d "\n" ~/.qsub/qsub.head "$1".in ~/.qsub/qsub.tail > "$1".qsub
    return 0
}



The files:
I put the following files in ~/.qsub/

qsub.head:

#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=99:30:00
#$ -l h_vmem=8G
#$ -j y
#$ -pe orte 8



export GAUSS_SCRDIR=/tmp
export GAUSS_EXEDIR=/share/apps/gaussian/g09/bsd:/share/apps/gaussian/g09/local:/share/apps/gaussian/g09/extras:/share/apps/gaussian/g09
/share/apps/gaussian/g09/g09 << END >> g09.log

qsub.tail:





END

The empty lines in qsub.tail are there on purpose: Gaussian wants its input terminated by a blank line and can complain otherwise.
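
For reference, here's roughly what a generated test.qsub ends up looking like. The middle part comes from a hypothetical test.in (a small water input; %chk=test.chk and %nprocshared=8 are just my choices to match the eight requested slots) while the top and bottom come from qsub.head and qsub.tail above:

#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=99:30:00
#$ -l h_vmem=8G
#$ -j y
#$ -pe orte 8

export GAUSS_SCRDIR=/tmp
export GAUSS_EXEDIR=/share/apps/gaussian/g09/bsd:/share/apps/gaussian/g09/local:/share/apps/gaussian/g09/extras:/share/apps/gaussian/g09
/share/apps/gaussian/g09/g09 << END >> g09.log
%chk=test.chk
%nprocshared=8
#P b3lyp/6-31G* opt

water test

0 1
O
H 1 1.0
H 1 1.0 2 120.0


END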


03 June 2012

172. ECCE and a ROCKS cluster: step by step

This is quite similar to a recent post, but here's a step-by-step, detailed account of how to set up ECCE for remote job submission to a ROCKS 5.4.3 cluster (one front node, four subnodes).

Coming soon (give it a week): Setting up a virtualbox machine with ecce for (stubborn) windows and ROCKS/CentOS users.

What isn't shown are all the failed attempts and dead ends I ran into before getting to a working system. I compiled ECCE. I compiled tcsh. I tried compiling bsd csh, which required me to compile bmake, etc. This stuff looks simple, and it is simple -- but not obvious.

NOTE: From the outside we connect to rocks.university.edu. From inside the cluster the submit node is called rocks.local, and the subnodes are called node0, node1, etc. Refer back to this naming if you get confused later.

Step 1. Create the site in ecce
From the terminal, do
ecce -admin
and add a new machine

Don't forget to hit Add/Change queue to make the changes to the queue part take effect, then hit Add/Change. Also, pay attention to the Allocation Account tick box: if it's ticked you can't submit anything unless you add an account. Important: the machine name you add here is the local name or local IP of the submit node -- it's not the 'public' name or URL; we'll add that somewhere else later. Don't forget to select the queue manager (I forgot to in the screenshot).

Close.

Step 2. Editing your CONFIG file
Since you're already in the terminal, go to ecce-v6.3/apps/siteconfig

Take a quick peek at your Machines file (no editing needed); the relevant line is:

rocks rocks.local Dell beo Intel 40:5 ssh :NWChem:Gaussian-03 MN:RD:SD:UN:PW:Q:TL

Take another look at rocks.Q -- there's probably nothing to edit here either:

# Queue details for rocks
Queues:    nwchem
nwchem|minProcessors:       1
nwchem|maxProcessors:       40
nwchem|runLimit:       100000
nwchem|memLimit:       0
nwchem|scratchLimit:       0
Finally, do some editing of your CONFIG.rocks file.

CONFIG.rocks

NWChem: /share/apps/nwchem/nwchem-6.1/bin/LINUX64/nwchem
Gaussian-03: /share/apps/gaussian/g09/g09
perlPath: /usr/bin/
qmgrPath: /opt/gridengine/bin/lx26-amd64
sourcefile: /home/rocksuser/.cshrc
frontendMachine: rocks.university.edu

SGE {
#$ -S /bin/csh
#$ -cwd
#$ -l h_rt=$walltime
#$ -l h_vmem=$memoryG
#$ -j y
#$ -pe orte $totalprocs  
}

NWChemEnvironment{
            LD_LIBRARY_PATH /usr/lib/openmpi/1.3.2-gcc/lib/
}

NWChemCommand {
        /opt/openmpi/bin/mpirun -n $totalprocs $nwchem $infile > $outfile
}
Gaussian-03Command {
    setenv GAUSS_SCRDIR /tmp
    setenv GAUSS_EXEDIR /share/apps/gaussian/g09/bsd:/share/apps/gaussian/g09/local:/share/apps/gaussian/g09/extras:/share/apps/gaussian/g09
        time /share/apps/gaussian/g09/g09 $infile  $outfile }

Obviously, your variables will be different. NOTE that memory is given in gigabytes here; you could also use $memoryM for megabytes. Just adjust your launcher requirements accordingly.

Step 3. Making csh modifications on the ROCKS cluster
On the main node just use the root password (or become sudo) and move /etc/csh.cshrc and /etc/csh.login out of the way (backing them up is a good idea). It doesn't seem like you need to make any changes csh-wise to the subnodes.
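
Concretely, something along these lines on the front node (standard ROCKS/CentOS paths; adjust if yours differ):

sudo mv /etc/csh.cshrc /etc/csh.cshrc.bak
sudo mv /etc/csh.login /etc/csh.login.bak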

Step 4. Finalising our set up
Start ecce the normal way (e.g. run ecce from the terminal)
In the Gateway, start the Machine Browser, highlight 'rocks' and click on Setup Remote Access.
Do what you're told.

Step 5. Submit to your heart's content!

NOTE: the option to set the amount of memory is not shown in the launcher window above: my mistake. You can edit your apps/siteconfig/Machines file and add :MM at the end of the line, e.g.
Dynamic beryllium       Unspecified     Unspecified     Unspecified     18:3    ssh     :NWChem:Gaussian-03     MN:RD:SD:UN:PW:Q:TL:MM

28 May 2012

167. ECCE/Nwchem on An Australian University computational cluster using qsub with g09/nwchem

EDIT:
I've just learned the First Rule of Remote Computing:
always start by checking the number of concurrent processes you're allowed on the head node, or you can lock yourself out faster than you can say "IT support".

do
ulimit -u
If it's anywhere under 1000, then you need to be careful.
Default ulimit on ROCKS: 73728
Default ulimit on Debian/Wheezy:  63431
Ulimit on the Oz uni cluster: 32

ECCE launches FIVE processes per job.
Each pipe you add to a command launches another proc. Logging in launches a proc -- if you've reached your quota, you can't log in until a process finishes.

cat test.text|sed 's/\,/\t/g'|gawk '{print $2,$3,$4}' 
yields three processes -- ten percent of my entire quota.
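
With a quota that tight it's worth collapsing pipelines where you can; for simple comma-separated data the line above can be done in a single gawk process (roughly equivalent, give or take corner cases in the field splitting):

gawk -F ',' '{print $2,$3,$4}' test.text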

NOTE:
Running something on a cluster where you have limited access is very different from a cluster you're managing yourself. Apart from knowing the physical layout, you normally have sudo powers on a local cluster.

One potential issue is excessive disk usage -- both in terms of storage space and in terms of raw I/O (writing to an nfs-mounted disk is not efficient anyway).
So in order to cut down on that:
1. Define a scratch directory in your input, e.g. (use the correct path for your cluster)
scratch_dir /scratch
The point being that /scratch is a local directory on the execution node

2. Make sure that you specify
dft
     direct
     ..
end
or even
dft
    noio
    ...
end
to do as little disk caching as possible.

I accidentally ended up storing 52 GB of aoints files from a single job. It may have been what locked me out of the submit node for three hours...

A good way to check your disk-usage is
ls -d * |xargs du -hs
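
In the same process-frugal spirit, du can be given the targets directly, which gets the same overview from a single process:

du -hs -- *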

Now, continue reading:



Setting everything up the first time:
First figure out where the mpi libs are:
qsub.tests:

#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=00:14:00
#$ -l h_vmem=4G
#$ -j y
locate libmpi.so
Assuming that the location is /usr/lib/openmpi/1.3.2-gcc/lib/, put 
export LD_LIBRARY_PATH=/usr/lib/openmpi/1.3.2-gcc/lib/
in your ~/.bashrc
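
A quick sanity check (in a fresh shell, so the ~/.bashrc change has taken effect) is to see whether the nwchem binary's MPI dependencies now resolve. The binary path below is the one used further down in this post, so adjust it to your install:

ldd /opt/sw/nwchem-6.1/bin/nwchem | grep -i mpi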


Next, look at ls /opt/sw/nwchem-6.1/data -- if there's a default.nwchemrc file, then
ln -s /opt/sw/nwchem-6.1/data/default.nwchemrc ~/.nwchemrc

If not, create ~/.nwchemrc with the locations of the different basis sets, amber files and plane-wave sets listed as follows:

nwchem_basis_library /opt/sw/nwchem-6.1/data/libraries/
nwchem_nwpw_library /opt/sw/nwchem-6.1/data/libraryps/
ffield amber
amber_1 /opt/sw/nwchem-6.1/data/amber_s/
amber_2 /opt/sw/nwchem-6.1/data/amber_q/
amber_3 /opt/sw/nwchem-6.1/data/amber_x/
amber_4 /opt/sw/nwchem-6.1/data/amber_u/
spce /opt/sw/nwchem-6.1/data/solvents/spce.rst
charmm_s /opt/sw/nwchem-6.1/data/charmm_s/
charmm_x /opt/sw/nwchem-6.1/data/charmm_x/


Using nwchem:
A simple qsub file would be:

#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=00:14:00
#$ -l h_vmem=4G
#$ -j y
#$ -pe orte 4
module load nwchem/6.1
time mpirun -n 4 nwchem  test.nw > nwchem.out


with test.nw being the actual nwchem input file which is present in your cwd (current working directory).
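
Submitting and keeping an eye on the job is then just (assuming you saved the script above as, say, nwchem.qsub):

qsub nwchem.qsub
qstat -u ${USER}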


Using nwchem with ecce:
This is the proper way of using nwchem. If you haven't already, look here: http://verahill.blogspot.com.au/2012/05/setting-up-ecce-with-qsub-on-australian.html

Then edit your  ecce-6.3/apps/siteconfig/CONFIG.msgln4  file:

NWChem: /opt/sw/nwchem-6.1/bin/nwchem
Gaussian-03: /usr/local/bin/G09
perlPath: /usr/bin/perl
qmgrPath: /usr/bin/qsub

SGE {
#$ -S /bin/csh
#$ -cwd
#$ -l h_rt=$wallTime
#$ -l h_vmem=4G
#$ -j y
}

NWChemFilesToDelete{ core *.aoints.* }

NWChemEnvironment{
    LD_LIBRARY_PATH /usr/lib/openmpi/1.3.2-gcc/lib/
}

NWChemCommand {
#$ -pe mpi_smp4  4
module load nwchem/6.1

mpirun -n $totalprocs $nwchem $infile > $outfile
}

Gaussian-03Command {
#$ -pe g03_smp4 4
module load gaussian/g09

time G09< $infile > $outfile }

Gaussian-03FilesToDelete{ core *.rwf }

Wrapup{
find /scratch/* -name "*" -user $USER |xargs -I {} rm {} -rf
}

And you should be good to go. IMPORTANT: don't copy the settings blindly -- what works at your uni might be different from what works at my uni. But use the above as an inspiration and validation of your thought process. The most important thing to look out for in terms of performance is probably your -pe switch.
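
To see which parallel environments the cluster actually offers (and hence what makes sense after -pe), SGE can list them and show their settings:

qconf -spl
qconf -sp orte

where orte is just an example name; use whatever -spl returns on your cluster.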

Since I'm having problems with the low ulimit, I wrote a small bash script which I've set to run every ten minutes as a cron job. Of course, if you've used up your 32 procs you can't run the script... Also, instead of piping stuff right and left (each pipe creates another fork/proc) I've written it so it dumps stuff to disk. That way you have a list of procs in case you need to kill something manually:

The script: ~/clean_ps.sh

#!/bin/sh
# log the date and the current process count, and dump a full listing to disk
date
ps ux > ~/.job.list
ps ux | gawk 'END {print NR}'

# kill stray interactive shells (reading straight from the file avoids an extra cat proc)
grep "\-sh \-i" ~/.job.list > ~/.job2.list
gawk '{print $2}' ~/.job2.list > ~/.job3.list
xargs -I {} kill -15 {} < ~/.job3.list

# kill stray echo processes
grep "echo" ~/.job.list > ~/.job4.list
gawk '{print $2}' ~/.job4.list > ~/.job5.list
xargs -I {} kill -15 {} < ~/.job5.list

# kill stray notty processes
grep "notty" ~/.job.list > ~/.job6.list
gawk '{print $2}' ~/.job6.list > ~/.job7.list
xargs -I {} kill -15 {} < ~/.job7.list

# kill stray perl processes (ecce's job monitoring scripts)
grep "perl" ~/.job.list > ~/.job8.list
gawk '{print $2}' ~/.job8.list > ~/.job9.list
xargs -I {} kill -15 {} < ~/.job9.list

# show the queue and the new process count
qstat -u ${USER}
ps ux | gawk 'END {print NR}'
echo "***"

and the cron job is set up using
crontab -e
 */10 * * * * sh ~/clean_ps.sh>> ~/.cronout
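
To check that the entry took and that the script is actually running:

crontab -l
tail ~/.cronout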

Obviously this kills any job monitoring from the point of view of ecce. However, it keeps you from being locked out. You can manually check the job status using qstat -u ${USER}, then reconnect when a job is ready. Not that convenient, but liveable.

21 May 2012

158. Setting up ecce with qsub at An Australian University computational cluster

EDIT: this works for G09 on that particular cluster. Come back in a week or two for a more general solution (end of May 2012/beginning of June 2012).

I don't feel comfortable revealing where I work. But imagine that you end up working at an Australian University in, say, Melbourne. I do recognise that I'm giving enough information here to make it possible to identify who I am (and there are many reasons not to want to be identifiable -- partly because students can be mean and petty, and partly because I suffer from the delusion that IT rules apply to Other People, not me -- and I have described ways of doing things you're not supposed to be doing in this blog).

Anyway.

My old write-ups of ecce are pretty bad, if not outright inaccurate. Anyway, I presume that in spite of that you've managed to set up ECCE well enough to run stuff on nodes of your local cluster.

Now it's time for the next level -- on a remote site using SGE/qsub

So far I've only tried this out with G09 -- they are currently looking to set up nwchem on the university cluster. Not sure what the best approach to the "#$ -pe g03_smp2 2" switch is for nwchem.

--START HERE --

EVERYTHING I DESCRIBE IS DONE ON YOUR DESKTOP, NOT ON THE REMOTE SYSTEM. Sorry for shouting, but don't go a-messing with the remote computational cluster -- we only want to teach ecce how to submit jobs remotely. The remote cluster should be unaffected.

1. Creating the Machine
To set up a site with a queue manager, start
ecce -admin

Do something along the lines of what's shown in the figure above.

If you're not sure whether your qsub belongs to PBS or SGE, type qstat -help and look at the first line returned, e.g. SGE 6.2u2_1.

2. Configure the site
Now, edit your ecce-6.3/apps/siteconfig/CONFIG.msgln4  (local nodes go into ~/.ECCE  but remote SITES go in apps/siteconfig --  and that's what we're working with here).

   NWChem: /usr/local/bin/NWCHEM
   Gaussian-03: /usr/local/bin/G09
   perlPath: /usr/bin/perl
   qmgrPath: /usr/bin/qsub
 
   SGE {
   #$ -S /bin/csh
   #$ -cwd
   #$ -l h_rt=$wallTime
   #$ -l h_vmem=4G
   #$ -j y
   #$ -pe g03_smp2 2

   module load gaussian/g09
    }
A word of advice -- open the file in vim (save using :wq!) or do a chmod +w on it first since it will be set to read-only by default.
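
That is, something along the lines of (adjust the path to wherever ecce is unpacked):

chmod +w ecce-6.3/apps/siteconfig/CONFIG.msgln4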


3. Queue limits
The same goes for the next file, which controls various job limits, ecce-6.3/apps/siteconfig/msgln4.Q:
# Queue details for msgln4
Queues:    squ8

squ8|minProcessors:       2
squ8|maxProcessors:       6
squ8|runLimit:       4320
squ8|memLimit:       4000
squ8|scratchLimit:       0
4. Connect
In the ecce launcher-mathingy click on Machine Browser, and Set Up Remote Access for the remote cluster. Basically, type in your user name and password.

Click on machine status to make sure that it's connecting

5. Test it out!
If all is well, you should be good to go.

157. Restarting gaussian (g09) job on an SGE system (qsub)

The Australian University I'm working at has a computational cluster where jobs are submitted using qsub. This post is more like a personal note to myself, but the point about resubmitting jobs may be of use to someone.

This page is useful reading for how these types of scripts with mixed shell and gaussian stuff work: http://www.gaussian.com/g_tech/g_ur/m_running.htm

0. The qsub header (Notes To Myself)

http://cf.ccmr.cornell.edu/cgi-bin/w3mman2html.cgi?qsub(1B)
#$ -S /bin/sh : shell to use. Can be csh, tcsh etc.
#$ -cwd : execute in the current working directory.
#$ -l h_rt=12:00:00 : maximum allowed run time in hours. -l is a list of resource limits.
#$ -l h_vmem=4G : memory limit (see http://www.biostat.jhsph.edu/bit/cluster-usage.html#MemSpec). Should match %mem, e.g. 4000mb.
#$ -j y : join -- "declares if the standard error stream of the job will be merged with the standard output stream of the job." It creates *.o* and *.p* files with what would've been echoed in the terminal.
#$ -pe g03_smpX X : parallel execution, with X being the number of slots. g03_smpX is the name of the SGE parallel environment to use.

1. Setting up simple jobs
First create a standard template, let's call it qsub.header

#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=12:00:00
#$ -l h_vmem=4G
#$ -j y
#$ -pe g03_smp2 2
module load gaussian/g09
time G09 << END > g09_output.log
and save it in your home folder ~.

Create an input file, e.g. water.in and put it in your work directory, e.g. ~/g09


%chk=water.chk
%nprocshared=2
#P ub3lyp/6-31G* opt

water energy

0  1
O
H  1  1.0
H  1  1.0  2  120.0

Put them together:
cat ~/qsub.header > water.qsub
cat ~/g09/water.in >>water.qsub

#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=12:00:00
#$ -l h_vmem=4G
#$ -j y
#$ -pe g03_smp2 2
module load gaussian/g09
time G09 << END > g09_output.log

%chk=water.chk
%nprocshared=2
#P ub3lyp/6-31G* opt

water energy

0  1
O
H  1  1.0
H  1  1.0  2  120.0
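
Note that qsub.header opens a here-document with << END that never gets closed in the assembled file. The shell will still feed everything to G09 (with a 'delimited by end-of-file' warning), but it's tidier to append a terminator, plus the trailing blank line Gaussian likes, much like the qsub.tail file in post 176 above does:

printf "\nEND\n" >> water.qsub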

2. Restarting jobs

Most likely your home folder is shared across the nodes via nfs.

To find out, submit
#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=12:00:00
#$ -l h_vmem=4G
#$ -j y
#$ -pe g03_smp2 2
pwd
ls -lah
tree -L 1 -d
to get some directory information about the nodes.

Once you have that, just put the absolute path to your .chk file in your restart script, e.g.

#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=12:00:00
#$ -l h_vmem=4G
#$ -j y
#$ -pe g03_smp2 2
module load gaussian/g09
time G09 << END > g09_output.log

%chk=/nfs/home/hpcsci/username/g09/water.chk
%nprocshared=2
#P ub3lyp/6-31G* opt guess=read geom=allcheck