Cluster Installation

The following does ''not'' refer to the [http://xds.mpimf-heidelberg.mpg.de/html_doc/xds_parameters.html#CLUSTER_NODES= CLUSTER_NODES=] setup. The latter does ''not'' require a queueing system.

XDS can be run in a cluster using any command-line job scheduling software such as Grid Engine, Condor, Torque/PBS, LSF or SLURM. These are distributed resource management systems which monitor the CPU and memory usage of the available computing resources and schedule jobs to the least used computers.
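
To give an idea of the mechanics, here is a minimal sketch (assuming a working Grid Engine installation with a default queue; the job name is arbitrary) of submitting and inspecting a single job:

<pre>
# Grid Engine reads the job script from stdin and schedules it
# to a suitable node
echo hostname | qsub -cwd -N testjob
# list pending and running jobs together with their assigned queues
qstat
</pre>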


== setup of XDS for a batch queue system ==


In order to set up XDS for a queuing system, the ''forkxds'' script needs to be changed so that it reads options from the environment and sends jobs to different machines, and so that the ssh-based mechanism which the [http://xds.mpimf-heidelberg.mpg.de/html_doc/xds_parameters.html#CLUSTER_NODES= CLUSTER_NODES=] keyword employs is switched off. Example scripts used with Univa Grid Engine (UGE) at Diamond Light Source (from https://github.com/DiamondLightSource/fast_dp/tree/master/etc/uge_array - thanks to Graeme Winter!) are given below; they may need to be adapted to the local environment. Note that ''forkxds'' uses the ''qsub'' command to submit ''forkxds_job'' to Grid Engine.


<pre>
# forkxds
#!/bin/bash
#                    forkxds          Version DLS-2017/08
#
# enables  multi-tasking by splitting the COLSPOT and INTEGRATE
# steps of xds into independent jobs. Each job is carried out by 
# a Fortran main program (mcolspot, mcolspot_par, mintegrate, or
# mintegrate_par). The jobs are distributed among the processor 
# nodes of the NFS cluster network.
#
# 'forkxds' is called by xds or xds_par by the Fortran instruction
# CALL SYSTEM('forkxds ntask maxcpu main rhosts'),
#    ntask  ::total number of independent jobs (tasks)
#   maxcpu  ::maximum number of processors used by each job
#    main   ::name of the main program to be executed; could be
#             mcolspot | mcolspot_par | mintegrate | mintegrate_par
#   rhosts  ::names of CPU cluster nodes in the NFS network 
#
# DLS UGE port of script to operate nicely with cluster
# scheduling system - will work with any XDS usage but is
# aimed at fast_dp; see fast_dp issue #3. Options passed through environment:
#
# FORKXDS_PRIORITY - priority within queue, e.g. 1024
# FORKXDS_PROJECT - UGE project to assign for this
# FORKXDS_QUEUE - queue to submit to

ntask=$1  #total number of jobs
maxcpu=$2 #maximum number of processors used by each job
main=$3   #name of the main program to be executed

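# write one line per task into forkxds.params; each array task later
# reads its own line to find out which program it must run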
rm -f forkxds.params
itask=1
while test $itask -le $ntask
do
   echo $main >> forkxds.params
   itask=`expr $itask + 1`
done

# save environment
echo "PATH=$PATH" > forkxds.env
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH" >> forkxds.env

# check environment for queue; project; priority information
qsub_opt=""
if [[ -n "$FORKXDS_PRIORITY" ]] ; then
    qsub_opt="$qsub_opt -p $FORKXDS_PRIORITY"
fi

if [[ -n "$FORKXDS_PROJECT" ]] ; then
    qsub_opt="$qsub_opt -P $FORKXDS_PROJECT"
fi

if [[ -n "$FORKXDS_QUEUE" ]] ; then
    qsub_opt="$qsub_opt -q $FORKXDS_QUEUE"
fi

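# submit all tasks as a single array job; -sync y makes qsub block
# until every task has finished, so xds continues only when all are done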
qsub $qsub_opt -sync y -V -cwd -pe smp $maxcpu -t 1-$ntask `which forkxds_job`
</pre>

<pre>
# forkxds_job
#!/bin/bash

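# pick this task's line from forkxds.params using the array task ID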
params=$(awk "NR==$SGE_TASK_ID" forkxds.params)
JOB=`echo $params | awk '{print $1}'`

# load environment
. forkxds.env

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH
export PATH=$PATH
echo $SGE_TASK_ID | $JOB
</pre>
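
With the two scripts above installed in place of the stock ''forkxds'' (and ''forkxds_job'' on the PATH), a run could then be started as in the following sketch; the queue and project names are placeholders that must be replaced by values valid at the local site:

<pre>
# placeholder values - use your site's queue/project names
export FORKXDS_QUEUE=all.q         # queue to submit to
export FORKXDS_PROJECT=mx          # UGE project to account against
export FORKXDS_PRIORITY=1024       # priority within the queue
xds_par                            # COLSPOT/INTEGRATE tasks now go through qsub
</pre>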