cwb_lsf
This page provides a description of the cwb_lsf command.
Usage
This command prepares and submits the analysis jobs to the computing cluster using the LSF batch system. Its functionality is similar to that provided by the cwb_condor command.
Syntax
cwb_lsf
    (without arguments) Prints help

cwb_lsf action [lsf_file/cwb_stage] [input_dir]
    Prepares and submits the jobs
Further information
The following options can be passed to cwb_lsf:
action
    create : creates the lsf and tgz files under the condor directory
    submit : submits the jobs to the computing cluster
    recovery : compares the list of jobs in the dag file with the number of completed jobs (from the history) and produces the file data_label.dag.recovery.x (x = recovery version)
    benchmark : shows the computational load and related statistics (see cwb_condor benchmark)
    queue : shows the lsf queue status
    status : shows the jobs status
    status jobID : shows the job log (jobID is the ID reported by the status option)
    status jobName : dumps the job log (jobName is the name reported by the status option)
    kill : kills all jobs
    kill jobID : kills one job (jobID is the ID reported by the status option)
    stop : suspends all jobs
    stop jobID : suspends one job (jobID is the ID reported by the status option)
    resume : resumes all jobs
    resume jobID : resumes one job (jobID is the ID reported by the status option)
    utar dir_name : uncompresses the tgz files produced by the jobs in the output directory (the job output is compressed and must be uncompressed manually)

lsf_file/cwb_stage (optional)
    lsf_file : path of the lsf file to be submitted (used as cwb_lsf submit lsf_file)
    cwb_stage : used in the 2G analysis [FULL(default)/INIT/STRAIN/CSTRAIN/COHERENCE/SUPERCLUSTER/LIKELIHOOD]

input_dir (optional)
    used with the recovery option; the default input directory is the output_dir directory defined in cwb_parameters.C
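Since the job output is compressed, the effect of the utar action can be reproduced by hand with tar. The following is a minimal sketch, not the tool's actual implementation: the archive name and its contents are invented for illustration, and the assumption is simply that each job leaves a *.lsf.tgz archive in the output directory.

```shell
# Illustrative setup: fabricate a dummy job archive in the output directory
# (a real analysis would have produced this file; the names are examples only).
mkdir -p output
echo "dummy" > wave_example_job9.root
tar -czf output/9_example.lsf.tgz wave_example_job9.root
rm wave_example_job9.root

# Manual equivalent of "cwb_lsf utar output":
# extract every job tgz archive in place, into the output directory.
for f in output/*.lsf.tgz; do
    tar -xzf "$f" -C output
done
```

After the loop, output/ contains both the original tgz archives and the extracted root files.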
Examples
To see how cwb_lsf works, see the examples reported for the cwb_condor command. The working directory used for this example is in the SVN directory:
tools/cwb/examples/O1_12Sep19Jan_C01_BKG_LF_rMRA_CNAF_vs_ATLAS
The instructions are in the README* files.
The following command lines launch a full-stage analysis:

cwb_lsf create

-> files created

   output lsf file : O1_12Sep19Jan_C01_BKG_LF_rMRA_CNAF_vs_ATLAS.lsf
   output tgz file : condor/O1_12Sep19Jan_C01_BKG_LF_rMRA_CNAF_vs_ATLAS.tgz

cwb_lsf submit

-> console output

   input dag file  : O1_12Sep19Jan_C01_BKG_LF_rMRA_CNAF_vs_ATLAS.dag
   output lsf file : O1_12Sep19Jan_C01_BKG_LF_rMRA_CNAF_vs_ATLAS.lsf
   LSFFILE = O1_12Sep19Jan_C01_BKG_LF_rMRA_CNAF_vs_ATLAS.lsf
   Job <13907643> is submitted to queue <virgo>.
   Your LSF jobs has been submitted
   To monitor the jobs do     : cwb_lsf status
   To monitor the queue       : cwb_lsf queue
   To kill the all jobs do    : cwb_lsf kill
   To resubmit paused jobs do : cwb_lsf resume
   To suspend all jobs do     : cwb_lsf stop

cwb_lsf queue

   QUEUE_NAME PRIO STATUS      MAX  JL/U JL/P JL/H NJOBS PEND RUN  SUSP
   virgo      30   Open:Active 1200 -    -    4    2173  973  1200 0

cwb_lsf status

-> console output

   JOBID    USER    STAT QUEUE FROM_HOST  EXEC_HOST JOB_NAME SUBMIT_TIME
   13907643 vedovat PEND virgo ui01-virgo           A9       Jan 29 13:48

cwb_lsf status 13907643

-> shows the log produced by the cwb job

When the job finishes, the final output files are:

-> output root file

   output/9_O1_12Sep19Jan_C01_BKG_LF_rMRA_CNAF_vs_ATLAS.lsf.tgz

-> output log err/out files

   log/9_O1_12Sep19Jan_C01_BKG_LF_rMRA_CNAF_vs_ATLAS.lsf.err
   log/9_O1_12Sep19Jan_C01_BKG_LF_rMRA_CNAF_vs_ATLAS.lsf.out

cwb_lsf utar output

-> files are uncompressed in the output directory

   output/supercluster_xxx_C01_BKG_LF_rMRA_CNAF_vs_ATLAS.lsf_job9.root
   output/wave_xxx_C01_BKG_LF_rMRA_CNAF_vs_ATLAS.lsf_slag0_lag0_1_job9.root

Note: the supercluster root file is optional; it is produced only if the config/user_parameters.C file contains the following option:

   jobfOptions |= CWB_JOBF_SAVE_TRGFILE;
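The CWB_JOBF_SAVE_TRGFILE option can be enabled from the shell before submission. This is only a convenience sketch: it assumes the standard config/user_parameters.C location mentioned above and that the option statement can be appended at the end of that file; adjust the path if your working directory is laid out differently.

```shell
# Append the trigger-file option to the user configuration, unless it is
# already present (idempotent: re-running does not duplicate the line).
# The config/user_parameters.C path follows the layout described above.
mkdir -p config
touch config/user_parameters.C
grep -q 'CWB_JOBF_SAVE_TRGFILE' config/user_parameters.C || \
  echo 'jobfOptions |= CWB_JOBF_SAVE_TRGFILE;' >> config/user_parameters.C
```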