.. include:: images.rst

How to Use
----------

This page provides a concise overview of how to perform an analysis with the cWB pipeline.

**Parameter files for the production and post-production stages**:

+----------------------------------------------+----------------------------------------------+
| `user_parameters.C <#user-parameters-c>`__   | parameter file for the production stage      |
+----------------------------------------------+----------------------------------------------+
| `user_pparameters.C <#user-pparameters-c>`__ | parameter file for the post-production stage |
+----------------------------------------------+----------------------------------------------+

**Procedure to set up the working environment and perform the analysis**:

- `Main Steps <#main-steps>`__
- Detailed information on the main analysis stages is available at the following links:

+----------------------------------------------+-------------------------------------------------------------------------+
| `pre-production <#pre-production>`__         | setting up the working environment                                      |
+----------------------------------------------+-------------------------------------------------------------------------+
| `production <#production>`__                 | running the analysis                                                    |
+----------------------------------------------+-------------------------------------------------------------------------+
| `post-production <#post-production>`__       | collecting the results and generating figures of merit                  |
+----------------------------------------------+-------------------------------------------------------------------------+
| `trigger parameters <#trigger-parameters>`__ | lists of trigger parameters                                             |
+----------------------------------------------+-------------------------------------------------------------------------+
| `web pages <#web-pages>`__                   | overview of the web pages collecting the analysis results               |
+----------------------------------------------+-------------------------------------------------------------------------+
| `FAD howto <#fad-howto>`__                   | guidelines on the calculation of the false alarm rate-density statistic |
+----------------------------------------------+-------------------------------------------------------------------------+

**Examples of how to perform analyses with cWB**:

+---------------------------------------------------------------------+--------------------------------------------------------+
| `Background Example <#background-example>`__                        | an example of a background estimation                  |
+---------------------------------------------------------------------+--------------------------------------------------------+
| `Simulation Example <#simulation-example>`__                        | an example of a simulation study                       |
+---------------------------------------------------------------------+--------------------------------------------------------+
| `Example of a search for intermediate mass black hole binaries      |                                                        |
| <#example-of-a-search-for-intermediate-mass-black-hole-binaries>`__ |                                                        |
+---------------------------------------------------------------------+--------------------------------------------------------+
| `Interactive multistages 2G analysis Example `__                    | a step-by-step guide to perform a multi-stage analysis |
+---------------------------------------------------------------------+--------------------------------------------------------+
| `Batch 2 stages 2G analysis Example `__                             | a step-by-step guide to perform a two-stage analysis   |
+---------------------------------------------------------------------+--------------------------------------------------------+
| `Merging multiple backgrounds into a single report `__              | How to merge multiple backgrounds into a single report |
+---------------------------------------------------------------------+--------------------------------------------------------+
| `Analysis of the loudest events `__                                 | How to do the analysis of the most significant events  |
+---------------------------------------------------------------------+--------------------------------------------------------+
| `Parameter Estimation Analysis `__                                  | How to do the Parameter Estimation analysis            |
+---------------------------------------------------------------------+--------------------------------------------------------+

Flow-chart
~~~~~~~~~~~~~~

|image22|

.. ====================================================================
.. configuration files
.. ====================================================================

.. include:: cwb_parameters.rst
.. include:: cwb_pparameters.rst

Main Steps
~~~~~~~~~~~~~~

- **Prepare a working directory**

  #. Choose a name (ex: WORK_LABEL)
  #. Create the directory:

     .. code-block:: bash

        cwb_mkdir WORK_LABEL

  #. Put the `user_parameters.C <#user-parameters-c>`__ file inside the `config <#config>`__ directory.
  #. Put all the needed files in the `input <#input>`__ directory.
  #. If you need any `Plugins <#plugins>`__, declare them in the `user_parameters.C <#user-parameters-c>`__ file.

- **Run the pipeline**

  - run a single job (JOBNUMBER) interactively:

    .. code-block:: bash

       cwb_inet JOBNUMBER

  - run more jobs by submitting them to a cluster (using condor as the batch system):
    .. code-block:: bash

       cwb_condor create
       cwb_condor submit

- **Create figures of merit**

  #. Put the `user_pparameters.C <#user-pparameters-c>`__ file inside the `config <#config>`__ directory.
  #. Merge the files in the output directory:

     .. code-block:: bash

        cwb_merge M1

  #. Apply data quality and vetoes (optional):

     .. code-block:: bash

        cwb_setveto M1

  #. Produce the figures of merit and web pages:

     .. code-block:: bash

        cwb_report M1 create

.. ====================================================================
.. pre-production
.. ====================================================================

pre-production
~~~~~~~~~~~~~~~~~~

.. ====================================================================
.. create working directory
.. ====================================================================

The pre-production stage prepares the directory where all the scripts and files necessary to run the analysis are collected.

First of all, the directory name must be unambiguous. A possible criterion to set the name is to compose the following tags:

.. code-block:: bash

   "DATA_TAKING_RUN"_"CALIBRATION_VERSION"_"ANALYSIS_TYPE"_"MDC_TAG"_"NETWORK"_"USER_TAG"_"ANALYSIS_RUN"

where:

.. code-block:: bash

   DATA_TAKING_RUN     : S4, S5, S6A, ...
   CALIBRATION_VERSION : R2, R5, ...
   ANALYSIS_TYPE       : SIM (simulation), BKG (time-shift analysis)
   MDC_TAG             : (only for simulations) BRST, SGQ9, NINJA2, ...
   NETWORK             : network ifo list - L1H1, L1H1V1, ...
   USER_TAG            : TEST, PHASE_MISCALIB, ...
   ANALYSIS_RUN        : run1, run10, ...

Examples:

.. code-block:: bash

   * time-shift analysis : S6A_R4_BKG_L1H1V1_PHASE_MISCALIB_run1
   * simulation          : S6A_R4_SIM_BRST_L1H1V1_PHASE_MISCALIB_run1

This name will be used for all the files automatically produced by the various `Commands <#commands>`__.
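As a concrete illustration, the tag composition above can be reproduced in plain shell; the values below are simply the time-shift example from this page (MDC_TAG is omitted since it applies only to simulations):

.. code-block:: bash

   # Compose a working-directory name from the analysis tags.
   # The tag values are the time-shift example used on this page.
   DATA_TAKING_RUN="S6A"
   CALIBRATION_VERSION="R4"
   ANALYSIS_TYPE="BKG"        # SIM (simulation) or BKG (time-shift analysis)
   NETWORK="L1H1V1"
   USER_TAG="PHASE_MISCALIB"
   ANALYSIS_RUN="run1"

   WORK_LABEL="${DATA_TAKING_RUN}_${CALIBRATION_VERSION}_${ANALYSIS_TYPE}"
   WORK_LABEL="${WORK_LABEL}_${NETWORK}_${USER_TAG}_${ANALYSIS_RUN}"

   echo "${WORK_LABEL}"   # S6A_R4_BKG_L1H1V1_PHASE_MISCALIB_run1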
The working environment is created by the `cwb_mkdir <#cwb-mkdir>`__ command, for instance:

.. code-block:: bash

   cwb_mkdir WORK_LABEL

This creates a directory named WORK_LABEL which contains the following subdirectories:

+-----------------------+-------------------------+
| `config <#config>`__  | cwb configuration files |
+-----------------------+-------------------------+
| `input <#input>`__    | input data files        |
+-----------------------+-------------------------+
| `condor <#condor>`__  | condor files (dag, sub) |
+-----------------------+-------------------------+

- tmp: directory for temporary job files
- log: condor log files
- output: output job files
- merge: merged output job files
- report: data to be published on the web

  - ced: web pages created by the Coherent Event Display
  - dump: information files
  - postprod: web pages with the figures of merit

- macro: user macros
- data: user output job files

The sub-directories input and config (and possibly macro) are the ones which (usually) contain the information necessary to run the analysis. The command `cwb_clonedir <#cwb-clonedir>`__ takes an existing working directory and copies its primary information (the input/config/macro directories), so that the same analysis can be repeated (possibly with minor changes).

.. ====================================================================
.. setup environment variables
.. ====================================================================

config
^^^^^^^^^^^^

The config directory contains two files:

- `user_parameters.C <#user-parameters-c>`__: parameter file for the `production <#production>`__ stage
- `user_pparameters.C <#user-pparameters-c>`__: parameter file for the `post-production <#post-production>`__ stage

These files contain the parameter values for the following stages (`production <#production>`__ and `post-production <#post-production>`__). Before proceeding with these two steps, make sure that these files contain the right values.

**Note**: the pipeline automatically looks for these two files. To use different files for the same purpose, use the commands `cwb_setpars `__ and `cwb_setppars `__.
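For orientation, the layout that `cwb_mkdir <#cwb-mkdir>`__ sets up (as listed above) can be sketched in plain shell. The working-directory name is a hypothetical example, and the snippet only mirrors the structure; it does not replace the command:

.. code-block:: bash

   # Sketch of the working-directory layout created by cwb_mkdir.
   # WORK_LABEL is a hypothetical example; everything is created under
   # a throwaway base directory so the sketch has no side effects.
   BASE="$(mktemp -d)"
   WORK_LABEL="S6A_R4_BKG_L1H1V1_TEST_run1"

   for d in config input condor tmp log output merge macro data \
            report/ced report/dump report/postprod; do
       mkdir -p "${BASE}/${WORK_LABEL}/${d}"
   done

   # The production/post-production parameter files are expected here:
   touch "${BASE}/${WORK_LABEL}/config/user_parameters.C"
   touch "${BASE}/${WORK_LABEL}/config/user_pparameters.C"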
.. ====================================================================
.. node list of input data file
.. ====================================================================

.. include:: input.rst

.. ====================================================================
.. create condor stuff
.. ====================================================================

condor
^^^^^^^^^^^^

The condor directory stores the files for the submission of jobs. Once the working directory has been created with `cwb_mkdir `__, the command `cwb_condor `__, used with different instructions, manages this procedure. In particular:

.. code-block:: bash

   cwb_condor create

creates two files in the condor directory:

#. WORK_LABEL.dag
#. WORK_LABEL.sub

which contain the information necessary to submit, via condor, a job for each segment in the list. To submit the jobs:

.. code-block:: bash

   cwb_condor submit

or, alternatively, enter the condor directory and use the specific condor commands:

.. code-block:: bash

   cd condor
   condor_submit_dag WORK_LABEL.dag

.. ====================================================================
.. node production
.. ====================================================================

production
~~~~~~~~~~~~~~

This stage is the main part of the analysis, i.e. the extraction of interesting triggers from the data. The analysis divides the total observation time into small sub-periods (segments) to be analysed separately (jobs). The segment length is defined in `user_parameters.C <#user-parameters-c>`__, in particular in the `Job settings <#job-settings>`__ section. The total list of segments is defined according to the `data quality `__. There are two steps in the analysis, addressing how the time periods are treated:

#. The segment list is built from the periods surviving DQ_CAT0, DQ_CAT1 and DQ_CAT4. On these periods the pipeline performs the data conditioning and whitening procedure.
#. Trigger extraction is performed on the periods surviving DQ_CAT0, DQ_CAT1, DQ_CAT4 and DQ_CAT2.

Using a batch system utility, it is possible to spread the jobs over a cluster of nodes and execute them in parallel; see `cwb_condor `__ (see also `condor <#condor>`__). To execute a single job (for example for testing purposes) in the working shell, instead of submitting it to the batch system, use `cwb_inet `__.

For each job, the pipeline produces a .root file which is stored in the output directory.

.. ====================================================================
.. post-production
.. ====================================================================

post-production
~~~~~~~~~~~~~~~~~~~

In this stage the user can produce figures of merit and text files from the list of triggers generated during the production step. In particular, events are selected by applying thresholds on the trigger parameters. The final output is shown on web pages.

The post-production steps are the following:

#. **Decide thresholds**

   Prepare the `user_pparameters.C <#user-pparameters-c>`__ file in the `config <#config>`__ directory with all the necessary information.

#. **Merging**

   This process merges the result files contained in the output directory. The files produced in this stage are stored in the merge directory and are properly labeled to distinguish different merging processes. The following are typical merged files (in this case the label is M1):

   - merge_WORK_LABEL.M1.lst: list of all the root files inside the output directory used for the merging
   - wave_WORK_LABEL.M1.root: trigger information
   - live_WORK_LABEL.M1.root: livetime information referred to the lags (*only in production mode*)
   - mdc_WORK_LABEL.M1.lst: MDC information (*only in simulation mode*)

   The command for merging is `cwb_merge `__:

   .. code-block:: bash

      cwb_merge M1
#. **Apply data quality and vetoes** (optional)

   It is possible to apply one or more data quality/veto files to the central times of the triggers extracted in the `production <#production>`__ stage. A trigger accepted/discarded by a data quality flag is marked 1/0. See `user_pparameters.C <#user-pparameters-c>`__ for the use and definition of the data quality files. The command for this step is `cwb_setveto `__:

   .. code-block:: bash

      cwb_setveto M1

   (*please note that all the steps related to the same set of results must use the same label - M1 in this case*)

   After this command a new .root file appears in the merge directory, with an additional string identifying the DQ/veto used. A typical label could be:

   .. code-block:: bash

      M1.V1hveto_V1cat3

#. **Web page production**

   Produces a web page containing the figures of merit. The web pages are stored in the report/postprod directory. The command is `cwb_report `__ followed by the proper label, as in the previous steps:

   .. code-block:: bash

      cwb_report M1 create

   or

   .. code-block:: bash

      cwb_report M1.V1hveto_V1cat3 create

   depending on whether the data quality was applied or not.

.. ====================================================================
.. post production parameters
.. ====================================================================

.. include:: pp_parameters.rst

.. ====================================================================
.. web pages
.. ====================================================================

.. include:: pp_webpages.rst

.. ====================================================================
.. FAD howto
.. ====================================================================

.. include:: fad_howto.rst

.. ====================================================================
.. node Background Example
.. ====================================================================

.. include:: bkg_example.rst

.. ====================================================================
.. node Simulation Example
.. ====================================================================

.. include:: sim_example.rst

.. ====================================================================
.. Example of a search for intermediate mass black hole binaries
.. ====================================================================

.. include:: imbhb_example.rst