BESIII Offline Software System#
Warning
These pages are under development and originate from the BOSS GitBook. The official documentation for BOSS can be found here.
Note
Feedback on these pages is very welcome! See Contribute for more details.
This website describes the use of the BESIII Offline Software System (BOSS). The pages started as a collection of notes, but now aim to serve several purposes:
Provide accessible and up-to-date tutorials on working with BOSS. These pages are written as step-by-step guides and are particularly aimed at beginners, but they also provide background information on what the software is doing.
Serve as an inventory of packages and libraries often used within BOSS. Ideally, this should allow analyzers to navigate through the tools that are already available.
Serve as a platform where analyzers can easily and continuously update their documentation.
Maintain an updated list of references to must-read web pages or literature on BESIII.
For all of the above, whatever your background or level, your feedback is vital: these tutorial pages need testing and improvement. More importantly, the more people contribute, the more these pages can serve as a reference and the more likely they are to remain up to date.
So if you read this and like the idea, have a look at the Contribute page! Contributions at all levels are highly appreciated.
Hint
If you do not have an IHEP networking account, it is better to check out the official Offline Software page of BESIII. For this, you in turn need to be a BESIII member and have an SSO account, which you can create here.
BOSS is only of use if you are a member of the BESIII collaboration and have access to its software. You can also have a look at the links in the section Further reading.
Contents of the tutorial pages
Here are shortcuts that you might want to take:
Getting started with BOSS. If you are not familiar with BOSS, it is best to start with this part of the tutorial. It will help you set up the BOSS environment in your account on the IHEP server ("install BOSS"), explain some basics of the package structure on which BOSS is built, and guide you through the process of submitting jobs.
Major BOSS packages. Here, you will find descriptions of some of the important BOSS packages used in initial event selection, most notably the RhopiAlg package. This section serves as an inventory of BOSS packages.
Physics at BESIII. An inventory of important physics principles behind data analysis at BESIII.
Todo
(These pages have not yet been written.)
Tips, Tricks, and Troubleshooting. These pages are used to collect problems that are frequently encountered when working with BOSS. As such, these notes are useful no matter your level. New suggestions are most welcome!
Introduction to BESIII#
The Beijing Spectrometer (BESIII) is a particle detector experiment situated at the Beijing Electron-Positron Collider (BEPCII). It is primarily designed to perform studies of charmonium and charm physics, light hadrons, the determination of the tau mass, and \(R\) scans at center-of-mass energies ranging from 2 to 5 GeV.
Todo
Elaborate or refer to official pages.
See for instance:
Physics Accomplishments and Future Prospects of the BES Experiments at the BEPC Collider (2016) https://arxiv.org/abs/1603.09431
Physics at BESIII (2009) https://arxiv.org/abs/0809.1869
BESIII White Paper (requires login, not yet published) https://docbes3.ihep.ac.cn/cgi-bin/DocDB/ShowDocument?docid=759
"Design and construction of the BESIII detector", Nucl. Instrum. Meth. A 614, 345 (2010). https://www.sciencedirect.com/science/article/pii/S0168900209023870
Output of the detector is analyzed using the BESIII Offline Software System (BOSS).
BOSS Tutorials#
Getting started with BOSS#
This part of the tutorial focuses on setting up your BOSS environment on the IHEP server. It is essential to follow these steps if you haven't already done so, but you can also just browse through them to see if you missed anything. These tutorial pages also aim to provide more context about what you are actually doing, so they can be useful even if you are not a beginner.
Contents#
The role of the IHEP server (lxslc), where we explain the structure of the IHEP server, how to access it, and go through the directories that are most important to BOSS.
What is BOSS? Here, we go through some of the key ingredients of the BOSS framework, such as CMT and Gaudi.
Setup of your BOSS environment. A step-by-step guide that explains how to "install" BOSS.
Set up a BOSS package, where we go through the mechanisms of CMT used to create, configure, and broadcast a BOSS package.
Running jobs. In this part, we will explain the boss.exe mechanism, used to run an analysis package as a job.
Summary. Finally, we give a practical overview of the steps you usually go through when debugging an analysis package and submitting a corresponding job.
The IHEP server (lxslc)#
Within BESIII, most analysis tasks are run on a server that is hosted by IHEP. The server is also where you will use BOSS. You will need to apply for an IHEP computing account to be able to log in.
Accessing the server#
The IHEP server runs on Scientific Linux CERN (SLC). The server offers several versions; usually, people use SLC5, SLC6, or SLC7. The domain names for these are of the form lxslc7.ihep.ac.cn, where the 7 in this case refers to SLC7. If you are running Linux or a Linux terminal, the server can easily be accessed using:
ssh -Y <your user name>@lxslc7.ihep.ac.cn
Here, the option -Y
ensures X11 forwarding, allowing you to open graphical
applications from the server.
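If you log in often, an entry in your OpenSSH client configuration can shorten the command. This is a generic OpenSSH sketch, not something BOSS-specific; the alias lxslc is just an example name:

```text
# ~/.ssh/config (OpenSSH client configuration; "lxslc" is an example alias)
Host lxslc
    HostName lxslc7.ihep.ac.cn
    User <your user name>
    ForwardX11 yes
```

With this in place, ssh lxslc is equivalent to the full command above.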
Note
If you don't like having to enter your password every time you log in, have a look at the section Key generation for SSH.
In Windows, there are some nice tools that allow you to access the server. First of all, to be able to use SSH, you will either have to use PuTTY or more extensive software like Xmanager. You can also just search for a Linux terminal for Windows. In addition, have a look at the (S)FTP client WinSCP. It allows you to easily navigate the file structure of the IHEP server and to quickly transfer, and even synchronize, files between the server and your own computer.
Note
Once in the server, you can switch to other versions of SLC using hep_container
. So
for instance, if you are in SLC7 (CentOS) and want to use SL6, you can use:
hep_container shell SL6
where shell
can be replaced with your shell of choice.
Important data paths#
Some other important directories for the BESIII Collaboration are the following:
- /cvmfs/bes3.ihep.ac.cn/bes3sw/Boss (also referred to as $BesArea)
- /bes3fs/offline/data/raw
- /besfs5/offline/data/randomtrg (random trigger data)
- /besfs3/offline/data/ and /besfs/offline/data/ (older versions)
- Reconstructed Monte Carlo sets (latest available version is 6.6.4):
  - /besfs2/offline/data/664-1/jpsi/09mc/dst (2009; 225M)
  - /besfs2/offline/data/664-1/jpsi/12mc/dst (2012; 200M)
  - /besfs2/offline/data/664-1/jpsi/12mc/grid/dst (2012; 800M)
  - (no reconstructed MC samples available yet for 2018)
These directories will be important later in this "tutorial".
Note
For the latest data file locations, see this page.
Data quota#
When you have logged in to the server, you usually start in your home (~) folder. Move to the root of the server (cd /) and you'll see that there are a large number of other directories. A few of these directories contain space that is assigned to your user account. Here is an overview:
| Path | Data quota | Max. number of files | Remark |
|---|---|---|---|
| | 200 MB | NA | |
| | 500 MB | NA | home (~) |
| | 50 GB | 300,000 | |
| | 200 MB | NA | |
| | 5 GB | 50,000 | no backup |
| | (500 GB) | NA | max. 2 weeks |
In practice, files remain on this server indefinitely. In fact, scratchfs seems to follow a less strict policy than the other folders.
Warning
Do not exceed these quotas! If you do, the folder whose quota you exceeded will be locked by the Computing Center after a few weeks, and it is quite a hassle to regain access.
Official information on the quota can be found here.
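To keep an eye on your usage, a generic du call works on any of these folders. This is just a sketch with a throwaway demo directory; the Computing Center may also provide dedicated quota tools:

```shell
# Create a small demo directory and report its total disk usage.
# Replace /tmp/quota_demo with e.g. "$HOME" to check your home folder.
mkdir -p /tmp/quota_demo
echo "some data" > /tmp/quota_demo/file.txt
du -sh /tmp/quota_demo
```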
What is BOSS?#
BOSS is the BESIII Offline Software System, with which all data from the BESIII detector is processed and analyzed. As a data analyzer, you will use BOSS to make an initial event selection and to write the collision information that is relevant for your analysis to an output ROOT file. In the final event selection, you then use that ROOT file to produce the plots relevant for your analysis.
In this section, we will discuss the three most important components that form BOSS:
The Gaudi Framework, which streamlines the algorithms used in analyses.
CMT, which is used to manage packages designed by different groups.
CVS, which is the version control system used to maintain BOSS.
BOSS has been built on several other external libraries. The source files and binaries
can be found here on the lxslc
server:
/cvmfs/bes3.ihep.ac.cn/bes3sw/ExternalLib/SLC6/ExternalLib
You can also have a look at the BOSS External Libraries repository and the documentation there.
The Gaudi Framework#
An event selection program usually consists of three steps:
Initialize. Here, you for instance load raw data and set some variables.
Execute. For each collision event, you for instance extract parameters from the tracks.
Finalize. You write the data you collected to an output file.
Gaudi utilizes that idea in the form of
an Algorithm
class.
Your analysis is defined by deriving from this class and specifying what you want to be
performed in the initialize
, execute
, and finalize
steps.
Note
For up to date tutorials about Gaudi, see
this GitBook by the LHCb collaboration.
A small warning: LHCb runs analysis through Python, while BESIII jobs are run through
boss.exe
. In addition, LHCb uses an extended version of the Algorithm
class,
called GaudiAlgorithm, so
the instructions cannot be replicated for BOSS.
Configuration Management Tool (CMT)#
The BOSS analysis framework is organized according to the so-called "package-oriented principle". The underlying idea is that a software framework developed by several people is best split up into several packages that are each developed independently or in subgroups.
The task of CMT is to streamline and check out different versions of these packages, that is, to name them automatically based on content and modifications and to connect the packages to each other (to manage dependencies). This is done in conjunction with CVS (see below). CMT additionally allows users to ascribe properties to the packages and their constituents.
See for more information:
Official website of CMT (partially French)
LHCb on CMT (historical page)
Concurrent Versions System (CVS)#
Packages and source code of BOSS are monitored and maintained by CVS. This is a revision control system comparable to Subversion and Git.
Set up your BOSS environment#
Warning
In its current version, this tutorial assumes you use a bash
terminal. It should work
for TC-shell as well, but if you experience problems, please visit Contribute or
click the edit or issue buttons above!
Tip
See the last section of this page for an overview of all commands. If you are in a very lazy mood, you can also check out the BOSS Starter Kit, which does this whole setup for you.
In this section, you will learn how to "install" BOSS. Since BOSS has already been compiled on the server, installing actually means setting up path variables in the bash shell. In short, your user account then "knows" where to locate BOSS and how to run it.
Set up the BOSS environment#
Step 1: Define your local install folder#
In this part of the tutorial, we will do two things: (1) set up the necessary references to BOSS and (2) prepare your workarea folder. You will be developing your own BOSS packages (mainly code for event selection) in this workarea folder. Next to your workarea, there will be a CMT folder (cmthome), which manages access to the BOSS installation. In the end, you will have a file structure like this:
/besfs5/users/$USER/boss/ (local install area)
  cmthome (manages access to BOSS)
  workarea (contains your analysis code)
    MyEventSelectionPackage (could be several packages)
    TestRelease (loads and checks essential BOSS packages)
    InstallArea (binaries and header files are collected here after compiling)
For the sake of making this tutorial work in a general setting, we will first define a
bash
variable here (you can just execute this command in bash
):
BOSS_INSTALL="/besfs5/users/${USER}/boss"
The above is equivalent to
BOSS_INSTALL=/besfs5/users/$USER/boss
Why the quotation marks ("...") and curly braces ({...})? It's just a good habit in bash scripting to avoid bugs and improve readability. The quotation marks ensure that we are storing a string and allow you to use spaces, while the curly braces clarify the extent of the variable name (USER in this case).
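The difference matters as soon as the variable is followed by text that could be part of a name. A quick illustration (DEMO_USER is just a stand-in variable for this sketch):

```shell
# With braces, bash knows the variable name ends at "}".
DEMO_USER="someone"
with_braces="/besfs5/users/${DEMO_USER}_boss"
# Without braces, bash looks for a variable called DEMO_USER_boss,
# which is unset, so the expansion is empty.
without_braces="/besfs5/users/$DEMO_USER_boss"
echo "$with_braces"     # /besfs5/users/someone_boss
echo "$without_braces"  # /besfs5/users/
```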
This variable points to the path that will contain your local "install" of BOSS. You can change what is between the quotation marks to whatever folder you prefer, in case you want your local BOSS install to be placed in some other path, for instance /ihepbatch/bes/$USER.
At this stage, you'll have to decide which version of BOSS you have to use. At the time of writing, version 7.0.5 is the latest stable version, though it could be that for your analysis you have to use data sets that were reconstructed with older versions of BOSS. Here, we'll stick with 7.0.5, but you can replace this number with whatever version you need.
For convenience, we'll again define the version number as a variable here.
BOSS_VERSION="7.0.5"
Tip
An overview of all BOSS versions and their release notes can be found here (requires login).
Step 2: Import environment scripts#
We first have to obtain some scripts that allow you to set up references to BOSS. This
is done by copying the cmthome
folder from the BOSS Software directory (which contains
all source code for BOSS) to your local install area:
mkdir -p "$BOSS_INSTALL/cmthome"
cd "$BOSS_INSTALL/cmthome"
cp -Rf /cvmfs/bes3.ihep.ac.cn/bes3sw/cmthome/cmthome-$BOSS_VERSION/* .
Note that we have omitted the version from the original folder name. You can choose to
keep that number as well, but here we chose to use the convention that cmthome
and
workarea
without a version number refer to the latest stable version of BOSS.
Step 3: Modify requirements
#
In cmthome*
, you now have to modify a file called requirements
, so that it handles
your username properly. We'll use the vi
editor here, but you can use whatever editor
you prefer:
vi requirements
The file contains the following lines:
macro WorkArea "/ihepbatch/bes/maqm/workarea"
path_remove CMTPATH "${WorkArea}"
path_prepend CMTPATH "${WorkArea}"
The first line needs to be modified so that the variable ${WorkArea} points to your workarea: replace the path between the quotation marks with the path to your workarea. In our case, the first line becomes:
macro WorkArea "/besfs5/users/$USER/boss/workarea"
What is this requirements
file actually?
A requirements
file is used by CMT and is written in a syntax that CMT understands.
For instance, path_remove lets CMT remove the value of "${WorkArea}" from the variable $CMTPATH (a colon-separated list!). Next, path_prepend prepends the value "${WorkArea}" back to that same $CMTPATH list.
The $CMTPATH
is an important variable for
CMT. It is
comparable to $PATH
in that it lists all directories that contain CMT packages. When
CMT searches, it will start by searching in the first directory listed under $CMTPATH
.
Since you want your own packages in your $WorkArea
to supersede those of the BOSS
installation, you path_prepend
it.
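These two statements have a rough shell analogue. The snippet below is purely illustrative (CMT does this internally, and the paths are made up), but it shows the remove-then-prepend idea on a colon-separated list:

```shell
# Start with a CMTPATH that already contains the workarea somewhere.
WorkArea="/besfs5/users/demo/boss/workarea"
CMTPATH="/cvmfs/bes3.ihep.ac.cn/bes3sw/Boss/7.0.5:$WorkArea"
# path_remove: drop any existing occurrence of $WorkArea ...
CMTPATH=$(echo "$CMTPATH" | tr ':' '\n' | grep -vx "$WorkArea" | paste -s -d ':' -)
# path_prepend: ... and put it back at the front, so it is searched first.
CMTPATH="$WorkArea:$CMTPATH"
echo "$CMTPATH"
```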
Step 4: Set references to BOSS#
Now you can use the scripts in cmthome
to set all references to BOSS at once, using:
source setupCMT.sh # set up the CMT environment
cmt config # initiates configuration
source setup.sh # sets path variables
Just to be sure, you can check whether the path variables have been set correctly:
echo $CMTPATH
If everything went well, it should print something like:
/besfs5/users/$USER/boss/workarea:
/cvmfs/bes3.ihep.ac.cn/bes3sw/Boss/7.0.5:
/cvmfs/bes3.ihep.ac.cn/bes3sw/ExternalLib/SLC6/ExternalLib/gaudi/GAUDI_v23r9:
/cvmfs/bes3.ihep.ac.cn/bes3sw/ExternalLib/SLC6/ExternalLib/LCGCMT/LCGCMT_65a
The paths listed here (separated by colons) will be used to look for packages
required by the requirements
files of packages (see
Set up a BOSS package). The first of these paths points to
your workarea
, the second to the BOSS version you use (also called $BesArea
), and
the rest point to external libraries such as
Gaudi.
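Since $CMTPATH is a single colon-separated string, a small pipe makes it easier to read. The value below is illustrative; on the server you would pipe your actual $CMTPATH:

```shell
# Print one path entry per line instead of one long colon-separated string
CMTPATH_DEMO="/besfs5/users/demo/boss/workarea:/cvmfs/bes3.ihep.ac.cn/bes3sw/Boss/7.0.5"
echo "$CMTPATH_DEMO" | tr ':' '\n'
```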
Step 5: Create a workarea
sub-folder#
As mentioned in Set up the BOSS environment, the
local install area contains a workarea
folder next to the cmthome
folder we have
been using so far. In our case, it will be:
mkdir -p "${BOSS_INSTALL}/workarea"
We'll get back to the workarea
folder when we
Set up a BOSS package.
Step 6: Implement the TestRelease
package#
BOSS is built up of a large number of packages, such as VertexFit
. Your local account
needs to load the essential ones in order for you to be able to run the boss.exe
executable. For this, all versions of BOSS come with the TestRelease
package. This
package helps you to load those essential packages.
Copy the latest TestRelease
package from the $BesArea
(where the source code of the
BOSS version you chose is located) to your workarea
:
cd $BOSS_INSTALL/workarea
cp -Rf $BesArea/TestRelease .
Then move into the cmt
folder that comes with it and source the scripts in there:
cd TestRelease/TestRelease-*/cmt
cmt broadcast # load all packages to which TestRelease refers
cmt config # perform setup and cleanup scripts
cmt broadcast make # build executables
source setup.sh # set bash variables
Step 7: Test BOSS using boss.exe
#
To test whether everything went correctly, you can try to run BOSS:
boss.exe
It should result in a (trivial) error message like this:
BOSS version: 7.0.5
************** BESIII Collaboration **************
the jobOptions file is: jobOptions.txt
ERROR! the jobOptions file is empty!
If not, something went wrong and you should carefully recheck what you did in the above steps.
Step 8: Modify your .bashrc
#
In order to have the references to BOSS loaded automatically every time you log in on
the server, we can add some of the steps we did above to your bash
profile
(.bash_profile
) and run commands file (.bashrc
).
First, add the following lines to your bash profile (use vi ~/.bash_profile
):
if [[ -f ~/.bashrc ]]; then
source ~/.bashrc
fi
These lines force the server to source your .bashrc
run commands file when you log in.
In that file, you should add the following lines:
export BOSS_INSTALL="/besfs5/users/${USER}/boss"
export BOSS_VERSION="7.0.5"
CMTHOME="/cvmfs/bes3.ihep.ac.cn/bes3sw/cmthome/cmthome-${BOSS_VERSION}"
source "${BOSS_INSTALL}/cmthome/setupCMT.sh"
source "${BOSS_INSTALL}/cmthome/setup.sh"
source "${BOSS_INSTALL}/workarea/TestRelease/TestRelease-"*"/cmt/setup.sh"
export PATH=$PATH:/afs/ihep.ac.cn/soft/common/sysgroup/hep_job/bin/
Notice that the commands we used in the previous steps appear here again. The last line allows you to submit BOSS jobs to the "queue" (using the hep_sub command); for now, don't worry about what this means.
To reload the run commands, either just log in again or use source ~/.bashrc
.
Summary of commands#
The following summarizes all commands required to âinstallâ BOSS on lxslc
on your IHEP
user account. If you don't know what you are doing, go through the sections above to
understand whatâs going on here.
BOSS_INSTALL=/besfs5/users/$USER/boss
BOSS_VERSION=7.0.5
mkdir -p $BOSS_INSTALL/cmthome
cd $BOSS_INSTALL/cmthome
cp -Rf /cvmfs/bes3.ihep.ac.cn/bes3sw/cmthome/cmthome-$BOSS_VERSION/* .
vi requirements
Now uncomment and change the lines containing WorkArea
to
/besfs5/users/$USER/boss/workarea
. Then:
source setupCMT.sh
cmt config
source setup.sh
mkdir -p $BOSS_INSTALL/workarea
cd $BOSS_INSTALL/workarea
cp -Rf $BesArea/TestRelease .
cd TestRelease/TestRelease-*/cmt
cmt broadcast # load all packages to which TestRelease refers
cmt config # perform setup and cleanup scripts
cmt broadcast make # build executables
source setup.sh # set bash variables
If you want, you can add the source commands above to your .bash_profile so that the BOSS setup scripts are sourced automatically each time you log in. In simple copy-paste commands:
OUT_FILE=~/.bash_profile
echo >> $OUT_FILE
echo "export BOSS_INSTALL=/besfs5/users/$USER/boss" >> $OUT_FILE
echo "source \$BOSS_INSTALL/cmthome/setupCMT.sh" >> $OUT_FILE
echo "source \$BOSS_INSTALL/cmthome/setup.sh" >> $OUT_FILE
echo "source \$BOSS_INSTALL/workarea/TestRelease/TestRelease-*/cmt/setup.sh" >> $OUT_FILE
echo "export PATH=\$PATH:/afs/ihep.ac.cn/soft/common/sysgroup/hep_job/bin" >> $OUT_FILE
Set up a BOSS package#
How to set up a BOSS package?#
Now that you have configured your BOSS work area, you can start developing packages. In theory, you can start from scratch. We'll have a short look at that procedure here, because it gives some insight into the default structure of a package in CMT. After that, we can look into some tutorial packages.
Todo
The tutorial packages are to be developed soon; see the BOSS_Tutorials repository. See the Shanghai conference for updates on this matter.
Structure of a default CMT package#
As explained in Configuration Management Tool (CMT), BOSS is organized through packages. Packages are components of the entire BOSS framework on which individuals like you work independently. Each package can itself have several versions, which you maintain through CMT.
To create an empty package (with a default format), use the following command:
cmt create MyFirstPackage MyFirstPackage-00-00-00
Here, the name MyFirstPackage
is just an example name of the package. The name will be
used as the folder name as well. The second string is the so-called tag of the
package. Within BESIII,
the convention is that the tag is just the package name followed by 6 digits:
-<major id>-<minor id>-<patch id>
. These digits should increase along with changes you
make. Increase the:
- patch id if you only made some simple bug fixes that don't change the interface (.h header file);
- minor id if you only made changes that are backward compatible, such as new functionality;
- major id if you modified the interface (header file), which requires you to completely recompile the package.
For more information on this numbering scheme, read more about semantic versioning here (many languages available). The above only becomes relevant when you start developing packages, so you can forget about it for now.
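As a small illustration of the scheme, here is a hypothetical shell helper (not part of BOSS or CMT) that bumps the patch id of a tag:

```shell
# bump_patch: increment the last two-digit field of a package tag,
# e.g. MyFirstPackage-00-00-00 -> MyFirstPackage-00-00-01.
bump_patch() {
    tag="$1"
    base="${tag%-*}"      # everything up to the last dash
    patch="${tag##*-}"    # the last two-digit field
    # Strip one leading zero so the shell does not read "09" as octal.
    printf '%s-%02d\n' "$base" "$(( ${patch#0} + 1 ))"
}
bump_patch "MyFirstPackage-00-00-00"   # MyFirstPackage-00-00-01
```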
The result of the above command is a new folder, which we'll navigate into:
cd MyFirstPackage/MyFirstPackage-00-00-00
Note that the folder structure MyFirstPackage/MyFirstPackage-00-00-00 is required for cmt to work properly within BOSS. If you don't have a sub-folder with a version string as above, cmt broadcast won't work!
Within this folder, you see the core of a default CMT package:
cmt: a folder that contains all files necessary for the administration of the package through CMT. There are 6 files:
  - cleanup.csh: a tcsh script that allows you to clean all installation files of the package (for instance, useful when you are moving to a new version).
  - cleanup.sh: the same as cleanup.csh, but in bash shell script format.
  - Makefile: a file that is necessary for compilation through make/cmake/gmake.
  - requirements: the most important file! Here, you define which other packages within BOSS your own package requires (it defines the dependencies). You can have a closer look at this file in the TestRelease example package or on this page to see how it is ordinarily formatted.
  - setup.csh: another important file. It is used when "broadcasting" your package.
  - setup.sh: the same as setup.csh, but in bash shell script format.
src: an empty folder that will hold your C++ source code (.cxx files). Optionally, corresponding headers of these files are usually placed in a folder called share, but this folder is not generated by default.
For more information, see this nice introduction to CMT.
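To give an impression of the syntax, here is a hypothetical requirements sketch. The package and pattern names are illustrative only; the actual use statements depend entirely on what your package needs:

```text
package MyFirstPackage

# Declare dependencies on other CMT packages (illustrative names)
use GaudiInterface  GaudiInterface-*  External

# Build a library from the C++ sources in src/
library MyFirstPackage *.cxx
apply_pattern component_library library=MyFirstPackage
```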
Additional files you should create#
In addition to the default files above, it is advised that you also create the following files/directories:
- A subdirectory with the name of your package. In our case, it should be called MyFirstPackage.
- A subdirectory named test, which you use for private testing of your package.
- A subdirectory named doc for documentation files.
- A subdirectory named share for platform-independent configuration files, scripts, etc.
- A file named README that briefly describes the purpose and context of the package.
- A file named ChangeLog that contains a record of the changes.
The above is based on the official BOSS page on how to create a new package (minimal explanations).
Origin of BOSS in Gaudi
From here on, you can develop a package from scratch. For the basics of how to follow
the guidelines of the BOSS framework (which is based on Gaudi), see
this Hello World
example for Gaudi.
Updating a package#
Whenever you are planning to modify the code in your package (particularly the header code in MyFirstPackage and the source code in src), it is best to first make a copy of the latest version. You can then safely modify things in this copy and later use CMT to properly tag this new version.
Copy and rename#
First create a copy (of course, you'll have to replace the names here):
cd MyFirstPackage
cp -fR MyFirstPackage-00-00-00 MyFirstPackage-00-00-01
Now, imagine you have modified the interface of the package in its header files. This,
according to the
BOSS version naming convention,
requires you to modify the major id
. So you will have to rename the folder of the
package:
mv MyFirstPackage-00-00-01 MyFirstPackage-01-00-00
Tag your version using CMT#
Finally, it is time to use CMT to tag this new version. The thing is, simply renaming the package is not sufficient: files like setup.sh need to be modified as well. Luckily, CMT does this for you automatically. First, go into the cmt
folder of your
new package:
cd MyFirstPackage-01-00-00/cmt
Now create new CMT setup and cleanup scripts using:
cmt config
If you for instance open the setup.sh
file you will see that it has deduced the new
version number from the folder name.
Build package#
Now build the executables from the source code:
make
It is in this step that you "tell" CMT which version of your package to use. First of all, object files (.o) and dependency files (.d) are built in the package version folder (in a folder like x86_64-slc6-gcc46-opt). Then, symbolic links to the header files of your package are placed in a sub-folder called InstallArea in your workarea. It is these symbolic links that determine which version of your package BOSS uses.
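You can check where such a symbolic link points with readlink. A self-contained sketch (the /tmp paths are made up for the demo; on the server you would inspect the links under InstallArea):

```shell
# Build a fake "InstallArea"-style link in /tmp and inspect it.
mkdir -p /tmp/symlink_demo/MyFirstPackage-01-00-00
ln -sfn /tmp/symlink_demo/MyFirstPackage-01-00-00 /tmp/symlink_demo/current
readlink /tmp/symlink_demo/current   # /tmp/symlink_demo/MyFirstPackage-01-00-00
```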
At this stage, you should verify in the terminal output whether your code is actually
built correctly. If not, go through your cxx
and h
files.
Make package accessible to CMT#
If it does build correctly, you can make the package accessible to BOSS using:
source setup.sh
This sets certain bash
variables so that BOSS will use your version of this package.
One of these variables is called $<PACKAGE_NAME>ROOT
and can be used to call your
package in a job options file (see for example $RHOPIALGROOT
in
this template).
Congratulations, you have created an update of your package!
Remark on TestRelease
#
As mentioned in Step 3: Modify requirements, when we were modifying the requirements of the BOSS environment, CMT will use the first occurrence of a package in the $CMTPATH. That's why we used path_prepend to add your BOSS workarea to the $CMTPATH: in case of a name conflict between a package in the $BesArea and one in your workarea, CMT will use the one in your workarea.
Just to be sure, while modifying and debugging your package, you can do the entire build-and-source procedure above in one go, using:
cmt config
source setup.sh
make
BESIII has some documentation on working with CMT available here. It seems, however, that you need special admission rights to CVS to successfully perform these steps. The documentation is therefore probably outdated.
Compare package output
Another reason for working with a copy of the old version of your package is that you can still check out and run that old version (just repeat the above procedure within the folder of that old version). This allows you to run the same analysis (see Running jobs) again in the old package so that you can compare the output. Making sure that structural updates of software components still result in the same output is a vital part of software development!
Adding packages to BOSS#
Todo
Go through Chinese documentation and this page and write out.
Note
It seems special access rights are needed for this procedure, so these procedures have not yet been tested.
Summary#
Whenever you have created or modified a package, set it up using:
cd cmt # navigate into its cmt folder
cmt config # OPTIONAL: reset the package
source setup.sh # set bash variables for this package
make # compile the source code
If this package is a Gaudi algorithm, you can run it as a BOSS job.
Example packages#
Within BOSS, there are already a few âexampleâ packages available. All of these are accessible through the so-called TestRelease package, which will be described and set up first. We then focus on one of its main dependencies: the RhopiAlg algorithm. Within BESIII, this package is typically used as an example for selecting events and usually forms the start of your research.
The TestRelease package#
The TestRelease
package is used to run certain basic packages that are already
available within BOSS. The original source code is located here:
/cvmfs/bes3.ihep.ac.cn/bes3sw/Boss/$BOSS_VERSION/TestRelease
If you followed the tutorial steps for installing BOSS, you can find
TestRelease
in your local install area under $BOSS_INSTALL/workarea/TestRelease
. If
you move into the latest version (probably TestRelease-00-00-86
), you can see (using
ls
) that it contains the following folders:
- cmt: the Configuration Management Tool that you will use to connect to BOSS.
- CVS: a folder used for version management.
- run: contains some example jobOptions that can be run with boss.exe.
We can set up the TestRelease by going into cmt and "broadcasting" to BOSS from there:
cd cmt
cmt broadcast # connect your workarea to BOSS
cmt config # perform setup and cleanup scripts
source setup.sh # set bash variables
cmt broadcast make # build and connect executables to BOSS
The term broadcast is important here: as opposed to config, broadcast will first compile all the required packages and then the package itself. The idea of TestRelease is that you make it require the packages you are interested in, so that, if you broadcast it, all these dependencies are compiled.
We have now initialized the package, so that you can run it in BOSS from the run
folder. This is done using boss.exe
:
cd ../run
boss.exe jobOptions_sim.txt
which, in this case, will run a Monte Carlo simulation.
Note that, in Step 8: Modify your .bashrc when we set up the workarea, we added a line
source setup.sh
to the .bashrc
. This ensures that the TestRelease
package is
loaded every time you log in, so you won't have to do this yourself every time.
BOSS Example Packages#
Physics-related example packages are described here. Within BESIII, RhopiAlg and PipiJpsiAlg are commonly used as examples for initial event selection.
Tip
BOSS Tutorials are under development and can be found in the BOSS_Tutorials repository.
Running jobs#
Todo
Write section about job submission through macros.
Particle physicists perform analyses either on data from measurements or on data from Monte Carlo simulations. In BOSS, it is possible to generate your own Monte Carlo simulations and to treat their output as ordinary data. There are therefore three basic steps in running a Monte Carlo job in BOSS:
- sim: you perform a Monte Carlo simulation and generate a raw data file (rtraw).
- rec: you reconstruct particle tracks from the raw data and write out a reconstructed data file (dst).
- ana: you analyze the reconstructed tracks and generate a CERN ROOT file containing trees that describe event and track variables (root).
When you are analyzing measurement data, you won't have to perform steps 1 and 2: the BESIII collaboration reconstructs all data samples whenever a new version of BOSS is released. (See Organization of the IHEP server, under "Reconstructed data sets", for where these files are located.)
The steps are performed from jobOptions*.txt
files of your own package in your work
area. What is a job options file? Job options contain parameters that are loaded by the
algorithm of your package at run-time (have a look at declareProperty
in the
RhopiAlg). These parameters can be an
output file name, certain cuts, boolean switches for whether or not to write certain
NTuples, etc.
A job is run using the boss.exe
command, with the path to a job option file as
argument. You can use the example job option files in TestRelease
as a try:
cd "$TESTRELEASEROOT/run/"
boss.exe jobOptions_sim.txt
boss.exe jobOptions_rec.txt
boss.exe jobOptions_ana_rhopi.txt
This is essentially it! Of course, for your own analysis, you will have to tweak the
parameters in these jobOptions_*.txt
files and in TestRelease
to integrate and run
your own packages.
In the following, we will go through some extra tricks that you will need to master in order to do computationally intensive analyses using BOSS.
Analyzing all events
In data analysis, you usually want to use all available events: cuts are applied to get rid of events you don't want. It is therefore better to use -1, which stands for "all events", as the maximum number of events in ApplicationMgr.EvtMax.
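In a job options file this looks as follows (a minimal sketch; ApplicationMgr.EvtMax is the property name used in the TestRelease job option files):

```
// Process all events in the input file
ApplicationMgr.EvtMax = -1;
```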
Submitting a job#
The TestRelease package typically simulates, reconstructs, and analyzes only a few hundred events. For serious work, you will have to generate thousands of events, and this will take a long time. You can therefore submit your job to a so-called "queue". For this, there are two options: either you submit it using the command hep_sub or using the command boss.condor. The latter is easiest: you can use it just like boss.exe. With hep_sub, however, you essentially forward a shell script to the queue, which then executes the commands in it. So you first put the commands for your job in a shell script (.sh). Let's say you make a shell script test.sh in the run folder that looks like this:
#!/bin/bash
boss.exe jobOptions_sim.txt
The first line clarifies that you use bash; the second does what you did when running a job interactively: calling boss.exe. Of course, you can make this script execute whatever bash commands you want.
The queue (hep_sub) executes bash scripts using ./, not the command bash. You therefore have to make the script executable. This is done with chmod +x <your_script>.sh ("change mode: executable").
Now you can submit the shell script to the queue using:
hep_sub -g physics test.sh
and your job will be executed by the computing centre. Here, the option -g tells the scheduler that you are from the physics group. A (more or less) equivalent of this command is boss.condor test.sh.
You can check whether the job is (still) running in the queue using:
hep_q -u $USER
Note that hep_q
would list all jobs from all users. The first column of the table you
see here (if you have submitted any jobs) is the job ID. If you have made some mistake
in your analysis code, you can use this ID to remove a job, like this:
hep_rm 26345898.0
Alternatively, you can remove all your jobs from the queue using hep_rm -a
.
Splitting up jobs#
Jobs that take a long time to be executed in the queue will be killed by the server. It is therefore recommended that you work with a maximum of 10,000 events per job when you perform Monte Carlo simulations (the sim step consumes much computing power). Of course, you will want to work with much larger data samples, so you will have to submit parallel jobs. This can be done by writing different jobOptions*.txt files, in which you modify the input/output files and the random seed number.
You can do all this by hand, but it is much more convenient to generate these files with some script (whether C++, bash, or tcsh) from a certain template file. In the template, you replace the specific paths and seed number with generic tokens like INPUT_FILE, OUTPUT_FILE, and RANDOM_SEED. The script then substitutes each of these unique tokens with a path or a unique number. Have a look at the awk and sed commands to get the idea.
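As a sketch of this idea (the token names and file names here are illustrative, not a BOSS convention):

```shell
#!/bin/bash
# Create a template job options file containing placeholder tokens
cat > template.txt << 'EOF'
RandomSeed = RANDOM_SEED;
InputFile  = "INPUT_FILE";
EOF

# Generate one job options file per job,
# each with a unique seed and input file
for i in 1 2 3; do
  sed -e "s/RANDOM_SEED/${i}/" \
      -e "s|INPUT_FILE|sample_${i}.rtraw|" \
      template.txt > "jobOptions_sim_${i}.txt"
done
```

Each generated file can then be submitted as a separate job.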
Splitting scripts using the BOSS Job Submitter#
Summary#
Todo
Write summary of the steps you go through when updating and performing an analysis job.
As opposed to Key aspects of analysis at BESIII, this summary is to be a more practical step-by-step guide.
1. Compile
2. Generate job files
3. Test using boss.exe
4. Submit to the queue
5. Perform your final event selection and/or analysis of the output of the initial event selection
Data sets#
This section explains the different data sets that can be processed using BOSS. There are essentially three types of data files of relevance here:

- Raw data: files containing raw recorded data from either real measurements at BESIII or from Monte Carlo simulations. Extension: rtraw or raw (see here for how to convert them into each other).
- Reconstructed data: raw files are too large to be handled in an analysis, so the recorded data first has to be converted to track data. The output of this reconstruction step is a DST file. Extension: dst.
- Output from the initial event selection: in the analysis step, you analyze the events contained in the DST files. The output of that analysis is stored to a TTree in a ROOT file. Extension: root.
Locations on the IHEP Server#
Inventories of the latest file locations are found on the Offline Software pages (requires login):
In general, all data files are located in the BESIII file system (besfs5) folders on the IHEP Server. There are a few different folders, because the files have been distributed over different servers.

- besfs5: contains user files only
- besfs2: a symbolic link that points to /besfs3/offline/data/besfs2; contains inclusive Monte Carlo samples
- besfs3: file system that contains files of the runs before 2018
- bes3fs: a newer file system that contains for instance 2018 data

Within these folders, the data files are located under offline/data (e.g. /besfs3/offline/data) and then under the BOSS version with which these files have been created.
Warning
Make sure you do not confuse the numbers when navigating these paths.
Querying for data sets#
On lxslc#
You can find all information about the data sets through MySQL on lxslc
. To open the
database, type:
mysql --user=guest --password=guestpass -h bes3db2.ihep.ac.cn offlinedb
Now it's a matter of searching through the database with MySQL query commands. Some examples (in this case to find the exact energies of the data set):
show tables;
select * from MeasuredEcms where sample = "4360";
select * from MeasuredEcms2 limit 20;
For a reference of MySQL queries, see here.
Note that there are a few BOSS packages that allow you to fetch data from the MySQL database from the C++ code. The main one is DatabaseSvc. For fetching exact beam energy values, use MeasuredEcmsSvc.
Web interface#
Alternatively, you can have a look at this page
http://bes3db.ihep.ac.cn/online/webserver/runinfo/runparams.php
for an overview of run numbers et cetera.
BESIII measurements#
\(J/\psi\) samples#
| Year | Round | \(N_{J/\psi}\) (\(\times 10^6\)) | Uncertainty | Location |
|---|---|---|---|---|
| 2009 | | \(233.7 \pm 1.4\) | \(0.63\%\) | |
| 2012 | | \(1\,086.9 \pm 6.0\) | \(0.55\%\) | |
| 2017–2018 | | \(4.6 \times 10^3\) | (\(0.53\%\)) | |
| 2018–2019 | | \(4.1 \times 10^3\) | (\(0.53\%\)) | |
See Chinese Physics C Vol. 41, No. 1 (2017) 013001 for the calculation of the number of \(J/\psi\) events in the 2009 and 2012 data samples. The total number for both is \(N_{J/\psi} = (1310.6\pm7.0) \times 10^6\), which corresponds to a \(0.53\%\) systematic uncertainty.
See an indication of the number of \(J/\psi\) events in this presentation (requires login). Systematic uncertainty is not yet determined, but could be comparable.
Inclusive Monte Carlo samples#
Reconstructed \(J/\psi\) samples#
The latest \(J/\psi\) samples have been reconstructed with BOSS 6.6.4. They are located here:
InclJpsi="/besfs3/offline/data/besfs2/offline/data/664-1/jpsi/"
| Year | Round | Inclusive MC | Sub-folder |
|---|---|---|---|
| 2009 | | \(225 \times 10^6\) | |
| 2012 | | \(1.0 \times 10^9\) | |
| 2017–2018 | | \(0.3 \times 10^9\) | hasn't been reconstructed yet |
| 2018–2019 | | \(10 \times 10^9\) | hasn't been reconstructed yet |
See an indication of the number of \(J/\psi\) events in this presentation (requires login).
Generating exclusive Monte Carlo data#
Todo
Write contextual introduction.
Designing decay cards#
When generating a Monte Carlo sample, a decay card (usual extension: dec
) can be used
to overwrite the decay of certain particles. This allows you to generate a Monte Carlo
sample that only contains events with the signal topology which you are studying.
A decay card is a text file that lists certain particle decays. If your decay card specifies the decay channels of a certain particle, the "normal" decay channels (those listed in the PDG) for that particle will be overwritten. The decay channels of one particle should follow this pattern:
Decay <particle name>
<branching fraction 1> <daughter 1a> <daughter 1b> <generator> <parameters>;
<branching fraction 2> <daughter 2a> <daughter 2b> <generator> <parameters>;
Enddecay
Here, there are only two decay channels, but you can add more of course. Note that a decay card has to end with the line:
End
Warning
Due to a small bug in BOSS, a decay card has to end with an empty line, otherwise the simulation job will crash.
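As a minimal sketch, a card that forces \(\rho^+ \rightarrow \pi^+\pi^0\) with \(\pi^0 \rightarrow \gamma\gamma\) could look as follows (the particle names and the generic PHSP phase-space generator follow EvtGen conventions; check DECAY.dec and pdt.table for the exact names and suitable generator models in your BOSS version):

```
Decay rho+
1.0 pi+ pi0 PHSP;
Enddecay

Decay pi0
1.0 gamma gamma PHSP;
Enddecay

End
```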
If you do not specify the decay channels of a certain particle, the decay card called
DECAY.dec
in the BesEvtGen
package will be used. This file essentially follows the
PDG listings. In addition, definitions of particles (including their physical widths)
can be found in the file pdt.table
. Both files are located here (in the case of BOSS
7.0.4):
/cvmfs/bes3.ihep.ac.cn/bes3sw/Boss/7.0.4/Generator/BesEvtGen/BesEvtGen-00-03-98/share
Initial event selection#
Final event selection#
Warning
These pages will sketch some usual procedures of analysis after an initial event selection has been performed using BOSS. The final event selection is usually made outside of BOSS and has therefore not been widely documented.
The aim of these pages is to provide a clear overview and motivation of the key aspects that go into the more practical details behind memos and publications by BESIII.
Analyzing background#
Inclusive Monte Carlo simulations#
Analysis#
Decay topologies
Background fits
Sideband plots
Background fits#
Sideband plots#
Error studies#
Analyzing signal shape#
Exclusive Monte Carlo simulations#
Analysis#
Signal width
Rough estimate based on statistics
Performing fits
Dalitz plots
Introduction#
The aim of this part of the website is to provide accessible instructions for the usage of certain packages, as well as to provide a platform where documentation on all packages is collected. A vast amount of tools is already available, but an overview of these packages does not yet exist, let alone an overview that can be continuously contributed to and updated by any interested BESIII member.
Following the Software Guide of the BESIII Offline Software Group (login required), the packages are categorized in the following four categories:
Contributions to these pages are vital, as there are continuous improvements to the BOSS analysis framework.
Generation#
Warning
Work-in-progress
This page of the tutorial is to be based on this page. It is to contain a short description of the most commonly used generator packages. Understanding how generators work is crucial for designing your physics analysis properly.
Simulation#
Warning
Work-in-progress
This part of the tutorial is to be based on this page.
The headers below are notes only.
Documentation already existing (limited and required login):
https://docbes3.ihep.ac.cn/~offlinesoftware/index.php/MC_truth
See also class documentation for McParticle (header and cxx file).
Reconstruction#
Work-in-progress
This part of the tutorial is to be based on this page.
Analysis#
BOSS Example Packages#
RhopiAlg Example Package#
What does this example package teach?#
The RhopiAlg package is the starting point for beginners using BOSS. It teaches:

- The usual Gaudi Algorithm structure of the initialize, execute, and finalize steps.
- The use of logging using MsgStream.
- Declaring and booking NTuple::Tuples (the eventual TTree) and adding items (the eventual branches) using NTuple::Tuple::addItem.
- Accessing data of charged and neutral tracks using the EvtRecEvent and EvtRecTrack classes.
- Identifying particles (PID) using the ParticleID class.
- Making a selection of these tracks (using iterators) over which you loop again later.
- Applying a Kalman kinematic fit with constraints and a resonance using KalmanKinematicFit.
- Computing invariant masses using HepLorentzVector from the CLHEP library.
- Computing the angle between a photon and a pion.
Reconstructed data from the detectors is accessed through the classes in the table below. This package only makes use of the MDC, EMC, and TOF detectors.

| Detector | Description | Accessed through |
|---|---|---|
| MDC | Main Drift Chamber | |
| MDC | \(dE/dx\) info | |
| MDC | Kalman track | |
| TOF | Time-of-Flight | |
| EMC | EM-Calorimeter | |
| MUC | Muon Chamber | |
| <> | Extension through all | |
Introduction#
One of the basic physics analysis packages already provided in BOSS is the RhopiAlg package. Within BESIII, almost everyone knows it, because it is used as the starting point for developing your own initial event selection packages. RhopiAlg is an illustration of a typical procedure in particle physics: reconstructing a decayed particle. For this, you will have to apply cuts on measured parameters, and this package illustrates that procedure.
The RhopiAlg package analyzes the decay of the \(\rho(770)\) meson. As you can see in the PDG listing for this meson, the \(\rho(770)\) meson predominantly decays through \(\rho\rightarrow\pi\pi\) (almost \(100\%\)), whether it concerns a \(\rho^+\), \(\rho^0\), or \(\rho^-\). This means that we can reconstruct this meson purely through this 2-particle decay mode.
Additionally, when we consider the charged \(\rho^\pm\) mesons, one of the decay products is the neutral pion: \(\rho^\pm \rightarrow \pi^\pm\pi^0\). This meson is again neutral and cannot be detected, so has to be reconstructed. But here again, there is one dominant decay mode: \(\pi^0 \rightarrow \gamma\gamma\) (\(98.823 \pm 0.034 \%\), see its PDG listing). This means that we can reconstruct the \(\rho^\pm\) meson almost exclusively through its \(\rho^\pm \rightarrow \pi^\pm\pi^0 \rightarrow \pi^\pm\gamma\gamma\) decay channel.
In reconstructing \(\pi^0\) and \(\rho^0\), you will run into another common phenomenon in hadron research: the width of the decaying particle. The width of \(\rho^0\) is much wider than \(\pi^0\) and therefore results in interesting differences in the eventual invariant mass spectra. In the final event selection, you will for instance see that a fit of the invariant mass peaks results in different widths.
Where to find it?#
The original RhopiAlg package (version 0.0.23) is located here:
/cvmfs/bes3.ihep.ac.cn/bes3sw/Boss/$BOSSVERSION/Analysis/Physics/RhopiAlg/RhopiAlg-00-00-23
You can also find the RhopiAlg package in the BOSS Tutorials repository.
How to compile and run?#
See the summary of Set up a BOSS package and Running jobs. An example of an analysis job option file for RhopiAlg is found under run in the TestRelease package.
The parameter EventCnvSvc.digiRootInputFile
lists the input files. This is currently
rhopi.dst
(namely the output after running the jobOptions_rec.txt
job), but you can
also feed it other DST files, such as the ones reconstructed from BESIII
data or MC samples.
Description of source code#
Warning
The sections below are incomplete and it is not yet decided whether it is useful to describe the source code in words.
Declaring and defining properties like cuts#
See the header (.h) file for declarations and the source (.cxx) code for definitions of cuts.
Determining vertex position#
Writing properties#
Looping over charged and neutral tracks#
Kalman kinematic \(n\)-constraints fit procedure#
- fit4c refers to the four constraints coming from the original \(\pi^0 \rightarrow \gamma\gamma\) meson (or other mesons, depending on the collision energy), namely the 4-momentum of the system (collision energy and sum of the 3-momenta). Note that the \(\chi^2_\text{red}\) of the fit is the same for any combination, as the four constraints are the same in each event.
- fit5c is used when an additional constraint is applied. In the RhopiAlg package, this fifth constraint refers to the constrained reconstruction of \(\rho^\pm \rightarrow \pi^\pm\pi^0 \rightarrow \pi^\pm\gamma\gamma\), namely the mass of the pion.
Cut flow#
Output root file#
Warning
General description of how to read the output ROOT file.
PipiJpsiAlg#
See full Doxygen documentation for the PipiJpsiAlg
on GitPages.
What does this example package teach?#
This example package analyzes \(\psi' \rightarrow \pi\pi J/\psi \rightarrow \pi\pi l l\) (di-lepton) events. In particular, it will teach you:

- How to access Monte Carlo truth from a DST file using Event::McParticle.
- How to store arrays to a TTree using NTuple::Array and NTuple::addIndexedItem. This is useful for storing e.g. an \(n\)-array of information for \(n\) tracks in an event. Here, the array is used to store Monte Carlo truth.
- How to distinguish muons from electrons using the energy of the EMC shower: electrons deposit more energy in the EMC.
Todo
Still has to be written.
This package introduces several concepts additional to RhopiAlg
.
Other packages#
Warning
Open for suggestions
Please get in touch if there are other BOSS packages that you find useful for your research and would like to recommend to others.
DDecayAlg#
The DDecayAlg
can be found here.
/cvmfs/bes3.ihep.ac.cn/bes3sw/Boss/7.0.4/BesExamples/DDecayAlg/
FSFilter#
Location:
/afs/ihep.ac.cn/users/r/remitche/Algorithms-7.0.3/FSFilter/FSFilter-00-00-00
DDecayAlg#
DDecayAlg is an algorithm used by BESIII to create NTuples for charm analysis (e.g. \(D\to K_S^0 h^+h^-\)). It is located in BesExamples of BOSS and mainly uses the DTagTool package to perform tagged analysis of D mesons.
DTagTool#
The algorithm first instantiates the DTagTool algorithm:
DTagTool dtagTool;
DTagTool
has information about tagged decays at BESIII, used at the
\(\psi(3770)\to D^0 \bar{D}^0\) decay mode. We can either look at the âsingle tagâ or
âstagâ which looks at the decay \(D^0(\bar{D}^0) \to f\) where we include both \(D^0\) and
\(\bar{D}^0\).
Let's take the \(K_{S}^{0} \pi^+ \pi^-\) decay, to which DTagTool assigns the decay mode "100":
EvtRecDTag * stag = dtagTool.findSTag(100);
This stag object now has the information relating to the candidate decay \(D\to K_S^0 \pi^+ \pi^-\), such as \(\Delta E = E_\text{beam} - E_D\):
deltaE = stag->deltaE();
or the tracks from the event:
tracks = stag->tracks();
BesDChain#
Location within BOSS ($BesArea
):
/cvmfs/bes3.ihep.ac.cn/bes3sw/Boss/7.0.4/Event/BesDChain/BesDChain-00-00-14
BaskeAnaTool#
"BaskeAnaTool" stands for a basket of useful analysis tools. You can use it to submit jobs to the computing servers, generate simulation jobs, check the job status, and check whether the jobs were successful according to the job log files. The package is based on Python and works independently of BOSS, but facilitates for instance MC simulation. The package can be obtained from GitHub: github.com/xxmawhu/BaskeAnaTool. Before using the package, have a look at its README. There is also a Chinese version.
How to install#
First, you need to clone the repository from GitHub:
git clone https://github.com/xxmawhu/BaskeAnaTool.git
The environment configuration is set in setup.sh, which you need to source:
source BaskeAnaTool/setup.sh
For tcsh users, there is a setup.csh file achieving the same effect:
source BaskeAnaTool/setup.csh
What does the basket contain?#
Submitting jobs flexibly. For example, assume you are in a directory jobs and, after ls, you find many jobs that need to be submitted:
jobs_ana_001.txt jobs_ana_004.txt jobs_ana_007.txt jobs_ana_010.txt
jobs_ana_002.txt jobs_ana_005.txt jobs_ana_008.txt jobs_ana_011.txt
jobs_ana_003.txt jobs_ana_006.txt jobs_ana_009.txt jobs_ana_012.txt
Now, you only need one command:
Hepsub -txt *.txt
If the jobs are distributed over different directories under jobs, one command is still enough:
Hepsub -txt -r .
Don't forget the ".", which denotes the current directory. You can also specify the file type, execution method, and submission command:
Hepsub type="C, Cpp, cxx" exe="root -l -b -q" sub="hep_sub -g physics"
Look into github.com/xxmawhu/BaskeAnaTool for more details.
Doing MC simulation is quite flexible. The following command shows typical usage:
SimJpsi [decay.card] [number of events]
You can enjoy the physics and forget all the dirty bash scripts!
How to create DIY MC? Write the following into a file, for example doSim.py:
#!/usr/bin/env python
import SimAndRec
from SimAndRec import util

svc = SimAndRec.process("sim.txt", "rec.txt")
if len(util.getArv()) == 0:
    svc.Make()
    svc.Sub()
elif '-make' in util.getArv():
    svc.Make()
Then you can use doSim.py:
python doSim.py [decay.card] [number of events]
It's also recommended to put
alias SimDIY='python /path/to/doSim.py'
into your shell configuration file if you use doSim.py frequently. Look into BaskeAnaTool/SimAndRec/gen.py for a simpler way to generate your DIY command.
Generating and submitting typical BOSS event selection jobs. There is a class ana in the module Bes. Its main features are setJobOption(), addDataSet(), addcut(), make(), and sub(). You can find some examples in the directory BaskeAnaTool/tutorials. Run ana_Psi2S_inc.py to get a feeling for it.
TopoAna#
Note
Credit for the package goes to Zhou Xingyu
For more information, see the
corresponding paper on arXiv.
This package is an extremely helpful tool for analyzing the topologies of inclusive Monte Carlo simulations. Inclusive MC samples give valuable information about the background of your analysis, as they allow you to know the true contributions to that background. If you know which components the background consists of, you can:
try to make smart cuts to remove those background components;
use a particular function that describes that background component best when applying a fit to the real data.
The problem with inclusive samples, however, is that they can include thousands of decay
modes. The topoana
package allows you to make certain selections and to generate
tables that list frequencies of particles and decay modes that are of interest to you.
All versions of the package can be found here on the IHEP server:
/besfs5/users/zhouxy/tools/topoana
Preparing initial event selection#
The topoana
package has to be run over a ROOT file that you have to prepare yourself.
The ROOT file has to contain a TTree
with specific information of the Monte Carlo
truth:
- the run ID number
- the event ID number
- the number of particles in this event, which is necessary for loading the following arrays
- an array containing the PDG code for each track in this event
- an array containing the index of the mother of each track (if available)
You can design a procedure to write this MC truth information yourself, but you can also use one of the following methods:

- Add the MctruthForTopo algorithm package (see below) to the job options of your analysis.
- Go through the code of the MctruthForTopo algorithm and take over the relevant components in your own initial event selection package, so that you can implement it within your cut procedure.
- Use CreateMCtruthCollection and WriteMcTruthForTopoAna in the TrackSelector base algorithm.
The MctruthForTopo
package#
MctruthForTopo
is an example package that comes with topoana
. It can be used for
preparing a ROOT file sample that contains a TTree
as described above. See the
documentation of MctruthForTopo
for how these branches are typically called within
MctruthForTopo-00-00-06
.
| Version | Data type |
|---|---|
| | No selection: all |
| | Particles that don't come from a generator are rejected |
| | Specifically designed for \(J/\psi\) |
| | \(J/\psi\), but with bug fix for |
| | Designed for PID \(90022\) and \(80022\) (??) |
| | \(4,180\) MeV data |
See also decayFromGenerator
All versions of MctruthForTopo
can be found here on the IHEP server:
/besfs5/users/zhouxy/workarea/workarea-6.6.5/Analysis/Physics/MctruthForTopoAnaAlg
You may choose a different version of BOSS than 6.6.5
, the one used above. If you have
sourced one of these versions (using bash cmt/setup
), you can run it by adding the
following lines to your job options:
ApplicationMgr.DLLs += {"MctruthForTopoAnaAlg"};
ApplicationMgr.TopAlg += {"MctruthForTopoAna"};
Note: Using MctruthForTopoAna
is the quickest way to create a TTree
containing the
necessary data for topoana
, but it does not allow you to perform cuts: all the
events will be written to the TTree
and no cut will be applied.
Structure of the Event::McParticleCol collection#
The TTree containing Monte Carlo data that is needed for topoana is created by looping over the Event::McParticleCol in each event and writing out the branches described above. To gain a better understanding of what a package like MctruthForTopo does, let's have a look at the contents of the MC truth particle collection in one event:
| PDG code | Particle | Mother PDG | Mother |
|---|---|---|---|
| 23 | \(Z^0\) | | |
| 22 | \(\gamma\) | | |
| 4 | \(c\) | 23 | \(Z^0\) |
| -4 | \(\bar{c}\) | 23 | \(Z^0\) |
| 91 | | -4 | \(\bar{c}\) |
| 443 | \(J/\psi\) | | |
| 11 | \(e^-\) | | |
| 421 | \(D^0\) | 443 | \(J/\psi\) |
| 333 | \(\phi\) | 443 | \(J/\psi\) |
| -321 | \(K^-\) | 421 | \(D^0\) |
| 211 | \(\pi^+\) | 421 | \(D^0\) |
| 321 | \(K^+\) | 333 | \(\phi\) |
| -321 | \(K^-\) | 333 | \(\phi\) |
| -13 | \(\mu^+\) | 321 | \(K^+\) |
| 14 | \(\nu_\mu\) | 321 | \(K^+\) |
| -11 | \(e^+\) | -13 | \(\mu^+\) |
| 12 | \(\nu_e\) | -13 | \(\mu^+\) |
| -14 | \(\bar{\nu}_{\mu}\) | -13 | \(\mu^+\) |
A few remarks about what we see here:
- The structure of the decay chain is described by the index (see Event::McParticle::trackIndex). Each particle is labeled by this index and, if there is a mother particle, it is "linked" to its daughter by this index.
- The decay chain starts with index 0, a \(Z^0\) boson that emerges right after the \(e^+e^-\) collision and then decays into a \(c\bar{c}\) charm pair. In the simulation, this pair is taken to be a cluster (which has code 91) or a string (which has code 92).
- For TopoAna (or actually any physics analysis), we are only interested in what happens after the formation of the cluster. This is where the meson is created to which the beam energy is tuned, in this case \(J/\psi\). We therefore only store particles that come after either particle code 91 or 92; see MctruthForTopoAna::execute.
- From the remainder of the table, we can see that the rest of the decay chain becomes (a rather rare if not impossible decay):
The main takeaway is that topoana requires you to store the "track index" branch defined above with an offset: the first particle is to be the initial meson (e.g. \(J/\psi\)) with track index 0, so that you can use the mother index as an array index. You therefore need to subtract the initial meson's original index from the indices of the particles that come after it. In addition, the selection of MC truth particles is only to contain:

- Particles that result from the initial cluster or string, that is, everything that in this case comes after \(J/\psi\).
- Only particles that come from the generator. This means that they are not background simulated in the detectors and that they were included in the decay chain from the generator. (See Event::McParticle::decayFromGenerator.) In this case, this means that everything that comes after the decays of \(D^0\) and \(\phi\) is to be excluded, because the \(\mu^+\) and \(K^+\) decays take place outside the BESIII detector.
- Only particles that have a mother particle (i.e. that are not a primaryParticle).
In table format, with these conventions, the result that should be stored for the
topoana
package would be:
| Array index | Particle (PDG) | Mother index | Mother (PDG) |
|---|---|---|---|
| 0 | \(J/\psi\) (443) | | 91 |
| 1 | \(D^0\) (421) | 0 | \(J/\psi\) (443) |
| 2 | \(\phi\) (333) | 0 | \(J/\psi\) (443) |
| 3 | \(K^-\) (-321) | 1 | \(D^0\) (421) |
| 4 | \(\pi^+\) (211) | 1 | \(D^0\) (421) |
| 5 | \(K^+\) (321) | 2 | \(\phi\) (333) |
| 6 | \(K^-\) (-321) | 2 | \(\phi\) (333) |
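The selection and re-indexing described above can be sketched in a few lines of standalone Python (an illustration of the convention only, not code from topoana or MctruthForTopo; the event list mimics the example above, and setting the initial meson's mother index to 0 is a simplification — check MctruthForTopoAna::execute for the actual treatment):

```python
# Sketch: select MC truth particles and re-index mothers for topoana.
# Each particle: (track_index, pdg_code, mother_track_index, from_generator)
event = [
    (0, 23,   -1, False),  # Z0 (primary particle, dropped)
    (1, 91,    0, False),  # cluster
    (2, 443,   1, True),   # J/psi: first particle to be stored
    (3, 421,   2, True),   # D0
    (4, 333,   2, True),   # phi
    (5, -321,  3, True),   # K-
    (6, 211,   3, True),   # pi+
    (7, 321,   4, True),   # K+
    (8, -321,  4, True),   # K-
]

def select_for_topoana(particles):
    """Return (Pid, Midx) arrays following the offset convention."""
    # Keep only generator particles after the cluster/string (code 91/92)
    start = next(i for i, pdg, _, _ in particles if pdg in (91, 92)) + 1
    kept = [p for p in particles if p[0] >= start and p[3]]
    offset = kept[0][0]  # track index of the initial meson (e.g. J/psi)
    pid = [pdg for _, pdg, _, _ in kept]
    midx = [max(mother - offset, 0) for _, _, mother, _ in kept]
    return pid, midx

pid, midx = select_for_topoana(event)
# pid  -> [443, 421, 333, -321, 211, 321, -321]
# midx -> [0, 0, 0, 1, 1, 2, 2]
```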
Installing topoana#
Execute setup.sh and see the instructions there on how to source it. Having done this, you can use the command topoana.exe on the output generated in the previous step.
Format of a topoana card#
If you have
prepared a ROOT file
and installed topoana.exe, you can
analyze the output. The topoana
package will generate some tables containing
statistics of certain signal particles and signal decay modes. You can specify these
signal particles and branches through a topoana
card and run the analysis with the
command topoana.exe your_topoana.card
.
A topoana
card file (.card
extension) is a text file that defines the way in which
you execute topoana.exe
on your data set. In this file, you for instance specify the
input ROOT files that you want to analyze.
The syntax of the topoana card is slightly reminiscent of bash. Starting a line with:

- # means that the line is a comment and is therefore ignored;
- % means that the line represents a field.
An opening curly brace ({) following a % sign means that a field block is opened. The next line(s) contain the value(s) of that field. Close the block with a closing curly brace (}).
The following pages list all fields that can be used in your topoana
card:
required and
optional fields.
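Putting this together, a minimal card could look as follows (the field names correspond to the required fields described below; the input file path and output name are hypothetical):

```
% Names of input root files
{
    /path/to/your_mctruth_files_*.root
}
% Tree name
{
    MctruthForTopoAna
}
% Branch name of the number of particles
{
    Nmcps
}
% Branch name of the array of particle identifications
{
    Pid
}
% Branch name of the array of the mother indices of particles
{
    Midx
}
% Main name of output files
{
    topoana_result
}
```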
Tips on the results#
(From topoana
terminal output.)
- Statistics of the topologies are summarized in three types of files: pdf, tex, and txt. Although these are different formats, they contain the same information. The pdf file is the easiest to read; it has been converted from the tex file using the pdflatex command. If necessary, you can check the contents of the txt file as well (e.g. using text processing commands).
- Tags of the topologies are inserted in all the entries of the TTree for topoana in the output ROOT file(s). The ROOT files may have been split up, in which case you should load them using a TChain. Except for this, the TTree for topoana data of the output ROOT file is entirely the same as that of the input ROOT file(s). In addition, the topology tags are identical with those listed in the txt, tex, and pdf files.
Submitting a topoana.exe job#
Just like a BOSS job, you can submit a topoana
job to the queue. This is useful if
your data is extensive and you want to log out while the job is executed. Just write
your command in a bash
script like this:
{ topoana.exe your_topoana.card; } &> your_file.log
The redirect (&>) together with the curly braces ensures that all output (including warnings) is written to the log file (here, your_file.log).
Make sure that you make the bash
script executable using chmod +x your_bash_file.sh
.
You can then submit your job to the queue using:
hep_sub -g physics your_bash_file.sh
and keep an eye on your jobs using:
hep_q -u $USER
Required fields#
Names of input root files#
One file per line, without trailing characters such as commas, semicolons, or periods. Just like in the TChain::Add method, absolute paths, relative paths, and wildcards ([]?*) are supported.
Tree name#
Name of the TTree
that contains the MC truth data. Usually, this tree has been written
by the MctruthForTopo
algorithm and is called "MctruthForTopoAna"
.
Branch name of the number of particles#
This branch is required for reading the two arrays specified below. In the
MctruthForTopo
package, it is called "Nmcps"
.
Branch name of the array of particle identifications#
Usually called "Pid"
in the MctruthForTopo
package.
Branch name of the array of the mother indices of particles#
Usually called "Midx"
in the MctruthForTopo
package.
Main name of output files#
When you run topoana.exe
, four files with the same name but in different formats
(root/txt/tex/pdf) will be written as output. The filename extensions are appended
automatically, so it is not necessary to add these extensions to this field.
Optional fields#
Todo
Many of the below fields still have to be tested and described.
Maximum fields#
Maximum number of entries to be processed#
Speaks for itself. :) Do not use scientific notation like 1e5 for \(10^5\); write 100000 instead.
Maximum hierarchy of heading decay branches to be processed in each event#
Maximum number of decay trees to be printed#
Maximum number of decay final states to be printed#
Maximum number of entries to be saved in a single output root file#
Maximum number of decay trees to be analyzed#
Cuts#
Cut to select entries#
This field only supports one line. The syntax should be the same as when you apply a cut selection in the TTree::Draw method.
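As a sketch, such a cut field could look like this in your card (the expression and the branch name Nmcps are only illustrative):

```
% Cut to select entries
{
	Nmcps > 10
}
```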
Method to apply cut to array variables (Two options: T and F. Default: F)#
Whether or not to apply the cut to the array variables as well. Set to true (T
) if you
want to apply the cut there as well.
Ignore fields#
Suppress the first branches of decay trees in the output txt/tex/pdf files#
Initial decays (e.g. \(e^+e^- \rightarrow J/\psi\)) are not listed in the tables.
Ignore gISR photons (Two options: Y and N. Default: Y)#
Ignore gFSR photons (Two options: Y and N. Default: Y)#
Ignore the decay of the following particles#
This field allows you to filter away certain mother particles. The decays will not be listed in any table.
Ignore the decay of the daughters of the following particles#
This field allows you to filter away certain daughter particles. The decays will not be listed in any table.
What to perform#
Process charge conjugate objects together (Two options: Y and N. Default: N)#
Adds two additional columns: nCcEtrs and nTotEtrs, where conjugate particles are counted together.
Skip the topology analysis of decay trees and decay final states (Two options:
Y and N. Default: N)#
Set this field to Y
if you do not want to generate the tables that list all decay
topologies. It is important to set this field if you are dealing with large data and
are only interested in certain inclusive decays! In this case, you should also make use
of the signal fields.
Perform the topology analysis of the decay branches begun with the following
particles#
For each particle you list here, a table is created of decays in which this particle was the mother particle. No table is created if the particle does not decay in any of the events. You can limit the number of rows by adding a number on the same line, separated by whitespace. The remainder will then be collected into a final "rest" row of \(\text{particle} \rightarrow \text{others}\).
Perform the topology analysis of the exclusive decay branches matched with the
following inclusive decay branches#
This field allows you to generate separate tables of decays involving a certain process. The lines should be numbered. The first line represents the initial state in a certain process, the following lines list the decay products you want to limit yourself to. The string \(+ \text{anything}\) will be added automatically (see terminal output). See here for an example of syntax.
Signal fields#
Signal particles#
If this field is filled, an additional table is generated with counts of the signal particles you specified. List the particles using line breaks (do not use commas).
Signal inclusive decay branches#
Here you can list the final state(s) of the signal decay(s) that you are looking for. Naturally, the order of the decay particles does not matter. The syntax is as follows:
- Start a line with a number (starting with 0), then, on the same line, add a space or tab and name the decay particle according to its PDG plain name (e.g. pi+ for \(\pi^+\)).
- Continue the next line of the same state description with a 1, and so forth.
- You can name several inclusive decay states by starting each series with a 0 again.

See an example of the syntax here.
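Following these numbering rules, a sketch of this field with two inclusive final states (the particles are arbitrary examples):

```
% Signal inclusive decay branches
{
	0	pi+
	1	pi-
	2	pi0
	0	K+
	1	K-
}
```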
Signal sequential decay branches#
See here for an example of syntax.
Signal inclusive sequential decay branches#
See an example of syntax here.
Signal intermediate-resonance-allowed decay branches#
See here for an example of syntax.
Signal inclusive or intermediate-resonance-allowed sequential decay branches#
See here for an example of syntax. The asterisk (*) can be used as a short version of the word "anything" in order to simplify your input.
Signal decay trees#
See here for an example of syntax.
Signal decay final states#
See here for an example of syntax.
Beyond BOSS#
BEAN#
BEAN is a ROOT-based analysis framework designed for the BES3 experiment. BEAN stands for BES3 ANalysis. The goal is to develop lightweight analysis software that is simpler and easier to use than BOSS. It is not a replacement for the BOSS simulation and reconstruction software. Currently, BEAN supports the KinematicFit, ParticleID, MagneticField, EventTag, and AbsCor analysis tools ported from BOSS. BEAN is capable of parallel computing and supports PROOF in a transparent way.
For more information, see here.
Physics at BESIII#
Warning
These pages are to serve as a collection of important aspects of the physics relevant for performing analysis at BESIII.
Some useful documents:
The Physics of the B Factories Bevan, A.J., Golob, B., Mannel, T. et al. Eur. Phys. J. C (2014) 74: 3026.
Physics Accomplishments and Future Prospects of the BES Experiments at the BEPC Collider (2016) [here]
Physics at BESIII (2009)
BESIII White Paper (requires login, not yet published)
âDesign and construction of the BESIII detectorâ, Nucl. Instrum. Meth. A 614, 345 (2010)
Statistics#
Overview of systematic error studies in BESIII:
https://docbes3.ihep.ac.cn/~offlinesoftware/index.php/Data_Quality/Software_Validation_related_reports
By far, the most comprehensive overview of statistical procedures in an experiment comparable to BESIII is the document Recommended Statistical Procedures for BaBar.
For a more theoretical treatment of statistics in high-energy physics, see section "39. Statistics" in the PDG (2018).
The BESIII Experiment#
Accelerator: BEPCII#
Detector: BESIII#
Main Drift Chamber (MDC)#
Electromagnetic Calorimeter (EMC)#
Time-Of-Flight System (TOF)#
Muon Chamber System (MUC)#
Cutting#
Typical cuts#
In papers from the BESIII Collaboration, you will usually encounter the following cuts. They are also listed here (requires login).
Charged tracks#
Distance of closest approach of the track to the interaction (IP) in \(xy\) plane: \(\left|\text{d}r\right| < 1\text{ cm}\).
Distance of closest approach of the track to the IP in \(z\) direction: \(\left|\text{d}z\right| < 10\text{ cm}\).
Polar angle: \(\left|\cos\theta\right| < 0.93\).
PID: usually making use of MDC and TOF and using a probability of at least \(0.001\).
Sometimes: events with non-zero net charge are rejected.
Neutral tracks#
Neutral tracks are reconstructed from electromagnetic showers in the EMC, which consists of a barrel and an end cap.
| Shower type | Angular range | Minimum energy |
|---|---|---|
| Barrel showers | \(\cos\theta < 0.8\) | \(E > 25\text{ MeV}\) |
| End cap showers | \(0.86 < \cos\theta < 0.93\) | \(E > 50\text{ MeV}\) |
If there is more than one charged track, there is a time requirement of \(0 \leq T \leq 14\) (in units of \(50\text{ ns}\)).
Kinematic fits#
The cut on the \(\chi^2\) of the kinematic fit is often determined in the final event selection with an efficiency scan using a figure of merit. To limit the number of events stored, a cut-off value of \(\chi^2 < 200\) is usually used.
Cut flow#
Cut flow is usually represented in the form of a table that lists the cuts and the corresponding number of events that passed the cut. This gives you insight in how much signal remains after your cuts, but also gives some idea of efficiencies if you make a cut flow table for an exclusive Monte Carlo sample.
A typical example would be (with some made-up numbers):

| Cut | Events | Fraction |
|---|---|---|
| Total number of events | \(100,000\) | \(100\%\) |
| Number of events with \(m\) number of charged tracks | \(53,234\) | \(53\%\) |
| Number of events with at least \(n\) neutral tracks | \(43,156\) | \(43\%\) |
| Number of events with exactly the final state particles | \(20,543\) | \(21\%\) |
| Number of events with \(\chi^2\) for the kinematic fit | \(18,163\) | \(18\%\) |
| Number of events that passed reconstructed mass cut | \(15,045\) | \(15\%\) |
Fitting procedures#
Fitting meson resonances#
The best overview of the types of fits and the theoretical motivation for each of them can be found in the section "Resonances" of the PDG.
BESIII is a formation experiment.
See documentation for all RooFit
parametrizations
here.
Single and double Gaussian#
Characterization of detector resolution(s).
Breit-Wigner parametrization#
Only works in case of narrow structure and if there are no other resonances nearby
Can be used to extract pole parameters such as particle width
Possible: energy dependent parameters
Convolution of a Breit-Wigner with a Gaussian is called a Voigtian (see RooVoigtian).
Flatté parametrization#
Analytical continuation of the Breit-Wigner parametrization
Does not allow for extraction of pole parameters, only ratios
Background shapes#
- Polynomial
- Chebychev polynomial
- Argus background shape
Other literature#
Example scripts for RooFit (see overview of descriptions here)
Visualization#
See BESIII plot style recommendations here:
https://docbes3.ihep.ac.cn/~bes3/index.php/Bes3PlotStyles
Publication procedure#
Presenting for subgroups#
Writing a memo#
An overview of the current BESIII authors can be found here:
https://docbes3.ihep.ac.cn/bes3shift_db/bes3member/print1.php
Alternatively, you can use this repository to (re)generate the LaTeX code for you.
Appendices#
Troubleshooting#
I lost read-write access in my afs
home folder#
Formerly, this problem could be solved using the klog
command. Since August 2019, this
command has become:
kinit $USER
aklog -d
You should now be able to read-write in all your sessions.
I'm sure my job is set up correctly, but it keeps resulting in this error#
JobOptionsSvc ERROR # =======> <package>/share/jobOptions_<package>.txt
JobOptionsSvc ERROR # (22,1): parse error
...
JobOptionsSvc FATAL Job options errors.
ApplicationMgr FATAL Error initializing JobOptionsSvc
Yep, this is a weird one... So far, the cause was usually that the jobOptions_*.txt file ends in a comment. You can solve it by adding a new line at the end of the file.
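A quick command-line way to apply this fix (the job options file name is just an example):

```shell
# Append an empty line so the job options file no longer ends in a comment
echo "" >> jobOptions_MyPackage.txt
```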
I cannot run a bash script, but I'm sure it should work#
It could be that you wrote the .sh script on Windows and the file wasn't stored with Linux line endings. You can change the line endings back to Linux using:
sed -i 's/\r$//' $fileName
Some header files are not found when compiling my package#
Check your requirements
file. Packages that you need should be declared here as well.
For instance, if you want to use McTruth
packages such as McParticle.h
, you should
add the line:
use McTruth McTruth-* Event
I am not in the right group for submitting jobs#
If you receive the error message
hep_sub: error: argument -g/--group: invalid choice: 'physics'
(choose from 'gpupwa', 'mlgpu')
or something with different group names, it means you are in the wrong job submission group.
Write an email to Ms. Wen Shuoping to ask to be put in the group physics
(or whatever
group you need).
No resources in job submit group#
If you receive the error message
No resources in your group(s). So the job can not be submitted.
you should ask to be put in a different group (probably physics
). Write an email to
Ms. Wen Shuoping.
I get ERROR: Failed to create new proc id instead#
Two known causes:
- In the case of hep_sub, you should submit an executable bash script. Make the sh script executable using chmod +x. Use boss.condor in exactly the same way as boss.exe, that is, feed it a job options file (txt), not a bash script.
- You sourced a bash script that contained an export -f statement (exporting a bash function). While this is the correct way of exporting a function, it somehow affects BOSS. Change this statement into export (omit the -f option) and the issue is fixed.
I cannot try out boss.exe
without jobs#
It should be possible to run boss.exe
without jobs (see here). Does it
result in the following error message?
boss.exe: error while loading shared libraries: libReflex.so:
cannot open shared object file: No such file or directory
If so, you probably forgot to source TestRelease.
I get a message about sysInitialize()
when running a job#
If you receive the following error message:
**************************************************
BOSS version: 7.0.4
************** BESIII Collaboration **************
the jobOptions file is: jobOptions_sim.txt
JobOptionsSvc FATAL in sysInitialize(): standard std::exception is caught
JobOptionsSvc ERROR locale::facet::_S_create_c_locale name not valid
ApplicationMgr FATAL Error initializing JobOptionsS
it means the LANG
environment variable has been set to a value that BOSS cannot
handle. Set it to C
instead by running:
export LANG=C
I cannot use a graphical interface from lxslc
#
If, for instance, you cannot view a TBrowser
or cannot open the event display
besvis.exe
, but instead see
In case you run from a remote ssh session, reconnect with ssh -Y
you probably logged in with an SSH key, and even using ssh -Y won't help. If you really need the graphical interfaces from lxslc, you will need to remove your public key from the ~/.ssh/authorized_keys file (just open and edit it; it's just a text file) and log in again.
My analysis BOSS packages end in a segmentation fault#
A common error is that you didn't book the NTuple or add the NTuple::Items with NTuple::Tuple::addItem. This usually results in the following error:
...
DatabaseSvc: Connected to MySQL database
mccor = 0
*** Break *** segmentation violation
__boot()
import sys, imp, os, os.path
Tips & Tricks#
Key generation for SSH#
If you do not like to keep having to enter your password, have a look at generating an ssh key here and here.
1. Generate a key with the command ssh-keygen. You can choose to leave the password empty.
2. Add the SSH key to the ssh-agent and create a corresponding public key with the commands:
   eval $(ssh-agent -s); ssh-add ~/.ssh/id_rsa
3. Copy the public key to the server using:
   ssh-copy-id -i ~/.ssh/id_rsa <your user name>@lxslc7.ihep.ac.cn
   You will be asked for your IHEP account password.
4. Try to log in to the server with:
   ssh -Y <your user name>@lxslc7.ihep.ac.cn
   If all went correctly, you don't have to enter your password anymore.
Installing ROOT#
The BOSS Starter Kit comes with a
handy bash script
to download and install CERN ROOT6 on an Ubuntu platform. It requires you to have sudo
(admin) rights. The script can be run in one go using:
wget https://raw.githubusercontent.com/redeboer/BOSS_StarterKit/master/utilities/InstallCernRoot.sh
sudo bash InstallCernRoot.sh
For more information, see the official pages:
Warning
You will download around 1 GB of source code.
Visual Studio Code#
Visual Studio Code (VSCode) is a popular IDE that
is regularly updated, is configurable with easy-to-access json
files, and offers a
growing number of user-developed extensions. In recent years, it has become the most widely used editor on the market.
Remote SSH#
For working with VSCode on lxslc
, you can use the
Remote - SSH
extension. This lets you work with VSCode on the server with full functionality, such as
using the Source Control Manager and language navigation from any other VSCode
extensions you installed. See
here for a
tutorial on how to connect to a remote SSH server.
There is one thing you need to change in your VSCode settings. Following these instructions, set the following option:
"remote.SSH.useLocalServer": false,
In addition, you may need to set the remote platform to "linux"
:
"remote.SSH.remotePlatform": {
"lxslc7.ihep.ac.cn": "linux"
},
where "lxslc7.ihep.ac.cn"
is the name of the host in your
SSH Config file.
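For reference, a minimal host entry in that SSH config file could look like this (the user name is a placeholder):

```
Host lxslc7.ihep.ac.cn
  User <your user name>
```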
Tip
VSCode Remote SSH installs some files into your home directory on the server, in a
folder called .vscode-server
. This will not work if you experience this (rather
common) problem: I lost read-write access in my afs home folder. It is therefore recommended that you move the
.vscode-server
folder to a directory where you always have read-write access and then
create a symbolic link to that folder in your actual home folder. If the .vscode-server folder already exists in your home folder, move it and link it as follows:
cd ~
mv -f .vscode-server /besfs5/users/$USER/
ln -s /besfs5/users/$USER/.vscode-server
If it does not exist yet, create the target directory first:
cd ~
mkdir /besfs5/users/$USER/.vscode-server
ln -s /besfs5/users/$USER/.vscode-server
Another major advantage of this set-up is that you won't have problems with
data quota when the
.vscode-server
grows over time.
Conda#
The lxslc
server has a very outdated version of Python. If you do want to use Python
3, you can work with Conda, which is available on the server. Just add the following
script:
__conda_setup="$(
'/cvmfs/mlgpu.ihep.ac.cn/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null
)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/cvmfs/mlgpu.ihep.ac.cn/anaconda3/etc/profile.d/conda.sh" ]; then
. "/cvmfs/mlgpu.ihep.ac.cn/anaconda3/etc/profile.d/conda.sh"
else
export PATH="/cvmfs/mlgpu.ihep.ac.cn/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
You can then source it through whatever means you prefer, like creating an alias
alias condaenv="source <path_to_script>/conda_env.sh"
in your .bashrc
.
Next, just run conda activate tensorflow-gpu
and you have python3
, ipython
and
even import tensorflow
available! (At the time of writing, TensorFlow is version
1.13.1 though.)
Unfortunately, you don't have the rights to conda create
new environments. To see
which other environments are available, use conda info --envs
.
Note
If you don't want to go through this whole hassle (it's quite slow indeed), and
just want to use python3
, you could also just add
/cvmfs/mlgpu.ihep.ac.cn/anaconda3/envs/tensorflow/bin
to your
PATH
. But keep in mind that you may run into trouble with certain
Python libraries!
Compiling#
For compiling outside ROOT (that is, not using the ROOT interpreter), you will need to
use a compiler like g++
. The compiler needs to be told where the libraries for
included ROOT header files are located. You can do this using flags that ROOT set during
its installation. In case of g++
, use:
g++ YourCode.C -I$(root-config --incdir) $(root-config --libs --evelibs
--glibs) -o YourBinaryOutput.o
Pro bash
tip: You might like to create an easy command for this. You can do this by
adding the following lines to your ~/.bashrc
.
function rtcompile () {
g++ "$1"
-I$(root-config --incdir) \
$(root-config --libs --evelibs --glibs) \
-lRooFit -lRooFitCore -lRooStats -lMinuit -o "${1/._/.o}"
}
function rtcompilerun () {
rtcompile "$1"
if [ $? -eq 0 ]; then
./${1/._/.o}
fi
}
function rtdebugcompile () {
g++ "$1"
-I$(root-config --incdir) \
$(root-config --libs --evelibs --glibs) \
-lRooFit -lRooFitCore -lRooStats -lMinuit -fsanitize=address -g -o "${1/.\*/}"
}
export -f rtcompile
export -f rtcompilerun
export -f rtdebugcompile
Note the flags added through root-config
: there are includes (preceded by option -I
)
and linked libraries (following that option, and preceding output option -o
). Note
also that flags have been added for RooFit
. For more information about ROOT flags, see
this page.
Here, we give three examples of commands, one for compiling only (rtcompile
), one for
compiling and executing if successful (rtcompilerun
), and one for compiling with
fsanitize
activated
(rtdebugcompile). The latter is useful if you want to look for memory leaks, etc. Only use it if you are interested in this, because it will slow down execution. In addition, there are many issues in ROOT itself (like TString) that are flagged by fsanitize.
Compiling on Windows 10#
Although it is highly recommended to work on a Linux OS such as Ubuntu or CentOS, there are still certain advantages to working on Windows. As a developer, however, that brings problems if you want to start compiling your code.
Since Windows 10, there exists an easy solution: the Windows Subsystem for Linux (WSL). In the newest versions, it can easily be installed from the Windows Store (search for "Ubuntu"). After installing, search for "Ubuntu" in the Start Menu. This opens a bash terminal that has full access to your Windows system, entirely through bash commands.
As such, you have access to convenient commands like apt install
, vi
, and g++
.
Best of all, you can use this to install ROOT. If you are having trouble
installing ROOT through bash, have a look
at this script
(ROOT6).
IHEP GitLab#
IHEP supplies a GitLab server, which allows you to put your analysis code in a git
repository. You can then enjoy all the benefits of version control, different branches
to collaborate as a team, a better overview through online access, etc. The IHEP GitLab
server can be accessed through code.ihep.ac.cn. Have a look
here at what git
does, it's
worth it!
Note
Unfortunately, the IHEP GitLab server is only available on-campus through the LAN network. In theory, it is possible to connect through the IHEP VPN (ssl.ihep.ac.cn) using EasyConnect, though to set this up, you will first need to be in that LAN network. There are plans to make the server available through the standard SSO account.
Preparing access to the server#
To be able to push files to a repository on the IHEP GitLab server, you will first need to apply for an IHEP GitLab account. You can do this by sending an email to fanrh@ihep.ac.cn.
When you have received your login credentials, log in to code.ihep.ac.cn and have a look around. As you have probably noticed, there is a warning that you have to add an SSH key in order to pull and push to the server. The steps to create such a key are comparable to those for logging in to the IHEP server.
1. Generate an SSH key with the command ssh-keygen. You can choose to leave the password empty.
2. Add the SSH key to the ssh-agent and create a corresponding public key with the commands:
   eval $(ssh-agent -s); ssh-add ~/.ssh/id_rsa
3. Now, obtain the corresponding public key using:
   cat ~/.ssh/id_rsa.pub
   and copy all the text you see there (from ssh-rsa to @ihep.ac.cn).
4. Go to code.ihep.ac.cn/profile/keys, click "Add SSH Key", paste the code there, and click "Add key".

That's it!
See here for more elaborate instructions.
As a test, you can now create a new repository on the server. Just click
"New project" and follow the instructions. This
is a nice way to start, as you will be immediately shown instructions on how to
configure git
locally (such as setting the user name).
Pushing existing code to a new repository#
Imagine the situation where you have already developed some code for your analysis and you want to start tracking it using git. Let's say the directory containing this code is called TestRepo. Here, we go through the steps required to put that code into a repository and push it to the IHEP GitLab server.
Step 1: Go to the files you want to track#
Go to the folder containing your code or, alternatively, make a directory
(mkdir
), and add some test files there.
Step 2: Initialize the repository#
Initialize this folder as an empty git
repository using:
git init
The name of the repository that you just initialized is the name of the folder.
Step 3: Add the files in the directory#
Files in the directory are not tracked by git
automatically. You have to add them
manually. This can be done through the git add
command, for instance:
git add temp.sh
git add config/testfile.txt
git add src/
git add inc/*.hpp
git add .
You have now staged these files, that is, made them ready for a commit to the repository. Files that have been added will be tracked from then onward: if you change such a file, git allows you to compare the changes, move back to older versions, or compare the file to its counterpart in parallel branches.
Note that the paths are relative and that you can use git add
from any subdirectory in
the repository.
.gitignore
If there are certain files you never want to track (such as input data files or output plots), you "shield" them by creating a file called .gitignore (note the dot) in the main directory of the repository. This is a text file that contains relative paths of the files you want to ignore. Wildcards are allowed; see here for more information. Now, if you use git add ., all new or modified files in the folder will be staged, except for the ones excluded by .gitignore.
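As an illustration, you could create such a .gitignore from the command line (the patterns are arbitrary examples):

```shell
# Write a .gitignore that excludes ROOT data files, logs, and output plots
cat > .gitignore << 'EOF'
*.root
*.log
plots/
EOF

# Verify its contents
cat .gitignore
```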
Step 4: Commit the changes#
Once you have added the files, you can commit the changes using:
git commit -m "<some description>"
This will basically create a new point in the history of your git
repository to which
you can move back any time.
Step 5: Check the status of the repository#
Some commands that are useful from this stage onward:
- Use git status to check which files are tracked, which ones are modified compared to the previous commit, which ones are removed, etc. If you added all the files you wanted to add, you can commit or push.
- Use git log to see the history of all your commits.
- Use git diff <relative path> to compare the differences in a tracked directory or file with its previous commit.
- Use git checkout <relative path> to retrieve the previous version of the file or directory.
- See here for a full reference of git commands.
Note
The above 5 steps are all you need to know if you just want to track your files through Git locally. You do not have to work with a GitLab server, though of course this does allow for team collaboration and is the best way to backup your work.
Step 6: Configure the Git repository#
If you have applied for an account and added an SSH key, you can push this new repository to code.ihep.ac.cn. If you havenât already done so, set the user name and email address for this repository:
git config user.name "<Your full name>"
git config user.email "<email>@ihep.ac.cn"
Use git config --global
if you want to use these credentials everywhere.
Now you can add the SSH location to which you want to write your repository:
git remote add origin git@code.ihep.ac.cn:<username>/TestRepo
Here, <username> should be the user name you were given when you registered. We use the directory name TestRepo as the repository name, but it can be any name as long as it is unique within your account.
Step 7: Create the repository on the server#
Unfortunately, access through SSH does not allow you to create a new repository on the server, so you have to do this through the web interface.
Go to code.ihep.ac.cn and click "New repository". Use TestRepo as the "Project name", then click "Customize repository name?" to ensure that the name of the repository is TestRepo as well. (If you don't, it will be named testrepo, while the repository name should match the name of your directory.) As you can see, the default option for a new repository is private, so only you can see it.
Step 8: Push the first commit#
Now, back to your files, you can push the commit you made to that new TestRepo
on the
server:
git push -u origin master
Later, you can just use git push without arguments; the -u option sets origin master as the default upstream on this first push.
That's it, the connection has been established!
You can now edit and add files and then go through steps 3 (add), 4 (commit), 5 (status), and 8 (push) to track your files.
Note
If you work together with others, you can use git pull
to get the
latest changes that others added. Working together through git is, however, a bit more complicated, because you'll have to think about different branches and how to deal with merge conflicts. Have a look at the
Git Handbook for more
information.
Glossary#
Todo
Expand this inventory
Improve descriptions
- Argus background shape#
For the RooFit object, see RooArgusBG.
- BEPC#
Beijing Electron-Positron Collider. Currently in its second "version": BEPCII.
- BESIII#
Beijing Electron Spectrometer III
- BOSS#
BESIII Offline Software System
- Breit-Wigner parametrization#
Only works in case of a narrow structure and if there are no other resonances nearby
Can be used to extract pole parameters such as the particle width
Possible: energy-dependent parameters
- CAS#
Chinese Academy of Sciences
- Chebychev polynomial#
For the RooFit object, see RooChebychev.
- Dalitz plot#
Used in case of three-body decays. See the PDG on Kinematics, section 47.4.3.1.
- Data driven background estimate#
- Efficiency#
- Exclusive Monte Carlo simulation#
You only generate events with decays that are of interest to your analysis.
- Flatté parametrization#
Analytical continuation of the Breit-Wigner parametrization
Does not allow for extraction of pole parameters, only ratios
- IHEP#
Institute of High Energy Physics, part of the CAS
- Inclusive Monte Carlo simulation#
You generate events as completely as possible. Since this takes a lot of computing resources, one usually makes use of the data sets that were reconstructed by the BOSS team for your version of BOSS. For the file locations of inclusive \(J/\psi\) samples, see this page [requires login].
- Partial wave analysis#
- Polynomial background#
For the RooFit object, see RooPolynomial.
- Rapidity#
- RooFit#
See tutorial scripts.
- Semantic versioning#
See here. This is essentially a numbering scheme used for tagging versions of a package. Within BOSS, a package can for instance be given the tag Package-01-04-03, where 01 is the major ID, 04 the minor ID, and 03 the patch ID.
- Sideband plot#
- UCAS#
University of Chinese Academy of Sciences
Further reading#
Warning
The below list is not (yet) exhaustive
IHEP and the BESIII collaboration#
The BOSS Analysis Framework#
BOSS software source code:
Doxygen documentation:
http://bes3.to.infn.it/Boss/7.0.2/html/classes.html (external)
https://boss.ihep.ac.cn/~offlinesoftware/MdcPatRecDoc04/classes.html (
MdcPatRec
Class Index)
Some introductions to BOSS:
The BESIII website on Offline Software: a short introductory note on BOSS and notes on the conveners of the software subgroups.
Offline Software Group website This is the official and most elaborate source on BOSS currently available. It can be somewhat outdated and concise, but it does provide some overview of the packages and functionality that BOSS offers.
HyperNews Software Updates [login required]
BESIII TWiki (seems outdated)
BES environment installation (unofficial paper)
On CMT:
On GaudiKernel:
Class documentation (Doxygen)
On CLHEP:
Tools#
Doxygen manual
Visual Studio Code, and some useful extensions:
- LaTeX Workshop
- Python
- Alignment
- Trailing Spaces
Contribute#
This website has been set up not only to provide a set of accessible tutorial pages on the use of BOSS, but also a continuously updated inventory of the available packages. For now, it serves as a central, informal location to collect information about BESIII, but the aim is to migrate its content to a formal, interactive BESIII platform as soon as that has been set up.

It is quite easy to contribute! First of all, if you spot some typos, just click the edit button in the top right of each page. That will lead you to the source code for the page in this repository on GitHub. Bigger problems can be reported by opening an issue. In both cases, you will need to create a GitHub account.
Alternatively, just directly highlight or make notes on these pages. With a Hypothesis.is account, you can then post those notes as feedback.
Developing these pages#
Tip
When developing, you have to implement changes through Git. Pro Git is the best resource to learn how to do this. Also have a look here for a short tutorial about the Git workflow.
These pages are built with Sphinx, for which you need to have Python installed. The pages are written in Markedly Structured Text (MyST), an extended form of Markdown.
The easiest way to develop these pages is by using Conda and Visual Studio Code. Conda manages virtual environments, so that the Python packages that are required to work on the documentation can be easily removed or updated. Once you have those installed, itâs simply a matter of running:
git clone https://github.com/redeboer/bossdoc.git
cd bossdoc
conda env create # install required packages
conda activate bossdoc
pre-commit install
code .
The rest of the instructions will be shown once Visual Studio Code opens with the last command ;)
Next, open a terminal (Ctrl + `) and run
tox -e doclive
This will build the documentation and automatically update it while you edit the files in VSCode!
See also
Help developing on the ComPWA website, which uses the same set-up.