Running the script with
run-alibuild --prefix=/project/projectdirs/project1/user2/alice-haswell
fully builds the standard (client) ALICE software environment and installs it into a project directory. While run-alibuild can be used without any options (installing to $HOME/alice), the installation is approximately 20 GB, which would occupy half of the standard NERSC home directory quota (and the home directory also resides on a slower mount).
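To see the footprint after the build, the installed size can be checked and compared with the home quota; a small sketch, assuming the prefix used above and that the myquota command is available on Cori:
du -sh /project/projectdirs/project1/user2/alice-haswell   # expect roughly 20 GB
myquota                                                    # compare with the home directory quota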
A CERN account is still needed to pull the DPMJET event generator from a protected CERN GitLab repository (alidist issue #833). Use git-credential-store or git-credential-cache with the CERN account if the build needs to complete without user interaction.
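A minimal sketch of configuring the credential helpers before starting the build; the GitLab URL is only a placeholder for whatever repository the DPMJET recipe actually pulls from:
git config --global credential.helper 'cache --timeout=86400'   # keep the CERN password in memory for a day
# or, less securely, keep it on disk:
# git config --global credential.helper store
git ls-remote https://gitlab.cern.ch/<dpmjet-repo>.git          # prompts once so the helper captures the credentials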
The script applies the following modification to the AliRoot code, which does not affect its function as a grid client:
- Fixes for memory access bugs
Example .bash_profile.ext snippet for both Cori and PDSF (the latter using CVMFS), where the user is expected to execute alish to load the ALICE environment (replace project1/user2 with your project and user ID, and redefine the two alias alish=… lines to change the loading command):
alice_haswell="/project/projectdirs/project1/user2/alice-haswell"
alice_analysis_data="/project/projectdirs/project1/user2/analysis-data"
alice_aliphysics_latest="AliPhysics/latest-ali-master-release"
if [[ "${NERSC_HOST}" = cori && -d "${alice_haswell}" ]]; then
alias alish="\
export PATH=\"\${PATH}:${alice_haswell}/alibuild\"; \
export ALIBUILD_WORK_DIR=\"${alice_haswell}/sw\"; \
[ -d "${alice_analysis_data}" ] && \
export ALICE_DATA="${alice_analysis_data}"; \
eval \$(alienv load ${alice_aliphysics_latest})"
elif [[ -d /cvmfs ]]; then
source /cvmfs/alice.cern.ch/etc/login.sh
alias alish="eval \$(alienv load AliPhysics)"
fi
unset alice_aliphysics_latest
unset alice_analysis_data
unset alice_haswell
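After logging in again (or sourcing the profile), a quick sanity check that the environment loads could look like this (a sketch; aliroot is only used to confirm the load worked):
alish            # defined above; loads AliPhysics via alienv
which aliroot    # should point into the alibuild or CVMFS installation
aliroot -b -q    # start and exit AliRoot in batch mode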
See the ALICE GitHub advanced workflow for how to copy the offline analysis database (OADB) directory from CERN EOS to the local file system, e.g. into the directory that $alice_analysis_data points to above.
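A possible sketch of that copy, assuming lxplus access and that the analysis data lives under /eos/experiment/alice/analysis-data on CERN EOS (follow the ALICE GitHub instructions for the authoritative procedure):
rsync -ahvP user2@lxplus.cern.ch:/eos/experiment/alice/analysis-data/ \
      /project/projectdirs/project1/user2/analysis-data/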
Assuming the calibration directory is stored in the GPFS project directory /project/projectdirs/project1/user2/fake_cvmfs (the same applies to a SquashFS mount point; see below for how to create one), and the run anchor is run 257209 (from LHC16k):
OCDB_PATH=/project/projectdirs/project1/user2/fake_cvmfs ./dpgsim.sh --run 257209 --mode ocdb
LIBC_FATAL_STDERR_=1
Setting this (undocumented) glibc environment variable causes fatal glibc errors, i.e. potential crashes, to be written to stderr instead of the controlling terminal, so they end up in the job log.
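For example, it can be exported in the job script before the OCDB snapshot step shown above:
export LIBC_FATAL_STDERR_=1   # glibc fatal errors now go to stderr and thus into the job log
OCDB_PATH=/project/projectdirs/project1/user2/fake_cvmfs ./dpgsim.sh --run 257209 --mode ocdb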
The OCDB (410 GB, containing 1.5 million files as of June 2018) can be transferred from a system with a CVMFS mount (PDSF, or a shifter image on MPP systems) by:
rsync -ahvPSX /cvmfs/alice-ocdb.cern.ch/calibration /project/projectdirs/project1/user2/fake_cvmfs/
Transfer speed is 1–2 MB/s, so plan for 2–3 days to transfer the entire OCDB.
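rsync can simply be re-run to resume an interrupted transfer (-P keeps partially transferred files). A rough completeness check against the numbers quoted above:
du -sh /project/projectdirs/project1/user2/fake_cvmfs/calibration                 # expect about 410 GB
find /project/projectdirs/project1/user2/fake_cvmfs/calibration -type f | wc -l   # roughly 1.5 million files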
The SquashFS image is then created using:
mksquashfs /project/projectdirs/project1/user2/fake_cvmfs/calibration calibration-YYYYMMDD.sqsh -comp xz -b 1048576 -Xdict-size '100%' -no-xattrs -all-root -always-use-fragments -info
and can be archived to HPSS. The creation takes about 5½ hours (turning compression off completely only shaves 1–2 hours off that).
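A sketch of the HPSS archiving step, assuming hsi (the NERSC HPSS client) and an ocdb directory name that is purely illustrative:
hsi "mkdir -p ocdb; cd ocdb; put calibration-YYYYMMDD.sqsh"
hsi "ls -l ocdb"   # verify the archived image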