This project contains the code for a lattice simulation of the CP(N-1) model.
The lattice formulation of the model is given by the following action functional [4]
subject to the constraint
This constraint fixes the norm of the field, so one component is not an independent degree of freedom.
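For orientation, a common convention for the lattice CP(N-1) action, with a complex N-component field $z_x$ attached to each lattice site $x$, reads as follows (the normalization used in [4] and in the code may differ, e.g. by introducing an auxiliary U(1) link field):

$$
S[z] \;=\; -\,N\beta \sum_{x,\hat\mu} \left| \bar z_x \cdot z_{x+\hat\mu} \right|^2 ,
\qquad
\bar z_x \cdot z_x \;=\; 1 \quad \text{for all } x .
$$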
A complexification of the mapping space is constructed by complexifying the target space. As it turns out, there exists a series of diffeomorphisms
In this project, we mainly parametrize the complexified sphere by its tangent bundle. We choose the following convenient parametrization:
Let
where
where
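As an illustration of such a tangent-bundle parametrization (the exact convention implemented in src/deformations.py may differ), the complexified sphere $\{u \in \mathbb{C}^{N} : u \cdot u = 1\}$ can be identified with the tangent bundle of the real sphere via

$$
(x, y) \;\longmapsto\; u \;=\; x \,\sqrt{1 + y \cdot y} \;+\; i\, y ,
\qquad
x \cdot x = 1 , \quad x \cdot y = 0 ,
$$

which satisfies $u \cdot u = (1 + y \cdot y) - y \cdot y + 2i \sqrt{1 + y \cdot y}\,(x \cdot y) = 1$.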
In this work we implement constant deformations of this sort and show that they can be used to enhance the signal-to-noise ratio of many (albeit not all) correlators.
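To make the reweighting mechanics concrete, here is a minimal sketch for a generic, unconstrained real field (the names `action`, `observable`, and `shift` are placeholders, not the interfaces defined in src/): samples are drawn from the undeformed distribution, the observable is evaluated on the constantly shifted configurations, and the mismatch in the action is compensated by a reweighting factor. The Jacobian of a constant shift is trivial.

```python
import torch

def deformed_expectation(samples, action, observable, shift):
    """Estimate <O> using a constant vertical contour shift and reweighting.

    samples    -- real configurations drawn from exp(-S), shape (batch, ...)
    action     -- callable: configurations -> S (accepts complex inputs)
    observable -- callable: configurations -> O (accepts complex inputs)
    shift      -- real constant added as an imaginary part to every component
    """
    # Shift every configuration by a constant imaginary amount.
    deformed = torch.complex(samples, torch.full_like(samples, shift))
    # Reweighting factor exp(S(x) - S(x + i*shift)); for a constant shift its
    # expectation value is 1, so no extra normalization or Jacobian is needed.
    weights = torch.exp(action(samples) - action(deformed))
    # Same expectation value as <O(x)>, but potentially with a smaller variance.
    return (observable(deformed) * weights).mean()
```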
[1] W. Detmold, G. Kanwar, M. L. Wagman, and N. C. Warrington,
"Path integral contour deformations for noisy observables,"
Phys. Rev. D 102, 014514 (2020), arXiv:2003.05914.

[2] Y. Lin, W. Detmold, G. Kanwar, P. Shanahan, and M. Wagman,
"Signal-to-noise improvement through neural network contour deformations for 3D SU(2) lattice gauge theory,"
Proceedings of the 40th International Symposium on Lattice Field Theory (Lattice 2023), arXiv:2309.00600.

[3] W. Detmold, G. Kanwar, H. Lamm, M. L. Wagman, and N. C. Warrington,
"Path integral contour deformations for observables in SU(N) gauge theory,"
Phys. Rev. D 103, 094517 (2021), arXiv:2102.12668.

[4] T. Rindlisbacher and P. de Forcrand,
"A Worm Algorithm for the Lattice CP(N-1) Model,"
arXiv:1703.08571 (2017).
Dependencies are listed in environment.yml, which can be used to create the conda environment cpn:
$ conda env create -f environment.yml
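Once created, activate the environment before running any of the scripts:

$ conda activate cpn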
The project contains two standalone "packages", toy_model/ and lattice/, which contain the code for the toy model (whose lattice consists of only two nodes) and for the lattice model (a square lattice), respectively.
We list a (schematic) file tree for each below, first for toy_model/ and then for lattice/.
.
├── data
│   ├── samples_n2_b1.0.dat
│   ├── samples_n2_b1.0_mII.dat
│   └── ...
├── main.py
├── plots
│   ├── fuzzy-one
│   │   └── 2025.02.15_17:42
│   │       ├── deformation_params.pdf
│   │       ├── errorbars_comp.pdf
│   │       ├── loss.pdf
│   │       ├── raw_data
│   │       └── run.log
│   ├── one-pt
│   │   └── 2025.02.15_17:31
│   │       └── ...
│   └── two-pt
│       └── 2025.02.15_17:35
│           └── ...
└── src
    ├── __init__.py
    ├── analysis.py
    ├── deformations.py
    ├── linalg.py
    ├── losses.py
    ├── mcmc.py
    ├── model.py
    ├── observables.py
    └── utils.py
.
├── data
│   ├── cpn_b4.0_L64_Nc3_ens.dat
│   └── cpn_b4.0_L64_Nc3_u.dat
├── main.py
├── plots
│   ├── one-pt
│   │   └── 2025.02.12_17:15
│   │       ├── deformation_params.pdf
│   │       ├── deformation_params_norms.pdf
│   │       ├── errorbars_comp.pdf
│   │       ├── loss.pdf
│   │       ├── raw_data
│   │       │   ├── af.pt
│   │       │   ├── losses_train.pt
│   │       │   ├── losses_val.pt
│   │       │   ├── model.pt
│   │       │   └── observable.pt
│   │       └── run.log
│   └── two-pt
│       └── 2025.02.14_19:14
│           └── ...
└── src
    ├── __init__.py
    ├── analysis.py
    ├── deformations.py
    ├── linalg.py
    ├── losses.py
    ├── model.py
    ├── observables.py
    ├── unet.py
    └── utils.py
Below, we provide a rudimentary overview of each file:
| pkg | folder | file | description |
|---|---|---|---|
| toy_model / lattice | ./ | main.py | main function |
| toy_model | data/ | samples_n{n}_b{beta}_m{mode}.dat | MCMC samples for coupling constant {beta}; {mode} indicates how the Metropolis step was done: II = in parallel, seq = sequentially |
| lattice | data/ | cpn_b{beta}_L{L}_Nc{Nc}_ens.dat | MCMC samples generated for coupling constant {beta} |
| lattice | data/ | cpn_b{beta}_L{L}_Nc{Nc}_u.dat | (normalized) action values for coupling constant {beta} |
| toy_model / lattice | src/ | analysis.py | library for (statistical) analysis |
| toy_model / lattice | src/ | deformations.py | contour deformations (e.g. Linear, Homogeneous) |
| toy_model / lattice | src/ | linalg.py | library of convenient linear-algebra methods |
| toy_model / lattice | src/ | losses.py | loss functions |
| toy_model / lattice | src/ | model.py | model and training routine |
| toy_model / lattice | src/ | observables.py | observables (fuzzy-one, one-pt, two-pt) |
| toy_model / lattice | src/ | utils.py | convenience functions |
| lattice | src/ | unet.py | U-Net architecture for a CNN model learning the optimal deformation |
| toy_model / lattice | plots/... | deformation_params.pdf | plot of the learned deformation parameter (with maximal norm) |
| lattice | plots/... | deformation_params_norms.pdf | heatmap of the norm of the deformation parameter at each lattice site |
| toy_model / lattice | plots/... | errorbars_comp.pdf | error bars of the evaluated correlation function before and after training |
| toy_model / lattice | plots/... | loss.pdf | plot of training and validation loss |
| toy_model / lattice | plots/... | run.log | log of the simulation (see below for an example) |
| toy_model / lattice | plots/.../raw_data/ | af.pt | learned deformation parameters |
| toy_model / lattice | plots/.../raw_data/ | losses_train.pt / losses_val.pt | training / validation losses |
| toy_model / lattice | plots/.../raw_data/ | model.pt | the trained model |
| toy_model / lattice | plots/.../raw_data/ | observable.pt | expectation value of the observable: a list of tuples (e, val), where e is the epoch and val is the value |
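The .pt files under raw_data/ can be inspected directly with torch.load. A minimal sketch, assuming the files were written with torch.save and using the timestamped run directory from the tree above as an example:

```python
import torch

# Example run directory taken from the file tree above; substitute your own timestamp.
run_dir = "plots/one-pt/2025.02.12_17:15/raw_data"

losses_train = torch.load(f"{run_dir}/losses_train.pt")  # training losses
losses_val = torch.load(f"{run_dir}/losses_val.pt")      # validation losses
observable = torch.load(f"{run_dir}/observable.pt")      # list of (epoch, value) tuples

epochs, values = zip(*observable)
print(f"observable after epoch {epochs[-1]}: {values[-1]}")
```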
Below we give an example of the run.log file, which logs the most important parameters used in the simulation.
2025-02-14 19:14:16,341 - INFO: Used Parameters
+---------------------+------------------+
| param | value |
+=====================+==================+
| device | cuda:0 |
+---------------------+------------------+
| L (lattice size) | 64 |
+---------------------+------------------+
| beta (coupling cst) | 4.0 |
+---------------------+------------------+
| n (dimC CP) | 2 |
+---------------------+------------------+
| dim_g | 8 |
+---------------------+------------------+
| lr (learning rate) | 1e-05 |
+---------------------+------------------+
| batch size | 256 |
+---------------------+------------------+
| loss_fn | rloss |
+---------------------+------------------+
| epochs | 10000 |
+---------------------+------------------+
| obs | LatTwoPt |
+---------------------+------------------+
| (p,q) | ((8, 8), (8, 9)) |
+---------------------+------------------+
| (i,j) | (0, 1) |
+---------------------+------------------+
| (k,l) | (0, 1) |
+---------------------+------------------+
| SLURM_JOB_ID | 48886 |
+---------------------+------------------+
The command-line options of main.py (shown here for the toy model) are:

usage: main.py [-h] [--obs OBS] [--i I] [--j J] [--particle PARTICLE] --tag TAG
               [--deformation DEFORMATION] [--epochs EPOCHS] [--loss_fn LOSS_FN]
               [--batch_size BATCH_SIZE] [--load_samples LOAD_SAMPLES]

options:
  -h, --help            show this help message and exit
  --obs OBS             observable: ToyOnePt | ToyTwoPt
  --i I                 z component
  --j J                 z* component
  --particle PARTICLE   0 => z, 1 => w
  --tag TAG             tag for saving (one-pt | two-pt | fuzzy-one)
  --deformation DEFORMATION
                        type of deformation: Linear | Homogeneous
  --epochs EPOCHS       epochs
  --loss_fn LOSS_FN     loss function
  --batch_size BATCH_SIZE
                        batch size
  --load_samples LOAD_SAMPLES
                        which samples to load: those created sequentially (seq) or with parallel (II) Metropolis updates
The main.py script uses PyTorch's _Distributed Data Parallel_ and has to be launched with torchrun:
- navigate to CPNStN/lattice
- execute main.py using torchrun:
$ torchrun --nnodes=1 --nproc_per_node=2 main.py \
--obs=ToyFuzzyOne \
--i=0 \
--j=1 \
--tag=fuzzy-one \
--epochs=1000 \
--deformation=Homogeneous \
--loss_fn=rlogloss \
--batch_size=1024
When running the scripts in an interactive session on Tursa, we recommend using GNU Screen for connectivity purposes.
After allocating resources using salloc:
- activate the environment cpn
- navigate to CPNStN/lattice/
- use srun to execute torchrun
Example srun:
$ salloc -N1 --time=00:10:00 --qos=dev --partition=gpu
$ conda activate cpn
$ cd CPNStN/lattice/
$ srun torchrun --nnodes=1 --nproc_per_node=4 main.py --obs=LatTwoPt --p="(5,7)" --q="(11,13)" --i=0 --j=1 --k=0 --ell=1 --tag=two-pt --epochs=1000 --batch_size=128
Example slurm job script:
#!/bin/bash
# Slurm job options
#SBATCH --job-name=cpn_lat_uet
#SBATCH --time=01:00:00
#SBATCH --partition=gpu
#SBATCH --qos=standard
#SBATCH --account=[NAME]
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:4
#SBATCH --output="slurm_logs/two-pt/slurm-%j.out"
# load modules
module load gcc/9.3.0
module load cuda/12.3
module load openmpi/4.1.5-cuda12.3
source activate
conda activate pytorch2.5
cd ~/CPNStN/lattice
# name of script
application="main.py"
# run script
echo 'working dir: ' $(pwd)
echo $'\nrun started on ' `date` $'\n'
export OMP_NUM_THREADS=4
srun torchrun \
--nnodes=1 \
--nproc_per_node=4 \
${application} \
--obs=LatTwoPt \
--p="(8,8)" \
--q="(8,9)" \
--i=0 \
--j=1 \
--k=0 \
--ell=1 \
--tag=two-pt \
--loss_fn=rloss \
--epochs=10000 \
--batch_size=256
echo $'\nrun completed on ' `date`
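Assuming the job script above is saved as, e.g., job_two_pt.sh (a hypothetical file name), it is submitted with:

$ sbatch job_two_pt.sh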
- Download the GNU Screen source and extract it
$ wget http://git.savannah.gnu.org/cgit/screen.git/snapshot/v.4.3.1.tar.gz
$ tar -xvf v.4.3.1.tar.gz
$ cd v.4.3.1/src/
- Build GNU Screen
$ ./autogen.sh
$ ./configure
$ make
- Run GNU Screen
$ ./screen -S <session_name>
We recommend creating an alias for v.4.3.1/src/screen.
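Since Screen is used here to survive dropped connections, the typical workflow is to detach from the running session with Ctrl-a d and to reattach later with

$ ./screen -r <session_name>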