Title (Master's thesis)
Alejandro Cortijo, ....
In recent years, point cloud perception tasks have gained increasing attention due to their relevance in computer vision applications such as 3D reconstruction, autonomous navigation, and human-machine interaction. This master's thesis aims to push the state of the art (SOTA) in estimating 3D human body meshes from sparse LiDAR point clouds, contributing new algorithms that improve model accuracy and robustness.
.......
To build and run the Docker container, follow these steps:
- Build the Docker image:
docker build -t repo_tfm .
- Run the container with GPU support:
Linux:
docker run -it --gpus all --name tfm \
-v $(pwd)/data:/app/data \
-v $(pwd)/weights:/app/weights \
-v $(pwd)/smplx_models:/app/smplx_models \
repo_tfm \
/bin/bash
Windows (PowerShell):
docker run -it --gpus all --name tfm `
-v ${PWD}/data:/app/data `
-v ${PWD}/weights:/app/weights `
-v ${PWD}/smplx_models:/app/smplx_models `
repo_tfm `
/bin/bash
Note: After running docker run, the container's environment will be set up. Since the Pointops library requires CUDA for compilation, this step cannot be performed earlier, so you may see build logs for about 3-4 minutes. Thank you for your patience!
- Enjoy:
docker exec -it <CONTAINER_ID> /bin/bash
To be continued....
TODO: Improve it (add links)
Download the SMPL-X model weights from this website into the 'smplx_models' folder.
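As a quick sanity check after downloading, the snippet below looks for the neutral model file. Note that the smplx/SMPLX_NEUTRAL.npz layout is an assumption based on the usual SMPL-X release, not something this repository specifies; adjust the path if your download differs.

```python
from pathlib import Path

model_dir = Path("smplx_models")
# Assumed layout: the official SMPL-X release unpacks to smplx/SMPLX_*.npz.
expected = model_dir / "smplx" / "SMPLX_NEUTRAL.npz"
print("smplx_models present:", model_dir.is_dir())
print("neutral model present:", expected.is_file())
```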
Several 3D HPE:
To be continued ......
The corresponding train and test codes are in the 'scripts' folder.
Training: edit the corresponding paths and variables in the training files. PRN training:
python scripts/pct/train_pct.py --dataset sloper4d --cfg configs/pose/pose_15.yaml
LiDAR_HMR training:
python scripts/lidar_hmr/train_lidarhmr.py --dataset sloper4d --cfg configs/mesh/sloper4d.yaml --prn_state_dict /path/to/your/file
LiDAR_HMR testing:
python scripts/lidar_hmr/test_lidarhmr.py --dataset sloper4d --cfg configs/mesh/sloper4d.yaml --state_dict weights/sloper4d/lidar_hmr_mesh.pth
The mesh ground truths of the Waymo-v2 dataset are derived from human pose annotations and point clouds. For training and testing on Waymo-v2, download the saved pkl files and move them into the ./save_data folder (create one if it does not exist). Download link
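The step above can be sketched as a small helper that creates ./save_data if needed and lists whatever pkl files have been placed there (the file names themselves come from the download and are not assumed here):

```python
from pathlib import Path

# Create ./save_data if it does not exist, then list the pkl files in it.
save_dir = Path("save_data")
save_dir.mkdir(exist_ok=True)
pkl_files = sorted(save_dir.glob("*.pkl"))
print(f"{len(pkl_files)} pkl file(s) in {save_dir}/")
```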
Our code is based on Mesh Graphormer, Point Transformer-V2, and HybrIK.
If you find this project helpful, please consider citing the following paper:
@article{xxxxx,
title={TBD},
author={xxxxx},
journal={xxxxx},
year={TBD}
}