Ling Yang1*‡, Kaixin Zhu1*, Juanxi Tian1*, Bohan Zeng1*†, Mingbao Lin3, Hongjuan Pei2, Wentao Zhang1‡, Shuicheng Yan3‡
1 Peking University 2 University of the Chinese Academy of Sciences 3 National University of Singapore
* Equal Contributions. † Project Leader. ‡ Corresponding Author.
Please follow 3D-GS to install the required packages.
git clone https://github.com/Gen-Verse/WideRange4D
cd WideRange4D
git submodule update --init --recursive
conda create -n WideRange4D python=3.7
conda activate WideRange4D
pip install -r requirements.txt
pip install -e submodules/depth-diff-gaussian-rasterization
pip install -e submodules/simple-knn
In our environment, we use pytorch=1.13.1+cu116.
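To confirm the build, a quick sanity check (a minimal sketch; it assumes only the environment above):

# Verify the PyTorch build matches ours (1.13.1 + CUDA 11.6).
import torch
print(torch.__version__)          # expected: 1.13.1+cu116
print(torch.cuda.is_available())  # expected: True on a CUDA machine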
For multi-view scenes: if you want to train a 4D scene on WideRange4D or on your own multi-view dataset, organize your data as follows:
├── data
│   ├── multipleview
│   │   ├── (your dataset name)
│   │   │   ├── cam01
│   │   │   │   ├── frame_00001.jpg
│   │   │   │   ├── frame_00002.jpg
│   │   │   │   ├── ...
│   │   │   ├── cam02
│   │   │   │   ├── frame_00001.jpg
│   │   │   │   ├── frame_00002.jpg
│   │   │   │   ├── ...
│   │   │   ├── ...
After that, you can use the multipleviewprogress.sh script we provide to generate the pose and point cloud data. Use it as follows:
bash multipleviewprogress.sh (your dataset name)
You need to ensure that the data folder is organized as follows after running multipleviewprogress.sh:
├── data
│   ├── multipleview
│   │   ├── (your dataset name)
│   │   │   ├── cam01
│   │   │   │   ├── frame_00001.jpg
│   │   │   │   ├── frame_00002.jpg
│   │   │   │   ├── ...
│   │   │   ├── cam02
│   │   │   │   ├── frame_00001.jpg
│   │   │   │   ├── frame_00002.jpg
│   │   │   │   ├── ...
│   │   │   ├── ...
│   │   │   ├── sparse_
│   │   │   │   ├── cameras.bin
│   │   │   │   ├── images.bin
│   │   │   │   ├── ...
│   │   │   ├── points3D_multipleview.ply
│   │   │   ├── poses_bounds_multipleview.npy
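If you want to verify this layout programmatically, here is a minimal sketch (the dataset name my_scene is a hypothetical placeholder):

import os

# Check that multipleviewprogress.sh produced the expected outputs.
root = "data/multipleview/my_scene"  # hypothetical dataset name
for name in ["sparse_", "points3D_multipleview.ply", "poses_bounds_multipleview.npy"]:
    path = os.path.join(root, name)
    print(path, "OK" if os.path.exists(path) else "MISSING")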
For other existing 4D reconstruction datasets, follow the steps below:
For the dataset provided in D-NeRF, you can download the dataset from Dropbox.
For the dataset provided in HyperNeRF, you can download scenes from the HyperNeRF dataset and organize them as in Nerfies.
Meanwhile, the Plenoptic Dataset can be downloaded from its official website. To save memory, you should extract the frames of each video (see the extraction sketch after the layout below) and then organize your dataset as follows.
├── data
│   ├── dnerf
│   │   ├── mutant
│   │   ├── standup
│   │   ├── ...
│   ├── hypernerf
│   │   ├── interp
│   │   ├── misc
│   │   ├── virg
│   ├── dynerf
│   │   ├── cook_spinach
│   │   │   ├── cam00
│   │   │   │   ├── images
│   │   │   │   │   ├── 0000.png
│   │   │   │   │   ├── 0001.png
│   │   │   │   │   ├── 0002.png
│   │   │   │   │   ├── ...
│   │   │   ├── cam01
│   │   │   │   ├── images
│   │   │   │   │   ├── 0000.png
│   │   │   │   │   ├── 0001.png
│   │   │   │   │   ├── ...
│   │   ├── cut_roasted_beef
│   │   │   ├── ...
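For the frame-extraction step mentioned above, here is a minimal sketch using OpenCV (the scene name cook_spinach and the .mp4 paths are illustrative; adapt them to wherever you stored the downloaded videos):

import glob
import os

import cv2

# Dump every frame of each camXX video into camXX/images/0000.png, 0001.png, ...
scene = "data/dynerf/cook_spinach"  # illustrative scene path
for video_path in sorted(glob.glob(os.path.join(scene, "cam*.mp4"))):
    cam = os.path.splitext(os.path.basename(video_path))[0]
    out_dir = os.path.join(scene, cam, "images")
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx:04d}.png"), frame)
        idx += 1
    cap.release()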
For training multi-view scenes, you need to create a configuration file named (your dataset name).py under "./arguments/multipleview" (a sketch of such a file follows the command below), and then run:
python train.py -s data/multipleview/(your dataset name) --port 6017 --expname "multipleview/(your dataset name)" --configs arguments/multipleview/(your dataset name).py
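As a reference, here is a minimal sketch of such a config file, assuming the 4DGaussians-style config format this codebase inherits; the field values below are illustrative placeholders, not tuned defaults:

# arguments/multipleview/(your dataset name).py
# Illustrative values only; tune them for your scene.
ModelHiddenParams = dict(
    kplanes_config={
        'grid_dimensions': 2,
        'input_coordinate_dim': 4,        # (x, y, z, t)
        'output_coordinate_dim': 16,
        'resolution': [64, 64, 64, 150],  # spatial x3 + temporal
    },
    multires=[1, 2],  # multi-resolution grid scales
)
OptimizationParams = dict(
    coarse_iterations=3000,  # static warm-up before the deformation stage
    iterations=14000,
    batch_size=2,
)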
For your own custom datasets, install nerfstudio and follow its COLMAP pipeline. Install COLMAP first, then:
pip install nerfstudio
# compute camera poses with the COLMAP pipeline
ns-process-data images --data data/your-data --output-dir data/your-ns-data
cp -r data/your-ns-data/images data/your-ns-data/colmap/images
python train.py -s data/your-ns-data/colmap --port 6017 --expname "custom" --configs arguments/hypernerf/default.py
You can customize your training config through the config files.
Run the following script to render the images.
python render.py --model_path "output/dnerf/(your dataset name)/" --skip_train --configs arguments/dnerf/(your dataset name).py
Run the following script to evaluate the model.
python metrics.py --model_path "output/dnerf/(your dataset name)/"
There are also some helpful scripts; feel free to use them.
colmap.sh: generate point clouds from input data.
bash colmap.sh data/hypernerf/virg/vrig-chicken hypernerf
bash colmap.sh data/dynerf/sear_steak llff
downsample_point.py: downsample the point clouds generated by SfM.
python scripts/downsample_point.py data/dynerf/sear_steak/colmap/dense/workspace/fused.ply data/dynerf/sear_steak/points3D_downsample2.ply
Following 4DGaussians, we always use colmap.sh to generate dense point clouds and downsample them to fewer than 40,000 points.
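Conceptually, the downsampling step does something like the following sketch, written with open3d for illustration (scripts/downsample_point.py is the actual tool in this repo):

import open3d as o3d

# Load the dense COLMAP reconstruction and grow the voxel size until the
# downsampled cloud fits under the ~40,000-point budget.
pcd = o3d.io.read_point_cloud("data/dynerf/sear_steak/colmap/dense/workspace/fused.ply")
voxel = 0.01
down = pcd.voxel_down_sample(voxel)
while len(down.points) > 40000:
    voxel *= 1.5
    down = pcd.voxel_down_sample(voxel)
o3d.io.write_point_cloud("data/dynerf/sear_steak/points3D_downsample2.ply", down)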
Please feel free to raise issues or submit pull requests to contribute to our codebase.
Some of our source code is borrowed from 4DGaussians. We sincerely appreciate the excellent work of these authors.
@article{yang2025widerange4d,
title={WideRange4D: Enabling High-Quality 4D Reconstruction with Wide-Range Movements and Scenes},
author={Yang, Ling and Zhu, Kaixin and Tian, Juanxi and Zeng, Bohan and Lin, Mingbao and Pei, Hongjuan and Zhang, Wentao and Yan, Shuicheng},
journal={arXiv preprint arXiv:2503.13435},
year={2025}
}