# FastVGGT
**[Media Analytics & Computing Laboratory](https://mac.xmu.edu.cn/)**; **[AUTOLAB](https://zhipengzhang.cn/)**
[You Shen](https://mystorm16.github.io/), [Zhipeng Zhang](https://zhipengzhang.cn/), [Yansong Qu](https://quyans.github.io/), [Liujuan Cao](https://mac.xmu.edu.cn/ljcao/)
## ⚙️ Environment Setup
First, create a virtual environment using Conda, clone this repository to your local machine, and install the required dependencies.
```bash
conda create -n fastvggt python=3.10
conda activate fastvggt
git clone git@github.com:mystorm16/FastVGGT.git
cd FastVGGT
pip install -r requirements.txt
```
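If you want a quick sanity check that the environment is usable (assuming PyTorch is among the pinned requirements, as VGGT depends on it), you can run:
```bash
# Hypothetical sanity check: confirms PyTorch imports and reports CUDA availability.
python -c "import torch; print(torch.__version__, 'CUDA:', torch.cuda.is_available())"
```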
Next, prepare the ScanNet dataset (http://www.scan-net.org/ScanNet/).
Then download the VGGT checkpoint (we use the checkpoint linked in https://github.com/facebookresearch/vggt/tree/evaluation/evaluation):
```bash
wget https://huggingface.co/facebook/VGGT_tracker_fixed/resolve/main/model_tracker_fixed_e20.pt
```
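The evaluation script's default `--ckpt_path` (shown below) points to `./ckpt/`, so one option is to move the downloaded file there:
```bash
mkdir -p ckpt
mv model_tracker_fixed_e20.pt ckpt/
```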
Finally, configure the dataset path and the VGGT checkpoint path, e.g. by editing the argument defaults:
```python
parser.add_argument(
    "--data_dir", type=Path, default="/data/scannetv2/process_scannet"
)
parser.add_argument(
    "--gt_ply_dir",
    type=Path,
    default="/data/scannetv2/OpenDataLab___ScanNet_v2/raw/scans",
)
parser.add_argument(
    "--ckpt_path",
    type=str,
    default="./ckpt/model_tracker_fixed_e20.pt",
)
```
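Alternatively, the same arguments can be overridden on the command line instead of editing the defaults (paths here are placeholders for your own setup):
```bash
python eval/eval_scannet.py \
  --data_dir /data/scannetv2/process_scannet \
  --gt_ply_dir /data/scannetv2/OpenDataLab___ScanNet_v2/raw/scans \
  --ckpt_path ./ckpt/model_tracker_fixed_e20.pt
```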
## 💎 Observation
Note: a large `--input_frame` value can significantly slow down saving the visualization results; try a smaller number first.
```bash
python eval/eval_scannet.py --input_frame 30 --vis_attn_map --merging 0
```
We observe that many token-level attention maps are highly similar in each block, motivating our optimization of the Global Attention module.
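As a minimal sketch of what "highly similar" means here (not the code used in this repository), one way to quantify it is the mean pairwise cosine similarity between per-token attention maps; `attn` below is a hypothetical attention tensor, e.g. captured with a forward hook on a Global Attention block:
```python
import torch

def attn_map_similarity(attn: torch.Tensor) -> torch.Tensor:
    """Mean pairwise cosine similarity between per-token attention maps.

    attn: hypothetical attention tensor of shape (heads, tokens, tokens).
    """
    maps = attn.mean(dim=0)                            # (tokens, tokens): one map per query token
    maps = torch.nn.functional.normalize(maps, dim=-1) # unit-normalize each map
    sim = maps @ maps.T                                # pairwise cosine similarities
    off_diag = sim[~torch.eye(sim.size(0), dtype=torch.bool)]
    return off_diag.mean()

# Example with random data; real attention maps would come from the model.
print(attn_map_similarity(torch.softmax(torch.randn(8, 64, 64), dim=-1)))
```
A value close to 1 indicates that most query tokens attend in nearly the same pattern, which is the redundancy FastVGGT exploits.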
## 🏀 Evaluation
### Custom Dataset
Please organize your data according to the directory layout expected by the evaluation scripts; the sketch below is an illustrative guess (file and folder names are placeholders, not taken from the repository):
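```
custom_data/                 # hypothetical root, passed via --data_dir
└── scene_0/
    ├── images/              # input RGB frames
    │   ├── 000000.jpg
    │   ├── 000001.jpg
    │   └── ...
    └── ...
```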
### 7 Scenes & NRGBD
Evaluate on the 7-Scenes and NRGBD datasets, sampling a keyframe every 10 frames:
```bash
python eval/eval_7andN.py --kf 10
```
## 🍺 Acknowledgements
- Thanks to these great repositories: [VGGT](https://github.com/facebookresearch/vggt), [Dust3r](https://github.com/naver/dust3r), [Fast3R](https://github.com/facebookresearch/fast3r), [CUT3R](https://github.com/CUT3R/CUT3R), [MV-DUSt3R+](https://github.com/facebookresearch/mvdust3r), [StreamVGGT](https://github.com/wzzheng/StreamVGGT), [VGGT-Long](https://github.com/DengKaiCQ/VGGT-Long), [ToMeSD](https://github.com/dbolya/tomesd) and many other inspiring works in the community.
- Special thanks to [Jianyuan Wang](https://jytime.github.io/) for his valuable discussions and suggestions on this work.
## ⚖️ License
See the [LICENSE](./LICENSE.txt) file for details about the license under which this code is made available.
## Citation
If you find this project helpful, please consider citing the following paper:
```bibtex
@article{shen2025fastvggt,
  title={FastVGGT: Training-Free Acceleration of Visual Geometry Transformer},
  author={Shen, You and Zhang, Zhipeng and Qu, Yansong and Cao, Liujuan},
  journal={arXiv preprint arXiv:2509.02560},
  year={2025}
}
```
## 🔍 Explore, Capture, Lead in 3D
