# Visual Place Recognition: A Tutorial

Work in progress: This repository provides the example code from our paper "Visual Place Recognition: A Tutorial". The code performs VPR on the GardensPoint day_right--night_right dataset. The output is a plotted precision-recall curve, the matching decisions, example images of a true-positive and a false-positive match, and the AUC performance, as shown below.

If you use our work for your academic research, please cite the following paper:

```bibtex
@article{SchubertRAM2024ICRA2024,
  title={Visual Place Recognition: A Tutorial},
  author={Schubert, Stefan and Neubert, Peer and Garg, Sourav and Milford, Michael and Fischer, Tobias},
  journal={IEEE Robotics \& Automation Magazine},
  year={2024},
  doi={10.1109/MRA.2023.3310859}
}
```

## How to run the code

### Online (Using GitHub Codespaces)

This repository is configured for use with [GitHub Codespaces](https://github.com/features/codespaces), a service that provides you with a fully configured Visual Studio Code environment in the cloud, directly from your GitHub repository.

To open this repository in a Codespace:

1. Click on the green "Code" button near the top-right corner of the repository page.
2. In the dropdown, select "Open with Codespaces", and then click on "+ New codespace".
3. Your Codespace will be created and will start automatically. This process may take a few minutes.

Once your Codespace is ready, it will open in a new browser tab. This is a full-featured version of VS Code running in your browser: it has access to all the files in your repository and all the tools installed in your Docker container. You can run commands in the terminal, edit files, debug code, commit changes, and do anything else you would normally do in VS Code. When you're done, you can close the browser tab; your Codespace will automatically stop after a period of inactivity.

### Local

```bash
python3 demo.py
```

The GardensPoint Walking dataset will be downloaded automatically. You should get an output similar to this:

```
python demo.py
========== Start VPR with HDC-DELF descriptor on dataset GardensPoint
===== Load dataset
===== Load dataset GardensPoint day_right--night_right
===== Compute reference set descriptors
===== Compute query set descriptors
===== Compute cosine similarities S
===== Match images
===== Evaluation

===== AUC (area under curve): 0.742
===== R@100P (maximum recall at 100% precision): 0.36
===== recall@K (R@K) -- R@1: 0.420, R@5: 0.870, R@10: 0.925
```

| Precision-recall curve | Matchings M | Examples for a true positive and a false positive |
|:-------------------------:|:-------------------------:|:-------------------------:|
| precision-recall curve P=f(R) | `output_images/matchings.jpg` | true-positive (TP) and false-positive (FP) examples |
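The log above reflects the usual single-image VPR pipeline: compute one descriptor per reference and query image, compare all pairs with the cosine similarity to obtain the similarity matrix S, and derive matching decisions from S. The following is a minimal sketch of the similarity and matching step, assuming the descriptors have already been computed; the variable names and shapes are illustrative and do not reflect this repository's actual API.

```python
import numpy as np

# Illustrative placeholder descriptors (rows = images); in the demo they
# come from HDC-DELF, here they are random so the sketch is self-contained.
db_descriptors = np.random.randn(200, 4096)  # reference set
q_descriptors = np.random.randn(200, 4096)   # query set

# L2-normalize so that the inner product equals the cosine similarity.
db = db_descriptors / np.linalg.norm(db_descriptors, axis=1, keepdims=True)
q = q_descriptors / np.linalg.norm(q_descriptors, axis=1, keepdims=True)

# Similarity matrix S: S[i, j] = cosine similarity of query i and reference j.
S = q @ db.T

# Single-best matching: for each query, take the most similar reference;
# thresholding S instead yields multi-matching.
best_match = np.argmax(S, axis=1)
best_score = S[np.arange(S.shape[0]), best_match]
```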
## Requirements

The code was tested with the library versions listed in [requirements.txt](./requirements.txt). Note that TensorFlow or PyTorch is only required if the corresponding image descriptor is used.

If you use pip, simply run:

```bash
pip install -r requirements.txt
```

You can create a conda environment containing these libraries as follows (or use the provided [environment.yml](./.devcontainer/environment.yml)):

```bash
mamba create -n vprtutorial python numpy pytorch torchvision natsort tqdm opencv pillow scikit-learn faiss matplotlib-base tensorflow tensorflow-hub tqdm scikit-image patchnetvlad -c conda-forge
```

## List of existing open-source implementations for VPR (work in progress)

### Descriptors

#### Holistic descriptors

| Method | Code | Paper |
|---|---|---|
| AlexNet | code* | paper |
| AMOSNet | code | paper |
| DELG | code | paper |
| DenseVLAD | code | paper |
| HDC-DELF | code | paper |
| HybridNet | code | paper |
| NetVLAD | code | paper |
| CosPlace | code | paper |
| EigenPlaces | code | paper |
#### Local descriptors

| Method | Code | Paper |
|---|---|---|
| D2-Net | code | paper |
| DELF | code | paper |
| LIFT | code | paper |
| Patch-NetVLAD | code | paper |
| R2D2 | code | paper |
| SuperPoint | code | paper |
#### Local descriptor aggregation

| Method | Code | Paper |
|---|---|---|
| DBoW2 | code | paper |
| HDC (Hyperdimensional Computing) | code | paper |
| iBoW (Incremental Bag-of-Words) / OBIndex2 | code | paper |
| VLAD (Vector of Locally Aggregated Descriptors) | code* | paper |
### Sequence methods

| Method | Code | Paper |
|---|---|---|
| Delta Descriptors | code | paper |
| MCN | code | paper |
| OpenSeqSLAM | code* | paper |
| OpenSeqSLAM 2.0 | code | paper |
| OPR | code | paper |
| SeqConv | code | paper |
| SeqNet | code | paper |
| SeqVLAD | code | paper |
| VPR | code | paper |
### Misc

| Method | Code | Paper | Description |
|---|---|---|---|
| ICM | code | paper | Graph optimization of the similarity matrix S |
| SuperGlue | code | paper | Local descriptor matching |
\* Third-party code, not provided by the authors. The code implements the authors' idea or can be used to implement it.

## Soft Ground Truth for evaluation

In the evaluation metrics, a soft ground truth matrix can be used to ignore images with very small visual overlap, so that they are not penalized in the recall and precision analysis (see Equation 6 in [our paper](https://ieeexplore.ieee.org/document/10261441)). This is currently only supported for `matching="multi"`. For single matching, please use a dilated hard ground truth directly.
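To illustrate how such a soft ground truth can enter the precision-recall computation, here is a minimal, hypothetical sketch (not this repository's evaluation code): image pairs that appear in the soft but not in the hard ground truth are excluded when counting false positives, so that matches with small visual overlap count neither as correct nor as errors.

```python
import numpy as np

def precision_recall(S, GThard, GTsoft, threshold):
    """Hypothetical sketch of soft-ground-truth evaluation (multi-matching).

    S: similarity matrix; GThard/GTsoft: boolean ground-truth matrices.
    Pairs in GTsoft but not in GThard have too little visual overlap to
    be a correct match, yet too much to be a fair negative -- ignore them.
    """
    M = S >= threshold                  # matching decisions
    ignore = GTsoft & ~GThard           # pairs excluded from the evaluation

    tp = np.sum(M & GThard)             # true positives
    fp = np.sum(M & ~GThard & ~ignore)  # false positives, ignored pairs excluded
    fn = np.sum(~M & GThard)            # false negatives

    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return precision, recall
```

Sweeping `threshold` over the value range of `S` yields the precision-recall curve from which metrics such as the AUC reported by the demo are computed.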