# ViPlanner: Visual Semantic Imperative Learning for Local Navigation
Project Page • arXiv • Video • BibTeX

Demo video: [https://youtu.be/8KO4NoDw6CM](https://youtu.be/8KO4NoDw6CM)
ViPlanner is a robust learning-based local path planner based on semantic and depth images. Fully trained in simulation, the planner can be applied in dynamic indoor as well as outdoor environments. We provide it as an extension for [NVIDIA Isaac-Sim](https://developer.nvidia.com/isaac-sim) within the [IsaacLab](https://isaac-sim.github.io/IsaacLab/) project (details [here](./omniverse/README.md)). Furthermore, a ready-to-use [ROS Noetic](http://wiki.ros.org/noetic) package is available within this repo for direct integration on any robot (tested and developed on ANYmal C and D).

**Keywords:** Visual Navigation, Local Planning, Imperative Learning

## Install

Install the package (defined in `pyproject.toml`) with pip by running:

```bash
pip install .
```

or, if you want to edit the code,

```bash
pip install -e .[standard]
```

To apply the planner in the ROS node, install it with the inference setting:

```bash
pip install -e .[standard,inference]
```

Make sure the CUDA toolkit has the same version as the one used to compile torch. We assume 11.7. If you are using a different version, adjust the version string of the mmcv install as given below under **Known Issue**. If the toolkit is not found, set the `CUDA_HOME` environment variable as follows:

```
export CUDA_HOME=/usr/local/cuda
```

On the Jetson, please use

```bash
pip install -e .[inference,jetson]
```

as `mmdet` requires `torch.distributed`, which is only built up to version 1.11 and is not compatible with pypose. See the [Dockerfile](./Dockerfile) for a workaround.

**Known Issue**

- mmcv wheel build does not finish:
  - Fix by installing with a defined CUDA version, as detailed [here](https://mmcv.readthedocs.io/en/latest/get_started/installation.html#install-with-pip). For CUDA version 11.7 and torch==2.0.x use

    ```
    pip install mmcv==2.0.0 -f https://download.openmmlab.com/mmcv/dist/cu117/torch2.0/index.html
    ```

**Extension**

This work includes the switch from semantic to direct RGB input for the training pipeline to facilitate further research. For RGB input, an option exists to employ a backbone with mask2former pre-trained weights. For this option, include the GitHub submodule, install the requirements included there, and build the necessary CUDA operators. These steps are not necessary for the published planner!

```bash
pip install git+https://github.com/facebookresearch/detectron2.git
git submodule update --init
pip install -r third_party/mask2former/requirements.txt
cd third_party/mask2former/mask2former/modeling/pixel_decoder/ops
sh make.sh
```

**Remark**

Note that for an editable installation of packages without a `setup.py`, PEP 660 has to be fulfilled. This requires the following versions (as described in detail [here](https://stackoverflow.com/questions/69711606/how-to-install-a-package-using-pip-in-editable-mode-with-pyproject-toml)):

- [pip >= 21.3](https://pip.pypa.io/en/stable/news/#v21-3)

  ```
  python3 -m pip install --upgrade pip
  ```

- [setuptools >= 64.0.0](https://github.com/pypa/setuptools/blob/main/CHANGES.rst#v6400)

  ```
  python3 -m pip install --upgrade setuptools
  ```

## Inference and Model Demo

1. Real-World
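Before running the demos, it can be worth verifying that the installed torch, CUDA, and mmcv versions line up with the assumptions from the Install section (CUDA 11.7, torch 2.0.x, mmcv 2.0.0). The snippet below is a minimal, optional sanity-check sketch; the `viplanner` import name is an assumption and is taken to match the package installed by `pip install -e .`.

```python
# Optional sanity check of the inference environment (assumes the versions
# from the Install section: CUDA 11.7, torch 2.0.x, mmcv 2.0.0).
import torch

print("torch:", torch.__version__)
print("torch built for CUDA:", torch.version.cuda)   # should match the installed toolkit, e.g. "11.7"
print("CUDA available:", torch.cuda.is_available())

try:
    import mmcv  # only needed for the inference setting
    print("mmcv:", mmcv.__version__)
except ImportError:
    print("mmcv not installed (required for the [inference] extra)")

try:
    import viplanner  # assumed import name of this package after `pip install -e .`
    print("viplanner import OK")
except ImportError:
    print("viplanner not importable - check the (editable) install")
```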