# dora-drives

**Repository Path**: dora-rs/dora-drives

## Basic Information

- **Project Name**: dora-drives
- **Description**: A step-by-step tutorial that lets beginners write their own autonomous driving car program from scratch using a simple starter kit. Dora-drives makes learning autonomous vehicle systems faster and easier.
- **Primary Language**: Rust
- **License**: Apache-2.0
- **Default Branch**: depth-frame
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-08-29
- **Last Updated**: 2025-03-08

## Categories & Tags

- **Categories**: Uncategorized
- **Tags**: None

## README

---

`dora-drives` is a set of operators you can use with `dora` to create an autonomous driving vehicle. You can test the operators on a real webcam or within [Carla](https://carla.org/).

This project is in early development; many features are not yet implemented, and breaking changes should be expected. Please do not take the current design for granted.

## Documentation

The documentation can be found here: [dora-rs.github.io/dora-drives](https://dora-rs.github.io/dora-drives)

To get started, see the [installation section](https://dora-rs.github.io/dora-drives/installation.html).

## Operators

### [Point cloud registration](https://paperswithcode.com/task/point-cloud-registration/latest)

- [IMFNet](https://github.com/XiaoshuiHuang/IMFNet) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/imfnet-interpretable-multimodal-fusion-for/point-cloud-registration-on-3dmatch-benchmark)](https://paperswithcode.com/sota/point-cloud-registration-on-3dmatch-benchmark?p=imfnet-interpretable-multimodal-fusion-for)

### [Object detection](https://paperswithcode.com/task/object-detection)

- [yolov5](https://github.com/ultralytics/yolov5) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/path-aggregation-network-for-instance/object-detection-on-coco)](https://paperswithcode.com/sota/object-detection-on-coco?p=path-aggregation-network-for-instance)
- Perfect detection on the Carla simulator

### [Traffic sign recognition](https://paperswithcode.com/task/traffic-sign-recognition)

- [Custom-trained yolov7 on TT100K](https://github.com/haixuanTao/yolov7)

### [Lane detection](https://paperswithcode.com/task/lane-detection)

- [yolop](https://github.com/hustvl/YOLOP) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/hybridnets-end-to-end-perception-network-1/lane-detection-on-bdd100k)](https://paperswithcode.com/sota/lane-detection-on-bdd100k?p=hybridnets-end-to-end-perception-network-1)

### [Drivable area detection](https://paperswithcode.com/task/drivable-area-detection)

- [yolop](https://github.com/hustvl/YOLOP) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/hybridnets-end-to-end-perception-network-1/drivable-area-detection-on-bdd100k)](https://paperswithcode.com/sota/drivable-area-detection-on-bdd100k?p=hybridnets-end-to-end-perception-network-1)

### [Multiple object tracking (MOT)](https://paperswithcode.com/task/multi-object-tracking)

- [StrongSORT](https://github.com/haixuanTao/yolov5_strongsort_package) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/strongsort-make-deepsort-great-again/multi-object-tracking-on-mot20-1)](https://paperswithcode.com/sota/multi-object-tracking-on-mot20-1?p=strongsort-make-deepsort-great-again)

### [Motion planning](https://paperswithcode.com/task/motion-planning)

- [Hybrid A-star](https://github.com/erdos-project/hybrid_astar_planner)

### Path tracking

- Proportional-Integral-Derivative (PID) controller

## Future operators

- [Trajectory prediction (pedestrians and vehicles)](https://paperswithcode.com/task/trajectory-prediction)
- [Pedestrian detection](https://paperswithcode.com/task/pedestrian-detection)
- [Semantic segmentation](https://paperswithcode.com/task/semantic-segmentation)
- [Depth estimation](https://paperswithcode.com/task/depth-estimation)
- [Multiple object tracking and segmentation (MOTS)](https://paperswithcode.com/task/multi-object-tracking)

## ⚖️ LICENSE

This project is licensed under Apache-2.0. Check out [NOTICE.md](NOTICE.md) for more information.
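The PID path-tracking controller listed under "Path Tracking" above can be sketched as follows. This is a minimal illustration of the general PID technique, not the project's actual operator implementation; the struct name, `update` signature, and gains are assumptions made for the example.

```rust
/// Minimal PID controller sketch for path tracking.
/// The control output combines the current error (P), its accumulated
/// history (I), and its rate of change (D).
pub struct Pid {
    kp: f64,
    ki: f64,
    kd: f64,
    integral: f64,
    prev_error: f64,
}

impl Pid {
    pub fn new(kp: f64, ki: f64, kd: f64) -> Self {
        Pid { kp, ki, kd, integral: 0.0, prev_error: 0.0 }
    }

    /// `error` is e.g. the lateral deviation from the planned waypoint;
    /// `dt` is the elapsed time since the last update, in seconds.
    pub fn update(&mut self, error: f64, dt: f64) -> f64 {
        self.integral += error * dt;
        let derivative = (error - self.prev_error) / dt;
        self.prev_error = error;
        self.kp * error + self.ki * self.integral + self.kd * derivative
    }
}
```

In a path-tracking setup, the error would typically be the cross-track or heading deviation from the trajectory produced by the motion planner, and the output would drive the steering or throttle command.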