Ultralytics YOLOv8, YOLOv9, YOLOv10 for ROS 2
GPL-3.0 License
🚨 Repository Name Change Announcement 🚨
We are planning to rename this repository from yolov8_ros to yolo_ros on 31-10-2024.
The repository is being renamed because this tool now supports more YOLO models, not only YOLOv8.
Check out the updates in the yolo_ros branch.
Please update your local repository and any dependencies, scripts, or tools that rely on the repository URL.
After the name change, update your local repository URL:
git remote set-url origin https://github.com/mgonzs13/yolo_ros.git
ROS 2 wrapper for Ultralytics YOLO models to perform object detection and tracking, instance segmentation, human pose estimation, and Oriented Bounding Box (OBB) detection. There are also 3D versions of object detection, instance segmentation, and human pose estimation based on depth images.
$ cd ~/ros2_ws/src
$ git clone https://github.com/mgonzs13/yolov8_ros.git
$ pip3 install -r yolov8_ros/requirements.txt
$ cd ~/ros2_ws
$ rosdep install --from-paths src --ignore-src -r -y
$ colcon build
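After building, source the workspace overlay so the new packages are visible to ROS 2 (assuming the default colcon install directory):
$ source ~/ros2_ws/install/setup.bash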
The available models for yolov8_ros are the following:
$ ros2 launch yolov8_bringup yolov5.launch.py
$ ros2 launch yolov8_bringup yolov8.launch.py
$ ros2 launch yolov8_bringup yolov9.launch.py
$ ros2 launch yolov8_bringup yolov10.launch.py
$ ros2 launch yolov8_bringup yolov11.launch.py
$ ros2 launch yolov8_bringup yolo-nas.launch.py
$ ros2 launch yolov8_bringup yolov8_3d.launch.py
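These launch files share a common set of arguments. The model weights can be changed with the model argument (used in the examples below); the camera topic can typically be remapped with an input_image_topic argument, although the exact argument names should be checked in the launch files of yolov8_bringup. For example:
$ ros2 launch yolov8_bringup yolov8.launch.py model:=yolov8m.pt input_image_topic:=/camera/color/image_raw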
Previous updates added Lifecycle Node support to all the nodes available in the package. This implementation aims to reduce the workload in the unconfigured and inactive states by loading the models and activating the image subscriber only in the active state.
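As a sketch of driving these states by hand with the standard ROS 2 lifecycle CLI (the node name /yolo/yolo_node is an assumption and depends on your launch configuration; list the lifecycle nodes with ros2 lifecycle nodes):
$ ros2 lifecycle set /yolo/yolo_node activate
$ ros2 lifecycle set /yolo/yolo_node deactivate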
These are some resource comparisons using the default yolov8m.pt model on a 30fps video stream.
| State    | CPU Usage (i7 12th Gen) | VRAM Usage | Bandwidth Usage |
| -------- | ----------------------- | ---------- | --------------- |
| Active   | 40-50% in one core      | 628 MB     | Up to 200 Mbps  |
| Inactive | ~5-7% in one core       | 338 MB     | 0-20 Kbps       |
This is the standard behavior of YOLOv8, which includes object tracking.
$ ros2 launch yolov8_bringup yolov8.launch.py
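To consume the results from your own node, you can subscribe to the detection topic. Below is a minimal rclpy sketch; the topic name /yolo/detections and the yolov8_msgs/msg/DetectionArray type are assumptions, so check the running topics with ros2 topic list and the message definitions shipped with this repository before using it.

```python
import rclpy
from rclpy.node import Node

# The message package and type are assumptions; check the messages that
# ship with this repository (e.g. `ros2 interface list | grep -i yolo`).
from yolov8_msgs.msg import DetectionArray


class DetectionListener(Node):
    def __init__(self):
        super().__init__("detection_listener")
        # Topic name is an assumption; verify with `ros2 topic list`.
        self.sub = self.create_subscription(
            DetectionArray, "/yolo/detections", self.on_detections, 10)

    def on_detections(self, msg: DetectionArray) -> None:
        for det in msg.detections:
            # Field names are assumptions based on typical detection messages.
            self.get_logger().info(f"{det.class_name}: {det.score:.2f}")


def main() -> None:
    rclpy.init()
    rclpy.spin(DetectionListener())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```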
Instance masks are the borders of the detected objects, not all the pixels inside the masks.
$ ros2 launch yolov8_bringup yolov8.launch.py model:=yolov8m-seg.pt
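If you need the mask contours in your own node, they travel with each detection. A hypothetical sketch, assuming the detection message carries a mask with a list of 2D points (the field names are assumptions, so check the message definitions in your workspace):

```python
# Hypothetical: iterate the contour points of one detection's mask.
# The `mask.data` field and its point layout are assumptions; verify
# against the message definitions shipped with this repository.
def mask_contour(detection):
    return [(p.x, p.y) for p in detection.mask.data]
```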
Only persons are detected, along with their keypoints.
$ ros2 launch yolov8_bringup yolov8.launch.py model:=yolov8m-pose.pt
The 3D bounding boxes are calculated by filtering the depth image data from an RGB-D camera using the 2D bounding box. Only objects with a 3D bounding box are visualized in the 2D image.
$ ros2 launch yolov8_bringup yolov8_3d.launch.py
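To make the idea concrete, here is a rough sketch of the underlying principle, not the package's actual code: crop the depth image with the 2D bounding box and use the valid depth readings inside it to estimate the object's extent along the camera axis. The mask-based variant described next works the same way but keeps only the pixels inside the instance mask.

```python
import numpy as np


def depth_extent(depth_m: np.ndarray, u0: int, v0: int, u1: int, v1: int):
    """Sketch: min/max depth (meters) of the pixels inside a 2D box.

    depth_m is a depth image in meters; (u0, v0)-(u1, v1) are the box
    corners in pixel coordinates. Returns None if the box holds no
    valid depth readings.
    """
    roi = depth_m[v0:v1, u0:u1].astype(np.float32)
    valid = roi[np.isfinite(roi) & (roi > 0.0)]  # drop missing readings
    if valid.size == 0:
        return None
    return float(valid.min()), float(valid.max())
```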
In this case, the depth image data is filtered using the max and min values obtained from the instance masks. Only objects with a 3D bounding box are visualized in the 2D image.
$ ros2 launch yolov8_bringup yolov8_3d.launch.py model:=yolov8m-seg.pt
Each keypoint is projected into the depth image and visualized using purple spheres. Only objects with a 3D bounding box are visualized in the 2D image.
$ ros2 launch yolov8_bringup yolov8_3d.launch.py model:=yolov8m-pose.pt
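The projection itself is the standard pinhole back-projection. A minimal sketch, assuming a depth image in meters and the intrinsics (fx, fy, cx, cy) taken from the camera's CameraInfo message:

```python
def backproject(u: int, v: int, depth_m, fx: float, fy: float, cx: float, cy: float):
    """Sketch: lift one image keypoint (u, v) to a 3D point in the
    camera optical frame using the depth at that pixel (pinhole model)."""
    z = float(depth_m[v, u])
    if not z > 0.0:
        return None  # no depth reading at this pixel
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z
```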