Intel(R) RealSense(TM) ROS Wrapper for Depth Camera
APACHE-2.0 License
Please choose only one installation option (to prevent multiple-version installation and workspace conflicts).
Install the librealsense2 debian package:
sudo apt install ros-<ROS_DISTRO>-librealsense2*
For example, on Humble:
sudo apt install ros-humble-librealsense2*
Install the ROS wrapper debian package:
sudo apt install ros-<ROS_DISTRO>-realsense2-*
For example, on Humble:
sudo apt install ros-humble-realsense2-*
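As an optional quick sanity check (not part of the original instructions), you can list the installed RealSense packages:
dpkg -l | grep realsense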
Create a ROS2 workspace
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src/
Clone the latest ROS Wrapper for Intel® RealSense™ cameras into '~/ros2_ws/src/':
git clone https://github.com/IntelRealSense/realsense-ros.git -b ros2-master
cd ~/ros2_ws
Install dependencies
sudo apt-get install python3-rosdep -y
sudo rosdep init # "sudo rosdep init --include-eol-distros" for Foxy and earlier
rosdep update # "rosdep update --include-eol-distros" for Foxy and earlier
rosdep install -i --from-path src --rosdistro $ROS_DISTRO --skip-keys=librealsense2 -y
colcon build
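Optionally, you can request an optimized build via standard colcon/CMake arguments (generic colcon usage, not specific to this wrapper):
colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release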
ROS_DISTRO=<YOUR_SYSTEM_ROS_DISTRO> # set your ROS_DISTRO: jazzy, iron, humble, foxy
source /opt/ros/$ROS_DISTRO/setup.bash
cd ~/ros2_ws
. install/local_setup.bash
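To avoid re-sourcing in every new terminal, you can append the setup scripts to your shell profile (a convenience sketch, assuming bash and a Humble install; adjust paths to your setup):
echo 'source /opt/ros/humble/setup.bash' >> ~/.bashrc
echo 'source ~/ros2_ws/install/local_setup.bash' >> ~/.bashrc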
PLEASE PAY ATTENTION: The ROS Wrapper for Intel® RealSense™ cameras is not meant to be supported on Windows by our team, since ROS2 and its packages are still not fully supported on Windows. We added the installation steps below to make it easier for users who have already started working with ROS2 on Windows and want to take advantage of the capabilities of our RealSense cameras.
Please choose only one of the two options below (to prevent multiple-version installation and workspace conflicts)
Before building our packages, make sure you have OpenCV for Windows installed on your machine. If you choose the Microsoft IoT installation path, it is installed automatically. Later, when running colcon build, you might need to expose this installation folder by setting the CMAKE_PREFIX_PATH, PATH, or OpenCV_DIR environment variables.
Run "x64 Native Tools Command Prompt for VS 2019" as administrator
Setup ROS2 Environment (Do this for every new terminal/cmd you open):
If you chose the Microsoft IoT binary installation option:
> C:\opt\ros\humble\x64\setup.bat
If you followed the official ROS2 documentation:
> call C:\dev\ros2_iron\local_setup.bat
Change directory to realsense-ros folder
> cd C:\ros2_ws\realsense-ros
Build librealsense2 package only
> colcon build --packages-select librealsense2 --cmake-args -DBUILD_EXAMPLES=OFF -DBUILD_WITH_STATIC_CRT=OFF -DBUILD_GRAPHICAL_EXAMPLES=OFF
You can add the --event-handlers console_direct+ parameter to see more debug output from the colcon build.
Build the other packages
> colcon build --packages-select realsense2_camera_msgs realsense2_description realsense2_camera
Again, you can add the --event-handlers console_direct+ parameter to see more debug output from the colcon build.
Setup the environment with the newly installed packages (do this for every new terminal/cmd you open):
> call install\setup.bat
ros2 run realsense2_camera realsense2_camera_node
# or, with parameters, for example - temporal and spatial filters are enabled:
ros2 run realsense2_camera realsense2_camera_node --ros-args -p enable_color:=false -p spatial_filter.enable:=true -p temporal_filter.enable:=true
ros2 launch realsense2_camera rs_launch.py
ros2 launch realsense2_camera rs_launch.py depth_module.depth_profile:=1280x720x30 pointcloud.enable:=true
You can set the camera name and camera namespace to distinguish between cameras and platforms, which helps identify the right nodes and topics to work with.
If you have multiple cameras (possibly of the same model) and multiple robots, you can launch/run the nodes as follows.
For the first robot and first camera, set these two parameters:
camera_namespace
camera_name
With ros2 launch (via command line, or by editing these two parameters in the launch file):
ros2 launch realsense2_camera rs_launch.py camera_namespace:=robot1 camera_name:=D455_1
With ros2 run (remapping the node name and namespace via command line):
ros2 run realsense2_camera realsense2_camera_node --ros-args -r __node:=D455_1 -r __ns:=robot1
The resulting node, topic, and service names for this example:
> ros2 node list
/robot1/D455_1
> ros2 topic list
/robot1/D455_1/color/camera_info
/robot1/D455_1/color/image_raw
/robot1/D455_1/color/metadata
/robot1/D455_1/depth/camera_info
/robot1/D455_1/depth/image_rect_raw
/robot1/D455_1/depth/metadata
/robot1/D455_1/extrinsics/depth_to_color
/robot1/D455_1/imu
> ros2 service list
/robot1/D455_1/hw_reset
/robot1/D455_1/device_info
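The same pattern extends to additional cameras and robots; for example, a second camera on the same robot (names here are illustrative):
ros2 launch realsense2_camera rs_launch.py camera_namespace:=robot1 camera_name:=D455_2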
When launched without these parameters, the default camera namespace and camera name are both camera:
> ros2 node list
/camera/camera
> ros2 topic list
/camera/camera/color/camera_info
/camera/camera/color/image_raw
/camera/camera/color/metadata
/camera/camera/depth/camera_info
/camera/camera/depth/image_rect_raw
/camera/camera/depth/metadata
/camera/camera/extrinsics/depth_to_color
/camera/camera/imu
> ros2 service list
/camera/camera/hw_reset
/camera/camera/device_info
List the available parameters:
ros2 param list
Read a parameter value with ros2 param get <node> <parameter_name>, for example:
ros2 param get /camera/camera depth_module.emitter_enabled
Set a new value with ros2 param set <node> <parameter_name> <value>, for example:
ros2 param set /camera/camera depth_module.emitter_enabled 1
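You can also scope the listing to a single node (assuming the default node name):
ros2 param list /camera/camera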
Stream profiles can be set per sensor (depth_module and rgb_camera), for example:
depth_module.depth_profile:=640x480x30 depth_module.infra_profile:=640x480x30 rgb_camera.color_profile:=1280x720x30
Use ros2 param describe <your_node_name> <param_name> to get the list of supported profiles.
Stream formats can be set the same way, for example:
depth_module.depth_format:=Z16 depth_module.infra1_format:=y8 rgb_camera.color_format:=RGB8
Use ros2 param describe <your_node_name> <param_name> to get the list of supported formats, or run the rs-enumerate-devices command to list the profiles supported by the connected sensors.
Individual streams can be enabled or disabled, for example:
enable_infra1:=true enable_color:=false
The <stream_type>_qos parameters set the QoS with which a topic is published. Available values are the following strings: SYSTEM_DEFAULT, DEFAULT, PARAMETER_EVENTS, SERVICES_DEFAULT, PARAMETERS, SENSOR_DATA. For example: depth_qos:=SENSOR_DATA.
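For example, as a launch argument:
ros2 launch realsense2_camera rs_launch.py depth_qos:=SENSOR_DATA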
The pointcloud QoS is controlled separately by the pointcloud.pointcloud_qos parameter in the pointcloud filter; refer to the Post-Processing Filters section for details.
The unite_imu_method param supports the following values: 0 (none), 1 (copy), 2 (linear_interpolation).
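For example, to enable both IMU streams and unite them with linear interpolation (values as described above):
ros2 launch realsense2_camera rs_launch.py enable_gyro:=true enable_accel:=true unite_imu_method:=2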
GPU acceleration with GLSL requires enabling BUILD_ACCELERATE_GPU_WITH_GLSL during build:
colcon build --cmake-args '-DBUILD_ACCELERATE_GPU_WITH_GLSL=ON'
Device selection parameters:
serial_no:=_831612073525 attaches to the device with the given serial number (note the leading underscore).
usb_port_id:=4-1 or usb_port_id:=4-2 attaches to the device on the given USB port.
device_type:=d435 will match d435 and d435i; device_type:=d435(?!i) will match d435 but not d435i (the value is a regular expression).
reconnect_timeout:=10 sets the timeout (in seconds) between reconnection attempts when the driver loses connection to the device.
wait_for_device_timeout:=60 sets how long (in seconds) to wait for a matching device before giving up.
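For example, two cameras can be driven side by side by selecting each via its serial number in separate terminals (serial numbers here are illustrative):
ros2 launch realsense2_camera rs_launch.py camera_name:=cam1 serial_no:=_831612073525
ros2 launch realsense2_camera rs_launch.py camera_name:=cam2 serial_no:=_831612073526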
Topics can also be published from a rosbag file. From the command line:
ros2 run realsense2_camera realsense2_camera_node --ros-args -p rosbag_filename:="/full/path/to/rosbag.bag"
Or set the rosbag_filename parameter to the rosbag full path in a launch file (see realsense2_camera/launch/rs_launch.py as reference).
initial_reset:=true resets the device before streaming starts, which can help when a device was not closed properly.
clip_distance:=1.5 removes from the depth image all values beyond the given distance (in meters).
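For example, combining both at launch:
ros2 launch realsense2_camera rs_launch.py initial_reset:=true clip_distance:=1.5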
Diagnostics are published on the /diagnostics topic, which includes information regarding the device temperatures and the actual frequency of the enabled streams.
Extrinsics between streams are published as topics, for example:
ros2 topic echo /camera/camera/extrinsics/depth_to_color
rotation:
- 0.9999583959579468
- 0.008895332925021648
- -0.0020127370953559875
- -0.008895229548215866
- 0.9999604225158691
- 6.045500049367547e-05
- 0.0020131953060626984
- -4.254872692399658e-05
- 0.9999979734420776
translation:
- 0.01485931035131216
- 0.0010161789832636714
- 0.0005317096947692335
---
The published topics differ according to the device and parameters.
After running the above command with a D435i attached, the following list of topics will be available (this is a partial list; for the full one, type ros2 topic list).
This will stream relevant camera sensors and publish on the appropriate ROS topics.
Enabling accel and gyro is achieved either by adding the following parameters to the command line:
ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true enable_gyro:=true enable_accel:=true
or in runtime using the following commands:
ros2 param set /camera/camera enable_accel true
ros2 param set /camera/camera enable_gyro true
Enabling a stream adds its matching topics. For instance, enabling the gyro and accel streams adds gyro and accel sample topics (e.g. /camera/camera/gyro/sample and /camera/camera/accel/sample).
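To inspect the IMU data, you can echo the topics; for the combined topic below, unite_imu_method must be set (topic names assume the default camera name/namespace):
ros2 topic echo /camera/camera/imu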
RGBD is a new topic, publishing [RGB + Depth] in the same message (see RGBD.msg for reference). For now, it works only with depth aligned to color images, as color and depth images are synchronized by their frame time tag.
These boolean parameters should be true to enable rgbd messages:
enable_rgbd: new parameter to enable/disable the rgbd topic, changeable during runtime
align_depth.enable: align depth images to rgb images
enable_sync: let librealsense sync between frames, and get the frameset with color and depth images combined
enable_color + enable_depth: enable both color and depth sensors
The QoS of the topic itself is the same as the Depth and Color streams (SYSTEM_DEFAULT).
Example:
ros2 launch realsense2_camera rs_launch.py enable_rgbd:=true enable_sync:=true align_depth.enable:=true enable_color:=true enable_depth:=true
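To verify that combined messages are being published, you can echo the topic while suppressing the large image arrays (topic name assumes the default camera name/namespace):
ros2 topic echo /camera/camera/rgbd --no-arr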
The metadata messages store the camera's available metadata in JSON format. A dedicated script for echoing a metadata topic at runtime is provided. For instance, use the following command to echo the camera/depth/metadata topic:
python3 src/realsense-ros/realsense2_camera/scripts/echo_metadada.py /camera/camera/depth/metadata
The following post processing filters are available:
align_depth: if enabled, publishes the depth image aligned to the color image on the topic /camera/camera/aligned_depth_to_color/image_raw.
colorizer: colors the depth image; an RGB image is published on the depth topic instead of the 16-bit depth values.
pointcloud: adds a pointcloud topic /camera/camera/depth/color/points.
The texture of the pointcloud can be modified using the pointcloud.stream_filter parameter.
To include points with no texture in the pointcloud, set pointcloud.allow_no_texture_points to true.
The pointcloud is unordered by default; set pointcloud.ordered_pc to true for an ordered pointcloud.
The QoS of the pointcloud topic is set via the pointcloud.pointcloud_qos parameter:
{'name': 'pointcloud.pointcloud_qos', 'default': 'SENSOR_DATA', 'description': 'pointcloud qos'}
At launch: pointcloud.pointcloud_qos:=SENSOR_DATA
At runtime (toggle the filter off and on for the new QoS to take effect):
ros2 param set /camera/camera pointcloud.pointcloud_qos SENSOR_DATA
ros2 param set /camera/camera pointcloud.enable false
ros2 param set /camera/camera pointcloud.enable true
hdr_merge: allows the depth image to be created by merging the information from 2 consecutive frames, taken with different exposure and gain values.
depth_module.hdr_enabled: enables/disables HDR. Exposure and gain values can be set for each sequence in two ways:
At runtime: set the depth_module.sequence_id parameter, then modify depth_module.gain and depth_module.exposure (a sketch follows below).
At launch: set depth_module.hdr_enabled together with the per-sequence parameters:
depth_module.exposure.1
depth_module.gain.1
depth_module.exposure.2
depth_module.gain.2
Note: depth_module.hdr_enabled must be set to true, otherwise these parameters won't be considered.
To view the effect on the infrared image for each sequence id, use the filter_by_sequence_id.sequence_id parameter.
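A minimal runtime sketch of that sequence, with illustrative exposure/gain values (assuming the default node name):
ros2 param set /camera/camera depth_module.hdr_enabled true
ros2 param set /camera/camera depth_module.sequence_id 1
ros2 param set /camera/camera depth_module.exposure 7500
ros2 param set /camera/camera depth_module.gain 16
ros2 param set /camera/camera depth_module.sequence_id 2
ros2 param set /camera/camera depth_module.exposure 50
ros2 param set /camera/camera depth_module.gain 16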
The following filters are described in detail at https://github.com/IntelRealSense/librealsense/blob/master/doc/post-processing-filters.md:
disparity_filter: converts depth to disparity before applying other filters, and back.
spatial_filter: filters the depth image spatially.
temporal_filter: filters the depth image temporally.
hole_filling_filter: applies a hole-filling filter.
decimation_filter: reduces depth scene complexity.
Each of the above filters has its own parameters, following the naming convention <filter_name>.<parameter_name>, including a <filter_name>.enable parameter to enable/disable it.
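For example, following that convention, the decimation filter can be toggled at runtime:
ros2 param set /camera/camera decimation_filter.enable true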
hw_reset: resets the device. Call example:
ros2 service call /camera/camera/hw_reset std_srvs/srv/Empty
device_info: returns information about the device. Type ros2 interface show realsense2_camera_msgs/srv/DeviceInfo for the full list of fields. Call example:
ros2 service call /camera/camera/device_info realsense2_camera_msgs/srv/DeviceInfo
Read calibration config.
Note that reading calibration config is applicable only in Safety Service Mode
Type ros2 interface show realsense2_camera_msgs/srv/CalibConfigRead
for the full request/response fields.
Call example: ros2 service call /camera/camera/calib_config_read realsense2_camera_msgs/srv/CalibConfigRead
response: realsense2_camera_msgs.srv.CalibConfigRead_Response(success=True, error_message='', calib_config='{"calibration_config":{"camera_position":{"rotation":[[0.0,0.0,1.0],[-1.0,0.0,0.0],[0.0,-1.0,0.0]],"translation":[0.0,0.0,0.0]},"crypto_signature":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"roi_0":{"vertex_0":[0,0],"vertex_1":[0,0],"vertex_2":[0,0],"vertex_3":[0,0]},"roi_1":{"vertex_0":[0,0],"vertex_1":[0,0],"vertex_2":[0,0],"vertex_3":[0,0]},"roi_2":{"vertex_0":[0,0],"vertex_1":[0,0],"vertex_2":[0,0],"vertex_3":[0,0]},"roi_3":{"vertex_0":[0,0],"vertex_1":[0,0],"vertex_2":[0,0],"vertex_3":[0,0]},"roi_num_of_segments":0}}')
Write calibration config.
Note that writing calibration config is applicable only in Safety Service Mode
Type ros2 interface show realsense2_camera_msgs/srv/CalibConfigWrite
for the full request/response fields.
ros2 service call /camera/camera/calib_config_write realsense2_camera_msgs/srv/CalibConfigWrite "{calib_config: '{\"calibration_config\":{\"camera_position\":{\"rotation\":[[0.0,0.0,1.0],[-1.0,0.0,0.0],[0.0,-1.0,0.0]],\"translation\":[0.0,0.0,0.0]},\"crypto_signature\":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\"roi_0\":{\"vertex_0\":[0,0],\"vertex_1\":[0,0],\"vertex_2\":[0,0],\"vertex_3\":[0,0]},\"roi_1\":{\"vertex_0\":[0,0],\"vertex_1\":[0,0],\"vertex_2\":[0,0],\"vertex_3\":[0,0]},\"roi_2\":{\"vertex_0\":[0,0],\"vertex_1\":[0,0],\"vertex_2\":[0,0],\"vertex_3\":[0,0]},\"roi_3\":{\"vertex_0\":[0,0],\"vertex_1\":[0,0],\"vertex_2\":[0,0],\"vertex_3\":[0,0]},\"roi_num_of_segments\":0}}' }"
Result example: realsense2_camera_msgs.srv.CalibConfigWrite_Response(success=True, error_message='')
Type ros2 interface show realsense2_camera_msgs/action/TriggeredCalibration
for the full request/result/feedback fields.
# request
string json "calib run" # default value
---
# result
bool success
string error_msg
string calibration
float32 health
---
# feedback
float32 progress
depth_module.visual_preset: 1 # switch to visual preset #1 in depth module
depth_module.emitter_enabled: true # enable emitter in depth module
depth_module.enable_auto_exposure: true # enable AE in depth module
enable_depth: false # turn off depth stream
enable_infra1: false # turn off infra1 stream
enable_infra2: false # turn off infra2 stream
Send the calibration goal:
ros2 action send_goal /camera/camera/triggered_calibration realsense2_camera_msgs/action/TriggeredCalibration '{json: "{calib run}"}'
or even with an empty request:
ros2 action send_goal /camera/camera/triggered_calibration realsense2_camera_msgs/action/TriggeredCalibration ''
because the default behavior is already "calib run". To get the calibration progress feedback, add --feedback to the end of the command.
Result example (failure case):
success: false
error_msg: 'TriggeredCalibrationExecute: Aborted. Error: Calibration completed but algorithm failed'
calibration: '{}'
health: 0.0
Our ROS2 Wrapper node supports zero-copy communications if loaded in the same process as a subscriber node. This can reduce copy times on image/pointcloud topics, especially with big frame resolutions and high FPS.
You will need to launch a component container and load our node into it as a component, together with other component nodes. Further details on composing multiple nodes in a single process and on efficient intra-process communication can be found in the ROS2 documentation.
Start the component container:
ros2 run rclcpp_components component_container
Add the wrapper:
ros2 component load /ComponentManager realsense2_camera realsense2_camera::RealSenseNodeFactory -e use_intra_process_comms:=true
Load other component nodes (consumers of the wrapper topics) in the same way.
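For example, a hypothetical consumer using the showimage component from the ROS2 image_tools demo package (assuming it is installed; its image topic is remapped to the wrapper's color stream):
ros2 component load /ComponentManager image_tools image_tools::ShowImage -e use_intra_process_comms:=true -r image:=/camera/camera/color/image_raw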
Note: image_transport will be disabled, as it isn't supported with intra-process communication.
To get a sense of the latency reduction, a frame latency reporter tool is available via a launch file.
The launch file loads the wrapper and a frame latency reporter tool component into a single container (so the same process).
The tool prints out the frame latency (now - frame.timestamp
) per frame.
The tool is not built unless asked for. Turn on BUILD_TOOLS
during build to have it available:
colcon build --cmake-args '-DBUILD_TOOLS=ON'
The launch file accepts a parameter, intra_process_comms
, controlling whether zero-copy is turned on or not. Default is on:
ros2 launch realsense2_camera rs_intra_process_demo_launch.py intra_process_comms:=true