What is roc?
roc is a tool that helps you develop ROS 2 applications faster and more easily. It is a collection of tools for generating code, building, and testing your ROS 2 applications. It is currently based on the ROS 2 CLI.
Why roc?
roc aims to eventually be written entirely in Rust and, at some point, to no longer rely on the ROS 2 CLI. This will make the tool faster and more reliable, and will also allow it to be used on other platforms such as Windows.
Features
- Generate ROS2 packages with a template system
- Build ROS 2 packages (currently with colcon; this will eventually be replaced by a custom build system)
- Add missing features that, in my opinion, the ros2 CLI should have, such as:
  - roc frame to work with tf frames, coordinate systems, and transformations
  - roc bridge to bridge topics between different ROS 2 instances
- Add a TUI (Text User Interface) to make it easier to work with ROS 2
Notice
Almost all of this book was generated by an LLM; I have just guided it through the code. If you see something that does not match the code (or vice versa), please let me know (open a PR or an issue). This will help me guide the project further and avoid discrepancies between the docs and the code.
Installation
Install ROS2
Setup Sources
You will need to add the ROS 2 apt repository to your system. First ensure that the Ubuntu Universe repository is enabled.
sudo apt install software-properties-common
sudo add-apt-repository universe
Now add the ROS 2 GPG key with apt.
sudo apt update && sudo apt install curl -y
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
Then add the repository to your sources list.
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null
Install ROS 2 packages
Update your apt repository caches after setting up the repositories.
sudo apt update
ROS 2 packages are built on frequently updated Ubuntu systems. It is always recommended that you ensure your system is up to date before installing new packages.
sudo apt upgrade
Desktop Install (Recommended): ROS, RViz, demos, tutorials.
sudo apt install ros-humble-desktop
ROS-Base Install (Bare Bones): Communication libraries, message packages, command line tools. No GUI tools.
sudo apt install ros-humble-ros-base
Install additional RMW implementations
sudo apt install ros-humble-rmw*
Development tools: Compilers and other tools to build ROS packages
sudo apt install ros-dev-tools
Install foxglove and rosbridge
sudo apt install ros-humble-foxglove*
sudo apt install ros-humble-rosbridge*
Install ros2 tf2 tools
sudo apt install ros-humble-tf2*
Install roc
Cargo
cargo install rocc
From Source
git clone
cd rocc
cargo install --path .
ROS 2 Architecture Overview
ROS 2 (Robot Operating System 2) is a sophisticated middleware framework designed for distributed robotics applications. Understanding its layered architecture is crucial for implementing effective bindings and tools like roc.
Layered Architecture
ROS 2 follows a layered architecture that separates concerns and provides modularity:
┌─────────────────────────────────────────┐
│ User Applications │
├─────────────────────────────────────────┤
│ ROS 2 API │
│ (rclcpp, rclpy, etc.) │
├─────────────────────────────────────────┤
│ RCL Layer │
│ (ROS Client Library) │
├─────────────────────────────────────────┤
│ RMW Layer │
│ (ROS Middleware Interface) │
├─────────────────────────────────────────┤
│ DDS Implementation │
│ (Fast-DDS, Cyclone DDS, etc.) │
└─────────────────────────────────────────┘
Key Components
1. DDS Layer (Bottom)
- Purpose: Provides the actual networking and discovery mechanisms
- Examples: Fast-DDS, Cyclone DDS, Connext DDS
- Responsibilities:
- Network communication
- Service discovery
- Quality of Service (QoS) enforcement
- Data serialization/deserialization
2. RMW Layer (ROS Middleware Interface)
- Purpose: Abstract interface that isolates ROS 2 from specific DDS implementations
- Location: /opt/ros/jazzy/include/rmw/
- Key Types:
  - rmw_context_t - Middleware context
  - rmw_node_t - Node representation
  - rmw_publisher_t / rmw_subscription_t - Topic endpoints
  - rmw_qos_profile_t - Quality of Service profiles
  - rmw_topic_endpoint_info_t - Detailed endpoint information
3. RCL Layer (ROS Client Library)
- Purpose: Provides a C API that manages the ROS 2 graph and lifecycle
- Location: /opt/ros/jazzy/include/rcl/
- Key Functions:
  - rcl_init() - Initialize RCL context
  - rcl_node_init() - Create nodes
  - rcl_get_topic_names_and_types() - Graph introspection
  - rcl_get_publishers_info_by_topic() - Detailed topic information
4. Language-Specific APIs
- rclcpp: C++ client library
- rclpy: Python client library
- rclrs: Rust client library (emerging)
Why This Architecture Matters for roc
The roc tool operates primarily at the RCL and RMW layers, bypassing the higher-level language APIs to:
- Direct Access: Get raw, unfiltered information about the ROS 2 graph
- Performance: Avoid overhead of higher-level abstractions
- Completeness: Access all available metadata (QoS, GIDs, type hashes)
- Compatibility: Work consistently across different ROS 2 distributions
Discovery and Communication Flow
sequenceDiagram
participant App as roc Application
participant RCL as RCL Layer
participant RMW as RMW Layer
participant DDS as DDS Implementation
App->>RCL: rcl_init()
RCL->>RMW: rmw_init()
RMW->>DDS: Initialize DDS participant
App->>RCL: rcl_get_publishers_info_by_topic()
RCL->>RMW: rmw_get_publishers_info_by_topic()
RMW->>DDS: Query DDS discovery database
DDS-->>RMW: Publisher endpoint info
RMW-->>RCL: rmw_topic_endpoint_info_t[]
RCL-->>App: rcl_topic_endpoint_info_array_t
This architecture allows roc to access detailed information that is often abstracted away in higher-level tools, making it particularly powerful for debugging and system introspection.
RCL and RMW Layers
The RCL (ROS Client Library) and RMW (ROS Middleware) layers form the core of ROS 2's architecture. Understanding these layers is essential for implementing effective bindings and tools.
RMW Layer (ROS Middleware Interface)
Purpose and Design
The RMW layer serves as an abstraction barrier between ROS 2 and specific DDS implementations. This design allows ROS 2 to work with different middleware providers without changing upper-layer code.
Key Data Structures
Topic Endpoint Information
typedef struct rmw_topic_endpoint_info_s {
const char * node_name; // Node that owns this endpoint
const char * node_namespace; // Node's namespace
const char * topic_type; // Message type name
rosidl_type_hash_t topic_type_hash; // Hash of message definition
rmw_endpoint_type_t endpoint_type; // PUBLISHER or SUBSCRIPTION
uint8_t endpoint_gid[RMW_GID_STORAGE_SIZE]; // Global unique identifier
rmw_qos_profile_t qos_profile; // Quality of Service settings
} rmw_topic_endpoint_info_t;
This structure contains all the detailed information about a topic endpoint that roc displays in verbose mode.
QoS Profile Structure
typedef struct rmw_qos_profile_s {
rmw_qos_history_policy_e history; // KEEP_LAST, KEEP_ALL
size_t depth; // Queue depth for KEEP_LAST
rmw_qos_reliability_policy_e reliability; // RELIABLE, BEST_EFFORT
rmw_qos_durability_policy_e durability; // VOLATILE, TRANSIENT_LOCAL
rmw_time_s deadline; // Maximum time between messages
rmw_time_s lifespan; // How long messages stay valid
rmw_qos_liveliness_policy_e liveliness; // Liveliness assertion policy
rmw_time_s liveliness_lease_duration; // Liveliness lease time
bool avoid_ros_namespace_conventions; // Bypass ROS naming
} rmw_qos_profile_t;
RMW Functions Used by roc
The key RMW functions that our implementation uses:
// Get detailed publisher information
rmw_ret_t rmw_get_publishers_info_by_topic(
const rmw_node_t * node,
rcutils_allocator_t * allocator,
const char * topic_name,
bool no_mangle,
rmw_topic_endpoint_info_array_t * publishers_info
);
// Get detailed subscriber information
rmw_ret_t rmw_get_subscriptions_info_by_topic(
const rmw_node_t * node,
rcutils_allocator_t * allocator,
const char * topic_name,
bool no_mangle,
rmw_topic_endpoint_info_array_t * subscriptions_info
);
RCL Layer (ROS Client Library)
Purpose and Design
The RCL layer provides a C API that manages:
- Context initialization and cleanup
- Node lifecycle management
- Graph introspection
- Resource management
Key RCL Functions
Context and Node Management
// Initialize RCL context
rcl_ret_t rcl_init(
int argc,
char const * const * argv,
const rcl_init_options_t * options,
rcl_context_t * context
);
// Initialize a node
rcl_ret_t rcl_node_init(
rcl_node_t * node,
const char * name,
const char * namespace_,
rcl_context_t * context,
const rcl_node_options_t * options
);
Graph Introspection
// Get all topics and their types
rcl_ret_t rcl_get_topic_names_and_types(
const rcl_node_t * node,
rcutils_allocator_t * allocator,
bool no_demangle,
rcl_names_and_types_t * topic_names_and_types
);
// Count publishers for a topic
rcl_ret_t rcl_count_publishers(
const rcl_node_t * node,
const char * topic_name,
size_t * count
);
Detailed Endpoint Information
// Get detailed publisher info (wraps RMW function)
rcl_ret_t rcl_get_publishers_info_by_topic(
const rcl_node_t * node,
rcutils_allocator_t * allocator,
const char * topic_name,
bool no_mangle,
rcl_topic_endpoint_info_array_t * publishers_info
);
Type Mapping and Aliases
RCL often provides type aliases for RMW types:
// RCL aliases for RMW types
typedef rmw_topic_endpoint_info_t rcl_topic_endpoint_info_t;
typedef rmw_topic_endpoint_info_array_t rcl_topic_endpoint_info_array_t;
typedef rmw_names_and_types_t rcl_names_and_types_t;
This design means that RCL functions often directly pass through to RMW implementations.
Error Handling
Both RCL and RMW use integer return codes:
#define RCL_RET_OK 0
#define RCL_RET_ERROR 1
#define RCL_RET_BAD_ALLOC 10
#define RCL_RET_INVALID_ARGUMENT 11
#define RCL_RET_NODE_INVALID 200
Our Rust bindings convert these into Result<T, anyhow::Error> types for idiomatic error handling.
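As an illustration of that conversion (a sketch only; the check_ret helper below is hypothetical and not taken from the roc source), a return code can be mapped to a Result like this:

use anyhow::{anyhow, Result};

// Hypothetical helper: map an RCL/RMW return code to a Result.
// The constants mirror the codes listed above.
fn check_ret(ret: i32, operation: &str) -> Result<()> {
    match ret {
        0 => Ok(()),                                                    // RCL_RET_OK
        10 => Err(anyhow!("{}: memory allocation failed", operation)),  // RCL_RET_BAD_ALLOC
        11 => Err(anyhow!("{}: invalid argument", operation)),          // RCL_RET_INVALID_ARGUMENT
        200 => Err(anyhow!("{}: node is invalid", operation)),          // RCL_RET_NODE_INVALID
        other => Err(anyhow!("{}: error code {}", operation, other)),
    }
}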
Memory Management
Key Principles
- Caller allocates, caller deallocates: The caller must provide allocators and clean up resources
- Array finalization: Arrays returned by RCL/RMW must be finalized with specific functions
- String lifecycle: Strings in returned structures may have complex ownership
Example: Proper Resource Cleanup
// Initialize array
rcl_topic_endpoint_info_array_t publishers_info = rmw_get_zero_initialized_topic_endpoint_info_array();
// Get data
rcl_get_publishers_info_by_topic(node, &allocator, topic_name, false, &publishers_info);
// Use data...
// Clean up (REQUIRED)
rmw_topic_endpoint_info_array_fini(&publishers_info, &allocator);
This pattern is critical for preventing memory leaks in long-running applications like roc.
Integration with DDS
The RMW layer abstracts DDS-specific details, but understanding the mapping helps:
ROS 2 Concept | DDS Concept | Purpose |
---|---|---|
Node | DDS Participant | Process-level entity |
Publisher | DDS Publisher + DataWriter | Sends data |
Subscription | DDS Subscriber + DataReader | Receives data |
Topic | DDS Topic | Communication channel |
QoS Profile | DDS QoS Policies | Communication behavior |
GID | DDS Instance Handle | Unique endpoint ID |
This layered approach allows roc to access both high-level ROS 2 concepts and low-level DDS details through a unified interface.
Rust FFI Bindings
Creating effective Rust bindings for ROS 2's C libraries requires careful handling of Foreign Function Interface (FFI) concepts, memory management, and type safety.
Overview of Our Binding Strategy
The roc project uses a custom FFI binding approach located in the rclrs/ subdirectory. This provides direct access to RCL and RMW functions without the overhead of higher-level abstractions.
Project Structure
rclrs/
├── build.rs # Build script for bindgen
├── Cargo.toml # Crate configuration
├── wrapper.h # C header wrapper
└── src/
└── lib.rs # Rust bindings and wrappers
Build System (build.rs)
Our build script uses bindgen to automatically generate Rust bindings from C headers:
use bindgen;
use std::env;
use std::path::PathBuf;

fn main() {
    // Tell cargo to look for ROS 2 installation
    println!("cargo:rustc-link-search=native=/opt/ros/jazzy/lib");

    // Link against RCL libraries
    println!("cargo:rustc-link-lib=rcl");
    println!("cargo:rustc-link-lib=rmw");
    println!("cargo:rustc-link-lib=rcutils");

    // Generate bindings
    let bindings = bindgen::Builder::default()
        .header("wrapper.h")
        .clang_arg("-I/opt/ros/jazzy/include")
        .parse_callbacks(Box::new(bindgen::CargoCallbacks))
        .generate()
        .expect("Unable to generate bindings");

    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_path.join("bindings.rs"))
        .expect("Couldn't write bindings!");
}
Header Wrapper (wrapper.h)
We create a minimal wrapper that includes only the headers we need:
#ifndef WRAPPER_H
#define WRAPPER_H
// Core RCL headers
#include "rcl/rcl/allocator.h"
#include "rcl/rcl/context.h"
#include "rcl/rcl/graph.h"
#include "rcl/rcl/init.h"
#include "rcl/rcl/init_options.h"
#include "rcl/rcl/node.h"
// RMW headers for detailed topic information
#include "rmw/rmw/allocators.h"
#include "rmw/rmw/init.h"
#include "rmw/rmw/init_options.h"
#include "rmw/rmw/ret_types.h"
#include "rmw/rmw/types.h"
#include "rmw/rmw/topic_endpoint_info.h"
#endif // WRAPPER_H
This selective inclusion keeps compilation fast and only exposes the APIs we actually use.
Generated Bindings
The bindgen tool generates Rust equivalents for C types and functions:
C Structs → Rust Structs
// C: rmw_topic_endpoint_info_t
#[repr(C)]
pub struct rmw_topic_endpoint_info_s {
    pub node_name: *const ::std::os::raw::c_char,
    pub node_namespace: *const ::std::os::raw::c_char,
    pub topic_type: *const ::std::os::raw::c_char,
    pub topic_type_hash: rosidl_type_hash_t,
    pub endpoint_type: rmw_endpoint_type_t,
    pub endpoint_gid: [u8; 16usize],
    pub qos_profile: rmw_qos_profile_t,
}
C Enums → Rust Constants
// C: rmw_endpoint_type_e
pub const rmw_endpoint_type_e_RMW_ENDPOINT_INVALID: rmw_endpoint_type_e = 0;
pub const rmw_endpoint_type_e_RMW_ENDPOINT_PUBLISHER: rmw_endpoint_type_e = 1;
pub const rmw_endpoint_type_e_RMW_ENDPOINT_SUBSCRIPTION: rmw_endpoint_type_e = 2;
pub type rmw_endpoint_type_e = ::std::os::raw::c_uint;
C Functions → Rust Extern Functions
extern "C" {
    pub fn rcl_get_publishers_info_by_topic(
        node: *const rcl_node_t,
        allocator: *mut rcutils_allocator_t,
        topic_name: *const ::std::os::raw::c_char,
        no_mangle: bool,
        publishers_info: *mut rcl_topic_endpoint_info_array_t,
    ) -> rcl_ret_t;
}
Safe Rust Wrappers
Our implementation wraps the raw FFI with safe Rust abstractions:
String Handling
// Convert C strings to Rust strings safely
let node_name = if info.node_name.is_null() {
    "unknown".to_string()
} else {
    std::ffi::CStr::from_ptr(info.node_name)
        .to_string_lossy()
        .to_string()
};
Error Handling
// Convert C return codes to Rust Results
let ret = rcl_get_publishers_info_by_topic(
    &self.node,
    &mut allocator,
    topic_name_c.as_ptr(),
    false,
    &mut publishers_info,
);
if ret != 0 {
    return Err(anyhow!("Failed to get publishers info: {}", ret));
}
Memory Management
// Ensure proper cleanup with RAII
unsafe {
    let mut allocator = rcutils_get_default_allocator();
    let mut publishers_info: rcl_topic_endpoint_info_array_t = std::mem::zeroed();

    // ... use the data ...

    // Automatic cleanup when leaving scope
    rmw_topic_endpoint_info_array_fini(&mut publishers_info, &mut allocator);
}
Type Conversions
We provide safe conversions between C types and idiomatic Rust types:
Enum Conversions
impl EndpointType {
    fn from_rmw(endpoint_type: rmw_endpoint_type_t) -> Self {
        match endpoint_type {
            rmw_endpoint_type_e_RMW_ENDPOINT_PUBLISHER => EndpointType::Publisher,
            rmw_endpoint_type_e_RMW_ENDPOINT_SUBSCRIPTION => EndpointType::Subscription,
            _ => EndpointType::Invalid,
        }
    }
}
Complex Structure Conversions
impl QosProfile {
    fn from_rmw(qos: &rmw_qos_profile_t) -> Self {
        QosProfile {
            history: QosHistoryPolicy::from_rmw(qos.history),
            depth: qos.depth,
            reliability: QosReliabilityPolicy::from_rmw(qos.reliability),
            durability: QosDurabilityPolicy::from_rmw(qos.durability),
            deadline_sec: qos.deadline.sec,
            deadline_nsec: qos.deadline.nsec,
            // ... other fields
        }
    }
}
Challenges and Solutions
1. Null Pointer Handling
Challenge: C APIs can return null pointers.
Solution: Check for null before dereferencing.
let topic_type = if info.topic_type.is_null() {
    "unknown".to_string()
} else {
    std::ffi::CStr::from_ptr(info.topic_type).to_string_lossy().to_string()
};
2. Memory Ownership
Challenge: Complex ownership semantics between C and Rust.
Solution: Clear ownership boundaries and explicit cleanup.
// C owns the memory in the array, we just read it
let gid = std::slice::from_raw_parts(
    info.endpoint_gid.as_ptr(),
    info.endpoint_gid.len()
).to_vec(); // Copy to Rust-owned Vec
3. Type Size Mismatches
Challenge: C int vs Rust i32 vs c_int.
Solution: Use std::os::raw types consistently.
use std::os::raw::{c_char, c_int, c_uint};
4. Array Handling
Challenge: C arrays with separate size fields.
Solution: Safe iteration with bounds checking.
for i in 0..publishers_info.size {
    let info = &*(publishers_info.info_array.add(i));
    // ... process info safely
}
Testing FFI Code
FFI code requires careful testing:
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_context_creation() {
        let context = RclGraphContext::new();
        assert!(context.is_ok());
    }

    #[test]
    fn test_topic_discovery() {
        let context = RclGraphContext::new().unwrap();
        let topics = context.get_topic_names();
        assert!(topics.is_ok());
    }
}
Performance Considerations
- Minimize FFI Calls: Batch operations when possible
- Avoid String Conversions: Cache converted strings
- Memory Locality: Process data in the order it's laid out in memory
- Error Path Optimization: Fast paths for common success cases
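As a sketch of the "cache converted strings" point above (not something the current bindings implement; the StringCache type is hypothetical), repeated C-string conversions could be memoized:

use std::collections::HashMap;
use std::ffi::CStr;
use std::os::raw::c_char;

// Hypothetical cache for repeated C-string conversions.
#[derive(Default)]
struct StringCache {
    entries: HashMap<Vec<u8>, String>,
}

impl StringCache {
    // Safety: `ptr` must be null or point to a valid NUL-terminated string.
    unsafe fn get(&mut self, ptr: *const c_char) -> String {
        if ptr.is_null() {
            return "unknown".to_string();
        }
        let bytes = CStr::from_ptr(ptr).to_bytes().to_vec();
        self.entries
            .entry(bytes)
            .or_insert_with_key(|b| String::from_utf8_lossy(b).into_owned())
            .clone()
    }
}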
This FFI design provides the foundation for roc's powerful introspection capabilities while maintaining safety and performance.
Graph Context Implementation
The RclGraphContext is the core component that manages ROS 2 graph introspection in roc. It provides a safe Rust wrapper around RCL and RMW APIs for discovering and querying the ROS 2 computation graph.
Core Design Principles
1. RAII (Resource Acquisition Is Initialization)
The context automatically manages RCL resources:
pub struct RclGraphContext {
    context: rcl_context_t,  // RCL context handle
    node: rcl_node_t,        // Minimal node for graph queries
    is_initialized: bool,    // Safety flag
}
2. Direct DDS Discovery
Unlike ros2 CLI tools that may use the daemon, roc always performs direct DDS discovery:
/// Note: This implementation always performs direct DDS discovery
/// (equivalent to --no-daemon)
pub fn new() -> Result<Self> {
    Self::new_with_discovery(std::time::Duration::from_millis(150))
}
3. Type Safety
All unsafe C interactions are wrapped in safe Rust APIs that return Result types.
Initialization Process
The initialization follows a specific sequence required by RCL:
#![allow(unused)] fn main() { pub fn new_with_discovery(discovery_time: std::time::Duration) -> Result<Self> { unsafe { // 1. Read ROS_DOMAIN_ID from environment let domain_id = env::var("ROS_DOMAIN_ID") .ok() .and_then(|s| s.parse::<usize>().ok()) .unwrap_or(0); // 2. Initialize RCL init options let mut init_options = rcl_get_zero_initialized_init_options(); let allocator = rcutils_get_default_allocator(); let ret = rcl_init_options_init(&mut init_options, allocator); if ret != 0 { return Err(anyhow!("Failed to initialize RCL init options: {}", ret)); } // 3. Configure RMW init options with domain ID let rmw_init_options = rcl_init_options_get_rmw_init_options(&mut init_options); if rmw_init_options.is_null() { return Err(anyhow!("Failed to get RMW init options")); } (*rmw_init_options).domain_id = domain_id; // 4. Initialize RCL context let mut context = rcl_get_zero_initialized_context(); let ret = rcl_init(0, ptr::null_mut(), &init_options, &mut context); if ret != 0 { return Err(anyhow!("Failed to initialize RCL: {}", ret)); } // 5. Create minimal node for graph queries let mut node = rcl_get_zero_initialized_node(); let node_name = CString::new("roc_graph_node")?; let namespace = CString::new("/")?; let node_options = rcl_node_get_default_options(); let ret = rcl_node_init( &mut node, node_name.as_ptr(), namespace.as_ptr(), &mut context, &node_options, ); if ret != 0 { rcl_shutdown(&mut context); return Err(anyhow!("Failed to initialize node: {}", ret)); } // 6. Wait for DDS discovery let graph_context = RclGraphContext { context, node, is_initialized: true }; graph_context.wait_for_graph_discovery(discovery_time)?; Ok(graph_context) } } }
Graph Discovery Operations
Basic Topic Listing
#![allow(unused)] fn main() { pub fn get_topic_names(&self) -> Result<Vec<String>> { if !self.is_valid() { return Err(anyhow!("RCL context is not valid")); } unsafe { let mut allocator = rcutils_get_default_allocator(); let mut topic_names_and_types = rcl_names_and_types_t { names: rcutils_get_zero_initialized_string_array(), types: ptr::null_mut(), }; let ret = rcl_get_topic_names_and_types( &self.node, &mut allocator as *mut _, false, // no_demangle: use ROS topic name conventions &mut topic_names_and_types, ); if ret != 0 { return Err(anyhow!("Failed to get topic names: {}", ret)); } // Convert C string array to Rust Vec<String> let mut result = Vec::new(); for i in 0..topic_names_and_types.names.size { if !topic_names_and_types.names.data.add(i).is_null() { let name_ptr = *topic_names_and_types.names.data.add(i); if !name_ptr.is_null() { let name_cstr = std::ffi::CStr::from_ptr(name_ptr); if let Ok(name_str) = name_cstr.to_str() { result.push(name_str.to_string()); } } } } // Critical: clean up allocated memory rcl_names_and_types_fini(&mut topic_names_and_types); Ok(result) } } }
Counting Publishers/Subscribers
#![allow(unused)] fn main() { pub fn count_publishers(&self, topic_name: &str) -> Result<usize> { if !self.is_valid() { return Err(anyhow!("RCL context is not valid")); } let topic_name_c = CString::new(topic_name)?; unsafe { let mut count: usize = 0; let ret = rcl_count_publishers( &self.node, topic_name_c.as_ptr(), &mut count ); if ret != 0 { return Err(anyhow!("Failed to count publishers: {}", ret)); } Ok(count) } } }
Detailed Endpoint Information
The most complex operation is getting detailed endpoint information with QoS profiles:
#![allow(unused)] fn main() { pub fn get_publishers_info(&self, topic_name: &str) -> Result<Vec<TopicEndpointInfo>> { if !self.is_valid() { return Err(anyhow!("RCL context is not valid")); } let topic_name_c = CString::new(topic_name)?; unsafe { let mut allocator = rcutils_get_default_allocator(); let mut publishers_info: rcl_topic_endpoint_info_array_t = std::mem::zeroed(); let ret = rcl_get_publishers_info_by_topic( &self.node, &mut allocator, topic_name_c.as_ptr(), false, // no_mangle: follow ROS conventions &mut publishers_info, ); if ret != 0 { return Err(anyhow!("Failed to get publishers info: {}", ret)); } // Process each endpoint info structure let mut result = Vec::new(); for i in 0..publishers_info.size { let info = &*(publishers_info.info_array.add(i)); // Extract and convert all fields safely let endpoint_info = TopicEndpointInfo { node_name: self.extract_string(info.node_name)?, node_namespace: self.extract_string(info.node_namespace)?, topic_type: self.extract_string(info.topic_type)?, topic_type_hash: format_topic_type_hash(&info.topic_type_hash), endpoint_type: EndpointType::from_rmw(info.endpoint_type), gid: self.extract_gid(&info.endpoint_gid), qos_profile: QosProfile::from_rmw(&info.qos_profile), }; result.push(endpoint_info); } // Critical: cleanup allocated memory rmw_topic_endpoint_info_array_fini(&mut publishers_info, &mut allocator); Ok(result) } } }
Memory Management Strategy
Allocation Pattern
- Zero-initialize all structures before use
- Pass allocators to RCL/RMW functions
- Extract/copy data before cleanup
- Finalize structures to free memory
Helper Methods for Safe Extraction
impl RclGraphContext {
    unsafe fn extract_string(&self, ptr: *const c_char) -> Result<String> {
        if ptr.is_null() {
            Ok("unknown".to_string())
        } else {
            Ok(std::ffi::CStr::from_ptr(ptr).to_string_lossy().to_string())
        }
    }

    unsafe fn extract_gid(&self, gid_array: &[u8; 16]) -> Vec<u8> {
        gid_array.to_vec() // Copy the array to owned Vec
    }
}
Error Handling and Validation
Context Validation
pub fn is_valid(&self) -> bool {
    if !self.is_initialized {
        return false;
    }
    unsafe {
        rcl_context_is_valid(&self.context) && rcl_node_is_valid(&self.node)
    }
}
Comprehensive Error Mapping
fn map_rcl_error(ret: i32, operation: &str) -> anyhow::Error {
    match ret {
        0 => panic!("Success code passed to error mapper"),
        1 => anyhow!("{}: Generic error", operation),
        10 => anyhow!("{}: Memory allocation failed", operation),
        11 => anyhow!("{}: Invalid argument", operation),
        200 => anyhow!("{}: Node is invalid", operation),
        _ => anyhow!("{}: Unknown error code {}", operation, ret),
    }
}
Resource Cleanup (Drop Implementation)
Proper cleanup is critical for long-running applications:
impl Drop for RclGraphContext {
    fn drop(&mut self) {
        if self.is_initialized {
            unsafe {
                // Order matters: node before context
                if rcl_node_is_valid(&self.node) {
                    rcl_node_fini(&mut self.node);
                }
                if rcl_context_is_valid(&self.context) {
                    rcl_shutdown(&mut self.context);
                }
                self.is_initialized = false;
            }
        }
    }
}
Discovery Timing
Since we use direct DDS discovery, we must wait for the discovery protocol:
fn wait_for_graph_discovery(&self, discovery_time: std::time::Duration) -> Result<()> {
    if !self.is_valid() {
        return Err(anyhow!("RCL context is not valid"));
    }
    // DDS discovery is asynchronous - we need to wait for network convergence
    std::thread::sleep(discovery_time);
    Ok(())
}
The default 150ms timeout balances discovery completeness with startup speed.
Thread Safety
The RclGraphContext is not thread-safe. RCL contexts and nodes are not designed for concurrent access. For multi-threaded applications, create separate contexts per thread or use synchronization primitives.
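A minimal sketch of the per-thread-context approach (reusing RclGraphContext::new() and get_topic_names() from above; the function name here is hypothetical):

use std::thread;

// Sketch: each worker thread owns its own RclGraphContext; nothing is shared.
fn list_topics_in_parallel() -> anyhow::Result<()> {
    let workers: Vec<_> = (0..2)
        .map(|_| {
            thread::spawn(|| -> anyhow::Result<Vec<String>> {
                let ctx = RclGraphContext::new()?; // per-thread context
                ctx.get_topic_names()
            })
        })
        .collect();

    for worker in workers {
        let topics = worker.join().expect("worker thread panicked")?;
        println!("discovered {} topics", topics.len());
    }
    Ok(())
}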
Performance Characteristics
- Initialization: ~150ms (dominated by DDS discovery)
- Topic listing: ~1-5ms (depends on graph size)
- Detailed queries: ~5-20ms (depends on topic complexity)
- Memory usage: ~1MB base + proportional to graph size
This implementation provides the foundation for all of roc's graph introspection capabilities while maintaining safety and performance.
Topic Information System
The topic information system in roc provides comprehensive details about ROS 2 topics, including basic metadata and detailed endpoint information with QoS profiles. This chapter explains how the system works and what information it provides.
Command Structure
The topic info command follows this pattern:
roc topic info <topic_name> [--verbose]
- Basic mode: Shows topic type, publisher count, and subscriber count
- Verbose mode: Adds detailed endpoint information including QoS profiles, GIDs, and type hashes
Information Hierarchy
Basic Information
Type: std_msgs/msg/String
Publisher count: 1
Subscription count: 1
This basic information comes from:
- rcl_get_topic_names_and_types() - for the topic type
- rcl_count_publishers() - for the publisher count
- rcl_count_subscribers() - for the subscriber count
Detailed Information (Verbose Mode)
Publishers:
Node name: talker
Node namespace: /
Topic type: std_msgs/msg/String
Topic type hash: RIHS01_df668c740482bbd48fb39d76a70dfd4bd59db1288021743503259e948f6b1a18
Endpoint type: PUBLISHER
GID: 01.0f.ba.ec.43.55.39.96.00.00.00.00.00.00.14.03
QoS profile:
Reliability: RELIABLE
History (KEEP_LAST): 10
Durability: VOLATILE
Lifespan: Infinite
Deadline: Infinite
Liveliness: AUTOMATIC
Liveliness lease duration: Infinite
Data Flow Architecture
graph TD
A[User Command] --> B[Command Parser]
B --> C[RclGraphContext::new]
C --> D[RCL Initialization]
D --> E[DDS Discovery Wait]
E --> F{Verbose Mode?}
F -->|No| G[Basic Info Queries]
F -->|Yes| H[Detailed Info Queries]
G --> I[Display Basic Results]
H --> J[Extract & Convert Data]
J --> K[Display Detailed Results]
Implementation Details
Basic Information Collection
The basic mode uses simple counting operations:
#![allow(unused)] fn main() { fn run_command(matches: ArgMatches, common_args: CommonTopicArgs) -> Result<()> { let topic_name = matches.get_one::<String>("topic_name")?; let verbose = matches.get_flag("verbose"); let create_context = || -> Result<RclGraphContext> { RclGraphContext::new() .map_err(|e| anyhow!("Failed to initialize RCL context: {}", e)) }; // Get topic type let topic_type = { let context = create_context()?; let topics_and_types = context.get_topic_names_and_types()?; topics_and_types.iter() .find(|(name, _)| name == topic_name) .map(|(_, type_name)| type_name.clone()) .ok_or_else(|| anyhow!("Topic '{}' not found", topic_name))? }; // Get counts let publisher_count = create_context()?.count_publishers(topic_name)?; let subscriber_count = create_context()?.count_subscribers(topic_name)?; println!("Type: {}", topic_type); println!("Publisher count: {}", publisher_count); println!("Subscription count: {}", subscriber_count); // ... verbose mode handling } }
Detailed Information Collection
For verbose mode, we query detailed endpoint information:
#![allow(unused)] fn main() { if verbose { let publishers_info = create_context()?.get_publishers_info(topic_name)?; let subscribers_info = create_context()?.get_subscribers_info(topic_name)?; println!("\nPublishers:"); for pub_info in publishers_info { display_endpoint_info(&pub_info); } println!("\nSubscribers:"); for sub_info in subscribers_info { display_endpoint_info(&sub_info); } } }
Data Structures
TopicEndpointInfo Structure
#[derive(Debug, Clone)]
pub struct TopicEndpointInfo {
    pub node_name: String,           // Node that owns this endpoint
    pub node_namespace: String,      // Node's namespace
    pub topic_type: String,          // Message type name
    pub topic_type_hash: String,     // Hash of message definition
    pub endpoint_type: EndpointType, // PUBLISHER or SUBSCRIPTION
    pub gid: Vec<u8>,                // Global unique identifier
    pub qos_profile: QosProfile,     // Quality of Service settings
}
QosProfile Structure
#![allow(unused)] fn main() { #[derive(Debug, Clone)] pub struct QosProfile { pub history: QosHistoryPolicy, // KEEP_LAST, KEEP_ALL pub depth: usize, // Queue depth for KEEP_LAST pub reliability: QosReliabilityPolicy, // RELIABLE, BEST_EFFORT pub durability: QosDurabilityPolicy, // VOLATILE, TRANSIENT_LOCAL pub deadline_sec: u64, // Deadline seconds pub deadline_nsec: u64, // Deadline nanoseconds pub lifespan_sec: u64, // Lifespan seconds pub lifespan_nsec: u64, // Lifespan nanoseconds pub liveliness: QosLivelinessPolicy, // Liveliness policy pub liveliness_lease_duration_sec: u64, // Lease duration seconds pub liveliness_lease_duration_nsec: u64, // Lease duration nanoseconds pub avoid_ros_namespace_conventions: bool, // Bypass ROS naming } }
Information Sources and Mapping
RCL/RMW Source Mapping
Display Field | RCL/RMW Source | Notes |
---|---|---|
Type | rcl_get_topic_names_and_types() | Basic topic type |
Publisher count | rcl_count_publishers() | Simple count |
Subscription count | rcl_count_subscribers() | Simple count |
Node name | rmw_topic_endpoint_info_t.node_name | From detailed query |
Node namespace | rmw_topic_endpoint_info_t.node_namespace | From detailed query |
Topic type hash | rmw_topic_endpoint_info_t.topic_type_hash | Message definition hash |
Endpoint type | rmw_topic_endpoint_info_t.endpoint_type | PUBLISHER/SUBSCRIPTION |
GID | rmw_topic_endpoint_info_t.endpoint_gid | DDS global identifier |
QoS profile | rmw_topic_endpoint_info_t.qos_profile | Complete QoS settings |
Topic Type Hash Format
The topic type hash uses the RIHS (ROS Interface Hash Standard) format:
RIHS01_<hex_hash>
Where:
- RIHS01 indicates version 1 of the ROS Interface Hash Standard
- <hex_hash> is the SHA-256 hash of the message definition
GID Format
Global Identifiers (GIDs) are displayed as dot-separated hexadecimal:
01.0f.ba.ec.43.55.39.96.00.00.00.00.00.00.14.03
This 16-byte identifier uniquely identifies the DDS endpoint.
QoS Policy Interpretation
Reliability
- RELIABLE: Guarantees delivery, may retry
- BEST_EFFORT: Attempts delivery, may lose messages
- SYSTEM_DEFAULT: Uses DDS implementation default
History
- KEEP_LAST: Keep only the last N messages (N = depth)
- KEEP_ALL: Keep all messages subject to resource limits
- SYSTEM_DEFAULT: Uses DDS implementation default
Durability
- VOLATILE: Messages not persisted
- TRANSIENT_LOCAL: Messages persisted for late-joining subscribers
- SYSTEM_DEFAULT: Uses DDS implementation default
Liveliness
- AUTOMATIC: DDS automatically asserts liveliness
- MANUAL_BY_TOPIC: Application must assert liveliness per topic
- SYSTEM_DEFAULT: Uses DDS implementation default
Duration Values
Duration values are displayed in nanoseconds:
- Infinite: 9223372036854775807 nanoseconds (effectively infinite)
- Zero: 0 nanoseconds (immediate)
- Finite: actual nanosecond values
Error Handling
Topic Not Found
When a topic doesn't exist:
Error: Topic '/nonexistent' not found. [No daemon running]
The error message includes daemon status for compatibility with the ros2 CLI.
Context Initialization Failures
If RCL initialization fails:
Error: Failed to initialize RCL context: <error_code>
Discovery Timeouts
If no endpoints are found but the topic exists, this typically indicates:
- Publishers/subscribers haven't started discovery yet
- Network connectivity issues
- Domain ID mismatches
Performance Considerations
Context Reuse Strategy
The current implementation creates a new context for each query operation. This trade-off:
- Pros: Ensures fresh discovery information, prevents stale state
- Cons: ~150ms overhead per context creation
Future optimizations could cache contexts with invalidation strategies.
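One possible shape for such a cache (purely a sketch; nothing like this exists in roc today) keeps one context and recreates it once it is older than a configurable maximum age:

use std::time::{Duration, Instant};

// Hypothetical cache: reuse one RclGraphContext until it goes stale,
// trading freshness of discovery data for the ~150ms creation cost.
struct CachedGraphContext {
    context: RclGraphContext,
    created_at: Instant,
    max_age: Duration,
}

impl CachedGraphContext {
    fn new(max_age: Duration) -> anyhow::Result<Self> {
        Ok(Self {
            context: RclGraphContext::new()?,
            created_at: Instant::now(),
            max_age,
        })
    }

    // Return the cached context, recreating it if it is stale or invalid.
    fn get(&mut self) -> anyhow::Result<&RclGraphContext> {
        if self.created_at.elapsed() > self.max_age || !self.context.is_valid() {
            self.context = RclGraphContext::new()?; // pays the discovery cost again
            self.created_at = Instant::now();
        }
        Ok(&self.context)
    }
}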
Memory Usage
- Basic queries: ~1MB (RCL/DDS overhead)
- Detailed queries: +~10KB per endpoint (QoS and string data)
- Peak usage during array processing before conversion to Rust types
Discovery Timing
The 150ms discovery timeout balances:
- Completeness: Enough time for DDS discovery protocol
- Responsiveness: Fast enough for interactive use
- Reliability: Consistent results across different DDS implementations
This information system provides the foundation for understanding ROS 2 system behavior and debugging communication issues.
QoS Profile Handling
Quality of Service (QoS) profiles are critical for ROS 2 communication behavior. The roc tool provides detailed QoS information that helps developers understand and debug communication patterns, compatibility issues, and performance characteristics.
QoS Overview
QoS profiles define the communication behavior between publishers and subscribers. They consist of several policies that must be compatible between endpoints for successful communication.
QoS Policies in roc
roc displays the following QoS policies:
- Reliability - Message delivery guarantees
- History - Message queue behavior
- Durability - Message persistence
- Deadline - Maximum time between messages
- Lifespan - How long messages remain valid
- Liveliness - Endpoint aliveness checking
Data Structure Implementation
Rust QoS Representation
#[derive(Debug, Clone)]
pub struct QosProfile {
    pub history: QosHistoryPolicy,
    pub depth: usize,
    pub reliability: QosReliabilityPolicy,
    pub durability: QosDurabilityPolicy,
    pub deadline_sec: u64,
    pub deadline_nsec: u64,
    pub lifespan_sec: u64,
    pub lifespan_nsec: u64,
    pub liveliness: QosLivelinessPolicy,
    pub liveliness_lease_duration_sec: u64,
    pub liveliness_lease_duration_nsec: u64,
    pub avoid_ros_namespace_conventions: bool,
}
Policy Enumerations
#![allow(unused)] fn main() { #[derive(Debug, Clone)] pub enum QosReliabilityPolicy { SystemDefault, // Use DDS implementation default Reliable, // Guarantee delivery BestEffort, // Best effort delivery Unknown, // Unrecognized value BestAvailable, // Match majority of endpoints } #[derive(Debug, Clone)] pub enum QosHistoryPolicy { SystemDefault, // Use DDS implementation default KeepLast, // Keep last N messages KeepAll, // Keep all messages Unknown, // Unrecognized value } #[derive(Debug, Clone)] pub enum QosDurabilityPolicy { SystemDefault, // Use DDS implementation default TransientLocal, // Persist for late joiners Volatile, // Don't persist Unknown, // Unrecognized value BestAvailable, // Match majority of endpoints } #[derive(Debug, Clone)] pub enum QosLivelinessPolicy { SystemDefault, // Use DDS implementation default Automatic, // DDS manages liveliness ManualByNode, // Application asserts per node (deprecated) ManualByTopic, // Application asserts per topic Unknown, // Unrecognized value BestAvailable, // Match majority of endpoints } }
Conversion from RMW Types
From C Enums to Rust Enums
#![allow(unused)] fn main() { impl QosReliabilityPolicy { fn from_rmw(reliability: rmw_qos_reliability_policy_e) -> Self { match reliability { rmw_qos_reliability_policy_e_RMW_QOS_POLICY_RELIABILITY_SYSTEM_DEFAULT => QosReliabilityPolicy::SystemDefault, rmw_qos_reliability_policy_e_RMW_QOS_POLICY_RELIABILITY_RELIABLE => QosReliabilityPolicy::Reliable, rmw_qos_reliability_policy_e_RMW_QOS_POLICY_RELIABILITY_BEST_EFFORT => QosReliabilityPolicy::BestEffort, rmw_qos_reliability_policy_e_RMW_QOS_POLICY_RELIABILITY_BEST_AVAILABLE => QosReliabilityPolicy::BestAvailable, _ => QosReliabilityPolicy::Unknown, } } } }
Complete QoS Profile Conversion
#![allow(unused)] fn main() { impl QosProfile { fn from_rmw(qos: &rmw_qos_profile_t) -> Self { QosProfile { history: QosHistoryPolicy::from_rmw(qos.history), depth: qos.depth, reliability: QosReliabilityPolicy::from_rmw(qos.reliability), durability: QosDurabilityPolicy::from_rmw(qos.durability), deadline_sec: qos.deadline.sec, deadline_nsec: qos.deadline.nsec, lifespan_sec: qos.lifespan.sec, lifespan_nsec: qos.lifespan.nsec, liveliness: QosLivelinessPolicy::from_rmw(qos.liveliness), liveliness_lease_duration_sec: qos.liveliness_lease_duration.sec, liveliness_lease_duration_nsec: qos.liveliness_lease_duration.nsec, avoid_ros_namespace_conventions: qos.avoid_ros_namespace_conventions, } } } }
Display Formatting
Duration Formatting
Duration values require special formatting because they can represent:
- Infinite duration: 0x7FFFFFFFFFFFFFFF seconds and nanoseconds
- Unspecified duration: 0 seconds and nanoseconds
- Specific duration: actual time values
impl QosProfile {
    pub fn format_duration(&self, sec: u64, nsec: u64) -> String {
        if sec == 0x7FFFFFFFFFFFFFFF && nsec == 0x7FFFFFFFFFFFFFFF {
            "Infinite".to_string()
        } else if sec == 0 && nsec == 0 {
            "0 nanoseconds".to_string()
        } else {
            format!("{} nanoseconds", sec * 1_000_000_000 + nsec)
        }
    }
}
Policy Display
#![allow(unused)] fn main() { impl QosReliabilityPolicy { pub fn to_string(&self) -> &'static str { match self { QosReliabilityPolicy::SystemDefault => "SYSTEM_DEFAULT", QosReliabilityPolicy::Reliable => "RELIABLE", QosReliabilityPolicy::BestEffort => "BEST_EFFORT", QosReliabilityPolicy::Unknown => "UNKNOWN", QosReliabilityPolicy::BestAvailable => "BEST_AVAILABLE", } } } }
QoS Policy Details
Reliability Policy
RELIABLE
- Guarantees message delivery
- Uses acknowledgments and retransmissions
- Higher bandwidth and latency overhead
- Suitable for critical data
BEST_EFFORT
- Attempts delivery without guarantees
- No acknowledgments or retransmissions
- Lower bandwidth and latency
- Suitable for high-frequency sensor data
Example output:
Reliability: RELIABLE
History Policy
KEEP_LAST
- Maintains a queue of the last N messages
- Depth field indicates queue size
- Older messages are discarded when queue is full
- Most common for real-time systems
KEEP_ALL
- Attempts to deliver all messages
- Subject to resource limits
- Can cause memory growth under high load
- Suitable when no data loss is acceptable
Example output:
History (KEEP_LAST): 10
Durability Policy
VOLATILE
- Messages exist only while publisher is active
- Late-joining subscribers miss earlier messages
- Default for most applications
TRANSIENT_LOCAL
- Messages are stored for late-joining subscribers
- Publisher maintains message history
- Useful for configuration or status topics
Example output:
Durability: TRANSIENT_LOCAL
Deadline Policy
Specifies the maximum expected time between consecutive messages.
Infinite (default)
- No deadline constraint
- Publisher can send at any rate
Finite deadline
- Publisher commits to sending within deadline
- Subscriber can detect missed deadlines
- Useful for real-time systems
Example output:
Deadline: Infinite
Deadline: 100000000 nanoseconds # 100ms
Lifespan Policy
Defines how long messages remain valid after publication.
Infinite (default)
- Messages never expire
- Suitable for persistent data
Finite lifespan
- Messages expire after specified time
- Useful for time-sensitive data
Example output:
Lifespan: 5000000000 nanoseconds # 5 seconds
Liveliness Policy
Determines how endpoint "aliveness" is maintained and monitored.
AUTOMATIC
- DDS automatically maintains liveliness
- Most common and recommended setting
MANUAL_BY_TOPIC
- Application must explicitly assert liveliness
- Provides fine-grained control
- Used in safety-critical systems
Example output:
Liveliness: AUTOMATIC
Liveliness lease duration: Infinite
QoS Compatibility
Compatibility Rules
For successful communication, QoS policies must be compatible:
Policy | Publisher | Subscriber | Compatible? |
---|---|---|---|
Reliability | RELIABLE | RELIABLE | ✅ |
Reliability | RELIABLE | BEST_EFFORT | ✅ |
Reliability | BEST_EFFORT | RELIABLE | ❌ |
Reliability | BEST_EFFORT | BEST_EFFORT | ✅ |
Policy | Publisher | Subscriber | Compatible? |
---|---|---|---|
Durability | TRANSIENT_LOCAL | TRANSIENT_LOCAL | ✅ |
Durability | TRANSIENT_LOCAL | VOLATILE | ✅ |
Durability | VOLATILE | TRANSIENT_LOCAL | ❌ |
Durability | VOLATILE | VOLATILE | ✅ |
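These rules can be expressed directly against the policy enums shown earlier. The helpers below are a sketch (the function names are not part of roc) that encodes only the two tables above and ignores SYSTEM_DEFAULT and BEST_AVAILABLE:

// Sketch: a pair is incompatible when the subscriber requests a stronger
// guarantee than the publisher offers (the ❌ rows in the tables above).
fn reliability_compatible(pub_qos: &QosReliabilityPolicy, sub_qos: &QosReliabilityPolicy) -> bool {
    !matches!(
        (pub_qos, sub_qos),
        (QosReliabilityPolicy::BestEffort, QosReliabilityPolicy::Reliable)
    )
}

fn durability_compatible(pub_qos: &QosDurabilityPolicy, sub_qos: &QosDurabilityPolicy) -> bool {
    !matches!(
        (pub_qos, sub_qos),
        (QosDurabilityPolicy::Volatile, QosDurabilityPolicy::TransientLocal)
    )
}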
Common QoS Profiles
Sensor Data Profile
Reliability: BEST_EFFORT
History (KEEP_LAST): 5
Durability: VOLATILE
Deadline: Infinite
Lifespan: Infinite
Liveliness: AUTOMATIC
Parameter Profile
Reliability: RELIABLE
History (KEEP_LAST): 1000
Durability: VOLATILE
Deadline: Infinite
Lifespan: Infinite
Liveliness: AUTOMATIC
Services Profile
Reliability: RELIABLE
History (KEEP_LAST): 10
Durability: VOLATILE
Deadline: Infinite
Lifespan: Infinite
Liveliness: AUTOMATIC
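For reference, the "Sensor Data Profile" above can be written out as a QosProfile value using the structure defined earlier (a sketch; the infinite durations use the 0x7FFFFFFFFFFFFFFF sentinel that format_duration displays as "Infinite"):

// Sketch: the sensor-data profile expressed as a QosProfile value.
fn sensor_data_profile() -> QosProfile {
    const INFINITE: u64 = 0x7FFF_FFFF_FFFF_FFFF; // sentinel treated as "Infinite"
    QosProfile {
        history: QosHistoryPolicy::KeepLast,
        depth: 5,
        reliability: QosReliabilityPolicy::BestEffort,
        durability: QosDurabilityPolicy::Volatile,
        deadline_sec: INFINITE,
        deadline_nsec: INFINITE,
        lifespan_sec: INFINITE,
        lifespan_nsec: INFINITE,
        liveliness: QosLivelinessPolicy::Automatic,
        liveliness_lease_duration_sec: INFINITE,
        liveliness_lease_duration_nsec: INFINITE,
        avoid_ros_namespace_conventions: false,
    }
}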
Debugging QoS Issues
Common Problems
No Communication
- Check reliability compatibility
- Verify durability compatibility
- Ensure deadline constraints are met
High Latency
- RELIABLE policy adds overhead
- Large history depth increases processing
- Network congestion from retransmissions
Memory Usage
- KEEP_ALL history can grow unbounded
- TRANSIENT_LOCAL stores message history
- Large depth values consume memory
Using roc for QoS Debugging
1. Check endpoint QoS: roc topic info /my_topic --verbose
2. Compare publisher and subscriber QoS: Look for compatibility issues in the output
3. Monitor over time: Run repeatedly to see if QoS settings change
4. Verify against expectations: Compare displayed QoS with application configuration
Performance Impact
Policy Performance Characteristics
Policy | Bandwidth | Latency | Memory | CPU |
---|---|---|---|---|
RELIABLE | High | Higher | Medium | Higher |
BEST_EFFORT | Low | Lower | Low | Lower |
KEEP_ALL | - | - | High | Medium |
KEEP_LAST | - | - | Low | Low |
TRANSIENT_LOCAL | Medium | - | High | Medium |
Optimization Guidelines
- Use BEST_EFFORT for high-frequency sensor data
- Use RELIABLE for commands and critical data
- Keep history depth small for real-time performance
- Use VOLATILE durability unless persistence is needed
- Set realistic deadlines to detect communication issues
The QoS system in roc provides essential visibility into ROS 2 communication behavior, enabling developers to optimize performance and debug connectivity issues.
Endpoint Discovery
Endpoint discovery is the process by which roc finds and identifies publishers and subscribers in the ROS 2 system. This chapter explains how the discovery mechanism works, the information it provides, and how it differs from daemon-based approaches.
Discovery Architecture
Direct DDS Discovery vs Daemon
roc uses direct DDS discovery, which differs from the ros2 CLI daemon approach:
Approach | Mechanism | Pros | Cons |
---|---|---|---|
Direct DDS (roc) | Directly queries DDS discovery database | Always current, no daemon dependency | Slower startup, repeated discovery overhead |
Daemon (ros2 cli) | Queries centralized daemon cache | Fast queries, shared discovery | Stale data possible, daemon dependency |
Discovery Flow
sequenceDiagram
participant roc as roc Command
participant RCL as RCL Layer
participant DDS as DDS Discovery
participant Net as Network
roc->>RCL: rcl_init()
RCL->>DDS: Initialize participant
DDS->>Net: Send participant announcement
Net->>DDS: Receive peer announcements
DDS->>DDS: Build discovery database
Note over DDS: Discovery timeout (150ms)
roc->>RCL: rcl_get_publishers_info_by_topic()
RCL->>DDS: Query discovery database
DDS-->>RCL: Endpoint information
RCL-->>roc: Topic endpoint details
Discovery Timing
Initialization Sequence
#![allow(unused)] fn main() { pub fn new_with_discovery(discovery_time: std::time::Duration) -> Result<Self> { // 1. Initialize RCL context and node let graph_context = RclGraphContext { context, node, is_initialized: true }; // 2. Wait for DDS discovery to converge graph_context.wait_for_graph_discovery(discovery_time)?; Ok(graph_context) } fn wait_for_graph_discovery(&self, discovery_time: std::time::Duration) -> Result<()> { // DDS discovery is asynchronous - wait for network convergence std::thread::sleep(discovery_time); Ok(()) } }
Discovery Timeout Selection
The default 150ms timeout balances several factors:
Too Short (< 50ms)
- May miss endpoints that haven't completed discovery
- Inconsistent results across runs
- Network-dependent behavior
Optimal (100-200ms)
- Allows most DDS implementations to converge
- Reasonable for interactive use
- Reliable across different networks
Too Long (> 500ms)
- Slow interactive response
- Diminishing returns for completeness
- User experience degradation
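When completeness matters more than startup time, the discovery window can be widened via new_with_discovery (shown in the Graph Context chapter). A short usage sketch, with a hypothetical helper name:

use std::time::Duration;

// Sketch: compare the default ~150 ms window with a wider 500 ms window.
fn compare_discovery_windows() -> anyhow::Result<()> {
    let quick = RclGraphContext::new()?;
    let thorough = RclGraphContext::new_with_discovery(Duration::from_millis(500))?;

    println!("quick view:    {} topics", quick.get_topic_names()?.len());
    println!("thorough view: {} topics", thorough.get_topic_names()?.len());
    Ok(())
}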
Endpoint Information Structure
Complete Endpoint Data
pub struct TopicEndpointInfo {
    pub node_name: String,           // ROS node name
    pub node_namespace: String,      // ROS namespace
    pub topic_type: String,          // Message type
    pub topic_type_hash: String,     // Message definition hash
    pub endpoint_type: EndpointType, // PUBLISHER/SUBSCRIPTION
    pub gid: Vec<u8>,                // DDS Global ID
    pub qos_profile: QosProfile,     // Complete QoS settings
}
Endpoint Type Classification
#[derive(Debug, Clone)]
pub enum EndpointType {
    Publisher,    // Sends messages
    Subscription, // Receives messages
    Invalid,      // Error state
}
Discovery Data Sources
RCL Discovery Functions
Basic Topology
// Get all topics and types
rcl_ret_t rcl_get_topic_names_and_types(
const rcl_node_t * node,
rcutils_allocator_t * allocator,
bool no_demangle,
rcl_names_and_types_t * topic_names_and_types
);
// Count endpoints
rcl_ret_t rcl_count_publishers(const rcl_node_t * node, const char * topic_name, size_t * count);
rcl_ret_t rcl_count_subscribers(const rcl_node_t * node, const char * topic_name, size_t * count);
Detailed Endpoint Information
// Get detailed publisher info
rcl_ret_t rcl_get_publishers_info_by_topic(
const rcl_node_t * node,
rcutils_allocator_t * allocator,
const char * topic_name,
bool no_mangle,
rcl_topic_endpoint_info_array_t * publishers_info
);
// Get detailed subscriber info
rcl_ret_t rcl_get_subscriptions_info_by_topic(
const rcl_node_t * node,
rcutils_allocator_t * allocator,
const char * topic_name,
bool no_mangle,
rcl_topic_endpoint_info_array_t * subscriptions_info
);
Information Extraction Process
#![allow(unused)] fn main() { pub fn get_publishers_info(&self, topic_name: &str) -> Result<Vec<TopicEndpointInfo>> { let topic_name_c = CString::new(topic_name)?; unsafe { let mut allocator = rcutils_get_default_allocator(); let mut publishers_info: rcl_topic_endpoint_info_array_t = std::mem::zeroed(); // Query DDS discovery database let ret = rcl_get_publishers_info_by_topic( &self.node, &mut allocator, topic_name_c.as_ptr(), false, // no_mangle: follow ROS naming conventions &mut publishers_info, ); if ret != 0 { return Err(anyhow!("Failed to get publishers info: {}", ret)); } // Extract information from each endpoint let mut result = Vec::new(); for i in 0..publishers_info.size { let info = &*(publishers_info.info_array.add(i)); result.push(TopicEndpointInfo { node_name: extract_string(info.node_name), node_namespace: extract_string(info.node_namespace), topic_type: extract_string(info.topic_type), topic_type_hash: format_topic_type_hash(&info.topic_type_hash), endpoint_type: EndpointType::from_rmw(info.endpoint_type), gid: extract_gid(&info.endpoint_gid), qos_profile: QosProfile::from_rmw(&info.qos_profile), }); } // Clean up allocated memory rmw_topic_endpoint_info_array_fini(&mut publishers_info, &mut allocator); Ok(result) } } }
Global Identifiers (GIDs)
GID Structure and Format
GIDs are 16-byte unique identifiers assigned by the DDS implementation:
Byte Layout: [01][0f][ba][ec][43][55][39][96][00][00][00][00][00][00][14][03]
Display: 01.0f.ba.ec.43.55.39.96.00.00.00.00.00.00.14.03
GID Components (implementation-specific):
- Bytes 0-3: Often participant identifier
- Bytes 4-7: Usually timestamp or sequence
- Bytes 8-11: Typically zero padding
- Bytes 12-15: Entity identifier within participant
GID Extraction and Formatting
#![allow(unused)] fn main() { // Extract GID from C array let gid = std::slice::from_raw_parts( info.endpoint_gid.as_ptr(), info.endpoint_gid.len() ).to_vec(); // Format for display fn format_gid(gid: &[u8]) -> String { gid.iter() .map(|b| format!("{:02x}", b)) .collect::<Vec<String>>() .join(".") } }
GID Uniqueness Properties
- Global: Unique across entire DDS domain
- Persistent: Remains same for endpoint lifetime
- Deterministic: Recreated consistently by DDS
- Opaque: Implementation-specific internal structure
Topic Type Hashes
RIHS Format (ROS Interface Hash Standard)
Topic type hashes follow the RIHS format:
RIHS01_<hex_hash>
Components:
- RIHS: ROS Interface Hash Standard identifier
- 01: Version number (currently 1)
- <hex_hash>: SHA-256 hash of the message definition
Hash Generation Process
- Canonical representation: Message definition in canonical form
- Hash calculation: SHA-256 of canonical representation
- Encoding: Hexadecimal encoding of hash bytes
- Formatting: Prepend RIHS version identifier
Example Hash
RIHS01_df668c740482bbd48fb39d76a70dfd4bd59db1288021743503259e948f6b1a18
This represents the hash for std_msgs/msg/String.
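As a sketch of the generation steps above (assuming the canonical representation bytes are already available, and using the sha2 crate, which is an assumption here and not a roc dependency):

use sha2::{Digest, Sha256};

// Sketch: hash an already-canonicalized message definition and format it as
// RIHS01_<hex>. Producing the canonical representation is out of scope here.
fn rihs01_hash(canonical_definition: &[u8]) -> String {
    let digest = Sha256::digest(canonical_definition);
    let hex: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
    format!("RIHS01_{}", hex)
}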
Hash Extraction
fn format_topic_type_hash(hash: &rosidl_type_hash_t) -> String {
    let hash_bytes = unsafe {
        std::slice::from_raw_parts(hash.value.as_ptr(), hash.value.len())
    };
    let hex_hash = hash_bytes.iter()
        .map(|b| format!("{:02x}", b))
        .collect::<String>();
    format!("RIHS01_{}", hex_hash)
}
Discovery Scope and Filtering
Domain Isolation
Discovery is limited by ROS domain:
// Read ROS_DOMAIN_ID from environment (default: 0)
let domain_id = env::var("ROS_DOMAIN_ID")
    .ok()
    .and_then(|s| s.parse::<usize>().ok())
    .unwrap_or(0);

// Configure RMW with domain ID
(*rmw_init_options).domain_id = domain_id;
Topic Name Filtering
The discovery system can filter by:
- Exact topic name: rcl_get_publishers_info_by_topic("/chatter", ...)
- Name mangling: the no_mangle parameter controls ROS naming conventions
Endpoint Filtering
Results can be filtered by:
- Endpoint type: Publishers vs subscribers
- Node name/namespace: Filter by owning node
- QoS compatibility: Only compatible endpoints
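Because every query ultimately yields TopicEndpointInfo values, this kind of filtering can be done in plain Rust after the query; a sketch with a hypothetical helper:

// Sketch: keep only publishers owned by a specific node.
fn publishers_for_node(
    endpoints: Vec<TopicEndpointInfo>,
    node_name: &str,
    node_namespace: &str,
) -> Vec<TopicEndpointInfo> {
    endpoints
        .into_iter()
        .filter(|e| matches!(e.endpoint_type, EndpointType::Publisher))
        .filter(|e| e.node_name == node_name && e.node_namespace == node_namespace)
        .collect()
}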
Discovery Performance
Timing Characteristics
Operation | Typical Time | Factors |
---|---|---|
Context initialization | 150ms | DDS discovery timeout |
Topic list query | 1-5ms | Number of topics |
Endpoint count | 1-3ms | Number of endpoints |
Detailed endpoint info | 5-20ms | QoS complexity, endpoint count |
Memory Usage
Component | Memory Usage | Notes |
---|---|---|
RCL context | ~1MB | DDS participant overhead |
Topic list | ~1KB per topic | Name and type strings |
Endpoint info | ~500B per endpoint | QoS and metadata |
Peak processing | +50% | During C to Rust conversion |
Optimization Strategies
Context Reuse
// Current: Create new context per operation
let context = RclGraphContext::new()?;
let info = context.get_publishers_info(topic)?;

// Potential: Reuse context across operations
let context = RclGraphContext::new()?;
let info1 = context.get_publishers_info(topic1)?;
let info2 = context.get_publishers_info(topic2)?;
Batch Operations
// Current: Separate calls for publishers and subscribers
let pubs = context.get_publishers_info(topic)?;
let subs = context.get_subscribers_info(topic)?;

// Potential: Combined endpoint query
let endpoints = context.get_all_endpoints_info(topic)?;
Error Handling and Edge Cases
Discovery Failures
No Endpoints Found
- Topic exists but no active endpoints
- Discovery timing issues
- Network connectivity problems
Partial Discovery
- Some endpoints discovered, others missed
- Network partitions or high latency
- DDS implementation differences
Invalid Data
- Corrupted endpoint information
- Unsupported QoS policies
- Protocol version mismatches
Error Recovery Strategies
#![allow(unused)] fn main() { // Retry with longer discovery timeout if endpoints.is_empty() { let context = RclGraphContext::new_with_discovery(Duration::from_millis(500))?; endpoints = context.get_publishers_info(topic)?; } // Validate endpoint data for endpoint in &endpoints { if endpoint.node_name.is_empty() { warn!("Endpoint with empty node name: {:?}", endpoint.gid); } } }
The endpoint discovery system provides comprehensive visibility into the ROS 2 computation graph, enabling effective debugging and system understanding.
Memory Management
This chapter covers how the roc tool manages memory when interfacing with ROS 2's C libraries through FFI (Foreign Function Interface).
Overview
Memory management in FFI bindings is critical for safety and performance. The roc tool must carefully handle:
- Allocation and deallocation of C structures
- Ownership transfer between Rust and C code
- String handling across language boundaries
- Resource cleanup to prevent memory leaks
Memory Safety Principles
RAII (Resource Acquisition Is Initialization)
The roc tool follows Rust's RAII principles by wrapping C resources in Rust structs that implement Drop:
#![allow(unused)] fn main() { pub struct RclGraphContext { context: *mut rcl_context_t, node: *mut rcl_node_t, // Other fields... } impl Drop for RclGraphContext { fn drop(&mut self) { unsafe { if !self.node.is_null() { rcl_node_fini(self.node); libc::free(self.node as *mut c_void); } if !self.context.is_null() { rcl_context_fini(self.context); libc::free(self.context as *mut c_void); } } } } }
Safe Wrappers
All C FFI calls are wrapped in safe Rust functions that handle error checking and memory management:
#![allow(unused)] fn main() { impl RclGraphContext { pub fn new() -> Result<Self, String> { unsafe { // Allocate C structures let context = libc::malloc(size_of::<rcl_context_t>()) as *mut rcl_context_t; if context.is_null() { return Err("Failed to allocate context".to_string()); } // Initialize with proper error handling let ret = rcl_init(0, ptr::null(), ptr::null(), context); if ret != RCL_RET_OK as i32 { libc::free(context as *mut c_void); return Err(format!("Failed to initialize context: {}", ret)); } // Continue with node allocation and initialization... } } } }
String Handling
C String Conversion
Converting between Rust strings and C strings requires careful memory management:
#![allow(unused)] fn main() { fn rust_string_to_c_string(s: &str) -> Result<*mut c_char, String> { let c_string = CString::new(s).map_err(|e| format!("Invalid string: {}", e))?; let ptr = unsafe { libc::malloc(c_string.len() + 1) as *mut c_char }; if ptr.is_null() { return Err("Failed to allocate memory for C string".to_string()); } unsafe { libc::strcpy(ptr, c_string.as_ptr()); } Ok(ptr) } fn c_string_to_rust_string(ptr: *const c_char) -> Option<String> { if ptr.is_null() { return None; } unsafe { CStr::from_ptr(ptr).to_string_lossy().into_owned().into() } } }
Owned vs Borrowed Strings
The code distinguishes between owned and borrowed string data:
#![allow(unused)] fn main() { // Borrowed - ROS 2 owns the memory let topic_name = c_string_to_rust_string(topic_info.topic_name); // Owned - we must free the memory unsafe { if !owned_string_ptr.is_null() { libc::free(owned_string_ptr as *mut c_void); } } }
Array and Structure Management
Dynamic Arrays
When ROS 2 returns arrays of structures, we must carefully manage the memory:
#![allow(unused)] fn main() { pub fn get_topic_names_and_types(&self) -> Result<Vec<(String, Vec<String>)>, String> { let mut names_and_types = rcl_names_and_types_t { names: rcl_string_array_t { data: ptr::null_mut(), size: 0, allocator: rcl_get_default_allocator(), }, types: rcl_string_array_t { data: ptr::null_mut(), size: 0, allocator: rcl_get_default_allocator(), }, }; unsafe { let ret = rcl_get_topic_names_and_types( self.node, &mut names_and_types.names, &mut names_and_types.types, ); if ret != RCL_RET_OK as i32 { return Err(format!("Failed to get topic names and types: {}", ret)); } // Convert to Rust types let result = self.convert_names_and_types(&names_and_types)?; // Clean up ROS 2 allocated memory rcl_names_and_types_fini(&mut names_and_types); Ok(result) } } }
Structure Initialization
C structures must be properly initialized to avoid undefined behavior:
#![allow(unused)] fn main() { fn create_topic_endpoint_info() -> rcl_topic_endpoint_info_t { rcl_topic_endpoint_info_t { node_name: ptr::null(), node_namespace: ptr::null(), topic_type: ptr::null(), endpoint_type: RCL_PUBLISHER_ENDPOINT, endpoint_gid: [0; 24], // GID is a fixed-size array qos_profile: rcl_qos_profile_t { history: RCL_QOS_POLICY_HISTORY_KEEP_LAST, depth: 10, reliability: RCL_QOS_POLICY_RELIABILITY_RELIABLE, durability: RCL_QOS_POLICY_DURABILITY_VOLATILE, deadline: rcl_duration_t { nanoseconds: 0 }, lifespan: rcl_duration_t { nanoseconds: 0 }, liveliness: RCL_QOS_POLICY_LIVELINESS_AUTOMATIC, liveliness_lease_duration: rcl_duration_t { nanoseconds: 0 }, avoid_ros_namespace_conventions: false, }, } } }
Error Handling and Cleanup
Consistent Error Handling
All FFI operations follow a consistent pattern for error handling:
#![allow(unused)] fn main() { macro_rules! check_rcl_ret { ($ret:expr, $msg:expr) => { if $ret != RCL_RET_OK as i32 { return Err(format!("{}: error code {}", $msg, $ret)); } }; } // Usage let ret = unsafe { rcl_some_function(params) }; check_rcl_ret!(ret, "Failed to call rcl_some_function"); }
Resource Cleanup on Error
When operations fail, we must ensure proper cleanup:
impl RclGraphContext {
    pub fn initialize_node(&self, name: &str) -> Result<*mut rcl_node_t, String> {
        unsafe {
            // Convert the name first so a conversion failure cannot leak the node.
            let c_name = rust_string_to_c_string(name)?;

            let node = libc::malloc(size_of::<rcl_node_t>()) as *mut rcl_node_t;
            if node.is_null() {
                libc::free(c_name as *mut c_void);
                return Err("Failed to allocate node".to_string());
            }

            let ret = rcl_node_init(node, c_name, self.context);

            // Clean up the C string regardless of success/failure
            libc::free(c_name as *mut c_void);

            if ret != RCL_RET_OK as i32 {
                libc::free(node as *mut c_void);
                return Err(format!("Failed to initialize node: {}", ret));
            }
            Ok(node)
        }
    }
}
Performance Considerations
Memory Pool Reuse
For frequently allocated structures, consider using memory pools:
#![allow(unused)] fn main() { pub struct EndpointInfoPool { pool: Vec<rcl_topic_endpoint_info_t>, next_available: usize, } impl EndpointInfoPool { pub fn get_endpoint_info(&mut self) -> &mut rcl_topic_endpoint_info_t { if self.next_available >= self.pool.len() { self.pool.push(create_topic_endpoint_info()); } let info = &mut self.pool[self.next_available]; self.next_available += 1; info } pub fn reset(&mut self) { self.next_available = 0; } } }
Minimize Allocations
Reuse string buffers and structures when possible:
#![allow(unused)] fn main() { pub struct StringBuffer { buffer: Vec<u8>, } impl StringBuffer { pub fn as_c_string(&mut self, s: &str) -> *const c_char { self.buffer.clear(); self.buffer.extend_from_slice(s.as_bytes()); self.buffer.push(0); // null terminator self.buffer.as_ptr() as *const c_char } } }
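Note that the pointer returned by as_c_string is only valid until the next call or until the underlying buffer reallocates: the buffer is cleared and refilled on every use, so the C side must consume the string immediately and must never store or free the pointer.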
Common Pitfalls
Double Free
Never free memory that ROS 2 still owns:
#![allow(unused)] fn main() { // BAD - ROS 2 owns this memory unsafe { libc::free(topic_info.topic_name as *mut c_void); // Don't do this! } // GOOD - Let ROS 2 clean up its own memory unsafe { rcl_topic_endpoint_info_fini(&mut topic_info); } }
Use After Free
Always set pointers to null after freeing:
#![allow(unused)] fn main() { unsafe { if !ptr.is_null() { libc::free(ptr as *mut c_void); ptr = ptr::null_mut(); // Prevent use-after-free } } }
Memory Leaks
Use tools like Valgrind to detect memory leaks:
valgrind --leak-check=full --show-leak-kinds=all ./target/debug/roc topic list
Testing Memory Management
Unit Tests
Test memory management in isolation:
#![allow(unused)] fn main() { #[cfg(test)] mod tests { use super::*; #[test] fn test_context_creation_and_cleanup() { let context = RclGraphContext::new().expect("Failed to create context"); // Context should be properly cleaned up when dropped } #[test] fn test_string_conversion() { let test_str = "test_topic"; let c_str = rust_string_to_c_string(test_str).expect("Failed to convert"); let rust_str = c_string_to_rust_string(c_str).expect("Failed to convert back"); assert_eq!(test_str, rust_str); unsafe { libc::free(c_str as *mut c_void); } } } }
Integration Tests
Test memory management with real ROS 2 operations:
#![allow(unused)] fn main() { #[test] fn test_topic_info_memory_management() { let context = RclGraphContext::new().expect("Failed to create context"); // This should not leak memory for _ in 0..1000 { let topics = context.get_topic_names_and_types() .expect("Failed to get topics"); assert!(!topics.is_empty()); } } }
This comprehensive memory management ensures that the roc
tool is both safe and efficient when interfacing with ROS 2's C libraries.
Dynamic Message Type Loading
This chapter explains one of the most important and sophisticated features in roc
: dynamic runtime loading of ROS2 message type support. This technique enables roc
to work with any ROS2 message type without requiring compile-time knowledge or static linking against specific message packages.
Table of Contents
- Overview
- The Problem
- The Solution: Runtime Dynamic Loading
- How Dynamic Loading Works
- Implementation Architecture
- Code Walkthrough
- Generic Type Support Resolution
- Benefits and Trade-offs
- Future Enhancements
Overview
Dynamic message type loading is a technique that allows roc
to:
- Load ROS2 message type support libraries at runtime (not compile time)
- Resolve type support functions dynamically using symbol lookup
- Create real RCL publishers/subscribers for any message type
- Support custom message types without code changes
- Work with any ROS2 package that provides proper typesupport libraries
This is what enables commands like:
# Works with any installed ROS2 message type!
roc topic pub /test geometry_msgs/msg/Twist '{linear: {x: 0.5}}'
roc topic pub /custom custom_msgs/msg/MyMessage '{field: value}'
The Problem
Traditional ROS2 tools face a fundamental challenge:
Static Linking Approach (Traditional)
#![allow(unused)] fn main() { // Traditional approach requires compile-time knowledge use geometry_msgs::msg::Twist; use std_msgs::msg::String; // ... must import every message type you want to use fn create_publisher() { // Must know the exact type at compile time let twist_publisher = node.create_publisher::<Twist>("topic", qos); let string_publisher = node.create_publisher::<String>("topic", qos); } }
Problems with static linking:
- ❌ Limited to pre-compiled message types
- ❌ Huge binary size (includes all message libraries)
- ❌ Cannot work with custom/unknown message types
- ❌ Requires recompilation for new message types
- ❌ Complex dependency management
The ROS2 Type Support Challenge
ROS2's architecture requires type support pointers to create publishers:
// This is what RCL requires internally
rcl_ret_t rcl_publisher_init(
rcl_publisher_t * publisher,
const rcl_node_t * node,
const rosidl_message_type_support_t * type_support, // ← This is the key!
const char * topic_name,
const rcl_publisher_options_t * options
);
The type_support
pointer contains:
- Message structure layout
- Serialization/deserialization functions
- Field metadata and types
- Memory management functions
Without valid type support, you cannot create RCL publishers, and topics won't appear in the ROS graph!
The Solution: Runtime Dynamic Loading
roc
solves this through dynamic library loading - a powerful systems programming technique:
Key Insight: ROS2 Type Support Libraries
ROS2 installations contain pre-compiled type support libraries:
/opt/ros/jazzy/lib/
├── libgeometry_msgs__rosidl_typesupport_c.so # Geometry messages
├── libstd_msgs__rosidl_typesupport_c.so # Standard messages
├── libsensor_msgs__rosidl_typesupport_c.so # Sensor messages
├── libcustom_msgs__rosidl_typesupport_c.so # Your custom messages
└── ...
Each library exports type support functions:
$ nm -D libgeometry_msgs__rosidl_typesupport_c.so | grep Twist
rosidl_typesupport_c__get_message_type_support_handle__geometry_msgs__msg__Twist
Dynamic Loading Strategy
Instead of static linking, roc
uses runtime dynamic loading:
- Construct the library path from the message type: geometry_msgs/msg/Twist → /opt/ros/jazzy/lib/libgeometry_msgs__rosidl_typesupport_c.so
- Load the library dynamically using dlopen() (via rcutils_load_shared_library)
- Resolve the type support symbol using dlsym() (via rcutils_get_symbol)
- Call the function to get the type support pointer
- Create real RCL publishers with valid type support
How Dynamic Loading Works
Step-by-Step Process
1. Message Type Parsing
#![allow(unused)] fn main() { // Input: "geometry_msgs/msg/Twist" let (package, message) = parse_message_type("geometry_msgs/msg/Twist")?; // package = "geometry_msgs", message = "Twist" }
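The parsing helper itself is not shown here; a minimal sketch of what parse_message_type could look like, assuming the two-part result used above (package name and message name), is:

// Hedged sketch: split "package/msg/Type" into (package, type).
// roc's actual helper may differ in signature and error type.
fn parse_message_type(full_type: &str) -> Result<(String, String), String> {
    let parts: Vec<&str> = full_type.split('/').collect();
    if parts.len() == 3 && parts[1] == "msg" {
        Ok((parts[0].to_string(), parts[2].to_string()))
    } else {
        Err(format!("Expected '<package>/msg/<Message>', got '{}'", full_type))
    }
}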
2. Library Path Construction
#![allow(unused)] fn main() { // Construct library path using naming convention let library_path = format!( "/opt/ros/jazzy/lib/lib{}__rosidl_typesupport_c.so", package ); // Result: "/opt/ros/jazzy/lib/libgeometry_msgs__rosidl_typesupport_c.so" }
3. Symbol Name Construction
#![allow(unused)] fn main() { // Construct symbol name using ROS2 naming convention let symbol_name = format!( "rosidl_typesupport_c__get_message_type_support_handle__{}__msg__{}", package, message ); // Result: "rosidl_typesupport_c__get_message_type_support_handle__geometry_msgs__msg__Twist" }
4. Dynamic Library Loading
#![allow(unused)] fn main() { unsafe { // Initialize library handle let mut shared_lib = rcutils_get_zero_initialized_shared_library(); // Load the shared library let ret = rcutils_load_shared_library( &mut shared_lib, library_path_c.as_ptr(), allocator, ); if ret != 0 { return Err(anyhow!("Failed to load library")); } } }
5. Symbol Resolution
#![allow(unused)] fn main() { unsafe { // Get the symbol from the loaded library let symbol_ptr = rcutils_get_symbol(&shared_lib, symbol_name_c.as_ptr()); if symbol_ptr.is_null() { return Err(anyhow!("Symbol not found")); } // Cast to function pointer and call it type TypeSupportGetterFn = unsafe extern "C" fn() -> *const rosidl_message_type_support_t; let type_support_fn: TypeSupportGetterFn = std::mem::transmute(symbol_ptr); let type_support = type_support_fn(); } }
6. RCL Publisher Creation
#![allow(unused)] fn main() { unsafe { // Now we can create a real RCL publisher! let ret = rcl_publisher_init( &mut publisher, node, type_support, // ← Valid type support from dynamic loading topic_name_c.as_ptr(), &options, ); // Publisher is registered in ROS graph and appears in topic lists! } }
Implementation Architecture
Core Components
1. DynamicMessageRegistry
File: src/shared/dynamic_messages.rs
Central registry for loading and caching message types:
#![allow(unused)] fn main() { pub struct DynamicMessageRegistry { loaded_types: HashMap<String, DynamicMessageType>, } impl DynamicMessageRegistry { pub fn load_message_type(&mut self, type_name: &str) -> Result<DynamicMessageType> { // 1. Parse message type // 2. Load type support dynamically // 3. Cache result // 4. Return type info with valid type support pointer } } }
2. Generic Type Support Loading
#![allow(unused)] fn main() { fn try_get_generic_type_support( &self, package_name: &str, message_name: &str, ) -> Result<*const rosidl_message_type_support_t> { // Automatic library path construction let library_path = format!("/opt/ros/jazzy/lib/lib{}__rosidl_typesupport_c.so", package_name); // Automatic symbol name construction let symbol_name = format!( "rosidl_typesupport_c__get_message_type_support_handle__{}__msg__{}", package_name, message_name ); // Dynamic loading self.load_type_support_from_library(&library_path, &symbol_name) } }
3. Bindgen Integration
File: rclrs/build.rs
Exposes dynamic loading functions to Rust:
#![allow(unused)] fn main() { let bindings = bindgen::Builder::default() .header("wrapper.h") // Dynamic loading functions .allowlist_function("rcutils_load_shared_library") .allowlist_function("rcutils_get_symbol") .allowlist_function("rcutils_unload_shared_library") .allowlist_function("rcutils_get_zero_initialized_shared_library") // Type support types .allowlist_type("rosidl_message_type_support_t") .allowlist_type("rcutils_shared_library_t") .generate()?; }
Data Flow
User Command: roc topic pub /test geometry_msgs/msg/Twist '{linear: {x: 0.5}}'
↓
1. Parse message type: "geometry_msgs/msg/Twist"
↓
2. Construct library path and symbol name
↓
3. Load: /opt/ros/jazzy/lib/libgeometry_msgs__rosidl_typesupport_c.so
↓
4. Resolve: rosidl_typesupport_c__get_message_type_support_handle__geometry_msgs__msg__Twist
↓
5. Call function → Get type_support pointer
↓
6. Create RCL publisher with valid type_support
↓
7. Topic appears in ROS graph! ✅
Code Walkthrough
Complete Type Support Loading Function
#![allow(unused)] fn main() { fn load_type_support_from_library( &self, library_name: &str, symbol_name: &str, ) -> Result<*const rosidl_message_type_support_t> { use std::ffi::CString; unsafe { // Step 1: Initialize shared library handle let mut shared_lib = rcutils_get_zero_initialized_shared_library(); // Step 2: Convert library name to C string let lib_name_c = CString::new(library_name) .map_err(|e| anyhow!("Invalid library name '{}': {}", library_name, e))?; // Step 3: Load the shared library let allocator = rcutils_get_default_allocator(); let ret = rcutils_load_shared_library( &mut shared_lib, lib_name_c.as_ptr(), allocator, ); if ret != 0 { // RCUTILS_RET_OK is 0 return Err(anyhow!("Failed to load library '{}': return code {}", library_name, ret)); } // Step 4: Convert symbol name to C string let symbol_name_c = CString::new(symbol_name) .map_err(|e| anyhow!("Invalid symbol name '{}': {}", symbol_name, e))?; // Step 5: Get the symbol from the library let symbol_ptr = rcutils_get_symbol(&shared_lib, symbol_name_c.as_ptr()); if symbol_ptr.is_null() { rcutils_unload_shared_library(&mut shared_lib); return Err(anyhow!("Symbol '{}' not found in library '{}'", symbol_name, library_name)); } // Step 6: Cast the symbol to a function pointer and call it type TypeSupportGetterFn = unsafe extern "C" fn() -> *const rosidl_message_type_support_t; let type_support_fn: TypeSupportGetterFn = std::mem::transmute(symbol_ptr); let type_support = type_support_fn(); // Step 7: Validate the result if type_support.is_null() { return Err(anyhow!("Type support function returned null pointer")); } println!("Successfully loaded type support for symbol: {}", symbol_name); Ok(type_support) } } }
Publisher Creation with Dynamic Type Support
#![allow(unused)] fn main() { fn create_dynamic_publisher( context: &RclGraphContext, topic_name: &str, message_type: &str, ) -> Result<rcl_publisher_t> { // Load type support dynamically let mut registry = DynamicMessageRegistry::new(); let message_type_info = registry.load_message_type(message_type)?; let type_support = message_type_info.type_support .ok_or_else(|| anyhow!("Could not load type support for {}", message_type))?; unsafe { let mut publisher = rcl_get_zero_initialized_publisher(); let options = rcl_publisher_get_default_options(); let topic_name_c = CString::new(topic_name)?; // Create publisher with dynamically loaded type support! let ret = rcl_publisher_init( &mut publisher, context.node(), type_support, // ← This comes from dynamic loading topic_name_c.as_ptr(), &options, ); if ret != 0 { return Err(anyhow!("Failed to create publisher: {}", ret)); } Ok(publisher) } } }
Generic Type Support Resolution
Fallback Hierarchy
roc
uses a smart fallback strategy:
#![allow(unused)] fn main() { fn try_get_type_support(&self, package_name: &str, message_name: &str) -> Result<TypeSupport> { let full_type = format!("{}/msg/{}", package_name, message_name); match full_type.as_str() { // 1. Optimized paths for common types "geometry_msgs/msg/Twist" => self.try_get_twist_type_support(), "std_msgs/msg/String" => self.try_get_string_type_support(), "std_msgs/msg/Int32" => self.try_get_int32_type_support(), "std_msgs/msg/Float64" => self.try_get_float64_type_support(), // 2. Generic fallback for ANY message type _ => self.try_get_generic_type_support(package_name, message_name), } } }
Automatic Library Discovery
The generic loader automatically constructs paths:
Message Type | Library Path | Symbol Name |
---|---|---|
geometry_msgs/msg/Twist | /opt/ros/jazzy/lib/libgeometry_msgs__rosidl_typesupport_c.so | rosidl_typesupport_c__get_message_type_support_handle__geometry_msgs__msg__Twist |
custom_msgs/msg/MyType | /opt/ros/jazzy/lib/libcustom_msgs__rosidl_typesupport_c.so | rosidl_typesupport_c__get_message_type_support_handle__custom_msgs__msg__MyType |
sensor_msgs/msg/Image | /opt/ros/jazzy/lib/libsensor_msgs__rosidl_typesupport_c.so | rosidl_typesupport_c__get_message_type_support_handle__sensor_msgs__msg__Image |
Testing the Generic Loader
# These all work automatically:
roc topic pub /test1 geometry_msgs/msg/Twist '{linear: {x: 1.0}}' # Known type
roc topic pub /test2 geometry_msgs/msg/Point '{x: 1.0, y: 2.0, z: 3.0}' # Generic loading
roc topic pub /test3 sensor_msgs/msg/Image '{header: {frame_id: "camera"}}' # Generic loading
roc topic pub /test4 custom_msgs/msg/MyType '{my_field: "value"}' # Your custom types!
Output shows the dynamic loading in action:
Attempting generic type support loading:
Library: /opt/ros/jazzy/lib/libgeometry_msgs__rosidl_typesupport_c.so
Symbol: rosidl_typesupport_c__get_message_type_support_handle__geometry_msgs__msg__Point
Successfully loaded type support for symbol: ...
Successfully created RCL publisher with real type support!
Benefits and Trade-offs
Benefits ✅
- Universal Message Support
  - Works with any ROS2 message type
  - Supports custom packages automatically
  - No compilation required for new types
- Small Binary Size
  - No static linking of message libraries
  - Only loads what's actually used
  - Minimal memory footprint
- Runtime Flexibility
  - Discover available message types at runtime
  - Work with packages installed after compilation
  - Perfect for generic tools like roc
- Performance
  - Type support loaded once and cached
  - No runtime overhead after initial load
  - Real RCL integration (not simulation)
- Maintainability
  - No manual type definitions required
  - Automatic support for new ROS2 versions
  - Self-discovering architecture
Trade-offs ⚖️
- Runtime Dependencies
  - Requires a ROS2 installation with typesupport libraries
  - Fails gracefully if libraries are missing
  - Error messages help diagnose missing packages
- Platform Assumptions
  - Assumes standard ROS2 installation paths
  - Library naming conventions must match
  - Works with standard ROS2 distributions
- Error Handling Complexity
  - Must handle dynamic loading failures
  - Symbol resolution errors need clear messages
  - Graceful degradation for partial installations
Future Enhancements
1. Introspection-Based Generic Serialization
The next evolution is fully generic serialization using ROS2's introspection API:
#![allow(unused)] fn main() { // Future: No manual serialization needed! pub fn serialize_any_message( yaml_value: &YamlValue, type_support: *const rosidl_message_type_support_t, ) -> Result<Vec<u8>> { // 1. Get introspection data from type_support let introspection = get_message_introspection(type_support)?; // 2. Walk the message structure automatically let message_ptr = allocate_message_memory(introspection.size_of); serialize_fields_recursively(yaml_value, introspection.members, message_ptr)?; // 3. Use RMW to serialize to CDR format let serialized = rmw_serialize(message_ptr, type_support)?; Ok(serialized) } }
2. Automatic Package Discovery
#![allow(unused)] fn main() { // Future: Scan filesystem for available message types pub fn discover_available_message_types() -> Vec<String> { let lib_dir = "/opt/ros/jazzy/lib"; let pattern = "lib*__rosidl_typesupport_c.so"; // Scan libraries and extract symbols scan_libraries_for_message_types(lib_dir, pattern) } }
3. Message Definition Introspection
#![allow(unused)] fn main() { // Future: Runtime message structure inspection pub fn get_message_definition(message_type: &str) -> Result<MessageDefinition> { let type_support = load_type_support(message_type)?; let introspection = get_introspection_data(type_support)?; // Return complete message structure info Ok(MessageDefinition { fields: extract_field_definitions(introspection), dependencies: find_nested_types(introspection), size: introspection.size_of, }) } }
4. Performance Optimizations
- Library preloading for common types
- Symbol caching across multiple calls (a possible cache layer is sketched after this list)
- Memory pool for message allocation
- Batch operations for multiple message types
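As one example of the symbol-caching idea, a thin cache over the registry might be enough. The names below are illustrative and not part of roc's current API; only load_message_type and the type_support field come from this chapter:

use std::collections::HashMap;
use anyhow::{anyhow, Result};

// Hedged sketch of the "symbol caching" enhancement: remember resolved type
// support pointers per message type so each library/symbol lookup happens once.
pub struct TypeSupportCache {
    cache: HashMap<String, *const rosidl_message_type_support_t>,
}

impl TypeSupportCache {
    pub fn new() -> Self {
        Self { cache: HashMap::new() }
    }

    pub fn get_or_load(
        &mut self,
        registry: &mut DynamicMessageRegistry,
        type_name: &str,
    ) -> Result<*const rosidl_message_type_support_t> {
        if let Some(ts) = self.cache.get(type_name) {
            return Ok(*ts);
        }
        let info = registry.load_message_type(type_name)?;
        let ts = info
            .type_support
            .ok_or_else(|| anyhow!("no type support for {}", type_name))?;
        self.cache.insert(type_name.to_string(), ts);
        Ok(ts)
    }
}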
Conclusion
Dynamic message type loading is a sophisticated technique that gives roc
universal ROS2 message support without the limitations of static linking. By leveraging:
- Runtime dynamic library loading (dlopen/dlsym)
- ROS2 type support architecture
- Automatic path and symbol construction
- Graceful fallback strategies
roc
can work with any ROS2 message type - including custom packages you create! This makes it a truly generic and powerful tool for ROS2 development.
The implementation demonstrates advanced systems programming concepts while remaining maintainable and extensible. It's a great example of how understanding the underlying architecture (ROS2 type support system) enables building more flexible and powerful tools.
Key takeaway: Dynamic loading isn't just a neat trick - it's a fundamental technique that enables building truly generic and extensible systems that can adapt to runtime conditions and work with code that didn't exist at compile time.
IDL Tools Overview
ROC provides comprehensive Interface Definition Language (IDL) tools that enable seamless interoperability between ROS2 and other robotics ecosystems. These tools are designed to facilitate cross-platform communication and protocol conversion without requiring external dependencies.
What is Interface Definition Language?
Interface Definition Language (IDL) is a specification language used to describe a software component's interface. In robotics and distributed systems, IDLs serve several critical purposes:
- Platform Independence: Define data structures and APIs that work across different programming languages and systems
- Code Generation: Automatically generate serialization/deserialization code from specifications
- Protocol Interoperability: Enable communication between systems using different message formats
- Version Management: Maintain backward/forward compatibility through structured schemas
ROS2 Message System
ROS2 uses its own IDL format (.msg
files) to define message structures:
# Example: RobotStatus.msg
string robot_name
bool is_active
float64 battery_level
geometry_msgs/Pose current_pose
sensor_msgs/LaserScan[] recent_scans
Key Characteristics:
- Simple Syntax: Human-readable format with minimal boilerplate
- Type System: Built-in primitive types plus support for nested messages
- Array Support: Fixed-size and dynamic arrays
- Package Namespacing: Messages organized by ROS2 packages
- Constants: Support for constant definitions within messages
Protobuf Integration
Protocol Buffers (protobuf) is Google's language-neutral, platform-neutral extensible mechanism for serializing structured data:
// Example: robot_status.proto
syntax = "proto3";
package robotics;
message RobotStatus {
string robot_name = 1;
bool is_active = 2;
double battery_level = 3;
Pose current_pose = 4;
repeated LaserScan recent_scans = 5;
}
Key Characteristics:
- Efficient Serialization: Compact binary format
- Schema Evolution: Built-in versioning and backward compatibility
- Language Support: Code generation for 20+ programming languages
- Advanced Features: Oneof fields, maps, enums, and nested definitions
- Performance: Optimized for speed and memory usage
Why Bidirectional Conversion?
The ability to convert between ROS2 .msg
and Protobuf .proto
formats enables:
Integration with Non-ROS Systems
- Cloud Services: Many cloud platforms use Protobuf for APIs
- Mobile Applications: Protobuf is standard in mobile development
- Microservices: Modern architectures often rely on Protobuf for service communication
- AI/ML Pipelines: TensorFlow, gRPC, and other ML tools use Protobuf extensively
Performance Optimization
- Reduced Overhead: Protobuf's binary format is more efficient than ROS2's CDR serialization in some scenarios
- Bandwidth Conservation: Smaller message sizes for network communication
- Processing Speed: Faster serialization/deserialization in high-throughput applications
Protocol Migration
- Legacy System Integration: Convert existing Protobuf schemas to ROS2 messages
- Gradual Migration: Incrementally move systems between protocols
- Multi-Protocol Support: Support both formats during transition periods
ROC's IDL Implementation
ROC's IDL tools provide several advantages over existing solutions:
Pure Rust Implementation
- No External Dependencies: Self-contained parser and generator
- Performance: Native speed without Python or C++ overhead
- Reliability: Memory-safe implementation with robust error handling
- Maintainability: Single codebase without complex build dependencies
Intelligent Conversion
- Automatic Direction Detection: Determines conversion direction from file extensions
- Advanced Feature Support: Handles complex Protobuf constructs (nested messages, enums, oneofs, maps)
- Type Mapping: Intelligent conversion between type systems
- Dependency Resolution: Generates files in correct dependency order
Developer Experience
- In-place Output: Generates files alongside source files by default
- Dry Run Mode: Preview conversions without writing files
- Verbose Logging: Detailed information about conversion process
- Error Reporting: Clear, actionable error messages
Use Cases
Robotics Cloud Integration
Convert ROS2 sensor data to Protobuf for cloud processing:
# Convert sensor messages for cloud upload
roc idl protobuf sensor_msgs/LaserScan.msg sensor_msgs/PointCloud2.msg --output ./cloud_api/
Cross-Platform Development
Generate ROS2 messages from existing Protobuf schemas:
# Convert existing Protobuf API to ROS2 messages
roc idl protobuf api_definitions/*.proto --output ./ros2_interfaces/msg/
Protocol Modernization
Migrate legacy systems to modern formats:
# Update old message definitions
roc idl protobuf legacy_messages/*.msg --output ./proto_definitions/
The following sections provide detailed information about specific aspects of ROC's IDL implementation.
Protobuf Integration
ROC provides comprehensive support for Protocol Buffers (Protobuf), enabling seamless conversion between .proto
and ROS2 .msg
formats. This chapter details the technical implementation and capabilities of ROC's Protobuf integration.
Protobuf Parser Implementation
ROC implements a pure Rust Protobuf parser that handles the complete proto3 specification without external dependencies. The parser is built with performance and accuracy in mind.
Supported Protobuf Features
Basic Syntax Elements
- Syntax Declaration: syntax = "proto3";
- Package Declaration: package com.example.robotics;
- Import Statements: import "google/protobuf/timestamp.proto";
- Comments: Single-line (//) and multi-line (/* */) comments
Message Definitions
message RobotStatus {
// Basic field definition
string name = 1;
int32 id = 2;
bool active = 3;
// Repeated fields (arrays)
repeated double joint_positions = 4;
repeated string error_messages = 5;
}
Nested Messages
message Robot {
message Status {
bool online = 1;
string last_error = 2;
}
Status current_status = 1;
string robot_id = 2;
}
Flattening Behavior: ROC automatically flattens nested message names:
- Robot.Status becomes RobotStatus.msg
- Sensor.Camera.Configuration becomes SensorCameraConfiguration.msg
Enumerations
enum RobotState {
ROBOT_STATE_UNKNOWN = 0;
ROBOT_STATE_IDLE = 1;
ROBOT_STATE_MOVING = 2;
ROBOT_STATE_ERROR = 3;
}
message RobotCommand {
RobotState desired_state = 1;
}
ROC converts enums to ROS2 constants within messages:
# RobotCommand.msg
uint8 ROBOT_STATE_UNKNOWN=0
uint8 ROBOT_STATE_IDLE=1
uint8 ROBOT_STATE_MOVING=2
uint8 ROBOT_STATE_ERROR=3
uint8 desired_state
Oneof Fields
message Command {
oneof command_type {
string text_command = 1;
int32 numeric_command = 2;
bool boolean_command = 3;
}
}
Oneof fields are converted to separate optional fields in ROS2:
# Command.msg
string text_command
int32 numeric_command
bool boolean_command
Map Types
message Configuration {
map<string, string> parameters = 1;
map<int32, double> sensor_readings = 2;
}
Maps are converted to arrays of key-value pairs:
# Configuration.msg
# Generated from map<string, string> parameters
ConfigurationParametersEntry[] parameters
# Generated from map<int32, double> sensor_readings
ConfigurationSensorReadingsEntry[] sensor_readings
# ConfigurationParametersEntry.msg
string key
string value
# ConfigurationSensorReadingsEntry.msg
int32 key
float64 value
Type Conversion System
ROC implements intelligent type mapping between Protobuf and ROS2 type systems:
Primitive Types
Protobuf Type | ROS2 Type | Notes |
---|---|---|
bool | bool | Direct mapping |
int32 | int32 | Direct mapping |
int64 | int64 | Direct mapping |
uint32 | uint32 | Direct mapping |
uint64 | uint64 | Direct mapping |
sint32 | int32 | Signed integer |
sint64 | int64 | Signed integer |
fixed32 | uint32 | Fixed-width unsigned |
fixed64 | uint64 | Fixed-width unsigned |
sfixed32 | int32 | Fixed-width signed |
sfixed64 | int64 | Fixed-width signed |
float | float32 | Single precision |
double | float64 | Double precision |
string | string | UTF-8 strings |
bytes | uint8[] | Byte arrays |
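In code, the primitive portion of this mapping reduces to a simple lookup; a minimal sketch (not roc's internal function) that follows the table above:

// Hedged sketch: map a proto3 scalar type name to its ROS2 counterpart,
// following the table above. Returns None for non-scalar types, which need
// message- or enum-specific handling.
fn proto_primitive_to_ros2(proto_type: &str) -> Option<&'static str> {
    Some(match proto_type {
        "bool" => "bool",
        "int32" | "sint32" | "sfixed32" => "int32",
        "int64" | "sint64" | "sfixed64" => "int64",
        "uint32" | "fixed32" => "uint32",
        "uint64" | "fixed64" => "uint64",
        "float" => "float32",
        "double" => "float64",
        "string" => "string",
        "bytes" => "uint8[]",
        _ => return None,
    })
}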
Repeated Fields
Protobuf repeated fields map directly to ROS2 arrays:
repeated double values = 1; // → float64[] values
repeated string names = 2; // → string[] names
repeated RobotStatus robots = 3; // → RobotStatus[] robots
Well-Known Types
ROC provides mappings for common Protobuf well-known types:
import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";
message TimedData {
google.protobuf.Timestamp timestamp = 1; // → builtin_interfaces/Time
google.protobuf.Duration timeout = 2; // → builtin_interfaces/Duration
}
Conversion Process
Proto to Msg Conversion
- Parsing: Parse the .proto file into an abstract syntax tree
- Validation: Validate proto3 syntax and semantic rules
- Dependency Analysis: Build a dependency graph of message types
- Type Resolution: Resolve all type references and nested definitions
- Flattening: Flatten nested messages into separate files (see the naming sketch after this list)
- Generation: Generate .msg files in dependency order
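The flattening step follows the naming rule described earlier (Robot.Status → RobotStatus); a minimal sketch of that rule in isolation:

/// Hedged sketch of the flattening rule: nested message names such as
/// "Robot.Status" collapse into a single ROS2 message name.
fn flatten_message_name(qualified: &str) -> String {
    qualified.split('.').collect::<Vec<&str>>().concat()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn flattens_nested_names() {
        assert_eq!(flatten_message_name("Robot.Status"), "RobotStatus");
        assert_eq!(
            flatten_message_name("Sensor.Camera.Configuration"),
            "SensorCameraConfiguration"
        );
    }
}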
Msg to Proto Conversion
- Parsing: Parse .msg files and extract field definitions
- Type Mapping: Convert ROS2 types to Protobuf equivalents
- Packaging: Organize messages into appropriate proto packages
- Generation: Generate .proto files with proper syntax
Advanced Features
Comment Preservation
ROC preserves comments during conversion when possible:
// This is a robot status message
message RobotStatus {
// The robot's unique identifier
string id = 1;
// Whether the robot is currently active
bool active = 2;
}
Becomes:
# This is a robot status message
# The robot's unique identifier
string id
# Whether the robot is currently active
bool active
Package Handling
ROC intelligently handles package declarations:
- Proto to Msg: Uses package name as prefix for generated message names
- Msg to Proto: Groups related messages into logical packages
- Namespace Mapping: Converts between proto packages and ROS2 namespaces
Import Resolution
For proto files with imports, ROC:
- Tracks imported dependencies
- Generates corresponding ROS2 message files
- Updates field references to use correct message types
- Maintains dependency order in output
Error Handling and Validation
ROC provides comprehensive error reporting:
Syntax Errors
Error parsing robot.proto:5:10
|
5 | message Robot {
| ^^^^^ Expected message name
Semantic Errors
Error: Undefined message type 'UnknownStatus' referenced in field 'status'
--> robot.proto:15:3
Conversion Warnings
Warning: Oneof field 'command_type' converted to separate optional fields
Note: ROS2 messages don't support oneof semantics
Performance Characteristics
ROC's Protobuf implementation is optimized for:
- Speed: Pure Rust implementation with zero-copy parsing where possible
- Memory: Minimal memory allocations during parsing
- Scalability: Handles large proto files and complex dependency graphs
- Reliability: Comprehensive error handling and validation
Usage Examples
Basic Conversion
# Convert proto to msg
roc idl protobuf robot_api.proto sensor_data.proto
# Convert msg to proto
roc idl protobuf RobotStatus.msg SensorReading.msg
Advanced Options
# Specify output directory
roc idl protobuf --output ./generated *.proto
# Dry run to preview output
roc idl protobuf --dry-run complex_robot.proto
# Verbose output for debugging
roc idl protobuf --verbose robot_messages/*.proto
Integration with Build Systems
# Generate messages as part of build process
roc idl protobuf src/proto/*.proto --output msg/
colcon build --packages-select my_robot_interfaces
ROS2 Message System
ROS2 uses a simple yet powerful message definition format that enables efficient communication between nodes. This chapter explains how ROC processes and converts ROS2 message definitions.
ROS2 Message Format
ROS2 messages are defined in .msg
files using a straightforward syntax:
Basic Message Structure
# Comments start with hash symbols
# Field definitions: type field_name [default_value]
string robot_name
int32 robot_id
float64 battery_level
bool is_active
Field Types
Primitive Types
ROS2 supports these built-in primitive types:
Type | Size | Range | Description |
---|---|---|---|
bool | 1 byte | true/false | Boolean value |
byte | 1 byte | 0-255 | Unsigned 8-bit integer |
char | 1 byte | -128 to 127 | Signed 8-bit integer |
int8 | 1 byte | -128 to 127 | Signed 8-bit integer |
uint8 | 1 byte | 0 to 255 | Unsigned 8-bit integer |
int16 | 2 bytes | -32,768 to 32,767 | Signed 16-bit integer |
uint16 | 2 bytes | 0 to 65,535 | Unsigned 16-bit integer |
int32 | 4 bytes | -2^31 to 2^31-1 | Signed 32-bit integer |
uint32 | 4 bytes | 0 to 2^32-1 | Unsigned 32-bit integer |
int64 | 8 bytes | -2^63 to 2^63-1 | Signed 64-bit integer |
uint64 | 8 bytes | 0 to 2^64-1 | Unsigned 64-bit integer |
float32 | 4 bytes | IEEE 754 | Single-precision float |
float64 | 8 bytes | IEEE 754 | Double-precision float |
string | Variable | UTF-8 | Unicode string |
Array Types
ROS2 supports both fixed-size and dynamic arrays:
# Fixed-size arrays
int32[10] fixed_array # Array of exactly 10 integers
float64[3] position # 3D position vector
# Dynamic arrays (unbounded)
string[] names # Variable number of strings
geometry_msgs/Point[] waypoints # Array of custom message types
# Bounded arrays
int32[<=100] bounded_readings # At most 100 readings
Message Types
Messages can contain other messages as fields:
# Using standard ROS2 messages
geometry_msgs/Pose current_pose
sensor_msgs/LaserScan scan_data
# Using custom messages from same package
RobotStatus status
BatteryInfo battery
# Using messages from other packages
my_package/CustomMessage custom_field
Constants and Default Values
ROS2 messages support constant definitions:
# Integer constants
int32 STATUS_OK=0
int32 STATUS_WARNING=1
int32 STATUS_ERROR=2
# String constants
string DEFAULT_NAME="DefaultRobot"
# Float constants
float64 MAX_SPEED=10.5
# Using constants with fields
int32 current_status STATUS_OK # Default value
string name DEFAULT_NAME
Comments and Documentation
Comments provide documentation and are preserved during conversion:
# This message represents the complete state of a robot
#
# The robot state includes position, orientation, and operational status.
# This message is published periodically by the robot state publisher.
std_msgs/Header header # Standard ROS header with timestamp
geometry_msgs/Pose pose # Robot position and orientation
geometry_msgs/Twist velocity # Current linear and angular velocity
uint8 operational_mode # Current operational mode
ROC's ROS2 Message Parser
ROC implements a comprehensive parser for ROS2 message definitions:
Parsing Process
- Lexical Analysis: Tokenize the message file into meaningful elements
- Syntax Parsing: Build abstract syntax tree from tokens
- Type Resolution: Resolve all message type references
- Validation: Validate field names, types, and constraints
- Dependency Tracking: Build dependency graph for proper ordering
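The parsing process above ultimately produces an in-memory representation of each message. A hedged sketch of what such a representation might look like (illustrative types, not ROC's actual structs):

// Hedged sketch of a parsed .msg representation; ROC's internal model may differ.
#[derive(Debug)]
pub struct MsgDefinition {
    pub package: String,            // e.g. "my_robot_msgs"
    pub name: String,               // e.g. "RobotStatus"
    pub fields: Vec<MsgField>,
    pub constants: Vec<MsgConstant>,
}

#[derive(Debug)]
pub struct MsgField {
    pub type_name: String,          // e.g. "float64" or "geometry_msgs/Pose"
    pub array: Option<ArrayBound>,  // None for scalar fields
    pub name: String,
    pub comment: Option<String>,    // preserved documentation
}

#[derive(Debug)]
pub struct MsgConstant {
    pub type_name: String,
    pub name: String,               // e.g. "STATUS_OK"
    pub value: String,              // stored as written, e.g. "0"
}

#[derive(Debug)]
pub enum ArrayBound {
    Unbounded,         // T[]
    Fixed(usize),      // T[N]
    UpperBound(usize), // T[<=N]
}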
Advanced Features Supported
Header Information
ROC extracts and preserves:
- Package names from message paths
- Comments and documentation
- Field ordering and grouping
- Constant definitions
Type Analysis
ROC analyzes:
- Primitive vs. composite types
- Array bounds and constraints
- Message dependencies
- Namespace resolution
Error Detection
ROC validates:
- Type name correctness
- Array syntax validity
- Constant value compatibility
- Circular dependency detection
Message to Protobuf Conversion
When converting ROS2 messages to Protobuf, ROC applies intelligent transformations:
Type Mapping Strategy
Direct Mappings
# ROS2 → Protobuf
bool active # → bool active = 1;
int32 count # → int32 count = 2;
float64 value # → double value = 3;
string name # → string name = 4;
Array Conversions
# ROS2 arrays → Protobuf repeated fields
int32[] numbers # → repeated int32 numbers = 1;
string[10] fixed_strings # → repeated string fixed_strings = 2;
geometry_msgs/Point[] points # → repeated geometry_msgs.Point points = 3;
Message Reference Resolution
# ROS2 message reference
geometry_msgs/Pose current_pose
# Becomes Protobuf field
geometry_msgs.Pose current_pose = 1;
Package and Namespace Handling
ROC converts ROS2 package structure to Protobuf packages:
# File: my_robot_msgs/msg/RobotStatus.msg
std_msgs/Header header
geometry_msgs/Pose pose
Becomes:
// robot_status.proto
syntax = "proto3";
package my_robot_msgs;
import "std_msgs/header.proto";
import "geometry_msgs/pose.proto";
message RobotStatus {
std_msgs.Header header = 1;
geometry_msgs.Pose pose = 2;
}
Constant Handling
ROS2 constants are converted to Protobuf enums when appropriate:
# ROS2 constants
uint8 MODE_MANUAL=0
uint8 MODE_AUTO=1
uint8 MODE_EMERGENCY=2
uint8 current_mode
Becomes:
enum Mode {
MODE_MANUAL = 0;
MODE_AUTO = 1;
MODE_EMERGENCY = 2;
}
message RobotControl {
Mode current_mode = 1;
}
Common ROS2 Message Patterns
Standard Header Pattern
Many ROS2 messages include a standard header:
std_msgs/Header header
# ... other fields
ROC recognizes this pattern and handles the std_msgs dependency appropriately.
Sensor Data Pattern
Sensor messages often follow this structure:
std_msgs/Header header
# Sensor-specific data fields
float64[] ranges
float64 angle_min
float64 angle_max
float64 angle_increment
Status/Diagnostic Pattern
Status messages typically include:
std_msgs/Header header
uint8 level # Status level (OK, WARN, ERROR)
string name # Component name
string message # Human-readable status message
string hardware_id # Hardware identifier
diagnostic_msgs/KeyValue[] values # Additional diagnostic data
Integration with ROS2 Ecosystem
Package Dependencies
ROC understands common ROS2 message packages:
- std_msgs: Standard message types (Header, String, etc.)
- geometry_msgs: Geometric primitives (Point, Pose, Twist, etc.)
- sensor_msgs: Sensor data (LaserScan, Image, PointCloud, etc.)
- nav_msgs: Navigation messages (Path, OccupancyGrid, etc.)
- action_msgs: Action-related messages
- diagnostic_msgs: System diagnostics
Build System Integration
ROC-generated protobuf files can be integrated into ROS2 build systems:
# CMakeLists.txt
find_package(protobuf REQUIRED)
# Convert ROS2 messages to protobuf
execute_process(
COMMAND roc idl protobuf ${CMAKE_CURRENT_SOURCE_DIR}/msg/*.msg
--output ${CMAKE_CURRENT_BINARY_DIR}/proto/
)
# Add protobuf generation
protobuf_generate_cpp(PROTO_SRCS PROTO_HDRS ${PROTO_FILES})
Best Practices
Message Design
- Keep messages simple and focused
- Use descriptive field names
- Include appropriate documentation
- Follow ROS2 naming conventions
Conversion Considerations
- Be aware of type precision differences
- Consider array bounds in target format
- Plan for constant handling strategy
- Document conversion decisions
Performance Tips
- Use appropriate numeric types
- Minimize nested message depth
- Consider serialization efficiency
- Profile converted message performance
Limitations and Considerations
ROS2 to Protobuf Limitations
- Service Definitions: ROC currently focuses on message definitions
- Action Definitions: Action definitions require special handling
- Complex Constants: Some constant expressions may not convert directly
- Custom Types: Very specialized ROS2 types may need manual attention
Protobuf to ROS2 Limitations
- Oneof Fields: ROS2 doesn't have direct oneof equivalent
- Map Types: Converted to key-value pair arrays
- Any Types: Not directly supported in ROS2
- Extensions: Protobuf extensions don't map to ROS2
Understanding these patterns and limitations helps ensure successful conversion between ROS2 message formats and Protobuf schemas.
Type Mapping
Effective conversion between Protobuf and ROS2 message formats requires careful consideration of type system differences. This chapter provides comprehensive information about how ROC maps types between these systems.
Type System Comparison
Protobuf Type System
Protobuf uses a rich type system designed for cross-language compatibility:
- Primitive Types: Integers of various sizes, floating-point, boolean, string, bytes
- Composite Types: Messages (structs), enums, oneofs (unions)
- Container Types: Repeated fields (arrays), maps
- Special Types: Well-known types (Timestamp, Duration, Any, etc.)
- Advanced Features: Optional fields, default values, extensions
ROS2 Type System
ROS2 uses a simpler, more constrained type system:
- Primitive Types: Fixed-size integers, floating-point, boolean, string
- Composite Types: Messages (structs), constants
- Container Types: Fixed and dynamic arrays
- Special Types: Standard message types (Header, etc.)
- Constraints: Bounded arrays, default values
Comprehensive Type Mapping Tables
Protobuf to ROS2 Mapping
Numeric Types
Protobuf Type | ROS2 Type | Size | Signed | Notes |
---|---|---|---|---|
bool | bool | 1 byte | N/A | Direct mapping |
int32 | int32 | 4 bytes | Yes | Direct mapping |
int64 | int64 | 8 bytes | Yes | Direct mapping |
uint32 | uint32 | 4 bytes | No | Direct mapping |
uint64 | uint64 | 8 bytes | No | Direct mapping |
sint32 | int32 | 4 bytes | Yes | ZigZag encoded in protobuf |
sint64 | int64 | 8 bytes | Yes | ZigZag encoded in protobuf |
fixed32 | uint32 | 4 bytes | No | Fixed-width encoding |
fixed64 | uint64 | 8 bytes | No | Fixed-width encoding |
sfixed32 | int32 | 4 bytes | Yes | Fixed-width signed |
sfixed64 | int64 | 8 bytes | Yes | Fixed-width signed |
float | float32 | 4 bytes | Yes | IEEE 754 single precision |
double | float64 | 8 bytes | Yes | IEEE 754 double precision |
String and Binary Types
Protobuf Type | ROS2 Type | Notes |
---|---|---|
string | string | UTF-8 encoded strings |
bytes | uint8[] | Binary data as byte array |
Container Types
Protobuf Type | ROS2 Type | Example |
---|---|---|
repeated T | T[] | repeated int32 values → int32[] values |
map<K,V> | MapEntry[] | map<string,int32> data → DataEntry[] data |
ROS2 to Protobuf Mapping
Numeric Types
ROS2 Type | Protobuf Type | Rationale |
---|---|---|
bool | bool | Direct mapping |
byte | uint32 | ROS2 byte is unsigned 8-bit |
char | int32 | ROS2 char is signed 8-bit |
int8 | int32 | Protobuf doesn't have 8-bit integers |
uint8 | uint32 | Protobuf doesn't have 8-bit integers |
int16 | int32 | Protobuf doesn't have 16-bit integers |
uint16 | uint32 | Protobuf doesn't have 16-bit integers |
int32 | int32 | Direct mapping |
uint32 | uint32 | Direct mapping |
int64 | int64 | Direct mapping |
uint64 | uint64 | Direct mapping |
float32 | float | Direct mapping |
float64 | double | Direct mapping |
string | string | Direct mapping |
Array Types
ROS2 Type | Protobuf Type | Notes |
---|---|---|
T[] | repeated T | Dynamic arrays |
T[N] | repeated T | Fixed-size arrays (size constraint lost) |
T[<=N] | repeated T | Bounded arrays (bound constraint lost) |
Special Type Conversions
Protobuf Oneof to ROS2
Protobuf oneof fields don't have a direct equivalent in ROS2. ROC handles this by creating separate optional fields:
// Protobuf
message Command {
oneof command_type {
string text_command = 1;
int32 numeric_command = 2;
bool flag_command = 3;
}
}
Converts to:
# ROS2 - all fields are optional, only one should be set
string text_command
int32 numeric_command
bool flag_command
Protobuf Maps to ROS2
Maps are converted to arrays of key-value pair messages:
// Protobuf
message Configuration {
map<string, double> parameters = 1;
}
Converts to:
# Configuration.msg
ConfigurationParametersEntry[] parameters
# ConfigurationParametersEntry.msg (auto-generated)
string key
float64 value
ROS2 Constants to Protobuf
ROS2 constants are converted to enum values when they represent a set of related values:
# ROS2
uint8 STATE_IDLE=0
uint8 STATE_MOVING=1
uint8 STATE_ERROR=2
uint8 current_state
Converts to:
// Protobuf
enum State {
STATE_IDLE = 0;
STATE_MOVING = 1;
STATE_ERROR = 2;
}
message RobotStatus {
State current_state = 1;
}
Well-Known Type Mappings
Protobuf Well-Known Types
ROC provides special handling for common Protobuf well-known types:
Protobuf Type | ROS2 Equivalent | Notes |
---|---|---|
google.protobuf.Timestamp | builtin_interfaces/Time | Nanosecond precision |
google.protobuf.Duration | builtin_interfaces/Duration | Nanosecond precision |
google.protobuf.Empty | Empty message | No fields |
google.protobuf.StringValue | string | Wrapper type flattened |
google.protobuf.Int32Value | int32 | Wrapper type flattened |
google.protobuf.BoolValue | bool | Wrapper type flattened |
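As a concrete illustration of the first two rows of this table, a timestamped proto message and the ROS2 message it would map to could look like the following (the file and field names are made up for this example):

// stamped_value.proto
syntax = "proto3";
import "google/protobuf/timestamp.proto";

message StampedValue {
  google.protobuf.Timestamp stamp = 1;
  double value = 2;
}

Becomes:

# StampedValue.msg
builtin_interfaces/Time stamp
float64 value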
Standard ROS2 Types
Common ROS2 types have conventional Protobuf mappings:
ROS2 Type | Protobuf Equivalent | Notes |
---|---|---|
std_msgs/Header | Custom message | Timestamp + frame_id |
geometry_msgs/Point | Custom message | x, y, z coordinates |
geometry_msgs/Quaternion | Custom message | x, y, z, w components |
geometry_msgs/Pose | Custom message | Position + orientation |
geometry_msgs/Twist | Custom message | Linear + angular velocity |
Type Conversion Edge Cases
Precision and Range Considerations
Integer Overflow Scenarios
# ROS2 uint8 field
uint8 small_value 255 # Maximum value for uint8
When converted to Protobuf uint32
, the range increases significantly. ROC preserves the original constraint information in comments:
// Protobuf
message Example {
uint32 small_value = 1; // Originally uint8, max value 255
}
Floating-Point Precision
# ROS2 float32
float32 precise_value 3.14159265359
Converting to Protobuf maintains the precision level:
float precise_value = 1; // 32-bit precision maintained
Array Bound Handling
Fixed-Size Arrays
# ROS2 fixed-size array
float64[3] position
Protobuf doesn't support fixed-size arrays, so this becomes:
repeated double position = 1; // Size constraint documented separately
Bounded Arrays
# ROS2 bounded array
int32[<=100] readings
The bound constraint is preserved in documentation:
repeated int32 readings = 1; // Maximum 100 elements
Advanced Mapping Strategies
Nested Message Flattening
ROC flattens nested Protobuf messages for ROS2 compatibility:
// Protobuf nested messages
message Robot {
message Status {
bool active = 1;
string state = 2;
}
Status current_status = 1;
string robot_id = 2;
}
Becomes:
# Robot.msg
RobotStatus current_status
string robot_id
# RobotStatus.msg (flattened)
bool active
string state
Package and Namespace Translation
Protobuf Package to ROS2 Package
// Protobuf
syntax = "proto3";
package robotics.sensors;
message LaserData { ... }
Becomes:
# File: robotics_sensors_msgs/msg/LaserData.msg
# Content of the message...
ROS2 Package to Protobuf Package
# File: my_robot_msgs/msg/Status.msg
# Message content...
Becomes:
syntax = "proto3";
package my_robot_msgs;
message Status { ... }
Configuration and Customization
Custom Type Mappings
ROC supports configuration files for custom type mappings:
# type_mappings.yaml
protobuf_to_ros2:
"my.custom.Timestamp": "builtin_interfaces/Time"
"my.custom.Position": "geometry_msgs/Point"
ros2_to_protobuf:
"my_msgs/CustomType": "my.package.CustomMessage"
Usage:
roc idl protobuf --config type_mappings.yaml input_files...
Mapping Validation
ROC validates type mappings and warns about potential issues:
Warning: Converting uint64 to int64 may cause overflow for large values
Warning: Map type conversion may affect lookup performance
Warning: Oneof semantics lost in ROS2 conversion
Performance Implications
Serialization Efficiency
Different type choices affect serialization performance:
- Protobuf varint encoding: Smaller integers encode more efficiently
- Fixed-width types: Predictable size but potentially wasteful
- String vs bytes: UTF-8 validation overhead for strings
Memory Usage
Type conversions can affect memory usage:
- Array bounds: ROS2 bounded arrays vs Protobuf repeated fields
- Message size: Nested vs flattened message structures
- Field ordering: Affects struct packing and cache efficiency
Best Practices for Type Mapping
Design Considerations
- Choose appropriate numeric types: Don't use int64 when int32 suffices
- Consider array bounds: Use bounded arrays in ROS2 when possible
- Document constraints: Preserve semantic meaning across conversions
- Plan for evolution: Design messages that can evolve over time
Conversion Guidelines
- Test thoroughly: Validate converted messages with real data
- Preserve semantics: Maintain the original meaning of fields
- Document decisions: Record rationale for non-obvious mappings
- Monitor performance: Profile converted message performance
Maintenance Strategies
- Version control: Track message schema changes
- Backward compatibility: Plan for schema evolution
- Testing automation: Automated conversion validation
- Documentation updates: Keep mapping documentation current
Understanding these type mapping strategies ensures successful and maintainable conversions between Protobuf and ROS2 message formats.
Workspace Management Overview
ROC includes a comprehensive workspace management system that serves as a modern, high-performance replacement for colcon. The roc work
command provides a complete suite of tools for ROS2 workspace management, including package creation, discovery, dependency resolution, and building.
Key Features
Complete Colcon Replacement
ROC's build system (roc work build
) is designed as a drop-in replacement for colcon build
with the following advantages:
- Native Performance: Written in Rust for superior performance and memory safety
- Parallel Execution: Multi-threaded builds with intelligent dependency resolution
- Environment Isolation: Clean environment management preventing build contamination
- Comprehensive Logging: Detailed build logs and error reporting
- Full Compatibility: Supports all major colcon command-line options
Package Management
- Intelligent Discovery: Automatic workspace scanning and package.xml parsing
- Metadata Extraction: Complete package information including dependencies, maintainers, and build types
- Build Type Support: Full support for ament_cmake, ament_python, and cmake packages
- Dependency Validation: Circular dependency detection and resolution
Development Workflow
- Package Creation: Intelligent wizard for creating properly structured ROS2 packages
- Build Optimization: Incremental builds and parallel execution
- Environment Setup: Automatic generation of setup scripts for workspace activation
Architecture
The workspace management system is built on several core components:
- Package Discovery Engine: Recursively scans workspace directories for package.xml files (a simplified discovery sketch follows this list)
- Dependency Graph Resolver: Builds and validates package dependency graphs
- Build Executor: Manages parallel build execution with proper environment isolation
- Environment Manager: Handles environment variable setup and setup script generation
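A hedged sketch of the discovery step, using plain std recursion over the workspace tree (ROC's actual engine also parses each manifest with roxmltree, which is omitted here):

use std::fs;
use std::path::{Path, PathBuf};

// Hedged sketch of the package discovery engine: recursively collect every
// package.xml under the workspace root.
fn find_package_manifests(root: &Path, found: &mut Vec<PathBuf>) -> std::io::Result<()> {
    for entry in fs::read_dir(root)? {
        let path = entry?.path();
        if path.is_dir() {
            let manifest = path.join("package.xml");
            if manifest.is_file() {
                // Treat this directory as a package root; do not descend further.
                found.push(manifest);
            } else {
                find_package_manifests(&path, found)?;
            }
        }
    }
    Ok(())
}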
Command Structure
roc work <subcommand> [options]
Available Subcommands
- build - Build packages in the workspace (colcon replacement)
- create - Create new ROS2 packages with templates
- list - List and discover packages in the workspace
- info - Display detailed package information
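Typical invocations of these subcommands look like the following; the flags shown mirror their colcon counterparts, the package names are placeholders, and the exact argument forms may differ:

# Build the whole workspace, or only selected packages
roc work build
roc work build --packages-select my_robot_driver

# Discover and inspect packages
roc work list
roc work info my_robot_driver

# Create a new package from a template
roc work create my_new_package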
Compatibility
ROC's workspace system is designed to be fully compatible with existing ROS2 workflows:
- Colcon Arguments: All major colcon build options are supported
- Package Formats: Supports package.xml formats 2 and 3
- Build Systems: Works with ament_cmake, ament_python, and plain cmake
- Environment: Generates standard ROS2 setup scripts compatible with existing tools
Performance Benefits
Compared to colcon, ROC provides:
- Faster Startup: Native binary with minimal overhead
- Better Parallelization: More efficient worker thread management
- Memory Efficiency: Lower memory usage during builds
- Cleaner Environment: Better isolation prevents build environment pollution
- Superior Error Handling: More detailed error messages and recovery options
The following sections provide detailed information about each component of the workspace management system.
Build System Architecture
ROC's build system is designed as a high-performance, parallel replacement for colcon. This chapter details the internal architecture and implementation of the build system.
Core Components
1. Build Configuration (BuildConfig
)
The build system is driven by a comprehensive configuration structure that mirrors colcon's options:
#![allow(unused)] fn main() { pub struct BuildConfig { pub base_paths: Vec<PathBuf>, // Paths to search for packages pub packages_select: Option<Vec<String>>, // Build only selected packages pub packages_ignore: Option<Vec<String>>, // Ignore specific packages pub packages_up_to: Option<Vec<String>>, // Build up to specified packages pub parallel_workers: u32, // Number of parallel build workers pub merge_install: bool, // Use merged vs isolated install pub symlink_install: bool, // Use symlinks for installs pub cmake_args: Vec<String>, // Additional CMake arguments pub cmake_target: Option<String>, // Specific CMake target pub continue_on_error: bool, // Continue building on failures pub workspace_root: PathBuf, // Root of workspace pub install_base: PathBuf, // Install directory pub build_base: PathBuf, // Build directory pub isolated: bool, // Isolated vs merged installs } }
2. Build Orchestrator (ColconBuilder
)
The main orchestrator manages the entire build process:
#![allow(unused)] fn main() { pub struct ColconBuilder { config: BuildConfig, packages: Vec<PackageMeta>, // Discovered packages build_order: Vec<usize>, // Topologically sorted build order } }
Build Process Flow
- Package Discovery: Scan the workspace for package.xml files
- Dependency Resolution: Build the dependency graph and determine the build order (a minimal ordering sketch follows this list)
- Environment Setup: Prepare build environments for each package
- Build Execution: Execute builds in parallel with proper dependency ordering
- Setup Script Generation: Create workspace activation scripts
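The dependency-resolution step amounts to a topological sort of the package graph. A minimal, self-contained sketch of that ordering (Kahn's algorithm over package names, independent of ROC's internal data structures):

use std::collections::{HashMap, VecDeque};

// Hedged sketch of dependency resolution: order packages so that every package
// is built after its workspace dependencies (Kahn's algorithm). Dependencies
// that are not themselves workspace packages are treated as external.
fn build_order(deps: &HashMap<String, Vec<String>>) -> Result<Vec<String>, String> {
    let mut indegree: HashMap<&str, usize> = HashMap::new();
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();

    for (pkg, pkg_deps) in deps {
        indegree.entry(pkg.as_str()).or_insert(0);
        for dep in pkg_deps {
            if deps.contains_key(dep) {
                *indegree.entry(pkg.as_str()).or_insert(0) += 1;
                dependents.entry(dep.as_str()).or_default().push(pkg.as_str());
            }
        }
    }

    // Start with packages whose workspace dependencies are already satisfied.
    let mut ready: VecDeque<&str> = indegree
        .iter()
        .filter(|(_, count)| **count == 0)
        .map(|(pkg, _)| *pkg)
        .collect();

    let mut order = Vec::new();
    while let Some(pkg) = ready.pop_front() {
        order.push(pkg.to_string());
        for &dependent in dependents.get(pkg).map(|v| v.as_slice()).unwrap_or(&[]) {
            let count = indegree.get_mut(dependent).expect("known package");
            *count -= 1;
            if *count == 0 {
                ready.push_back(dependent);
            }
        }
    }

    if order.len() != deps.len() {
        return Err("circular dependency detected among workspace packages".to_string());
    }
    Ok(order)
}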
3. Build Executor (BuildExecutor
)
The build executor handles the actual compilation process:
Sequential vs Parallel Execution
Sequential Mode (parallel_workers = 1):
- Uses the build_sequential_filtered() method
- Creates a fresh environment for each package to prevent contamination
- Processes packages in strict topological order
Parallel Mode (parallel_workers > 1):
- Spawns worker threads up to the configured limit
- Uses shared state management for coordination
- Implements work-stealing queue for load balancing
Build State Management
#![allow(unused)] fn main() { pub struct BuildState { package_states: Arc<Mutex<HashMap<String, PackageState>>>, install_paths: Arc<Mutex<HashMap<String, PathBuf>>>, build_count: Arc<Mutex<(usize, usize)>>, // (successful, failed) } #[derive(Debug, Clone, PartialEq)] pub enum PackageState { Pending, // Waiting for dependencies Building, // Currently being built Completed, // Successfully built Failed, // Build failed } }
4. Build Type Handlers
The system supports multiple build types through dedicated handlers:
CMake Handler (build_cmake_package_with_env)
- Configures CMake with appropriate flags and environment
- Supports both ament_cmake and plain cmake packages
- Handles install prefix configuration for isolated/merged installs
cmake -S <source> -B <build> -DCMAKE_INSTALL_PREFIX=<install>
cmake --build <build> --target install -- -j<workers>
Python Handler (build_python_package_with_env)
- Uses Python setuptools for ament_python packages
- Handles build and install phases separately
python3 setup.py build --build-base <build>
python3 setup.py install --prefix "" --root <install>
Environment Management
Build-Time Environment
Each package build receives a carefully constructed environment:
- Base Environment: Inherits from current shell environment
- Dependency Paths: Adds install paths of all built dependencies
- Build Tools: Ensures CMake, Python, and other tools are available
- ROS Environment: Sets up AMENT_PREFIX_PATH, CMAKE_PREFIX_PATH, etc.
Environment Isolation
The system uses two strategies for environment isolation:
Sequential Builds: Each package gets a fresh EnvironmentManager instance to prevent environment accumulation that can cause CMake hangs.
Parallel Builds: Each worker thread maintains its own environment state, updating it only with completed dependencies.
Path Management
Environment variables are updated using intelligent path prepending:
fn update_path_env(&mut self, var_name: &str, new_path: &Path) {
    let separator = if cfg!(windows) { ";" } else { ":" };
    let new_path_str = new_path.to_string_lossy();
    if let Some(current) = self.env_vars.get(var_name) {
        // Check for duplicates before adding
        let paths: Vec<&str> = current.split(separator).collect();
        if !paths.contains(&new_path_str.as_ref()) {
            let updated = format!("{}{}{}", new_path_str, separator, current);
            self.env_vars.insert(var_name.to_string(), updated);
        }
    } else {
        self.env_vars.insert(var_name.to_string(), new_path_str.to_string());
    }
}
Parallel Execution Strategy
Worker Thread Model
The parallel build system uses a work-stealing approach:
- Worker Spawning: Creates parallel_workers threads
- Work Discovery: Each worker scans for packages whose dependencies are satisfied
- State Synchronization: Uses Arc<Mutex<>> for thread-safe state sharing
- Load Balancing: Workers dynamically pick up available work (a simplified sketch follows)
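To make the coordination concrete, here is a small, self-contained sketch of this worker model. It is not the actual executor: the three-package workspace, the two-worker count, and the sleep that stands in for a real build step are invented for illustration; only the shared Arc<Mutex<HashMap>> state and the Pending/Building/Completed transitions mirror the design described above.
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

#[derive(Clone, PartialEq)]
enum PackageState { Pending, Building, Completed }

fn main() {
    // Made-up workspace: name -> in-workspace dependencies.
    let packages: Vec<(String, Vec<String>)> = vec![
        ("msgs".into(), vec![]),
        ("lib".into(), vec!["msgs".into()]),
        ("app".into(), vec!["lib".into()]),
    ];
    let states: Arc<Mutex<HashMap<String, PackageState>>> = Arc::new(Mutex::new(
        packages.iter().map(|(n, _)| (n.clone(), PackageState::Pending)).collect(),
    ));

    let workers: Vec<_> = (0..2).map(|_| {
        let states = Arc::clone(&states);
        let packages = packages.clone();
        thread::spawn(move || loop {
            // Under one lock: find a pending package whose deps are all completed.
            let next = {
                let mut s = states.lock().unwrap();
                let ready = packages.iter().find(|(name, deps)| {
                    s[name] == PackageState::Pending
                        && deps.iter().all(|d| s[d] == PackageState::Completed)
                });
                match ready {
                    Some((name, _)) => {
                        s.insert(name.clone(), PackageState::Building);
                        Some(name.clone())
                    }
                    None if s.values().all(|v| *v == PackageState::Completed) => return,
                    None => None, // nothing ready yet; wait for other workers
                }
            };
            match next {
                Some(name) => {
                    thread::sleep(Duration::from_millis(10)); // stand-in for the real build
                    states.lock().unwrap().insert(name.clone(), PackageState::Completed);
                    println!("built {name}");
                }
                None => thread::sleep(Duration::from_millis(5)),
            }
        })
    }).collect();

    for worker in workers {
        worker.join().unwrap();
    }
}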
Dependency Satisfaction
Before building a package, workers verify all dependencies are completed:
#![allow(unused)] fn main() { let all_deps_ready = deps.iter().all(|dep| { states.get(dep).map(|s| *s == PackageState::Completed).unwrap_or(true) }); }
External dependencies (not in workspace) are assumed to be available.
Error Handling
The system supports flexible error handling:
- Fail Fast (default): Stop all builds on first failure
- Continue on Error: Mark failed packages but continue with independent packages
- Detailed Logging: Capture stdout/stderr for debugging
Performance Optimizations
Memory Management
- Zero-copy string handling where possible
- Efficient HashMap usage for package lookup
- Minimal cloning of large data structures
I/O Optimization
- Parallel directory scanning during package discovery
- Asynchronous log writing
- Efficient XML parsing with roxmltree
Build Efficiency
- Leverages CMake's internal dependency checking
- Reuses build directories for incremental builds
- Intelligent environment caching
Error Recovery
The build system includes comprehensive error handling:
Build Failures
- Captures complete stdout/stderr output
- Provides detailed error context
- Suggests common fixes for typical issues
Environment Issues
- Validates required tools (cmake, python) are available
- Checks for common environment problems
- Provides clear error messages for missing dependencies
Recovery Strategies
- Supports partial rebuilds after fixing issues
- Maintains build state across invocations
- Allows selective package rebuilds
This architecture provides a robust, scalable foundation for workspace builds that significantly outperforms traditional Python-based tools while maintaining full compatibility with existing ROS2 workflows.
Package Discovery
ROC's package discovery system automatically scans workspace directories to find and parse ROS2 packages. This chapter details how the discovery process works and how it handles various package configurations.
Discovery Process
1. Workspace Scanning
The discovery process begins by recursively scanning the configured base paths (default: src/):
#![allow(unused)] fn main() { pub fn discover_packages(base_paths: &[PathBuf]) -> Result<Vec<PackageMeta>, Box<dyn std::error::Error>> { let mut packages = Vec::new(); for base_path in base_paths { if base_path.exists() { discover_packages_in_path(base_path, &mut packages)?; } else { println!("Warning: Base path {} does not exist", base_path.display()); } } Ok(packages) } }
2. Package Identification
Packages are identified by the presence of a package.xml file in the directory root. The discovery engine:
- Recursively walks directory trees using the walkdir crate
- Skips directories containing COLCON_IGNORE files
- Parses each package.xml file found
- Extracts comprehensive package metadata (a minimal discovery sketch follows)
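As a rough, self-contained sketch of this identification step (not roc's actual code), the walkdir crate can locate manifests while pruning ignored directories:
use std::ffi::OsStr;
use std::path::{Path, PathBuf};
use walkdir::WalkDir;

// Collect every package.xml under a base path, skipping any directory
// that contains a COLCON_IGNORE marker file.
fn find_package_manifests(base_path: &Path) -> Vec<PathBuf> {
    WalkDir::new(base_path)
        .into_iter()
        .filter_entry(|e| !(e.file_type().is_dir() && e.path().join("COLCON_IGNORE").exists()))
        .filter_map(Result::ok)
        .filter(|e| e.file_type().is_file() && e.file_name() == OsStr::new("package.xml"))
        .map(|e| e.path().to_path_buf())
        .collect()
}

fn main() {
    for manifest in find_package_manifests(Path::new("src")) {
        println!("found manifest: {}", manifest.display());
    }
}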
3. Manifest Parsing
Each package.xml is parsed using the roxmltree XML parser to extract:
pub struct PackageMeta {
    pub name: String,                // Package name
    pub path: PathBuf,               // Package directory path
    pub build_type: BuildType,       // Build system type
    pub version: String,             // Package version
    pub description: String,         // Package description
    pub maintainers: Vec<String>,    // Package maintainers
    pub build_deps: Vec<String>,     // Build dependencies
    pub buildtool_deps: Vec<String>, // Build tool dependencies
    pub exec_deps: Vec<String>,      // Runtime dependencies
    pub test_deps: Vec<String>,      // Test dependencies
}
XML Parsing Implementation
Dependency Extraction
The parser extracts different types of dependencies from the manifest:
#![allow(unused)] fn main() { // Build dependencies let build_deps: Vec<String> = root .descendants() .filter(|n| n.has_tag_name("build_depend")) .filter_map(|n| n.text()) .map(|s| s.to_string()) .collect(); // Build tool dependencies (cmake, ament_cmake, etc.) let buildtool_deps: Vec<String> = root .descendants() .filter(|n| n.has_tag_name("buildtool_depend")) .filter_map(|n| n.text()) .map(|s| s.to_string()) .collect(); // Runtime dependencies let exec_deps: Vec<String> = root .descendants() .filter(|n| n.has_tag_name("exec_depend") || n.has_tag_name("run_depend")) .filter_map(|n| n.text()) .map(|s| s.to_string()) .collect(); }
Build Type Detection
Build type is determined through multiple strategies:
- Explicit Declaration: Check for <build_type> in the <export> section (see the sketch below)
- File-Based Inference: Examine files in the package directory
- Default Assignment: Fall back to ament_cmake
fn infer_build_type(package_path: &Path) -> BuildType {
    if package_path.join("CMakeLists.txt").exists() {
        BuildType::AmentCmake
    } else if package_path.join("setup.py").exists() {
        BuildType::AmentPython
    } else {
        BuildType::AmentCmake // Default
    }
}
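The explicit-declaration check (strategy 1) could look roughly like this; a standalone sketch using roxmltree, not the project's exact parser:
use roxmltree::Document;

// Return the <export><build_type> value if the manifest declares one.
fn explicit_build_type(manifest_xml: &str) -> Option<String> {
    let doc = Document::parse(manifest_xml).ok()?;
    doc.root_element()
        .children()
        .find(|n| n.has_tag_name("export"))?
        .children()
        .find(|n| n.has_tag_name("build_type"))?
        .text()
        .map(|s| s.trim().to_string())
}

fn main() {
    let xml = r#"<package format="3">
  <name>demo_pkg</name>
  <export><build_type>ament_python</build_type></export>
</package>"#;
    // File-based inference is only needed when this returns None.
    println!("declared build type: {:?}", explicit_build_type(xml));
}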
Supported Package Formats
Package.xml Format Support
ROC supports both package.xml formats used in ROS2:
- Format 2: Standard format inherited from ROS1
- Format 3: Enhanced format with conditional dependencies and groups
Build Type Support
The discovery system recognizes these build types:
#[derive(Debug, Clone, PartialEq)]
pub enum BuildType {
    AmentCmake,    // Standard C++ packages
    AmentPython,   // Pure Python packages
    Cmake,         // Plain CMake packages
    Other(String), // Extension point for future types
}
AmentCmake Packages
- Use CMake as the build system
- Include ament_cmake macros for ROS2 integration
- Typically contain C++ source code
- Most common package type in ROS2
AmentPython Packages
- Use Python setuptools for building
- Contain Python modules and scripts
- Use setup.py for build configuration
- Common for pure Python ROS2 nodes
Plain CMake Packages
- Use standard CMake without ament extensions
- Useful for integrating non-ROS libraries
- Less common but fully supported
Error Handling and Validation
XML Parsing Errors
The discovery system handles various XML parsing issues:
#![allow(unused)] fn main() { match parse_package_xml(&package_xml) { Ok(package_meta) => { packages.push(package_meta); } Err(e) => { eprintln!("Warning: Failed to parse {}: {}", package_xml.display(), e); } } }
Common issues addressed:
- Malformed XML syntax
- Missing required elements (<name>, <version>)
- Invalid dependency declarations
- Encoding issues
Package Validation
During discovery, several validation checks are performed:
- Unique Names: Ensure no duplicate package names in workspace
- Required Elements: Verify presence of essential package.xml elements
- Path Validity: Confirm package paths are accessible
- Build Type Consistency: Validate build type matches package contents
Duplicate Package Handling
If multiple packages with the same name are discovered:
#![allow(unused)] fn main() { // Check for duplicate package names let mut seen_names = std::collections::HashSet::new(); for package in &packages { if !seen_names.insert(&package.name) { return Err(format!("Duplicate package name found: {}", package.name).into()); } } }
Performance Optimizations
Efficient Directory Traversal
The discovery system uses optimized directory traversal:
- Parallel Scanning: Multiple base paths scanned concurrently
- Early Termination: Stop scanning ignored directories immediately
- Memory Efficiency: Stream processing of directory entries
XML Parser Selection
ROC uses roxmltree for XML parsing because:
- Performance: Faster than alternatives for small XML files
- Memory Efficiency: Low memory overhead
- Safety: Memory-safe with proper error handling
- Simplicity: Clean API for tree traversal
Caching Strategy
While not currently implemented, the architecture supports future caching:
- Manifest Checksums: Cache parsed results based on file modification time
- Incremental Discovery: Only re-scan changed directories
- Metadata Persistence: Save/restore package metadata across invocations
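As a purely illustrative sketch of the manifest-checksum idea above (not implemented in roc today), a cache keyed on modification time might look like this; the struct and its name are invented for the example:
use std::collections::HashMap;
use std::fs;
use std::path::{Path, PathBuf};
use std::time::SystemTime;

// Hypothetical cache: remember each manifest's mtime and only re-parse on change.
#[derive(Default)]
struct ManifestCache {
    mtimes: HashMap<PathBuf, SystemTime>,
}

impl ManifestCache {
    /// Returns true if the manifest is new or has changed since the last check.
    fn needs_reparse(&mut self, manifest: &Path) -> std::io::Result<bool> {
        let mtime = fs::metadata(manifest)?.modified()?;
        let changed = self.mtimes.get(manifest) != Some(&mtime);
        self.mtimes.insert(manifest.to_path_buf(), mtime);
        Ok(changed)
    }
}

fn main() -> std::io::Result<()> {
    let mut cache = ManifestCache::default();
    let manifest = Path::new("src/my_package/package.xml"); // example path
    if manifest.exists() {
        println!("re-parse needed: {}", cache.needs_reparse(manifest)?);
        println!("re-parse needed: {}", cache.needs_reparse(manifest)?); // cached now
    }
    Ok(())
}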
Integration with Build System
Package Filtering
Discovery results can be filtered based on build configuration:
#![allow(unused)] fn main() { // Apply packages_select filter if let Some(ref selected) = self.config.packages_select { self.packages.retain(|pkg| selected.contains(&pkg.name)); } // Apply packages_ignore filter if let Some(ref ignored) = self.config.packages_ignore { self.packages.retain(|pkg| !ignored.contains(&pkg.name)); } }
Dependency Graph Input
The discovered packages serve as input to the dependency resolution system:
- Package names become graph nodes
- Dependencies become directed edges
- Build types determine build strategies
- Metadata guides environment setup
Future Enhancements
Conditional Dependencies
Support for package.xml format 3 conditional dependencies:
<depend condition="$ROS_VERSION == 2">ros2_specific_pkg</depend>
Package Groups
Enhanced support for dependency groups:
<group_depend>navigation_stack</group_depend>
Extended Metadata
Additional metadata extraction for:
- License information
- Repository URLs
- Bug tracker links
- Documentation links
The package discovery system provides a solid foundation for workspace management, efficiently finding and parsing ROS2 packages while maintaining compatibility with existing tooling and workflows.
Dependency Resolution
ROC's dependency resolution system builds a comprehensive dependency graph from discovered packages and determines the optimal build order. This chapter explains the algorithms and strategies used for dependency management.
Dependency Graph Construction
Graph Representation
The dependency system uses a directed graph where:
- Nodes: Represent packages in the workspace
- Edges: Represent dependencies (A → B means A depends on B)
- Direction: Dependencies point from dependent to dependency
#![allow(unused)] fn main() { pub fn topological_sort(packages: &[PackageMeta]) -> Result<Vec<usize>, Box<dyn std::error::Error>> { let mut name_to_index: HashMap<String, usize> = HashMap::new(); let mut graph: Vec<Vec<usize>> = vec![Vec::new(); packages.len()]; let mut in_degree: Vec<usize> = vec![0; packages.len()]; // Build name to index mapping for (idx, package) in packages.iter().enumerate() { name_to_index.insert(package.name.clone(), idx); } // Build dependency graph for (pkg_idx, package) in packages.iter().enumerate() { for dep_name in &package.build_deps { if let Some(&dep_idx) = name_to_index.get(dep_name) { graph[dep_idx].push(pkg_idx); in_degree[pkg_idx] += 1; } // External dependencies are ignored (assumed available) } } } }
Dependency Types
The system considers multiple types of dependencies when building the graph:
Build Dependencies (build_depend)
- Required for compilation/building
- Must be built before dependent package
- Include headers, libraries, and build tools
Build Tool Dependencies (buildtool_depend)
- Build system tools (cmake, ament_cmake, etc.)
- Usually external to workspace
- Considered for ordering if present in workspace
Runtime Dependencies (exec_depend)
- Required at runtime
- Not directly used for build ordering
- Important for environment setup
External Dependencies
Dependencies not found in the workspace are treated as external:
- Assumed to be available in the environment
- Not included in build ordering
- May trigger warnings if expected but missing
Topological Sorting Algorithm
Kahn's Algorithm Implementation
ROC uses Kahn's algorithm for topological sorting, which is efficient and provides clear cycle detection:
// Kahn's algorithm for topological sorting
let mut queue: VecDeque<usize> = VecDeque::new();
let mut result: Vec<usize> = Vec::new();

// Add all nodes with no incoming edges
for (idx, &degree) in in_degree.iter().enumerate() {
    if degree == 0 {
        queue.push_back(idx);
    }
}

while let Some(current) = queue.pop_front() {
    result.push(current);
    // Remove this node and update in-degrees
    for &neighbor in &graph[current] {
        in_degree[neighbor] -= 1;
        if in_degree[neighbor] == 0 {
            queue.push_back(neighbor);
        }
    }
}
Algorithm Benefits
Kahn's algorithm provides several advantages:
- Cycle Detection: Incomplete result indicates circular dependencies
- Efficiency: O(V + E) time complexity
- Stability: Consistent ordering for the same input
- Parallelization: Can identify independent packages for parallel builds
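To make the ordering concrete, here is a small self-contained example of Kahn's algorithm on a made-up three-package workspace (app depends on lib, lib depends on msgs); it mirrors the structure of the excerpts above but is not taken from the roc source:
use std::collections::{HashMap, VecDeque};

fn main() {
    // (package, workspace build dependencies)
    let packages = vec![
        ("app", vec!["lib"]),
        ("lib", vec!["msgs"]),
        ("msgs", vec![]),
    ];

    let name_to_index: HashMap<&str, usize> = packages
        .iter()
        .enumerate()
        .map(|(i, (name, _))| (*name, i))
        .collect();

    // graph[dep] lists the packages that depend on dep; in_degree counts
    // how many workspace dependencies each package still waits on.
    let mut graph = vec![Vec::new(); packages.len()];
    let mut in_degree = vec![0usize; packages.len()];
    for (pkg_idx, (_, deps)) in packages.iter().enumerate() {
        for dep in deps {
            if let Some(&dep_idx) = name_to_index.get(dep) {
                graph[dep_idx].push(pkg_idx);
                in_degree[pkg_idx] += 1;
            }
        }
    }

    // Kahn's algorithm: start from packages with no unresolved dependencies.
    let mut queue: VecDeque<usize> = in_degree
        .iter()
        .enumerate()
        .filter(|&(_, &d)| d == 0)
        .map(|(i, _)| i)
        .collect();
    let mut order = Vec::new();
    while let Some(current) = queue.pop_front() {
        order.push(current);
        for &neighbor in &graph[current] {
            in_degree[neighbor] -= 1;
            if in_degree[neighbor] == 0 {
                queue.push_back(neighbor);
            }
        }
    }

    assert_eq!(order.len(), packages.len(), "circular dependency detected");
    for idx in order {
        println!("build {}", packages[idx].0); // prints msgs, lib, app
    }
}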
Cycle Detection and Resolution
Circular Dependency Detection
Circular dependencies are detected when the topological sort fails to include all packages:
#![allow(unused)] fn main() { // Check for cycles if result.len() != packages.len() { // Find the cycle let remaining: Vec<String> = packages .iter() .enumerate() .filter(|(idx, _)| !result.contains(idx)) .map(|(_, pkg)| pkg.name.clone()) .collect(); return Err(format!("Circular dependency detected among packages: {:?}", remaining).into()); } }
Common Cycle Scenarios
Typical circular dependency patterns:
- Direct Cycles: A depends on B, B depends on A
- Indirect Cycles: A → B → C → A
- Build Tool Cycles: Package depends on build tool that depends on package
Resolution Strategies
When cycles are detected, users can:
- Review Dependencies: Examine package.xml files for unnecessary dependencies
- Split Packages: Break large packages into smaller, independent pieces
- Use Interface Packages: Create interface-only packages to break cycles
- Dependency Inversion: Restructure dependencies using abstract interfaces
Package Filtering
Build Selection Filters
The dependency resolver supports various package selection strategies:
Selective Building (--packages-select)
Build only the specified packages:
#![allow(unused)] fn main() { if let Some(ref selected) = self.config.packages_select { self.packages.retain(|pkg| selected.contains(&pkg.name)); } }
Package Exclusion (--packages-ignore)
Exclude specific packages from builds:
#![allow(unused)] fn main() { if let Some(ref ignored) = self.config.packages_ignore { self.packages.retain(|pkg| !ignored.contains(&pkg.name)); } }
Build Up To (--packages-up-to)
Build dependencies up to specified packages:
#![allow(unused)] fn main() { if let Some(ref up_to) = self.config.packages_up_to { let mut packages_to_build = std::collections::HashSet::new(); // Add target packages for target in up_to { if let Some(pkg) = self.packages.iter().find(|p| &p.name == target) { packages_to_build.insert(pkg.name.clone()); // Add all dependencies recursively self.add_dependencies_recursive(&pkg.name, &mut packages_to_build); } } self.packages.retain(|pkg| packages_to_build.contains(&pkg.name)); } }
Recursive Dependency Collection
For --packages-up-to, dependencies are collected recursively:
#![allow(unused)] fn main() { fn add_dependencies_recursive(&self, pkg_name: &str, packages_to_build: &mut std::collections::HashSet<String>) { if let Some(pkg) = self.packages.iter().find(|p| &p.name == pkg_name) { for dep in &pkg.build_deps { if !packages_to_build.contains(dep) { if self.packages.iter().any(|p| &p.name == dep) { packages_to_build.insert(dep.clone()); self.add_dependencies_recursive(dep, packages_to_build); } } } } } }
Parallel Build Optimization
Independent Package Identification
The topological sort naturally identifies packages that can be built in parallel:
- Packages with no dependencies can start immediately
- Packages with the same dependency level can build concurrently
- Only direct dependencies need to complete before a package starts
Dependency Satisfaction Checking
During parallel builds, each worker verifies dependencies before starting:
#![allow(unused)] fn main() { let all_deps_ready = deps.iter().all(|dep| { states.get(dep).map(|s| *s == PackageState::Completed).unwrap_or(true) }); if all_deps_ready { states.insert(pkg_name.clone(), PackageState::Building); ready_package = Some(pkg_name); break; } }
Load Balancing
The work-stealing approach ensures optimal resource utilization:
- Dynamic Work Assignment: Workers pick up available packages as dependencies complete
- No Static Partitioning: Avoids idle workers when some builds take longer
- Dependency-Aware: Respects build order constraints while maximizing parallelism
Advanced Dependency Scenarios
Cross-Package Dependencies
Handling complex dependency relationships:
Message/Service Dependencies
<build_depend>my_interfaces</build_depend>
<exec_depend>my_interfaces</exec_depend>
Metapackage Dependencies
<buildtool_depend>ament_cmake</buildtool_depend>
<exec_depend>package1</exec_depend>
<exec_depend>package2</exec_depend>
Conditional Dependencies (Format 3)
<depend condition="$ROS_VERSION == 2">ros2_specific_pkg</depend>
Build Tool Resolution
Special handling for build tools:
- External Build Tools: ament_cmake, cmake, python3-setuptools
- Workspace Build Tools: Custom ament extensions built from source
- Version Constraints: Ensuring compatible tool versions
Error Handling and Diagnostics
Dependency Validation
The system performs comprehensive validation:
#![allow(unused)] fn main() { // Check for missing dependencies for package in &packages { for dep in &package.build_deps { if !name_to_index.contains_key(dep) && !is_external_dependency(dep) { warnings.push(format!("Package {} depends on missing package {}", package.name, dep)); } } } }
Diagnostic Output
Detailed information for troubleshooting:
- Build Order Visualization: Show the determined build sequence
- Dependency Tree: Display complete dependency relationships
- Cycle Analysis: Identify specific packages involved in cycles
- Missing Dependencies: List external dependencies that may be missing
Recovery Strategies
When dependency issues are encountered:
- Graceful Degradation: Continue with buildable packages
- Partial Builds: Build independent subgraphs
- Dependency Suggestions: Recommend missing packages to install
- Alternative Orderings: Provide multiple valid build orders when possible
The dependency resolution system provides a robust foundation for workspace builds, ensuring correct build order while maximizing parallel execution opportunities and providing clear diagnostics for troubleshooting dependency issues.
Environment Management
ROC's environment management system handles the complex task of setting up proper build and runtime environments for ROS2 packages. This chapter details how environments are constructed, maintained, and used throughout the build process.
Environment Architecture
Core Components
The environment management system consists of several key components:
pub struct EnvironmentManager {
    env_vars: HashMap<String, String>, // Current environment variables
    install_prefix: PathBuf,           // Install prefix directory
    isolated: bool,                    // Whether using isolated installs
}
Environment Lifecycle
- Initialization: Start with current shell environment
- Package Setup: Add package-specific paths and variables
- Dependency Integration: Include paths from built dependencies
- Build Execution: Provide clean environment to build processes
- Script Generation: Create setup scripts for workspace activation
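As an illustration of step 4 (build execution), a prepared variable map can be handed to a build subprocess via std::process::Command. This is a minimal sketch with made-up values, not the exact roc API:
use std::collections::HashMap;
use std::process::{Command, ExitStatus};

// Run a build tool with only the prepared environment variables.
fn run_with_env(program: &str, args: &[&str], env_vars: &HashMap<String, String>)
    -> std::io::Result<ExitStatus>
{
    Command::new(program)
        .args(args)
        .env_clear()     // start from a clean slate...
        .envs(env_vars)  // ...then inject only the prepared variables
        .status()
}

fn main() -> std::io::Result<()> {
    let mut env_vars = HashMap::new();
    // Example values; the real map would come from the environment manager.
    env_vars.insert("AMENT_PREFIX_PATH".to_string(), "/opt/ros/humble".to_string());
    env_vars.insert("PATH".to_string(), std::env::var("PATH").unwrap_or_default());
    let status = run_with_env("cmake", &["--version"], &env_vars)?;
    println!("cmake exited with: {status}");
    Ok(())
}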
Build-Time Environment Management
Environment Isolation Strategy
ROC uses different isolation strategies based on build mode:
Sequential Builds
- Each package gets a fresh EnvironmentManager instance
- Prevents environment accumulation that can cause CMake hangs
- Ensures clean, predictable builds
#![allow(unused)] fn main() { // Create a fresh environment manager for this package let mut package_env_manager = EnvironmentManager::new( self.config.install_base.clone(), self.config.isolated ); // Setup environment for this package package_env_manager.setup_package_environment(&package.name, &package.path)?; }
Parallel Builds
- Each worker thread maintains its own environment state
- Synchronizes with shared build state for dependency tracking
- Updates environment only with completed dependencies
PATH-Like Variable Management
The system handles PATH-like environment variables with sophisticated logic:
#![allow(unused)] fn main() { fn update_path_env(&mut self, var_name: &str, new_path: &Path) { let separator = if cfg!(windows) { ";" } else { ":" }; let new_path_str = new_path.to_string_lossy(); if let Some(current) = self.env_vars.get(var_name) { // Check if path is already in the variable let paths: Vec<&str> = current.split(separator).collect(); if !paths.contains(&new_path_str.as_ref()) { let updated = format!("{}{}{}", new_path_str, separator, current); self.env_vars.insert(var_name.to_string(), updated); } } else { self.env_vars.insert(var_name.to_string(), new_path_str.to_string()); } } }
This approach:
- Prevents Duplicates: Avoids adding the same path multiple times
- Maintains Order: New paths are prepended for priority
- Cross-Platform: Uses appropriate path separators
Key Environment Variables
The system manages these critical environment variables:
ROS2-Specific Variables
Core ROS2 environment:
- CMAKE_PREFIX_PATH: CMake package discovery
- AMENT_PREFIX_PATH: Ament package discovery
- COLCON_PREFIX_PATH: Colcon compatibility
Build and execution paths:
- PATH: Executable discovery
- LD_LIBRARY_PATH: Library loading (Linux)
- DYLD_LIBRARY_PATH: Library loading (macOS)
- PYTHONPATH: Python module discovery
Build configuration:
- PKG_CONFIG_PATH: pkg-config discovery
- CMAKE_MODULE_PATH: CMake module discovery
ROS Environment Detection
#![allow(unused)] fn main() { fn is_ros_relevant_env_var(key: &str) -> bool { match key { // Core ROS2 environment variables "CMAKE_PREFIX_PATH" | "AMENT_PREFIX_PATH" | "COLCON_PREFIX_PATH" => true, // System library paths "PATH" | "LD_LIBRARY_PATH" | "DYLD_LIBRARY_PATH" => true, // Python paths "PYTHONPATH" => true, // ROS-specific variables key if key.starts_with("ROS_") => true, key if key.starts_with("AMENT_") => true, key if key.starts_with("COLCON_") => true, key if key.starts_with("RCUTILS_") => true, key if key.starts_with("RMW_") => true, // Build-related variables "PKG_CONFIG_PATH" | "CMAKE_MODULE_PATH" => true, _ => false, } } }
Package Environment Setup
Per-Package Configuration
For each package, the environment manager configures:
#![allow(unused)] fn main() { pub fn setup_package_environment(&mut self, package_name: &str, _package_path: &Path) -> Result<(), Box<dyn std::error::Error>> { let install_dir = if self.isolated { self.install_prefix.join(package_name) // Isolated: install/package_name/ } else { self.install_prefix.clone() // Merged: install/ }; // Update CMAKE_PREFIX_PATH self.update_path_env("CMAKE_PREFIX_PATH", &install_dir); // Update AMENT_PREFIX_PATH self.update_path_env("AMENT_PREFIX_PATH", &install_dir); // Update PATH to include bin directories let bin_dir = install_dir.join("bin"); if bin_dir.exists() { self.update_path_env("PATH", &bin_dir); } // Update library paths #[cfg(target_os = "linux")] { let lib_dir = install_dir.join("lib"); if lib_dir.exists() { self.update_path_env("LD_LIBRARY_PATH", &lib_dir); } } // Update Python path let python_lib_dirs = [ install_dir.join("lib").join("python3").join("site-packages"), install_dir.join("local").join("lib").join("python3").join("site-packages"), ]; for python_dir in &python_lib_dirs { if python_dir.exists() { self.update_path_env("PYTHONPATH", python_dir); } } Ok(()) } }
Directory Structure Handling
The system adapts to different install directory structures:
Isolated Installs (--isolated)
install/
├── package1/
│ ├── bin/
│ ├── lib/
│ └── share/
├── package2/
│ ├── bin/
│ ├── lib/
│ └── share/
Merged Installs (--merge-install)
install/
├── bin/ # All executables
├── lib/ # All libraries
├── share/ # All shared resources
Build Tool Integration
Environment setup integrates with different build systems:
CMake Integration
- Sets CMAKE_PREFIX_PATH for find_package() commands
- Configures CMAKE_INSTALL_PREFIX for install locations
- Provides environment for CMake's build and install phases
Python Integration
- Updates PYTHONPATH for module discovery
- Sets up virtual environment compatibility
- Handles setuptools installation requirements
Setup Script Generation
Script Architecture
ROC generates comprehensive setup scripts that mirror colcon's behavior:
Bash Setup Scripts
#!/bin/bash
# ROS2 workspace setup script generated by roc
_roc_prepend_path() {
local var_name="$1"
local new_path="$2"
if [ -z "${!var_name}" ]; then
export "$var_name"="$new_path"
else
# Check if path is already present
if [[ ":${!var_name}:" != *":$new_path:"* ]]; then
export "$var_name"="$new_path:${!var_name}"
fi
fi
}
# Environment variable exports
export CMAKE_PREFIX_PATH="/workspace/install:/opt/ros/humble"
export AMENT_PREFIX_PATH="/workspace/install:/opt/ros/humble"
# Mark workspace as sourced
export ROC_WORKSPACE_SOURCED=1
Windows Batch Scripts
@echo off
REM ROS2 workspace setup script generated by roc
set "CMAKE_PREFIX_PATH=C:\workspace\install;C:\opt\ros\humble"
set "AMENT_PREFIX_PATH=C:\workspace\install;C:\opt\ros\humble"
REM Mark workspace as sourced
set "ROC_WORKSPACE_SOURCED=1"
Script Generation Process
Per-Package Scripts (Isolated Mode)
#![allow(unused)] fn main() { // Generate individual package setup scripts for package in packages { if let Some(pkg_install_path) = self.install_paths.get(&package.name) { let package_dir = pkg_install_path.join("share").join(&package.name); fs::create_dir_all(&package_dir)?; let package_setup = package_dir.join("package.bash"); let package_setup_content = format!(r#"#!/bin/bash Generated setup script for package {} export CMAKE_PREFIX_PATH="{}:${{CMAKE_PREFIX_PATH}}" export AMENT_PREFIX_PATH="{}:${{AMENT_PREFIX_PATH}}" if [ -d "{}/bin" ]; then export PATH="{}/bin:${{PATH}}" fi if [ -d "{}/lib" ]; then export LD_LIBRARY_PATH="{}/lib:${{LD_LIBRARY_PATH}}" fi "#, package.name, pkg_install_path.display(), pkg_install_path.display(), pkg_install_path.display(), pkg_install_path.display(), pkg_install_path.display(), pkg_install_path.display() ); fs::write(&package_setup, package_setup_content)?; } } }
Workspace Setup Script
#![allow(unused)] fn main() { // Generate workspace setup script let setup_bash = install_dir.join("setup.bash"); let mut setup_content = String::from(r#"#!/bin/bash Generated by roc workspace build tool if [ -n "$COLCON_CURRENT_PREFIX" ]; then _colcon_current_prefix="$COLCON_CURRENT_PREFIX" fi export COLCON_CURRENT_PREFIX="{}" "#); // Source each package in dependency order for package in packages { if self.install_paths.contains_key(&package.name) { setup_content.push_str(&format!( r#"if [ -f "$COLCON_CURRENT_PREFIX/{}/share/{}/package.bash" ]; then source "$COLCON_CURRENT_PREFIX/{}/share/{}/package.bash" fi "#, package.name, package.name, package.name, package.name )); } } }
Cross-Platform Considerations
Unix Systems (Linux/macOS)
- Uses bash syntax with export commands
- Sets executable permissions on script files
- Handles library path differences (LD_LIBRARY_PATH vs DYLD_LIBRARY_PATH)
Windows Systems
- Generates .bat files with set commands
- Uses Windows path separators (; instead of :)
- Handles different library path conventions
Environment Debugging
Diagnostic Features
The environment manager includes debugging capabilities:
Environment Variable Inspection
#![allow(unused)] fn main() { pub fn get_env_vars(&self) -> &HashMap<String, String> { &self.env_vars } pub fn get_env_var(&self, key: &str) -> Option<&String> { self.env_vars.get(key) } }
ROS-Specific Filtering
Only ROS-relevant environment variables are included in setup scripts to avoid pollution:
#![allow(unused)] fn main() { // Add environment variable exports with ROS-specific filtering for (key, value) in &self.env_vars { // Only export ROS-related and essential environment variables if Self::is_ros_relevant_env_var(key) { script.push_str(&format!("export {}=\"{}\"\n", key, value)); } } }
Common Environment Issues
Build Environment Pollution
- Problem: Accumulated environment variables cause CMake hangs
- Solution: Fresh environment instances for each package
Missing Dependencies
- Problem: Required tools not found in PATH
- Solution: Comprehensive environment validation
Path Duplication
- Problem: Same paths added multiple times
- Solution: Duplicate detection in path management
Performance Optimizations
Memory Efficiency
- Environment variables stored as HashMap<String, String>
- Minimal copying of environment data between processes
- Efficient string operations for path manipulation
I/O Optimization
- Batch file operations for script generation
- Minimal filesystem operations during environment setup
- Efficient script template generation
Parallelization
- Thread-safe environment management for parallel builds
- Independent environment instances prevent contention
- Shared state only for coordination, not environment data
The environment management system provides a robust foundation for ROS2 workspace builds, ensuring that packages have access to their dependencies while maintaining clean, predictable build environments that scale from single-package builds to large, complex workspaces.
Colcon Compatibility
ROC's workspace build system is designed as a comprehensive drop-in replacement for colcon. This chapter details the compatibility features, command-line argument mapping, and behavioral equivalences that make ROC a seamless replacement for existing ROS2 workflows.
Command-Line Compatibility
Build Command Mapping
ROC provides full compatibility with colcon's most commonly used build options:
Colcon Command | ROC Equivalent | Description |
---|---|---|
colcon build | roc work build | Build all packages in workspace |
colcon build --packages-select pkg1 pkg2 | roc work build --packages-select pkg1 pkg2 | Build only specified packages |
colcon build --packages-ignore pkg1 | roc work build --packages-ignore pkg1 | Skip specified packages |
colcon build --packages-up-to pkg1 | roc work build --packages-up-to pkg1 | Build dependencies up to package |
colcon build --parallel-workers 4 | roc work build --parallel-workers 4 | Set number of parallel workers |
colcon build --merge-install | roc work build --merge-install | Use merged install directory |
colcon build --symlink-install | roc work build --symlink-install | Use symlinks for installation |
colcon build --continue-on-error | roc work build --continue-on-error | Continue building after failures |
colcon build --cmake-args -DCMAKE_BUILD_TYPE=Debug | roc work build --cmake-args -DCMAKE_BUILD_TYPE=Debug | Pass arguments to CMake |
Argument Processing
ROC's argument processing mirrors colcon's behavior:
#![allow(unused)] fn main() { // Parse command line arguments if let Some(base_paths) = matches.get_many::<String>("base_paths") { config.base_paths = base_paths.map(PathBuf::from).collect(); } if let Some(packages) = matches.get_many::<String>("packages_select") { config.packages_select = Some(packages.map(|s| s.to_string()).collect()); } if let Some(packages) = matches.get_many::<String>("packages_ignore") { config.packages_ignore = Some(packages.map(|s| s.to_string()).collect()); } if let Some(packages) = matches.get_many::<String>("packages_up_to") { config.packages_up_to = Some(packages.map(|s| s.to_string()).collect()); } if let Some(workers) = matches.get_one::<u32>("parallel_workers") { config.parallel_workers = *workers; } config.merge_install = matches.get_flag("merge_install"); config.symlink_install = matches.get_flag("symlink_install"); config.continue_on_error = matches.get_flag("continue_on_error"); if let Some(cmake_args) = matches.get_many::<String>("cmake_args") { config.cmake_args = cmake_args.map(|s| s.to_string()).collect(); } }
Workspace Structure Compatibility
Directory Layout
ROC maintains the same workspace structure as colcon:
workspace/
├── src/ # Source packages (default discovery path)
│ ├── package1/
│ │ ├── package.xml
│ │ └── CMakeLists.txt
│ └── package2/
│ ├── package.xml
│ └── setup.py
├── build/ # Build artifacts (created by ROC)
│ ├── package1/
│ └── package2/
├── install/ # Install artifacts (created by ROC)
│ ├── package1/ # Isolated install (default)
│ ├── package2/
│ └── setup.bash # Workspace setup script
└── log/ # Build logs (created by ROC)
└── latest/
├── package1/
└── package2/
Install Space Modes
ROC supports both colcon install modes:
Isolated Install (Default)
#![allow(unused)] fn main() { let install_prefix = if config.merge_install { config.workspace_root.join("install") } else { config.workspace_root.join("install").join(&package.name) }; }
Isolated Structure:
install/
├── package1/
│ ├── bin/
│ ├── lib/
│ └── share/
├── package2/
│ ├── bin/
│ ├── lib/
│ └── share/
└── setup.bash
Merged Install (--merge-install)
Merged Structure:
install/
├── bin/ # All executables
├── lib/ # All libraries
├── share/ # All shared resources
└── setup.bash
Package Format Compatibility
Package.xml Support
ROC supports the same package.xml formats as colcon:
Format 2 (REP 140)
<?xml version="1.0"?>
<package format="2">
<name>my_package</name>
<version>1.0.0</version>
<description>Package description</description>
<maintainer email="maintainer@example.com">Maintainer Name</maintainer>
<license>Apache-2.0</license>
<buildtool_depend>ament_cmake</buildtool_depend>
<build_depend>rclcpp</build_depend>
<exec_depend>rclcpp</exec_depend>
<export>
<build_type>ament_cmake</build_type>
</export>
</package>
Format 3 (REP 149)
<?xml version="1.0"?>
<package format="3">
<name>my_package</name>
<version>1.0.0</version>
<description>Package description</description>
<maintainer email="maintainer@example.com">Maintainer Name</maintainer>
<license>Apache-2.0</license>
<depend>rclcpp</depend>
<build_depend condition="$ROS_VERSION == 2">ros2_specific_dep</build_depend>
<export>
<build_type>ament_cmake</build_type>
</export>
</package>
Build Type Support
ROC supports all major build types used in ROS2:
#![allow(unused)] fn main() { #[derive(Debug, Clone, PartialEq)] pub enum BuildType { AmentCmake, // Standard C++ packages AmentPython, // Pure Python packages Cmake, // Plain CMake packages Other(String), // Extensible for future types } impl From<&str> for BuildType { fn from(s: &str) -> Self { match s { "ament_cmake" => BuildType::AmentCmake, "ament_python" => BuildType::AmentPython, "cmake" => BuildType::Cmake, other => BuildType::Other(other.to_string()), } } } }
Build Process Compatibility
Build System Integration
ROC uses the same build system invocations as colcon:
CMake Packages
#![allow(unused)] fn main() { // Configure phase let mut configure_cmd = Command::new("cmake"); configure_cmd .arg("-S").arg(&package.path) .arg("-B").arg(&build_dir) .arg(format!("-DCMAKE_INSTALL_PREFIX={}", install_prefix.display())); // Build and install phase let mut build_cmd = Command::new("cmake"); build_cmd .arg("--build").arg(&build_dir) .arg("--target").arg("install") .arg("--") .arg(format!("-j{}", config.parallel_workers)); }
Python Packages
#![allow(unused)] fn main() { // Build phase Command::new("python3") .arg("setup.py") .arg("build") .arg("--build-base").arg(&build_dir) .current_dir(&package.path) // Install phase Command::new("python3") .arg("setup.py") .arg("install") .arg("--prefix").arg("") .arg("--root").arg(&install_prefix) .current_dir(&package.path) }
Environment Setup
ROC generates the same environment setup scripts as colcon:
Setup Script Structure
#!/bin/bash
# Generated by roc workspace build tool (colcon compatible)
# Source any parent workspaces
if [ -n "$COLCON_CURRENT_PREFIX" ]; then
_colcon_current_prefix="$COLCON_CURRENT_PREFIX"
fi
export COLCON_CURRENT_PREFIX="{}"
# Add this workspace to environment
export CMAKE_PREFIX_PATH="$COLCON_CURRENT_PREFIX:${CMAKE_PREFIX_PATH}"
export AMENT_PREFIX_PATH="$COLCON_CURRENT_PREFIX:${AMENT_PREFIX_PATH}"
# Standard paths
if [ -d "$COLCON_CURRENT_PREFIX/bin" ]; then
export PATH="$COLCON_CURRENT_PREFIX/bin:${PATH}"
fi
if [ -d "$COLCON_CURRENT_PREFIX/lib" ]; then
export LD_LIBRARY_PATH="$COLCON_CURRENT_PREFIX/lib:${LD_LIBRARY_PATH}"
fi
# Python paths
if [ -d "$COLCON_CURRENT_PREFIX/lib/python3.10/site-packages" ]; then
export PYTHONPATH="$COLCON_CURRENT_PREFIX/lib/python3.10/site-packages:${PYTHONPATH}"
fi
# Restore previous prefix
if [ -n "$_colcon_current_prefix" ]; then
export COLCON_CURRENT_PREFIX="$_colcon_current_prefix"
unset _colcon_current_prefix
else
unset COLCON_CURRENT_PREFIX
fi
Output and Logging Compatibility
Console Output Format
ROC matches colcon's console output format:
🔧 Building ROS2 workspace with roc (colcon replacement)
Workspace: /home/user/workspace
Discovered 3 packages
- my_cpp_package (AmentCmake)
- my_py_package (AmentPython)
- my_msgs (AmentCmake)
Build order:
my_msgs
my_cpp_package
my_py_package
Starting >>> my_msgs (AmentCmake)
Configuring with CMake...
✅ CMake configure succeeded
Building and installing...
✅ Build and install succeeded
Finished <<< my_msgs [2.34s]
Starting >>> my_cpp_package (AmentCmake)
Configuring with CMake...
✅ CMake configure succeeded
Building and installing...
✅ Build and install succeeded
Finished <<< my_cpp_package [4.12s]
Starting >>> my_py_package (AmentPython)
Building and installing...
✅ Build and install succeeded
Finished <<< my_py_package [1.23s]
Build Summary:
3 packages succeeded
✅ Build completed successfully!
To use the workspace, run:
source install/setup.bash
Log Directory Structure
ROC maintains the same logging structure as colcon:
log/
├── latest/ # Symlink to most recent build
│ ├── build.log # Overall build log
│ ├── my_msgs/
│ │ └── stdout_stderr.log
│ ├── my_cpp_package/
│ │ └── stdout_stderr.log
│ └── my_py_package/
│ └── stdout_stderr.log
└── 2025-06-19_14-30-15/ # Timestamped build logs
└── ...
Migration Guide
Switching from Colcon
For existing ROS2 projects, switching to ROC is straightforward:
1. Install ROC
# From source
git clone https://github.com/your-org/roc.git
cd roc
cargo build --release
# Or from crates.io
cargo install rocc
2. Update Build Scripts
Replace colcon commands in scripts:
Before:
#!/bin/bash
source /opt/ros/humble/setup.bash
cd /path/to/workspace
colcon build --parallel-workers 4 --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
After:
#!/bin/bash
source /opt/ros/humble/setup.bash
cd /path/to/workspace
roc work build --parallel-workers 4 --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
3. CI/CD Integration
Update continuous integration scripts:
GitHub Actions Example:
- name: Build workspace
run: |
source /opt/ros/humble/setup.bash
roc work build --parallel-workers 2
source install/setup.bash
Docker Example:
RUN source /opt/ros/humble/setup.bash && \
roc work build --parallel-workers $(nproc) && \
source install/setup.bash
Behavioral Differences
While ROC maintains high compatibility, there are some differences:
Performance Improvements
- Faster startup: Native binary vs Python interpreter
- Better parallelization: More efficient worker management
- Memory efficiency: Lower memory usage during builds
Enhanced Error Handling
- More detailed error messages: Better context and suggestions
- Cleaner error output: Structured error reporting
- Recovery suggestions: Actionable advice for common issues
Environment Management
- Cleaner environments: Better isolation prevents contamination
- Filtered variables: Only ROS-relevant variables in setup scripts
- Windows support: Better cross-platform environment handling
Future Compatibility
Planned Features
ROC's roadmap includes additional colcon compatibility features:
Advanced Options
- --event-handlers: Custom build event processing
- --executor: Different parallel execution strategies
- --log-base: Custom log directory locations
- --install-base: Custom install directory locations
Extensions
- Plugin system for custom build types
- Custom event handlers
- Advanced dependency resolution strategies
API Compatibility
ROC is designed to maintain API compatibility with colcon's extension points, enabling future integration with existing colcon plugins and extensions where appropriate.
The colcon compatibility layer ensures that ROC can serve as a drop-in replacement for colcon in virtually all ROS2 development workflows, while providing superior performance and enhanced features that improve the developer experience.
Basic Usage Examples
This chapter provides practical examples of using the roc tool for common ROS 2 introspection tasks.
Prerequisites
Before running these examples, ensure:
- ROS 2 is installed and sourced
- At least one ROS 2 node is running (e.g., ros2 run demo_nodes_cpp talker)
- The roc tool is built and available in your PATH
Getting Started
1. List All Topics
The most basic operation is listing all available topics in the ROS 2 graph:
roc topic list
Expected output:
/chatter
/parameter_events
/rosout
2. Get Basic Topic Information
To get basic information about a specific topic:
roc topic info /chatter
Output:
Topic: /chatter
Type: std_msgs/msg/String
Publishers: 1
Subscribers: 0
3. Get Detailed Topic Information
For comprehensive topic details including QoS profiles and endpoint information:
roc topic info /chatter --verbose
Expected verbose output:
Topic: /chatter
Type: std_msgs/msg/String
Publishers: 1
Node: /talker
Endpoint type: Publisher
GID: 01.0f.xx.xx.xx.xx.xx.xx.xx.xx.xx.xx.xx.xx.xx.xx
QoS Profile:
Reliability: Reliable
Durability: Volatile
History: Keep last
Depth: 10
Deadline: Default
Lifespan: Default
Liveliness: Automatic
Liveliness lease duration: Default
Type hash: RIHS01_xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Subscribers: 0
Common Use Cases
Debugging Communication Issues
When nodes aren't communicating properly, use verbose topic info to check QoS compatibility:
# Check publisher QoS
roc topic info /my_topic --verbose
# Compare with subscriber expectations
# Look for QoS mismatches in reliability, durability, etc.
Monitoring System Health
Check critical system topics:
# Monitor rosout for system messages
roc topic info /rosout --verbose
# Check parameter events
roc topic info /parameter_events --verbose
Network Diagnostics
Use GID information to identify nodes across the network:
# Get detailed endpoint information
roc topic info /my_topic --verbose | grep "GID"
Working with Multiple Topics
Batch Information Gathering
# Get info for all topics
for topic in $(roc topic list); do
echo "=== $topic ==="
roc topic info "$topic"
echo
done
Filtering by Node
# Find topics published by a specific node
roc topic info /chatter --verbose | grep "Node:"
Integration with ROS 2 Tools
The roc tool complements existing ROS 2 CLI tools:
# Compare outputs
ros2 topic info /chatter --verbose
roc topic info /chatter --verbose
# Use roc for faster queries
time roc topic list
time ros2 topic list
Troubleshooting
No Topics Found
If roc topic list returns empty:
- Check if ROS 2 nodes are running:
ros2 node list
- Verify the ROS 2 environment:
echo $ROS_DOMAIN_ID
printenv | grep ROS
- Test with a simple publisher:
ros2 run demo_nodes_cpp talker
Permission Issues
If you encounter permission errors:
# Check RMW implementation
echo $RMW_IMPLEMENTATION
# Try with different RMW
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
roc topic list
Performance Considerations
For systems with many topics:
# Use targeted queries instead of listing all topics
roc topic info /specific_topic --verbose
Next Steps
- See Advanced Usage for complex scenarios
- Check Command Reference for all available options
- Read Integration Examples for using roc in scripts and automation
Advanced Usage Examples
This chapter covers advanced usage patterns and complex scenarios for the roc tool.
Advanced Topic Analysis
QoS Profile Comparison
When debugging communication issues, compare QoS profiles between publishers and subscribers:
#!/bin/bash
# qos_compare.sh - Compare QoS profiles for a topic
TOPIC="$1"
if [ -z "$TOPIC" ]; then
echo "Usage: $0 <topic_name>"
exit 1
fi
echo "=== QoS Analysis for $TOPIC ==="
roc topic info "$TOPIC" --verbose | grep -A 10 "QoS Profile:"
Multi-Domain Discovery
Working across multiple ROS domains:
#!/bin/bash
# multi_domain_scan.sh - Scan topics across multiple domains
for domain in {0..10}; do
export ROS_DOMAIN_ID=$domain
echo "=== Domain $domain ==="
topics=$(roc topic list 2>/dev/null)
if [ -n "$topics" ]; then
echo "$topics"
echo "Topic count: $(echo "$topics" | wc -l)"
else
echo "No topics found"
fi
echo
done
Performance Monitoring
Topic Discovery Timing
Measure topic discovery performance:
#!/bin/bash
# discovery_benchmark.sh - Benchmark topic discovery
echo "Benchmarking topic discovery..."
echo "roc topic list:"
time roc topic list > /dev/null
echo "ros2 topic list:"
time ros2 topic list > /dev/null
echo "roc topic info (verbose):"
TOPIC=$(roc topic list | head -1)
if [ -n "$TOPIC" ]; then
time roc topic info "$TOPIC" --verbose > /dev/null
fi
Memory Usage Analysis
Monitor memory usage during large-scale discovery:
#!/bin/bash
# memory_profile.sh - Profile memory usage
echo "Memory usage during topic discovery:"
# Get baseline memory
baseline=$(ps -o rss= -p $$)
echo "Baseline memory: ${baseline}KB"
# Run topic discovery and monitor memory
(
while true; do
ps -o rss= -p $$ 2>/dev/null || break
sleep 0.1
done
) &
monitor_pid=$!
# Perform discovery operations
roc topic list > /dev/null
roc topic info /chatter --verbose > /dev/null 2>&1
kill $monitor_pid 2>/dev/null
Integration Patterns
Continuous Monitoring
Monitor topic health continuously:
#!/bin/bash
# topic_monitor.sh - Continuous topic monitoring
TOPIC="$1"
INTERVAL="${2:-5}"
if [ -z "$TOPIC" ]; then
echo "Usage: $0 <topic_name> [interval_seconds]"
exit 1
fi
echo "Monitoring $TOPIC every ${INTERVAL}s (Ctrl+C to stop)"
while true; do
timestamp=$(date '+%Y-%m-%d %H:%M:%S')
echo "=== $timestamp ==="
# Get current topic info
info=$(roc topic info "$TOPIC" 2>/dev/null)
if [ $? -eq 0 ]; then
echo "$info"
# Extract publisher/subscriber counts
pub_count=$(echo "$info" | grep "Publishers:" | awk '{print $2}')
sub_count=$(echo "$info" | grep "Subscribers:" | awk '{print $2}')
echo "Status: $pub_count publishers, $sub_count subscribers"
else
echo "Topic not found or error occurred"
fi
echo
sleep "$INTERVAL"
done
Automated Health Checks
Create health check scripts for ROS 2 systems:
#!/bin/bash
# ros2_health_check.sh - Comprehensive ROS 2 system health check
echo "=== ROS 2 System Health Check ==="
echo "Timestamp: $(date)"
echo
# Check critical topics
critical_topics=("/rosout" "/parameter_events")
for topic in "${critical_topics[@]}"; do
echo "Checking $topic..."
info=$(roc topic info "$topic" 2>/dev/null)
if [ $? -eq 0 ]; then
echo "✓ $topic: OK"
echo "$info" | grep -E "(Publishers|Subscribers):"
else
echo "✗ $topic: MISSING"
fi
echo
done
# Check for common issues
echo "=== Potential Issues ==="
# Find topics with no publishers or subscribers
all_topics=$(roc topic list 2>/dev/null)
if [ -n "$all_topics" ]; then
while IFS= read -r topic; do
info=$(roc topic info "$topic" 2>/dev/null)
if echo "$info" | grep -q "Publishers: 0"; then
echo "⚠ $topic: No publishers"
fi
if echo "$info" | grep -q "Subscribers: 0"; then
echo "⚠ $topic: No subscribers"
fi
done <<< "$all_topics"
else
echo "✗ No topics found - ROS 2 system may be down"
fi
Data Export and Analysis
JSON Export
Export topic information in structured format:
#!/bin/bash
# export_topics.sh - Export topic information to JSON
output_file="topics_$(date +%Y%m%d_%H%M%S).json"
echo "Exporting topic information to $output_file..."
echo "{" > "$output_file"
echo ' "timestamp": "'$(date -Iseconds)'",' >> "$output_file"
echo ' "topics": [' >> "$output_file"
topics=$(roc topic list 2>/dev/null)
if [ -n "$topics" ]; then
first=true
while IFS= read -r topic; do
if [ "$first" = true ]; then
first=false
else
echo " ," >> "$output_file"
fi
echo " {" >> "$output_file"
echo ' "name": "'$topic'",' >> "$output_file"
# Get topic info and parse it
info=$(roc topic info "$topic" --verbose 2>/dev/null)
if [ $? -eq 0 ]; then
type=$(echo "$info" | grep "Type:" | cut -d' ' -f2-)
pub_count=$(echo "$info" | grep "Publishers:" | awk '{print $2}')
sub_count=$(echo "$info" | grep "Subscribers:" | awk '{print $2}')
echo ' "type": "'$type'",' >> "$output_file"
echo ' "publishers": '$pub_count',' >> "$output_file"
echo ' "subscribers": '$sub_count >> "$output_file"
else
echo ' "error": "Failed to get topic info"' >> "$output_file"
fi
echo -n " }" >> "$output_file"
done <<< "$topics"
echo >> "$output_file"
fi
echo " ]" >> "$output_file"
echo "}" >> "$output_file"
echo "Export complete: $output_file"
CSV Export for Analysis
#!/bin/bash
# export_csv.sh - Export topic data to CSV for analysis
output_file="topics_$(date +%Y%m%d_%H%M%S).csv"
echo "Exporting topic information to $output_file..."
# CSV header
echo "Timestamp,Topic,Type,Publishers,Subscribers" > "$output_file"
topics=$(roc topic list 2>/dev/null)
if [ -n "$topics" ]; then
while IFS= read -r topic; do
timestamp=$(date -Iseconds)
info=$(roc topic info "$topic" 2>/dev/null)
if [ $? -eq 0 ]; then
type=$(echo "$info" | grep "Type:" | cut -d' ' -f2- | tr ',' '_')
pub_count=$(echo "$info" | grep "Publishers:" | awk '{print $2}')
sub_count=$(echo "$info" | grep "Subscribers:" | awk '{print $2}')
echo "$timestamp,$topic,$type,$pub_count,$sub_count" >> "$output_file"
else
echo "$timestamp,$topic,ERROR,0,0" >> "$output_file"
fi
done <<< "$topics"
fi
echo "Export complete: $output_file"
echo "Analyze with: python3 -c \"import pandas as pd; df=pd.read_csv('$output_file'); print(df.describe())\""
Custom RMW Configuration
Testing Different RMW Implementations
#!/bin/bash
# rmw_comparison.sh - Compare performance across RMW implementations
rmw_implementations=(
"rmw_cyclone_cpp"
"rmw_fastrtps_cpp"
"rmw_connext_cpp"
)
for rmw in "${rmw_implementations[@]}"; do
echo "=== Testing with $rmw ==="
export RMW_IMPLEMENTATION="$rmw"
# Test basic discovery
echo "Topic discovery test:"
time roc topic list > /dev/null 2>&1
if [ $? -eq 0 ]; then
topic_count=$(roc topic list 2>/dev/null | wc -l)
echo "Success: Found $topic_count topics"
# Test detailed info
first_topic=$(roc topic list 2>/dev/null | head -1)
if [ -n "$first_topic" ]; then
echo "Detailed info test:"
time roc topic info "$first_topic" --verbose > /dev/null 2>&1
fi
else
echo "Failed: $rmw not available or error occurred"
fi
echo
done
Error Handling and Debugging
Verbose Debugging
Enable detailed debugging for troubleshooting:
#!/bin/bash
# debug_roc.sh - Debug roc tool issues
echo "=== ROS 2 Environment ==="
printenv | grep ROS | sort
echo -e "\n=== RMW Implementation ==="
echo "RMW_IMPLEMENTATION: ${RMW_IMPLEMENTATION:-default}"
echo -e "\n=== System Info ==="
echo "OS: $(uname -a)"
echo "User: $(whoami)"
echo "Groups: $(groups)"
echo -e "\n=== ROS 2 Process Check ==="
ps aux | grep -E "(ros|dds)" | grep -v grep
echo -e "\n=== Network Interfaces ==="
ip addr show | grep -E "(inet|UP|DOWN)"
echo -e "\n=== ROC Tool Test ==="
echo "Testing roc topic list..."
if roc topic list; then
echo "✓ Basic functionality works"
echo -e "\nTesting verbose info..."
first_topic=$(roc topic list | head -1)
if [ -n "$first_topic" ]; then
echo "Testing with topic: $first_topic"
roc topic info "$first_topic" --verbose
fi
else
echo "✗ Basic functionality failed"
echo "Exit code: $?"
fi
Performance Optimization
Batch Operations
Optimize for scenarios with many topics:
#!/bin/bash
# batch_optimize.sh - Optimized batch topic analysis
# Get all topics once
topics=($(roc topic list 2>/dev/null))
topic_count=${#topics[@]}
echo "Found $topic_count topics"
if [ $topic_count -eq 0 ]; then
echo "No topics found"
exit 1
fi
# Process in batches to avoid overwhelming the system
batch_size=10
batch_count=$(( (topic_count + batch_size - 1) / batch_size ))
echo "Processing in $batch_count batches of $batch_size..."
for ((batch=0; batch<batch_count; batch++)); do
start=$((batch * batch_size))
end=$((start + batch_size))
echo "Batch $((batch+1))/$batch_count (topics $start-$((end-1)))"
for ((i=start; i<end && i<topic_count; i++)); do
topic="${topics[i]}"
echo " Processing: $topic"
roc topic info "$topic" > /dev/null 2>&1
done
# Small delay between batches
sleep 0.1
done
echo "Batch processing complete"
This completes the advanced usage examples. See the Command Reference for a full list of available options.
Integration Examples
This chapter demonstrates how to integrate the roc tool into larger systems, automation workflows, and monitoring solutions.
CI/CD Integration
GitHub Actions Workflow
# .github/workflows/ros2-integration-test.yml
name: ROS 2 Integration Test
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
jobs:
ros2-integration:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup ROS 2
uses: ros-tooling/setup-ros@v0.6
with:
required-ros-distributions: jazzy
- name: Build roc tool
run: |
source /opt/ros/jazzy/setup.bash
cargo build --release
- name: Start test nodes
run: |
source /opt/ros/jazzy/setup.bash
ros2 run demo_nodes_cpp talker &
ros2 run demo_nodes_cpp listener &
sleep 5 # Allow nodes to start
- name: Run integration tests
run: |
source /opt/ros/jazzy/setup.bash
./target/release/roc topic list
./target/release/roc topic info /chatter --verbose
# Verify expected topics exist
topics=$(./target/release/roc topic list)
echo "$topics" | grep -q "/chatter" || exit 1
echo "$topics" | grep -q "/rosout" || exit 1
# Verify topic has publishers
info=$(./target/release/roc topic info /chatter)
echo "$info" | grep -q "Publishers: [1-9]" || exit 1
GitLab CI Pipeline
# .gitlab-ci.yml
stages:
- build
- test
- integration
variables:
ROS_DISTRO: jazzy
build:
stage: build
image: ros:jazzy
script:
- apt-get update && apt-get install -y cargo
- cargo build --release
artifacts:
paths:
- target/release/roc
expire_in: 1 hour
integration_test:
stage: integration
image: ros:jazzy
needs: ["build"]
script:
- source /opt/ros/jazzy/setup.bash
- ./scripts/integration_test.sh
artifacts:
reports:
junit: test_results.xml
Docker Integration
ROS 2 Development Container
# Dockerfile.ros2-dev
FROM ros:jazzy
# Install Rust and build tools
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
cmake \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy and build roc
COPY . /workspace/roc
WORKDIR /workspace/roc
RUN cargo build --release
# Install roc tool
RUN cp target/release/roc /usr/local/bin/
# Setup entrypoint
COPY docker/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]
#!/bin/bash
# docker/entrypoint.sh
source /opt/ros/jazzy/setup.bash
exec "$@"
Docker Compose for Testing
# docker-compose.yml
version: '3.8'
services:
ros2-master:
build:
context: .
dockerfile: Dockerfile.ros2-dev
command: ros2 run demo_nodes_cpp talker
environment:
- ROS_DOMAIN_ID=0
networks:
- ros2-net
ros2-monitor:
build:
context: .
dockerfile: Dockerfile.ros2-dev
command: |
bash -c "
sleep 5
while true; do
echo '=== Topic Monitor ==='
roc topic list
roc topic info /chatter --verbose
sleep 30
done
"
environment:
- ROS_DOMAIN_ID=0
networks:
- ros2-net
depends_on:
- ros2-master
networks:
ros2-net:
driver: bridge
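Bringing the stack up and following the monitor's output, assuming Docker Compose v2:
# Build and start both services in the background
docker compose up --build -d
# Follow the periodic topic reports from the monitor container
docker compose logs -f ros2-monitor
# Tear everything down when done
docker compose down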
Monitoring and Alerting
Prometheus Integration
#!/usr/bin/env python3
# prometheus_exporter.py - Export roc metrics to Prometheus
import subprocess
import time
import re
from prometheus_client import start_http_server, Gauge, Info
import json
# Prometheus metrics
topic_count = Gauge('ros2_topic_count', 'Number of ROS 2 topics')
topic_publishers = Gauge('ros2_topic_publishers', 'Number of publishers per topic', ['topic_name'])
topic_subscribers = Gauge('ros2_topic_subscribers', 'Number of subscribers per topic', ['topic_name'])
topic_info = Info('ros2_topic_info', 'Topic information', ['topic_name'])
def get_topic_list():
"""Get list of topics using roc tool."""
try:
result = subprocess.run(['roc', 'topic', 'list'],
capture_output=True, text=True, timeout=10)
if result.returncode == 0:
return [line.strip() for line in result.stdout.strip().split('\n') if line.strip()]
return []
except Exception as e:
print(f"Error getting topic list: {e}")
return []
def get_topic_info(topic_name):
"""Get detailed topic information."""
try:
result = subprocess.run(['roc', 'topic', 'info', topic_name, '--verbose'],
capture_output=True, text=True, timeout=10)
if result.returncode == 0:
return parse_topic_info(result.stdout)
return None
except Exception as e:
print(f"Error getting info for {topic_name}: {e}")
return None
def parse_topic_info(info_text):
"""Parse topic info output."""
info = {}
# Extract basic info
type_match = re.search(r'Type: (.+)', info_text)
if type_match:
info['type'] = type_match.group(1)
pub_match = re.search(r'Publishers: (\d+)', info_text)
if pub_match:
info['publishers'] = int(pub_match.group(1))
sub_match = re.search(r'Subscribers: (\d+)', info_text)
if sub_match:
info['subscribers'] = int(sub_match.group(1))
return info
def update_metrics():
"""Update Prometheus metrics."""
topics = get_topic_list()
topic_count.set(len(topics))
for topic in topics:
info = get_topic_info(topic)
if info:
topic_publishers.labels(topic_name=topic).set(info.get('publishers', 0))
topic_subscribers.labels(topic_name=topic).set(info.get('subscribers', 0))
topic_info.labels(topic_name=topic).info({
'type': info.get('type', 'unknown'),
'publishers': str(info.get('publishers', 0)),
'subscribers': str(info.get('subscribers', 0))
})
def main():
# Start Prometheus metrics server
start_http_server(8000)
print("Prometheus exporter started on port 8000")
while True:
try:
update_metrics()
time.sleep(30) # Update every 30 seconds
except KeyboardInterrupt:
break
except Exception as e:
print(f"Error updating metrics: {e}")
time.sleep(5)
if __name__ == '__main__':
main()
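A quick way to sanity-check the exporter, assuming the prometheus-client package is installed and some ROS 2 nodes are running:
# Start the exporter in the background and inspect the exposed metrics
pip install prometheus-client
python3 prometheus_exporter.py &
sleep 35   # wait for at least one 30-second update cycle
curl -s http://localhost:8000/metrics | grep -E "^ros2_topic_(count|publishers|subscribers)"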
Grafana Dashboard Configuration
{
"dashboard": {
"title": "ROS 2 Topic Monitor",
"panels": [
{
"title": "Topic Count",
"type": "stat",
"targets": [
{
"expr": "ros2_topic_count",
"legendFormat": "Topics"
}
]
},
{
"title": "Publishers per Topic",
"type": "graph",
"targets": [
{
"expr": "ros2_topic_publishers",
"legendFormat": "{{topic_name}}"
}
]
},
{
"title": "Subscribers per Topic",
"type": "graph",
"targets": [
{
"expr": "ros2_topic_subscribers",
"legendFormat": "{{topic_name}}"
}
]
}
]
}
}
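If the JSON above is saved as dashboard.json, it can be imported through Grafana's HTTP API. This assumes Grafana is reachable at localhost:3000 and a service-account token is available in GRAFANA_TOKEN:
# Import the dashboard via the Grafana HTTP API
curl -s -X POST http://localhost:3000/api/dashboards/db \
  -H "Authorization: Bearer $GRAFANA_TOKEN" \
  -H "Content-Type: application/json" \
  -d @dashboard.json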
Alerting Rules
# alerting_rules.yml
groups:
- name: ros2_alerts
rules:
- alert: NoTopicsFound
expr: ros2_topic_count == 0
for: 1m
labels:
severity: critical
annotations:
summary: "No ROS 2 topics found"
description: "The ROS 2 system appears to be down - no topics detected"
- alert: TopicNoPublishers
expr: ros2_topic_publishers{topic_name!="/parameter_events"} == 0
for: 5m
labels:
severity: warning
annotations:
summary: "Topic {{ $labels.topic_name }} has no publishers"
description: "Topic {{ $labels.topic_name }} has no active publishers"
- alert: CriticalTopicMissing
expr: absent(ros2_topic_publishers{topic_name="/rosout"})
for: 2m
labels:
severity: critical
annotations:
summary: "Critical topic /rosout is missing"
description: "The /rosout topic is not available"
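Before wiring these rules into Prometheus, they can be validated with promtool. The reload step assumes a local Prometheus started with --web.enable-lifecycle:
# Validate the alerting rules, then ask Prometheus to reload its configuration
promtool check rules alerting_rules.yml && \
  curl -s -X POST http://localhost:9090/-/reload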
Python Integration
ROS 2 Node Integration
#!/usr/bin/env python3
# ros2_monitor_node.py - ROS 2 node that uses roc for monitoring
import rclpy
from rclpy.node import Node
from std_msgs.msg import String
import subprocess
import json
import threading
import time
class TopicMonitorNode(Node):
def __init__(self):
super().__init__('topic_monitor')
# Publisher for monitoring results
self.publisher = self.create_publisher(String, '/topic_monitor/status', 10)
# Timer for periodic monitoring
self.timer = self.create_timer(10.0, self.monitor_callback)
self.get_logger().info('Topic monitor node started')
def get_topic_stats(self):
"""Get topic statistics using roc tool."""
try:
# Get topic list
result = subprocess.run(['roc', 'topic', 'list'],
capture_output=True, text=True, timeout=5)
if result.returncode != 0:
return None
topics = [line.strip() for line in result.stdout.strip().split('\n') if line.strip()]
stats = {
'timestamp': time.time(),
'topic_count': len(topics),
'topics': {}
}
# Get info for each topic
for topic in topics[:10]: # Limit to first 10 topics
info_result = subprocess.run(['roc', 'topic', 'info', topic],
capture_output=True, text=True, timeout=5)
if info_result.returncode == 0:
# Parse the output
lines = info_result.stdout.strip().split('\n')
topic_info = {}
for line in lines:
if line.startswith('Type:'):
topic_info['type'] = line.split(':', 1)[1].strip()
elif line.startswith('Publishers:'):
topic_info['publishers'] = int(line.split(':', 1)[1].strip())
elif line.startswith('Subscribers:'):
topic_info['subscribers'] = int(line.split(':', 1)[1].strip())
stats['topics'][topic] = topic_info
return stats
except Exception as e:
self.get_logger().error(f'Error getting topic stats: {e}')
return None
def monitor_callback(self):
"""Periodic monitoring callback."""
stats = self.get_topic_stats()
if stats:
# Publish stats as JSON
msg = String()
msg.data = json.dumps(stats)
self.publisher.publish(msg)
# Log summary
self.get_logger().info(f'Monitoring: {stats["topic_count"]} topics found')
else:
self.get_logger().warn('Failed to get topic statistics')
def main(args=None):
rclpy.init(args=args)
node = TopicMonitorNode()
try:
rclpy.spin(node)
except KeyboardInterrupt:
pass
finally:
node.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
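Running the node and watching the published status, assuming roc is on the PATH and a ROS 2 environment is sourced:
# Start the monitor node and observe its JSON status messages
python3 ros2_monitor_node.py &
sleep 12   # allow at least one 10-second monitoring cycle
roc topic info /topic_monitor/status --verbose
timeout 5 ros2 topic echo /topic_monitor/status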
Shell Integration
Bash Completion
# roc_completion.bash - Bash completion for roc tool
_roc_completion() {
local cur prev opts
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD-1]}"
# Top-level commands
local commands="topic help"
# Topic subcommands
local topic_commands="list info"
case ${COMP_CWORD} in
1)
COMPREPLY=($(compgen -W "${commands}" -- ${cur}))
return 0
;;
2)
case ${prev} in
topic)
COMPREPLY=($(compgen -W "${topic_commands}" -- ${cur}))
return 0
;;
esac
;;
3)
case ${COMP_WORDS[1]} in
topic)
case ${prev} in
info)
# Complete with available topics
local topics=$(roc topic list 2>/dev/null)
COMPREPLY=($(compgen -W "${topics}" -- ${cur}))
return 0
;;
esac
;;
esac
;;
4)
case ${COMP_WORDS[1]} in
topic)
case ${COMP_WORDS[2]} in
info)
COMPREPLY=($(compgen -W "--verbose" -- ${cur}))
return 0
;;
esac
;;
esac
;;
esac
return 0
}
complete -F _roc_completion roc
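To enable the completion, source the file in the current shell or install it system-wide:
# Per-user: load the completion now and on every login
source roc_completion.bash
echo "source $(pwd)/roc_completion.bash" >> ~/.bashrc
# System-wide (requires root): bash-completion picks this directory up automatically
sudo cp roc_completion.bash /etc/bash_completion.d/roc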
Zsh Integration
# roc_completion.zsh - Zsh completion for roc tool
#compdef roc
_roc() {
local context state line
_arguments \
'1: :->command' \
'*: :->args'
case $state in
command)
_values 'commands' \
'topic[Topic operations]' \
'help[Show help]'
;;
args)
case $line[1] in
topic)
_roc_topic
;;
esac
;;
esac
}
_roc_topic() {
local context state line
_arguments \
'1: :->subcommand' \
'*: :->args'
case $state in
subcommand)
_values 'topic subcommands' \
'list[List all topics]' \
'info[Show topic information]'
;;
args)
case $line[1] in
info)
_roc_topic_info
;;
esac
;;
esac
}
_roc_topic_info() {
local context state line
_arguments \
'1: :->topic_name' \
'2: :->options'
case $state in
topic_name)
# Get available topics
local topics
topics=(${(f)"$(roc topic list 2>/dev/null)"})
_describe 'topics' topics
;;
options)
_values 'options' \
'--verbose[Show detailed information]'
;;
esac
}
_roc "$@"
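The zsh script must live in a directory on $fpath and be named _roc. One way to set that up for the current user:
# Install the completion into a per-user fpath directory
mkdir -p ~/.zsh/completions
cp roc_completion.zsh ~/.zsh/completions/_roc
# Make sure the directory is on fpath before compinit runs (e.g. in ~/.zshrc)
echo 'fpath=(~/.zsh/completions $fpath)' >> ~/.zshrc
echo 'autoload -Uz compinit && compinit' >> ~/.zshrc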
Systemd Service Integration
Service Configuration
# /etc/systemd/system/ros2-monitor.service
[Unit]
Description=ROS 2 Topic Monitor
After=network.target
Requires=network.target
[Service]
Type=simple
User=ros
Group=ros
WorkingDirectory=/home/ros
Environment=ROS_DOMAIN_ID=0
Environment=RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
ExecStart=/home/ros/monitoring/monitor_service.sh
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
#!/bin/bash
# /home/ros/monitoring/monitor_service.sh
source /opt/ros/jazzy/setup.bash
while true; do
echo "$(date): Starting monitoring cycle"
# Check if ROS 2 system is healthy
if roc topic list > /dev/null 2>&1; then
echo "$(date): ROS 2 system healthy"
# Generate monitoring report
{
echo "=== ROS 2 System Status ==="
echo "Timestamp: $(date)"
echo "Topics found: $(roc topic list | wc -l)"
echo
# Check critical topics
for topic in "/rosout" "/parameter_events"; do
if roc topic info "$topic" > /dev/null 2>&1; then
echo "✓ $topic: OK"
else
echo "✗ $topic: MISSING"
fi
done
} > /var/log/ros2-monitor.log
else
echo "$(date): ROS 2 system appears down"
echo "$(date): ROS 2 system down" >> /var/log/ros2-monitor.log
fi
sleep 60
done
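After copying the unit file and script into place, the service is registered and inspected with the usual systemd tooling:
# Register, enable and inspect the monitor service
sudo systemctl daemon-reload
sudo systemctl enable --now ros2-monitor.service
systemctl status ros2-monitor.service
journalctl -u ros2-monitor.service -f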
This completes the integration examples.
Command Reference
This chapter provides a comprehensive reference for all `roc` commands, options, and usage patterns.
Global Options
All `roc` commands support these global options:
Option | Description | Default |
---|---|---|
-h, --help | Show help information | N/A |
-V, --version | Show version information | N/A |
Environment Variables
The `roc` tool respects standard ROS 2 environment variables:
Variable | Description | Example |
---|---|---|
ROS_DOMAIN_ID | ROS 2 domain ID | export ROS_DOMAIN_ID=0 |
RMW_IMPLEMENTATION | RMW middleware implementation | export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp |
ROS_LOCALHOST_ONLY | Limit communication to localhost | export ROS_LOCALHOST_ONLY=1 |
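For example, to point `roc` at an isolated test setup on a non-default domain, restricted to the local machine:
# Run roc against a local-only, non-default domain
export ROS_DOMAIN_ID=42
export ROS_LOCALHOST_ONLY=1
roc topic list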
Command Structure
roc <COMMAND> [SUBCOMMAND] [OPTIONS] [ARGS]
Topic Commands
roc topic list
List all available topics in the ROS 2 graph.
Syntax:
roc topic list
Output:
/chatter
/parameter_events
/rosout
Exit Codes:
- `0`: Success
- `1`: Error (no ROS 2 system found, permission issues, etc.)
Examples:
# Basic usage
roc topic list
# Count topics
roc topic list | wc -l
# Filter topics
roc topic list | grep "chatter"
# Store topics in variable
topics=$(roc topic list)
roc topic info
Display detailed information about a specific topic.
Syntax:
roc topic info <TOPIC_NAME> [OPTIONS]
Arguments:
- `<TOPIC_NAME>`: The name of the topic to inspect (required)
Options:
Option | Short | Description |
---|---|---|
--verbose | -v | Show detailed information including QoS profiles and endpoint data |
Basic Output:
Topic: /chatter
Type: std_msgs/msg/String
Publishers: 1
Subscribers: 0
Verbose Output:
Topic: /chatter
Type: std_msgs/msg/String
Publishers: 1
Node: /talker
Endpoint type: Publisher
GID: 01.0f.xx.xx.xx.xx.xx.xx.xx.xx.xx.xx.xx.xx.xx.xx
QoS Profile:
Reliability: Reliable
Durability: Volatile
History: Keep last
Depth: 10
Deadline: Default
Lifespan: Default
Liveliness: Automatic
Liveliness lease duration: Default
Type hash: RIHS01_xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Subscribers: 0
Exit Codes:
- `0`: Success
- `1`: Topic not found or error accessing topic information
- `2`: Invalid arguments
Examples:
# Basic topic information
roc topic info /chatter
# Detailed information with QoS profiles
roc topic info /chatter --verbose
roc topic info /chatter -v
# Check if topic exists (using exit code)
if roc topic info /my_topic > /dev/null 2>&1; then
echo "Topic exists"
else
echo "Topic not found"
fi
# Get only publisher count
roc topic info /chatter | grep "Publishers:" | awk '{print $2}'
Output Format Details
Topic Information Fields
Field | Description | Example |
---|---|---|
Topic | Full topic name | /chatter |
Type | Message type | std_msgs/msg/String |
Publishers | Number of active publishers | 1 |
Subscribers | Number of active subscribers | 0 |
Verbose Information Fields
Publisher/Subscriber Details
Field | Description | Example |
---|---|---|
Node | Node name | /talker |
Endpoint type | Publisher or Subscriber | Publisher |
GID | Global identifier (16 bytes, hex) | 01.0f.xx.xx... |
Type hash | Message type hash | RIHS01_xxx... |
QoS Profile Fields
Field | Description | Possible Values |
---|---|---|
Reliability | Message delivery guarantee | Reliable , Best effort |
Durability | Message persistence | Volatile , Transient local , Transient , Persistent |
History | History policy | Keep last , Keep all |
Depth | History depth (for Keep last) | 1 , 10 , 100 , etc. |
Deadline | Message deadline | Default , time duration |
Lifespan | Message lifespan | Default , time duration |
Liveliness | Liveliness policy | Automatic , Manual by node , Manual by topic |
Liveliness lease duration | Lease duration | Default , time duration |
Error Handling
Common Error Messages
Error | Cause | Solution |
---|---|---|
No topics found | No ROS 2 nodes running | Start ROS 2 nodes or check ROS_DOMAIN_ID |
Topic not found: /topic_name | Specified topic doesn't exist | Verify topic name with roc topic list |
Permission denied | Insufficient permissions | Check user permissions and ROS 2 setup |
Failed to create context | ROS 2 not properly initialized | Source ROS 2 setup and check environment |
Timeout waiting for topic info | Network or discovery issues | Check network connectivity and RMW configuration |
Debugging Commands
# Check ROS 2 environment
printenv | grep ROS
# Verify RMW implementation
echo $RMW_IMPLEMENTATION
# Test basic connectivity
roc topic list
# Verbose debugging (if available)
RUST_LOG=debug roc topic info /chatter --verbose
Return Codes
All `roc` commands follow standard Unix conventions:
Code | Meaning | When Used |
---|---|---|
0 | Success | Command completed successfully |
1 | General error | Topic not found, ROS 2 system unavailable |
2 | Invalid arguments | Wrong number of arguments, invalid options |
130 | Interrupted | Command interrupted by user (Ctrl+C) |
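In scripts, these codes can be handled directly, for example:
# Branch on the documented return codes
roc topic info /chatter > /dev/null 2>&1
case $? in
  0)   echo "topic available" ;;
  1)   echo "topic missing or ROS 2 system unavailable" ;;
  2)   echo "invalid arguments" ;;
  130) echo "interrupted by user" ;;
esac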
Performance Considerations
Command Performance
Command | Typical Time | Notes |
---|---|---|
roc topic list | < 100ms | Fast, caches discovery data |
roc topic info | < 200ms | May be slower for first query |
roc topic info --verbose | < 500ms | Additional QoS/endpoint queries |
Optimization Tips
- Batch operations: run `roc topic list` once, then query specific topics
- Caching: results are cached briefly to improve repeated queries
- Network: use `ROS_LOCALHOST_ONLY=1` for local-only discovery
- RMW selection: different RMW implementations have different performance characteristics
Comparison with ROS 2 CLI
Feature Parity
Feature | ros2 topic | roc topic | Notes |
---|---|---|---|
List topics | ✅ | ✅ | Full parity |
Basic info | ✅ | ✅ | Full parity |
Verbose info | ✅ | ✅ | Full parity with QoS details |
Publisher count | ✅ | ✅ | Exact match |
Subscriber count | ✅ | ✅ | Exact match |
GID information | ✅ | ✅ | Formatted identically |
Type hash | ✅ | ✅ | Complete hash information |
Performance Comparison
# Benchmark both tools
time ros2 topic list
time roc topic list
time ros2 topic info /chatter --verbose
time roc topic info /chatter --verbose
In informal benchmarks, `roc` is typically 2-3x faster than the ROS 2 CLI for these operations.
Scripting and Automation
Common Patterns
# Check if specific topics exist
check_topics() {
local required_topics=("$@")
local missing_topics=()
for topic in "${required_topics[@]}"; do
if ! roc topic info "$topic" > /dev/null 2>&1; then
missing_topics+=("$topic")
fi
done
if [ ${#missing_topics[@]} -eq 0 ]; then
echo "All required topics found"
return 0
else
echo "Missing topics: ${missing_topics[*]}"
return 1
fi
}
# Usage
check_topics "/chatter" "/rosout" "/parameter_events"
# Get topic statistics
get_topic_stats() {
local topics=($(roc topic list))
local total_pubs=0
local total_subs=0
for topic in "${topics[@]}"; do
local info=$(roc topic info "$topic")
local pubs=$(echo "$info" | grep "Publishers:" | awk '{print $2}')
local subs=$(echo "$info" | grep "Subscribers:" | awk '{print $2}')
total_pubs=$((total_pubs + pubs))
total_subs=$((total_subs + subs))
done
echo "Topics: ${#topics[@]}"
echo "Total publishers: $total_pubs"
echo "Total subscribers: $total_subs"
}
JSON Output (Future Enhancement)
While not currently supported, JSON output could be added:
# Proposed syntax (not yet implemented)
roc topic list --format json
roc topic info /chatter --format json --verbose
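Until such a flag exists, a rough workaround is to wrap the current plain-text output yourself. A minimal sketch for the basic (non-verbose) `roc topic info` output, which consists of simple "Key: value" lines:
# Hypothetical workaround: convert "Key: value" lines into a flat JSON object
roc topic info /chatter | awk -F': ' '
  BEGIN { printf "{"; first = 1 }
  NF == 2 {
    if (!first) printf ",";
    first = 0;
    printf "\"%s\": \"%s\"", tolower($1), $2
  }
  END { print "}" }'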
IDL Commands
roc idl protobuf
Bidirectional conversion between Protobuf (.proto) and ROS 2 (.msg) files with automatic direction detection.
Syntax:
roc idl protobuf [OPTIONS] <INPUT_FILES>...
Arguments:
- `<INPUT_FILES>...`: Input files to convert (.proto or .msg files)
Options:
Option | Short | Description | Default |
---|---|---|---|
--output <DIR> | -o | Output directory for generated files | Same directory as input |
--package <NAME> | -p | Package name for generated files | Derived from input |
--config <FILE> | -c | Configuration file for type mappings (YAML) | None |
--include <DIRS>... | -I | Include directories for protobuf imports | None |
--verbose | -v | Show verbose output | False |
--dry-run | -n | Show what would be generated without writing files | False |
Examples:
# Convert .proto files to .msg files (automatic detection)
roc idl protobuf robot.proto sensor_data.proto
# Convert .msg files to .proto files (automatic detection)
roc idl protobuf RobotStatus.msg SensorData.msg
# Specify output directory
roc idl protobuf --output ./generated robot.proto
# Dry run to preview conversion
roc idl protobuf --dry-run --verbose robot.proto
# Convert with include directories for imports
roc idl protobuf -I ./proto_deps -I ./common robot.proto
# Convert with custom package name
roc idl protobuf --package my_robot_msgs robot.proto
Protobuf to ROS2 (.proto → .msg):
# Input: robot.proto
roc idl protobuf robot.proto
# Output: Robot.msg, RobotStatus.msg (based on message definitions)
ROS2 to Protobuf (.msg → .proto):
# Input: RobotStatus.msg
roc idl protobuf RobotStatus.msg
# Output: robot_status.proto
Advanced Usage:
# Convert entire directory with verbose output
roc idl protobuf --verbose src/proto/*.proto --output msg/
# Mixed conversion with error handling
roc idl protobuf file1.proto file2.proto || echo "Conversion failed"
# Pipeline with other tools
find . -name "*.proto" -exec roc idl protobuf {} --output ./ros_msgs \;
Supported Protobuf Features:
- Proto3 syntax
- Nested messages (automatically flattened)
- Enums (converted to constants)
- Repeated fields (arrays)
- Maps (converted to key-value arrays)
- Oneof fields (converted to separate optional fields)
- Comments (preserved when possible)
- Import statements and dependencies
Type Mappings:
Protobuf | ROS2 | Notes |
---|---|---|
bool | bool | Direct mapping |
int32 | int32 | Direct mapping |
int64 | int64 | Direct mapping |
uint32 | uint32 | Direct mapping |
uint64 | uint64 | Direct mapping |
float | float32 | Single precision |
double | float64 | Double precision |
string | string | UTF-8 strings |
bytes | uint8[] | Byte arrays |
repeated T | T[] | Dynamic arrays |
map<K,V> | Entry[] | Key-value pairs |
Exit Codes:
- `0`: Success
- `1`: Error (invalid syntax, file not found, permission issues, etc.)
Error Examples:
# Mixed file types (not allowed)
roc idl protobuf robot.proto RobotStatus.msg
# Error: Cannot mix .proto and .msg files in the same conversion
# Unsupported file extension
roc idl protobuf data.json
# Error: Unsupported file extension: .json
# File not found
roc idl protobuf nonexistent.proto
# Error: Input file does not exist: nonexistent.proto
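To see the conversion and the type mappings above in action without writing any generated files, a small proto3 file can be previewed with `--dry-run`. The message definition here is illustrative:
# Create a tiny proto3 file and preview what roc would generate
cat > robot.proto <<'EOF'
syntax = "proto3";

message RobotStatus {
  string name = 1;
  bool active = 2;
  repeated double joint_positions = 3;
}
EOF
roc idl protobuf --dry-run --verbose robot.proto
# Expected per the mapping table: string -> string, bool -> bool,
# repeated double -> float64[] in the generated RobotStatus.msg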
Troubleshooting
Common Issues
- No output from `roc topic list`
  - Check if ROS 2 nodes are running: `ros2 node list`
  - Verify the ROS 2 environment: `echo $ROS_DOMAIN_ID`
  - Try a different RMW: `export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp`
- Permission errors
  - Check user groups: `groups`
  - Verify ROS 2 installation permissions
  - Try running as a different user
- Slow performance
  - Check the network configuration
  - Use `ROS_LOCALHOST_ONLY=1` for local testing
  - Consider a different RMW implementation
- Inconsistent results
  - Allow time for discovery: `sleep 2 && roc topic list`
  - Check for multiple ROS 2 domains
  - Verify system clock synchronization
Debug Information
# Enable detailed logging (if built with debug support)
RUST_LOG=debug roc topic list
# Check system resources
free -h
df -h
# Network diagnostics
netstat -tuln | grep -E "(7400|7401|7411)"
This completes the comprehensive command reference for the `roc` tool.