1. Introduction
Wildfires pose significant threats, motivating autonomous robotic solutions for monitoring, mapping, and intervention in hazardous conditions. Current SLAM datasets often lack the specific visual characteristics, smoke, and low-altitude flight dynamics present in wildfire scenarios, hindering the development of robust robotic systems. This paper presents WildfireX-SLAM to address this gap, offering a rich resource for improving SLAM performance in such challenging environments. For data processing and benchmarking, we employ ORB-SLAM3, VINS-Fusion, and depth estimation networks.
2. Related Work
Existing SLAM datasets such as KITTI and EuRoC have greatly advanced robotic navigation but are not tailored to the unique complexities of wildfire environments, such as smoke, debris, and poor visibility. Previous research in wildfire robotics has focused on sensor development and deployment, but a comprehensive, large-scale dataset specifically designed for visual-inertial SLAM in these conditions has been largely absent. WildfireX-SLAM distinguishes itself by providing an RGB-D and inertial dataset collected under realistic low-altitude wildfire-like conditions, bridging this gap in environmental robotics research.
3. Methodology
The WildfireX-SLAM dataset was collected using a custom-built UAV platform equipped with synchronized RGB and depth cameras together with an inertial measurement unit (IMU). Data acquisition covered diverse simulated wildfire landscapes, capturing variations in smoke density, lighting, and ground cover at low altitudes over expansive areas. Precise intrinsic and extrinsic calibration and temporal synchronization of all sensors yielded high-resolution RGB-D frames and IMU data at 30 Hz, accompanied by accurate ground-truth trajectories from a high-precision RTK-GPS system. The dataset also includes challenging sequences with rapid motion and occlusions to thoroughly test the robustness of SLAM algorithms.
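As an illustration of how a consumer of the dataset might associate camera frames with ground-truth poses, the sketch below performs a simple nearest-timestamp lookup. The CSV layout, field order, and the half-frame tolerance derived from the 30 Hz rate are assumptions for illustration only, not the released dataset format.

```python
import csv
from bisect import bisect_left

def load_groundtruth(path):
    """Load ground-truth poses as parallel lists of timestamps and
    [tx, ty, tz, qx, qy, qz, qw] rows (assumed CSV layout, sorted by time)."""
    stamps, poses = [], []
    with open(path) as f:
        for row in csv.reader(f):
            if not row or row[0].startswith("#"):
                continue
            stamps.append(float(row[0]))
            poses.append([float(v) for v in row[1:8]])
    return stamps, poses

def nearest_pose(stamps, poses, t, max_dt=0.5 / 30.0):
    """Return the pose closest in time to camera timestamp t, or None if
    the gap exceeds half a frame period at the assumed 30 Hz frame rate."""
    i = bisect_left(stamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(stamps)]
    j = min(candidates, key=lambda k: abs(stamps[k] - t))
    return poses[j] if abs(stamps[j] - t) <= max_dt else None
```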
4. Experimental Results
Our experimental evaluation on the WildfireX-SLAM dataset demonstrates the significant challenges that wildfire environments pose for current state-of-the-art SLAM algorithms. We assessed ORB-SLAM3 and VINS-Fusion, measuring their Absolute Trajectory Error (ATE) and Relative Pose Error (RPE) across various sequences. The results indicate a notable degradation in tracking accuracy and robustness compared to performance on conventional datasets, particularly in heavy smoke and during rapid maneuvers. The table below summarizes the average ATE and RPE for representative sequences, highlighting where current algorithms struggle and where future research is most needed; a sketch of the ATE computation follows the table.
| Algorithm | Sequence ID | ATE (m) | RPE (m) |
|---|---|---|---|
| ORB-SLAM3 | Wildfire-01 | 2.15 | 0.18 |
| ORB-SLAM3 | Wildfire-02 | 3.82 | 0.25 |
| VINS-Fusion | Wildfire-01 | 1.98 | 0.15 |
| VINS-Fusion | Wildfire-02 | 3.55 | 0.22 |
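For context, the sketch below shows one standard way to compute ATE RMSE: rigidly aligning the estimated trajectory to the RTK-GPS ground truth with a Kabsch/Umeyama alignment and taking the root-mean-square position error. It is a minimal NumPy illustration under these assumptions, not the exact evaluation script used to produce the numbers above.

```python
import numpy as np

def rigid_align(est, gt):
    """Kabsch/Umeyama SE(3) alignment (rotation + translation, no scale) of
    estimated positions est (N x 3) onto ground-truth positions gt (N x 3)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)          # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                        # optimal rotation
    t = mu_g - R @ mu_e                       # optimal translation
    return R, t

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error after rigid alignment."""
    R, t = rigid_align(est, gt)
    err = (R @ est.T).T + t - gt              # per-frame position residuals
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```

RPE can be computed analogously from the discrepancy between consecutive relative transforms of the estimated and ground-truth trajectories, without the global alignment step.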
5. Discussion
The experimental results underscore that while existing SLAM algorithms perform reasonably well in moderate wildfire conditions, their robustness significantly declines under heavy smoke, rapid motion, and feature-sparse environments. This highlights the necessity for novel perception and mapping techniques specifically designed to handle such visual degradation and dynamic changes unique to wildfire scenarios. The WildfireX-SLAM dataset serves as a critical benchmark, providing a foundation for developing more resilient SLAM solutions that can ultimately enhance the effectiveness of autonomous robots in real-world disaster response and environmental monitoring applications. Future work will explore advanced sensor fusion and learning-based approaches to overcome these identified challenges.