A1 Peer-reviewed original article in a scientific journal

A Benchmark for Multi-Modal LiDAR SLAM with Ground Truth in GNSS-Denied Environments




Authors: Sier Ha, Li Qingqing, Yu Xianjia, Peña Queralta Jorge, Zou Zhuo, Westerlund Tomi

Publisher: MDPI

Publication year: 2023

Journal: Remote Sensing

Journal acronym: REMOTE SENS-BASEL

Article number: 3314

Volume: 15

Issue: 13

Number of pages: 15

eISSN: 2072-4292

DOI: https://doi.org/10.3390/rs15133314

Web address: https://www.mdpi.com/2072-4292/15/13/3314

Self-archived copy web address: https://research.utu.fi/converis/portal/detail/Publication/180820026


Abstract
LiDAR-based simultaneous localization and mapping (SLAM) approaches have achieved considerable success in autonomous robotic systems. This is in part owing to the high accuracy of robust SLAM algorithms and the emergence of new and lower-cost LiDAR products. This study benchmarks the current state-of-the-art LiDAR SLAM algorithms with a multi-modal LiDAR sensor setup, showcasing diverse scanning modalities (spinning and solid-state) and sensing technologies, as well as LiDAR cameras, mounted on a mobile sensing and computing platform. We extend our previous multi-modal multi-LiDAR dataset with additional sequences and new sources of ground truth data. Specifically, we propose a new multi-modal multi-LiDAR SLAM-assisted and ICP-based sensor fusion method for generating ground truth maps. With these maps, we then match real-time point cloud data using a normal distributions transform (NDT) method to obtain the ground truth with a full six-degrees-of-freedom (DOF) pose estimation. These novel ground truth data leverage high-resolution spinning and solid-state LiDARs. We also include new open road sequences with GNSS-RTK data and additional indoor sequences with motion capture (MOCAP) ground truth, complementing the previous forest sequences with MOCAP data. We analyze the positioning accuracy achieved across ten unique configurations generated by pairing five distinct LiDAR sensors with five SLAM algorithms, critically comparing and assessing their respective performance characteristics. We also report the resource utilization on four different computational platforms and a total of five settings (Intel and Jetson ARM CPUs). Our experimental results show that the current state-of-the-art LiDAR SLAM algorithms perform very differently for different types of sensors. More results, code, and the dataset can be found on GitHub.
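To illustrate the ground truth pipeline outlined in the abstract (ICP-based fusion of multi-LiDAR clouds into a reference map, followed by NDT matching of live scans for 6-DOF poses), the sketch below uses the Point Cloud Library (PCL). This is a minimal sketch under assumptions, not the authors' released code: the file names (spinning_lidar.pcd, solid_state_lidar.pcd, live_scan.pcd) and all parameters (correspondence distance, voxel size, NDT resolution, iteration limits) are placeholders to be tuned per dataset.

#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/registration/icp.h>
#include <pcl/registration/ndt.h>
#include <Eigen/Core>
#include <iostream>

using PointT = pcl::PointXYZ;
using CloudT = pcl::PointCloud<PointT>;

int main()
{
  // Hypothetical input clouds: one per LiDAR modality, plus one live scan.
  CloudT::Ptr spinning(new CloudT), solid_state(new CloudT), scan(new CloudT);
  pcl::io::loadPCDFile("spinning_lidar.pcd", *spinning);
  pcl::io::loadPCDFile("solid_state_lidar.pcd", *solid_state);
  pcl::io::loadPCDFile("live_scan.pcd", *scan);

  // Step 1: ICP-based fusion of the two LiDAR clouds into one reference map.
  pcl::IterativeClosestPoint<PointT, PointT> icp;
  icp.setInputSource(solid_state);
  icp.setInputTarget(spinning);
  icp.setMaxCorrespondenceDistance(1.0);   // metres; placeholder value
  icp.setMaximumIterations(50);

  CloudT::Ptr aligned(new CloudT);
  icp.align(*aligned);

  CloudT::Ptr map(new CloudT);
  *map = *spinning;
  *map += *aligned;                        // fused ground-truth map

  // Downsample the fused map to keep NDT matching tractable.
  pcl::VoxelGrid<PointT> voxel;
  voxel.setLeafSize(0.2f, 0.2f, 0.2f);     // placeholder leaf size in metres
  voxel.setInputCloud(map);
  CloudT::Ptr map_ds(new CloudT);
  voxel.filter(*map_ds);

  // Step 2: NDT matching of a live scan against the map for a 6-DOF pose.
  pcl::NormalDistributionsTransform<PointT, PointT> ndt;
  ndt.setResolution(1.0);                  // NDT grid cell size in metres
  ndt.setStepSize(0.1);
  ndt.setTransformationEpsilon(0.01);
  ndt.setMaximumIterations(35);
  ndt.setInputSource(scan);
  ndt.setInputTarget(map_ds);

  CloudT::Ptr scan_aligned(new CloudT);
  Eigen::Matrix4f initial_guess = Eigen::Matrix4f::Identity();  // e.g. previous pose
  ndt.align(*scan_aligned, initial_guess);

  if (ndt.hasConverged())
    std::cout << "6-DOF pose (4x4):\n" << ndt.getFinalTransformation() << std::endl;
  return 0;
}

The 4x4 matrix returned by getFinalTransformation() encodes the full 6-DOF pose (rotation and translation) of the scan in the map frame, which is the form in which NDT-based ground truth poses such as those described above are typically expressed.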

Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




