A4 Refereed article in a conference publication
Event-based Sensor Fusion and Application on Odometry: A Survey
Authors: Zhang, Jiaqiang; Yu, Xianjia; Sier, Ha; Zhang, Haizhou; Westerlund, Tomi
Editors: N/A
Conference name: International Conference on Image Processing, Applications and Systems
Publication year: 2025
Book title: 2025 IEEE 6th International Conference on Image Processing, Applications and Systems (IPAS)
ISBN: 979-8-3315-0653-7
eISBN: 979-8-3315-0652-0
DOI: https://doi.org/10.1109/IPAS63548.2025.10924516
Web address: https://ieeexplore.ieee.org/document/10924516
Event cameras, inspired by biological vision, are asynchronous sensors that detect changes in brightness. They offer notable advantages in environments characterized by high-speed motion, low lighting, or wide dynamic range. These distinctive properties make event cameras particularly effective for sensor fusion in robotics and computer vision, especially for enhancing traditional visual or LiDAR-inertial odometry. Conventional frame-based cameras suffer from limitations such as motion blur and drift, which can be mitigated by the continuous, low-latency data provided by event cameras. Similarly, LiDAR-based odometry encounters challenges related to the loss of geometric information in environments such as corridors. To address these limitations, and unlike existing event camera surveys, this survey presents a comprehensive overview of recent advances in event-based sensor fusion for odometry applications, particularly investigating fusion strategies that incorporate frame-based cameras, inertial measurement units (IMUs), and LiDAR. The survey critically assesses the contributions of these fusion methods to improving odometry performance in complex environments, highlights key applications, and discusses their strengths, limitations, and unresolved challenges. It also offers insights into potential future research directions to advance event-based sensor fusion for next-generation odometry applications.
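For readers unfamiliar with the event generation model the abstract alludes to, the following is a minimal Python sketch (illustrative only, not the paper's method): a pixel emits an event (x, y, t, polarity) whenever its log-brightness has changed by more than a contrast threshold C since that pixel's last event. The function name, the threshold value, and the frame-based discretization are assumptions for illustration; real event cameras fire per pixel asynchronously rather than per frame.

    import numpy as np

    def simulate_events(frames, timestamps, C=0.2):
        # Toy event-camera model (an assumption, not the surveyed hardware):
        # emit an event when a pixel's log-brightness changes by more than
        # the contrast threshold C since that pixel's last event.
        ref = np.log(frames[0].astype(np.float64) + 1e-6)  # per-pixel reference log-brightness
        events = []  # each event: (x, y, t, polarity)
        for frame, t in zip(frames[1:], timestamps[1:]):
            log_i = np.log(frame.astype(np.float64) + 1e-6)
            diff = log_i - ref
            ys, xs = np.nonzero(np.abs(diff) >= C)  # pixels crossing the threshold
            for x, y in zip(xs, ys):
                polarity = 1 if diff[y, x] > 0 else -1
                events.append((int(x), int(y), float(t), polarity))
                ref[y, x] = log_i[y, x]  # reset reference at fired pixels
        return events

The sparse, timestamped output of such a model is what makes the low-latency fusion with frames, IMUs, and LiDAR discussed in the survey possible.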
Funding information in the publication:
This research is supported by the Research Council of Finland’s Digital Waters (DIWA) flagship (Grant No. 359247) as well as the DIWA Doctoral Training Pilot project funded by the Ministry of Education and Culture (Finland).