A1 Refereed original research article in a scientific journal
General-Purpose Deep Learning Detection and Segmentation Models for Images from a Lidar-Based Camera Sensor
Authors: Yu Xianjia, Salimpour Sahar, Peña Queralta Jorge, Westerlund Tomi
Publisher: MDPI
Publishing place: Basel
Publication year: 2023
Journal: Sensors
Journal name in source: SENSORS
Journal acronym: SENSORS-BASEL
Article number: 2936
Volume: 23
Issue: 6
Number of pages: 12
DOI: https://doi.org/10.3390/s23062936
Web address: https://www.mdpi.com/1424-8220/23/6/2936
Self-archived copy’s web address: https://research.utu.fi/converis/portal/detail/Publication/179338592
Over the last decade, robotic perception algorithms have benefited significantly from the rapid advances in deep learning (DL). Indeed, a substantial part of the autonomy stack of different commercial and research platforms relies on DL for situational awareness, especially when processing data from vision sensors. This work explored the potential of general-purpose DL perception algorithms, specifically detection and segmentation neural networks, for processing the image-like outputs of advanced lidar sensors. To the best of our knowledge, this is the first work that, rather than processing the three-dimensional point cloud data, focuses on low-resolution images with a 360-degree field of view obtained with lidar sensors by encoding either depth, reflectivity, or near-infrared light in the image pixels. We showed that with adequate preprocessing, general-purpose DL models can process these images, opening the door to their use in environmental conditions where vision sensors present inherent limitations. We provided both a qualitative and quantitative analysis of the performance of a variety of neural network architectures. We believe that using DL models built for visual cameras offers significant advantages due to their much wider availability and maturity compared to point cloud-based perception.
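The core idea described in the abstract can be illustrated with a minimal sketch: a single-channel, image-like lidar output (depth, reflectivity, or near-infrared) is normalized, replicated to three channels, and passed to an off-the-shelf, camera-trained detector. The choice of model (a COCO-pretrained Faster R-CNN from torchvision), the min-max normalization, and the 128 x 2048 image size are illustrative assumptions, not the authors' exact pipeline or the specific architectures evaluated in the paper.

```python
# Minimal sketch (assumptions noted above): running a general-purpose,
# camera-trained detector on an image-like lidar channel.
import numpy as np
import torch
import torchvision


def preprocess_lidar_image(img: np.ndarray) -> torch.Tensor:
    """Scale a single-channel lidar image (depth, reflectivity, or NIR) to [0, 1]
    and replicate it to the 3 channels expected by RGB-pretrained models."""
    img = img.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-6)  # min-max normalization
    tensor = torch.from_numpy(img)               # shape: (H, W)
    return tensor.unsqueeze(0).repeat(3, 1, 1)   # shape: (3, H, W)


# One possible general-purpose detector; the paper compares several architectures.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Hypothetical low-resolution, 360-degree lidar "signal" image (placeholder data).
lidar_image = np.random.randint(0, 65535, size=(128, 2048), dtype=np.uint16)

with torch.no_grad():
    detections = model([preprocess_lidar_image(lidar_image)])[0]

# Standard torchvision detection output: bounding boxes, class labels, and scores.
print(detections["boxes"].shape, detections["labels"][:5], detections["scores"][:5])
```

In practice the single-channel image would come from the lidar driver (e.g., a reflectivity or near-infrared frame) rather than random data, and the preprocessing step is where sensor-specific scaling or destaggering would be applied before the pretrained model is used unchanged.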
Downloadable publication: This is an electronic reprint of the original article.