Point Cloud Facilitated ORB-SLAM2 for Robust Autonomous Navigation in a Cluttered Indoor Environment




Authors: Ghosh, Kuntal; Heikkonen, Jukka; Kanth, Rajeev

Editors: Udgata, Siba K.; Sethi, Srinivas; Ghinea, George; Kuanar, Sanjay Kumar

International Conference on Machine Learning, IoT and Big Data

Publisher: Springer Nature Singapore

2025

Lecture Notes in Networks and Systems

Intelligent Systems: Proceedings of 4th International Conference on Machine Learning, IoT and Big Data (ICMIB 2024), Volume 1

Volume: 1314

Pages: 215–227

ISBN (print): 978-981-96-3796-6

ISBN (electronic): 978-981-96-3797-3

ISSN (print): 2367-3370

ISSN (electronic): 2367-3389

DOI: https://doi.org/10.1007/978-981-96-3797-3_18



This article primarily addresses different aspects of autonomous indoor navigation for robots. Vision-based camera sensors have recently been combined with Simultaneous Localization And Mapping (SLAM) technology, which is being explored for autonomous navigation. However, SLAM alone is limited in its ability to identify objects from different perspectives. Hence, point cloud information is integrated with the visual feed for robust and precise mapping of the environment. Upon acquiring discrete point clouds, the points are stitched and registered to reconstruct the scene, and the different obstacles within that environment are registered as well. Subsequently, a KD-tree (k-dimensional tree) search algorithm is applied to classify the point cloud data, which leads to identifying the clusters of objects occupying the environment. Throughout this process, an Intel RealSense stereo depth camera coupled with a mobile PC is used to collect the visual feed and the corresponding 3D point vectors to emulate this integrated concept.
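The abstract's KD-tree step (grouping registered point cloud data into object clusters) can be illustrated with a minimal sketch of KD-tree-based Euclidean clustering. This is an assumed stand-in for the authors' implementation, not their actual code: it uses SciPy's `cKDTree` radius queries to flood-fill connected groups of nearby 3D points, and the function name and parameters are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.1):
    """Assign a cluster label to each 3D point: two points share a
    cluster when a chain of neighbours within `radius` connects them.
    Neighbour lookups use a KD-tree, as described in the abstract."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)  # -1 = unvisited
    cluster_id = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue  # already absorbed into an earlier cluster
        labels[seed] = cluster_id
        frontier = [seed]
        while frontier:  # flood-fill via KD-tree radius queries
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], radius):
                if labels[nb] == -1:
                    labels[nb] = cluster_id
                    frontier.append(nb)
        cluster_id += 1
    return labels
```

For example, a cloud containing two well-separated groups of points yields two distinct labels, each cluster corresponding to one object occupying the environment.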



This research is supported by the Science and Engineering Research Board, Department of Science and Technology, India.
