A4 Peer-reviewed article in conference proceedings

Deep Convolutional Neural Network-based Fusion of RGB and IR Images in Marine Environment




Authors: Fahimeh Farahnakian, Jussi Poikonen, Markus Laurinen, Jukka Heikkonen

Editor: N/A

Established conference name: Intelligent Transportation Systems Conference

Publication year: 2019

Proceedings title: 2019 IEEE Intelligent Transportation Systems Conference (ITSC)

First page: 21

Last page: 26

Number of pages: 6

ISBN: 978-1-5386-7025-5

eISBN: 978-1-5386-7024-8

ISSN: 2153-0009

DOI: https://doi.org/10.1109/ITSC.2019.8917332

Self-archived copy URL: https://research.utu.fi/converis/portal/detail/Publication/44437898


Abstract

Designing accurate and automatic multi-target detection is a challenging problem for autonomous vehicles. To address this problem, we propose a late multi-modal fusion framework in this paper. The framework combines complementary information from RGB and thermal infrared cameras in order to improve detection performance. It first applies RetinaNet, a single-stage dense deep detection model, to each input image separately to extract candidate proposals that are likely to contain the targets of interest. The proposals obtained from the two modalities are then concatenated, and redundant proposals are removed by Non-Maximum Suppression (NMS). We evaluate the proposed framework on a real marine dataset collected by a sensor system on board a vessel in the Finnish archipelago. This system is used for developing autonomous vessels and records data in a range of operational and climatic conditions. The experimental results show that our late fusion framework achieves higher detection accuracy than middle-fusion and uni-modal frameworks.
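
The late fusion step described above (concatenating the per-modality detections and pruning duplicates with NMS) can be sketched in a few lines. The following Python/NumPy snippet is a minimal illustration only, not the authors' implementation: the (x1, y1, x2, y2) box format, the (boxes, scores) output structure of each detector, and the IoU threshold of 0.5 are assumptions made for the example.

import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    # Greedy non-maximum suppression.
    # boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    # Returns the indices of the boxes that are kept.
    order = scores.argsort()[::-1]          # highest score first
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top-scoring box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Discard boxes that overlap the kept box above the threshold
        order = order[1:][iou <= iou_thr]
    return keep

def late_fusion(rgb_dets, ir_dets, iou_thr=0.5):
    # Concatenate (boxes, scores) from the RGB and IR detectors, then suppress duplicates.
    boxes = np.vstack([rgb_dets[0], ir_dets[0]])
    scores = np.concatenate([rgb_dets[1], ir_dets[1]])
    keep = nms(boxes, scores, iou_thr)
    return boxes[keep], scores[keep]

For example, late_fusion((rgb_boxes, rgb_scores), (ir_boxes, ir_scores)) returns the fused, de-duplicated detections from both modalities.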


Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




