A4 Peer-reviewed article in conference proceedings
Deep Convolutional Neural Network-based Fusion of RGB and IR Images in Marine Environment
Authors: Fahimeh Farahnakian, Jussi Poikonen, Markus Laurinen, Jukka Heikkonen
Editor: N/A
Established conference name: Intelligent Transportation Systems Conference
Publication year: 2019
Title of proceedings: 2019 IEEE Intelligent Transportation Systems Conference (ITSC)
First page: 21
Last page: 26
Number of pages: 6
ISBN: 978-1-5386-7025-5
eISBN: 978-1-5386-7024-8
ISSN: 2153-0009
DOI: https://doi.org/10.1109/ITSC.2019.8917332
Self-archived copy available at: https://research.utu.fi/converis/portal/detail/Publication/44437898
Abstract— Designing accurate and automatic multi-target detection is a challenging problem for autonomous vehicles. To address this problem, we propose a late multi-modal fusion framework in this paper. The framework combines complementary information from RGB and thermal infrared cameras in order to improve detection performance. For this purpose, it first employs RetinaNet as a simple dense deep model on each input image separately to extract candidate proposals that likely contain the targets of interest. The proposals obtained from the two modalities are then concatenated, and redundant proposals are removed by Non-Maximum Suppression (NMS). We evaluate the proposed framework on a real marine dataset collected by a sensor system on board a vessel in the Finnish archipelago. This system is used for developing autonomous vessels and records data in a range of operational and climatic conditions. The experimental results show that our late fusion framework achieves higher detection accuracy than middle fusion and uni-modal frameworks.
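The fusion step described in the abstract (pooling the per-modality proposals and pruning overlaps with NMS) can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the function name late_fuse, the tensor shapes, and the IoU threshold are assumptions, while the detectors themselves (RetinaNet on RGB and IR) are assumed to have already produced their boxes and scores.

```python
# Minimal sketch of the late-fusion step: concatenate RGB and IR proposals,
# then remove redundant boxes with Non-Maximum Suppression.
# NOTE: illustrative assumption, not the paper's released code.
import torch
from torchvision.ops import nms


def late_fuse(rgb_boxes, rgb_scores, ir_boxes, ir_scores, iou_thresh=0.5):
    """Fuse per-modality detections by concatenation followed by NMS.

    rgb_boxes / ir_boxes: float tensors of shape (N, 4) in (x1, y1, x2, y2).
    rgb_scores / ir_scores: float tensors of shape (N,) with confidence scores.
    """
    # Pool the candidate proposals from both modalities.
    boxes = torch.cat([rgb_boxes, ir_boxes], dim=0)
    scores = torch.cat([rgb_scores, ir_scores], dim=0)

    # Suppress redundant (highly overlapping) proposals, keeping the
    # highest-scoring box within each overlapping group.
    keep = nms(boxes, scores, iou_thresh)
    return boxes[keep], scores[keep]
```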
Downloadable publication: This is an electronic reprint of the original article.