A1 Peer-reviewed original article in a scientific journal
Camera Sensor Raw Data-Driven Video Blur Effect Prevention: Dataset and Study
Authors: Nahli, Abdelwahed; Li, Dan; Uddin, Rahim; Raza, Tahir; Irfan, Muhammad; Lu, Qiyong; Zhang, Jian Qiu
Publisher: IEEE
Publication year: 2025
Journal: IEEE Access
Volume: 13
First page: 184762
Last page: 184774
eISSN: 2169-3536
DOI: https://doi.org/10.1109/ACCESS.2025.3622993
Open access status at time of registration: Openly available
Publication channel openness: Fully open publication channel
Self-archived copy address: https://research.utu.fi/converis/portal/detail/Publication/506150065
Recent advances in machine vision have played an important role in addressing the challenging problem of motion blur. However, most deep learning–based deblurring methods operate in the RGB domain, rely on recursive strategies, and are often trained on unrealistic synthetic data. In this paper, we introduce a preventive solution from a new perspective, leveraging the opportunity to operate directly in the RAW domain on high-bit sensor data. Since no publicly available high-frame-rate, RAW-based blur prevention dataset exists, we construct Blurry-RAW, a novel dataset containing paired blurry and sharp frames in both RAW and RGB formats. We further propose 3D-ISPNet, a CNN–Transformer hybrid architecture trained exclusively on RAW sensor data. This model achieves superior quantitative and qualitative performance compared to RGB-based counterparts. Moreover, by fine-tuning on data from different camera sensors, 3D-ISPNet demonstrates strong generalization across diverse hardware. Ultimately, the introduction of RAW-driven blur prevention and the new dataset paves the way for further research in this emerging direction.
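To make the abstract's high-level description more concrete, the sketch below shows one way a CNN–Transformer hybrid could operate directly on single-channel, high-bit RAW frames: a convolutional stem for local features, a Transformer encoder over flattened spatial tokens for global context, and a residual head predicting the sharp frame. This is only an illustrative PyTorch sketch; the module name, layer sizes, and token layout are assumptions and do not reproduce the paper's 3D-ISPNet or its training on the Blurry-RAW dataset.

```python
import torch
import torch.nn as nn


class TinyRawHybrid(nn.Module):
    """Illustrative CNN + Transformer hybrid for single-channel RAW frames.

    NOT the paper's 3D-ISPNet; layer sizes and structure are placeholder
    assumptions, shown only to illustrate the general pattern of local
    convolutional features followed by global attention over spatial tokens.
    """

    def __init__(self, channels: int = 32, num_heads: int = 4):
        super().__init__()
        # CNN stem: local feature extraction from a high-bit RAW input
        # (normalized to [0, 1] floats before being passed in).
        self.stem = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Transformer encoder: global context across flattened spatial tokens.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Head: upsample back and predict a single-channel residual.
        self.head = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, raw: torch.Tensor) -> torch.Tensor:
        # raw: (B, 1, H, W) normalized RAW frame
        feats = self.stem(raw)                     # (B, C, H/2, W/2)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W/4, C)
        tokens = self.encoder(tokens)
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        return raw + self.head(feats)              # residual sharp-frame estimate


if __name__ == "__main__":
    # A 12-bit RAW mosaic scaled to [0, 1]; shapes are illustrative only.
    frame = torch.randint(0, 4096, (1, 1, 64, 64)).float() / 4095.0
    print(TinyRawHybrid()(frame).shape)  # torch.Size([1, 1, 64, 64])
```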
Downloadable publication: This is an electronic reprint of the original article.
Funding information reported in the publication:
This work was supported in part by Fudan University, and in part by the National Natural Science Foundation of China under Grant 12374431.