A1 Peer-reviewed original article in a scientific journal

Camera Sensor Raw Data-Driven Video Blur Effect Prevention: Dataset and Study




Authors: Nahli, Abdelwahed; Li, Dan; Uddin, Rahim; Raza, Tahir; Irfan, Muhammad; Lu, Qiyong; Zhang, Jian Qiu

Publisher: IEEE

Publication year: 2025

Journal: IEEE Access

Volume: 13

First page: 184762

Last page: 184774

eISSN: 2169-3536

DOI: https://doi.org/10.1109/ACCESS.2025.3622993

Open access status at the time of registration: Openly available

Openness of the publication channel: Fully open publication channel

Self-archived copy address: https://research.utu.fi/converis/portal/detail/Publication/506150065


Abstract

Recent advances in machine vision have played an important role in addressing the challenging problem of motion blur. However, most deep learning–based deblurring methods operate in the RGB domain, rely on recursive strategies, and are often trained on unrealistic synthetic data. In this paper, we introduce a preventive solution from a new perspective, leveraging the opportunity to operate directly in the RAW domain on high-bit sensor data. Since no publicly available high–frame rate RAW-based blur prevention dataset exists, we construct Blurry-RAW, a novel dataset containing paired blurry and sharp frames in both RAW and RGB formats. We further propose 3D-ISPNet, a CNN–Transformer hybrid architecture, trained exclusively on RAW sensor data. This model achieves superior quantitative and qualitative performance compared to RGB-based counterparts. Moreover, by fine-tuning on data from different camera sensors, 3D-ISPNet demonstrates strong generalization across diverse hardware. Ultimately, the introduction of RAW-driven blur prevention and the new dataset paves the way for further research in this emerging direction.
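For readers unfamiliar with RAW-domain processing, the sketch below illustrates one common way to prepare high-bit Bayer sensor data for a neural network: packing the mosaic into four half-resolution channels and normalizing by the sensor's black and white levels. The RGGB layout, 10-bit range, and function name are illustrative assumptions, not details taken from the paper, its dataset, or the 3D-ISPNet code.

```python
import numpy as np

def pack_bayer_to_channels(raw, black_level=64, white_level=1023):
    """Pack a single-channel Bayer RAW frame (H, W) into a 4-channel
    half-resolution array (H/2, W/2, 4), normalized to [0, 1].

    Assumes an RGGB Bayer pattern and 10-bit sensor data; both are
    hypothetical choices for illustration only.
    """
    raw = raw.astype(np.float32)
    # Subtract the sensor black level and normalize by the dynamic range.
    raw = (raw - black_level) / (white_level - black_level)
    raw = np.clip(raw, 0.0, 1.0)
    # Gather the four Bayer sub-mosaics (R, G1, G2, B for an RGGB layout).
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.stack([r, g1, g2, b], axis=-1)

# Example with a synthetic 10-bit RAW frame.
frame = np.random.randint(0, 1024, size=(256, 256), dtype=np.uint16)
packed = pack_bayer_to_channels(frame)
print(packed.shape)  # (128, 128, 4)
```

Working on such packed RAW tensors preserves the sensor's full bit depth before any in-camera tone mapping, which is the general motivation for RAW-domain methods like the one described in the abstract.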


Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




Funding information in the publication
This work was supported in part by Fudan University, and in part by the National Natural Science Foundation of China under Grant 12374431.

