A4 Refereed article in a conference publication
Multimodal Sleep Stage and Sleep Apnea Classification Using Vision Transformer: A Multitask Explainable Learning Approach
Authors: Kazemi, Kianoosh; Azimi, Iman; Khine, Michelle; Khayat, Rami N.; Rahmani, Amir M.; Liljeberg, Pasi
Editors: N/A
Conference name: Annual International Conference of the IEEE Engineering in Medicine and Biology Society
Publication year: 2025
Journal: Annual International Conference of the IEEE Engineering in Medicine and Biology Society
Book title: 2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
Volume: 47
ISBN: 979-8-3315-8619-5
eISBN: 979-8-3315-8618-8
ISSN: 2375-7477
eISSN: 2694-0604
DOI: https://doi.org/10.1109/EMBC58623.2025.11252880
Publication's open availability at the time of reporting: No Open Access
Publication channel's open availability: No Open Access publication channel
Web address: https://ieeexplore.ieee.org/document/11252880
Abstract: Sleep is an essential component of human physiology, contributing significantly to overall health and quality of life. Accurate sleep staging and disorder detection are crucial for assessing sleep quality. Studies in the literature have proposed PSG-based approaches and machine-learning methods utilizing single-modality signals. However, existing methods often lack multimodal, multilabel frameworks and classify sleep stages and sleep disorders separately. In this paper, we propose a 1D Vision Transformer for simultaneous classification of sleep stages and sleep disorders. Our method exploits the correlation of sleep disorders with specific sleep stage patterns and identifies a sleep stage and a sleep disorder simultaneously. The model is trained and tested on multimodal, multilabel sensory data, including photoplethysmogram, respiratory flow, and respiratory effort signals. The proposed method achieves an overall accuracy (Cohen's kappa) of 78% (0.66) for five-stage sleep classification and 74% (0.58) for sleep apnea classification. Moreover, we analyzed the encoder attention weights to explain our model's predictions and to investigate how different features influence the model's outputs. The results show that identified patterns, such as respiratory troughs and peaks, contribute more strongly to the final classification.
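The general idea described in the abstract (a shared 1D transformer-style encoder over patchified multimodal signals, feeding two classification heads, with inspectable attention weights) can be illustrated with a toy NumPy sketch. All shapes, the 1 s patch size, the single attention head, and the random untrained weights are illustrative assumptions for exposition only, not the paper's actual model or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, d):
    # Single-head self-attention with random (untrained) projections.
    Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    weights = softmax(q @ k.T / np.sqrt(d), axis=-1)  # patch-to-patch attention
    return weights @ v, weights

# Three synchronized modalities (PPG, respiratory flow, respiratory effort),
# one 30 s epoch at an assumed 64 Hz sampling rate.
fs, epoch_s = 64, 30
signal = rng.normal(size=(3, fs * epoch_s))

# "Patchify" the 1D signals: non-overlapping 1 s windows, modalities stacked
# so each token carries all three channels of one window.
patch = fs
n_patches = signal.shape[1] // patch
patches = signal.reshape(3, n_patches, patch).transpose(1, 0, 2).reshape(n_patches, -1)

d_model = 32
W_embed = rng.normal(size=(patches.shape[1], d_model)) / np.sqrt(patches.shape[1])
tokens = patches @ W_embed                    # (n_patches, d_model)

encoded, attn = attention(tokens, d_model)    # shared encoder output
pooled = encoded.mean(axis=0)                 # mean-pool over patches

# Two task heads on the shared representation: this is the multitask part.
W_stage = rng.normal(size=(d_model, 5))       # 5 sleep stages
W_apnea = rng.normal(size=(d_model, 2))       # apnea vs. no apnea
stage_probs = softmax(pooled @ W_stage)
apnea_probs = softmax(pooled @ W_apnea)

print(stage_probs.shape, apnea_probs.shape)   # (5,) (2,)
```

The `attn` matrix is the kind of object the authors inspect for explainability: row i shows how much each 1 s patch contributed when encoding patch i, so patches covering respiratory troughs or peaks can be checked for above-average weight.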
Funding information in the publication:
This work was partially supported by the Finnish Foundation for Technology Promotion and the Nokia Foundation.