A4 Refereed article in a conference publication

Multimodal Sleep Stage and Sleep Apnea Classification Using Vision Transformer: A Multitask Explainable Learning Approach




Authors: Kazemi, Kianoosh; Azimi, Iman; Khine, Michelle; Khayat, Rami N.; Rahmani, Amir M.; Liljeberg, Pasi

Editors: N/A

Conference name: Annual International Conference of the IEEE Engineering in Medicine and Biology Society

Publication year: 2025

Journal: Annual International Conference of the IEEE Engineering in Medicine and Biology Society

Book title: 2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

Volume: 47

ISBN: 979-8-3315-8619-5

eISBN: 979-8-3315-8618-8

ISSN: 2375-7477

eISSN: 2694-0604

DOI: https://doi.org/10.1109/EMBC58623.2025.11252880

Publication's open availability at the time of reporting: No Open Access

Publication channel's open availability: No Open Access publication channel

Web address: https://ieeexplore.ieee.org/document/11252880


Abstract

Sleep is an essential component of human physiology, contributing significantly to overall health and quality of life. Accurate sleep staging and disorder detection are crucial for assessing sleep quality. Studies in the literature have proposed PSG-based approaches and machine-learning methods that use single-modality signals. However, existing methods often lack multimodal, multilabel frameworks and address sleep stage and sleep disorder classification separately. In this paper, we propose a 1D-Vision Transformer for the simultaneous classification of sleep stages and sleep disorders. Our method exploits the correlation between sleep disorders and specific sleep stage patterns, identifying a sleep stage and a sleep disorder at the same time. The model is trained and tested on multimodal, multilabel sensory data, including photoplethysmogram, respiratory flow, and respiratory effort signals. The proposed method shows an overall accuracy (Cohen's kappa) of 78% (0.66) for five-stage sleep classification and 74% (0.58) for sleep apnea classification. Moreover, we analyzed the encoder attention weights to explain the model's predictions and to investigate the influence of different features on its outputs. The results show that identified patterns, such as respiratory troughs and peaks, contribute more strongly to the final classification.
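The abstract describes a shared Transformer encoder with two classification heads operating on patched 1-D physiological signals. A minimal PyTorch sketch of that general architecture is shown below; all names, the segment length (3000 samples), the patch length, and the hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of a multitask 1-D Vision Transformer:
# three input channels (e.g. PPG, respiratory flow, respiratory effort),
# one shared encoder, and two heads (sleep stage, sleep apnea).
# Segment/patch lengths and model sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultitaskViT1D(nn.Module):
    def __init__(self, n_channels=3, seg_len=3000, patch_len=100,
                 d_model=64, n_heads=4, n_layers=2,
                 n_stages=5, n_apnea=2):
        super().__init__()
        n_patches = seg_len // patch_len
        # Patch embedding: split each 1-D signal into non-overlapping
        # patches and project each patch to a d_model-dim token.
        self.patch_embed = nn.Conv1d(n_channels, d_model,
                                     kernel_size=patch_len, stride=patch_len)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Two task heads share the same encoder representation.
        self.stage_head = nn.Linear(d_model, n_stages)
        self.apnea_head = nn.Linear(d_model, n_apnea)

    def forward(self, x):  # x: (batch, channels, seg_len)
        z = self.patch_embed(x).transpose(1, 2)   # (batch, n_patches, d_model)
        cls = self.cls_token.expand(z.size(0), -1, -1)
        z = torch.cat([cls, z], dim=1) + self.pos_embed
        z = self.encoder(z)
        cls_out = z[:, 0]                         # CLS-token summary
        return self.stage_head(cls_out), self.apnea_head(cls_out)

model = MultitaskViT1D()
stages, apnea = model(torch.randn(2, 3, 3000))   # shapes (2, 5) and (2, 2)
```

In this sketch, both losses (e.g. cross-entropy per head) would be summed during training, which is one common way to realize the multitask objective the abstract describes.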


Funding information in the publication
This work was partially supported by the Finnish Foundation for Technology Promotion and the Nokia Foundation.

