A4 Peer-reviewed article in conference proceedings
Attention-Based Explainable AI for Wearable Multivariate Data: A Case Study on Affect Status Prediction
Authors: Wang, Yuning; Yang, Zhongqi; Azimi, Iman; Rahmani, Amir M.; Liljeberg, Pasi
Editor: N/A
Established name of conference: IEEE International Conference on Body Sensor Networks
Publication year: 2024
Journal: International Conference on Wearable and Implantable Body Sensor Networks
Proceedings title: 2024 IEEE 20th International Conference on Body Sensor Networks (BSN)
Volume: 20
ISBN: 979-8-3315-3015-0
eISBN: 979-8-3315-3014-3
ISSN: 2376-8886
eISSN: 2376-8894
DOI: https://doi.org/10.1109/BSN63547.2024.10780702
URL: https://ieeexplore.ieee.org/document/10780702
Wearable technology enables ubiquitous health monitoring, where multivariate physiological and behavioral data can be captured over time. In healthcare applications, such multivariate time series (MTS) data require techniques to interpret the analysis results. However, existing deep learning models for MTS data analysis often lack interpretability, and current explainable AI (xAI) techniques fail to capture the temporal and inter-variable complexities inherent in MTS. This hinders the trust and integration of these AI-based systems in clinical decision-making. In this paper, we propose an attention-based xAI method to classify and interpret MTS data collected from wearable devices. Our approach leverages self-attention mechanisms and graph attention layers (GAT) to capture both temporal and inter-variable dependencies, providing interpretability at both the temporal and modality levels. We evaluate our method on a longitudinal affect status monitoring case study. The dataset was collected from 21 college students via wearable devices over one year. We train separate models for positive affect (PA) and negative affect (NA) prediction, and compare their performance with a Transformer-based method. Our method achieves robust classification performance, with 78.62% accuracy for PA and 76.30% for NA, while offering transparent explanations of its decisions. These findings highlight the potential of our xAI method for reliable and interpretable MTS classification in healthcare applications.
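The abstract describes combining temporal self-attention with graph attention over the input variables so that attention weights can be read at both the time-step and modality level. The PyTorch sketch below illustrates one possible arrangement of that idea; it is not the authors' implementation, and the layer sizes, pooling choices, class names, and input shapes are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): temporal self-attention per variable,
# followed by a GAT-style attention layer over variables, for MTS classification.
# All dimensions, names, and readout choices are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariableGATLayer(nn.Module):
    """Single-head graph-attention-style layer over the variable (modality) axis.

    Treats each variable's embedding as a node on a fully connected graph and
    returns attention weights usable as modality-level explanations.
    """

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h):                           # h: (batch, n_vars, dim)
        z = self.proj(h)
        n = z.size(1)
        zi = z.unsqueeze(2).expand(-1, -1, n, -1)   # node i repeated along j
        zj = z.unsqueeze(1).expand(-1, n, -1, -1)   # node j repeated along i
        e = F.leaky_relu(self.attn(torch.cat([zi, zj], dim=-1)).squeeze(-1))
        alpha = torch.softmax(e, dim=-1)            # (batch, n_vars, n_vars)
        return torch.bmm(alpha, z), alpha


class AttentionMTSClassifier(nn.Module):
    """Temporal self-attention per variable, then variable-level graph attention."""

    def __init__(self, n_vars, dim=32, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(1, dim)
        self.temporal = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.var_gat = VariableGATLayer(dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                           # x: (batch, seq_len, n_vars)
        b, t, v = x.shape
        # Embed each variable's univariate series independently.
        h = self.embed(x.permute(0, 2, 1).reshape(b * v, t, 1))
        h, t_attn = self.temporal(h, h, h)          # temporal attention weights
        h = h.mean(dim=1).reshape(b, v, -1)         # pool over time per variable
        h, v_attn = self.var_gat(h)                 # inter-variable attention weights
        logits = self.head(h.mean(dim=1))           # pool over variables for logits
        return logits, t_attn, v_attn


# Hypothetical usage: 14 wearable-derived variables over a 24-step window.
model = AttentionMTSClassifier(n_vars=14)
logits, temporal_attn, variable_attn = model(torch.randn(8, 24, 14))
```

In a sketch like this, the returned temporal and variable attention matrices are what would be inspected to explain which time steps and which modalities drove a PA or NA prediction.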