A4 Refereed article in a conference publication

Attention-Based Explainable AI for Wearable Multivariate Data: A Case Study on Affect Status Prediction




Authors: Wang, Yuning; Yang, Zhongqi; Azimi, Iman; Rahmani, Amir M.; Liljeberg, Pasi

Editors: N/A

Conference name: IEEE International Conference on Body Sensor Networks

Publication year: 2024

Journal: International Conference on Wearable and Implantable Body Sensor Networks

Book title: 2024 IEEE 20th International Conference on Body Sensor Networks (BSN)

Volume: 20

ISBN: 979-8-3315-3015-0

eISBN: 979-8-3315-3014-3

ISSN: 2376-8886

eISSN: 2376-8894

DOI: https://doi.org/10.1109/BSN63547.2024.10780702

Web address: https://ieeexplore.ieee.org/document/10780702


Abstract

Wearable technology enables ubiquitous health monitoring, in which multivariate physiological and behavioral data can be captured over time. Such multivariate time series (MTS) data in healthcare applications requires techniques to interpret the analysis results. However, existing deep learning models for MTS data analysis often lack interpretability, and current explainable AI (xAI) techniques fail to capture the temporal and inter-variable complexities inherent in MTS. This hinders the trust and integration of these AI-based systems in clinical decision-making. In this paper, we propose an attention-based xAI method to classify and interpret MTS data collected from wearable devices. Our approach leverages self-attention mechanisms and graph attention layers (GAT) to capture both temporal and inter-variable dependencies, providing interpretability at both the temporal and modality levels. We evaluate our method on a longitudinal affect status monitoring dataset collected from 21 college students via wearable devices over one year. We train separate models for positive affect (PA) and negative affect (NA) prediction, and compare their performance with a Transformer-based method. Our method achieves robust classification performance, with 78.62% accuracy for PA and 76.30% for NA, while offering transparent explanations of its decisions. These findings highlight the potential of our xAI method for reliable and interpretable MTS classification in healthcare applications.
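The combination described in the abstract, self-attention across time steps plus attention across variables (modalities) of a multivariate time series, can be illustrated with a minimal sketch. The PyTorch code below is a hypothetical illustration, not the authors' implementation: the module and parameter names (TemporalModalityAttentionClassifier, n_vars, d_model) are invented here, and the paper's graph attention layer (GAT) is approximated by a standard multi-head attention over the variable axis. The returned attention weights stand in for the temporal- and modality-level explanations the abstract refers to.

```python
# Hypothetical sketch of attention over time and over variables for MTS classification.
# Assumes PyTorch; names and shapes are illustrative, not taken from the paper's code.
import torch
import torch.nn as nn


class TemporalModalityAttentionClassifier(nn.Module):
    def __init__(self, n_vars: int, d_model: int = 64, n_heads: int = 4, n_classes: int = 2):
        super().__init__()
        self.embed = nn.Linear(n_vars, d_model)                  # project each time step
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.var_embed = nn.Linear(1, d_model)                   # per-variable embedding
        self.modality_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, x: torch.Tensor):
        # x: (batch, time, n_vars)
        t = self.embed(x)                                        # (batch, time, d_model)
        t_out, t_weights = self.temporal_attn(t, t, t)           # attention over time steps
        t_feat = t_out.mean(dim=1)                               # pooled temporal feature

        v = self.var_embed(x.mean(dim=1, keepdim=True).transpose(1, 2))  # (batch, n_vars, d_model)
        v_out, v_weights = self.modality_attn(v, v, v)           # attention over variables
        v_feat = v_out.mean(dim=1)                               # pooled modality feature

        logits = self.classifier(torch.cat([t_feat, v_feat], dim=-1))
        # t_weights / v_weights can be inspected as temporal- and modality-level explanations
        return logits, t_weights, v_weights


# Usage example with random data shaped like windowed wearable signals
if __name__ == "__main__":
    model = TemporalModalityAttentionClassifier(n_vars=8)
    x = torch.randn(4, 96, 8)                                    # 4 windows, 96 time steps, 8 signals
    logits, t_w, v_w = model(x)
    print(logits.shape, t_w.shape, v_w.shape)
```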


