A4 Peer-reviewed article in conference proceedings
Emotion Recognition with Minimal Wearable Sensing: Multi-Domain Feature, Hybrid Feature Selection, and Personalized vs. Generalized Ensemble Model Analysis
Authors: Irfan, Muhammad; Nawaz, Anum; Bulbul, Ayse Kosal; Klén, Riku; Subasi, Abdulhamit; Westerlund, Tomi; Chen, Wei
Editor: N/A
Established name of the conference: IEEE International Conference on Bioinformatics and Biomedicine
Year of publication: 2025
Journal: Proceedings (IEEE International Conference on Bioinformatics and Biomedicine)
Title of the proceedings volume: 2025 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
First page: 5447
Last page: 5453
ISBN: 979-8-3315-1558-4
eISBN: 979-8-3315-1557-7
ISSN: 2156-1125
eISSN: 2156-1133
DOI: https://doi.org/10.1109/BIBM66473.2025.11356135
Openness of the publication at the time of recording: Not openly available
Openness of the publication channel: Not an open access channel
Web address: https://ieeexplore.ieee.org/document/11356135
Negative emotions are linked to the onset of neurodegenerative diseases and dementia, yet they are often difficult to detect through observation. Physiological signals from wearable devices offer a promising noninvasive method for continuous emotion monitoring. In this study, we propose a lightweight, resource-efficient machine learning approach for binary emotion classification, distinguishing between negative (sadness, disgust, anger) and positive (amusement, tenderness, gratitude) affective states using only electrocardiography (ECG) signals. The method is designed for deployment in resource-constrained systems, such as Internet of Things (IoT) devices, by reducing battery consumption and cloud data transmission through the avoidance of computationally expensive multimodal inputs. We utilized ECG data from 218 CSV files extracted from four studies in the Psychophysiology of Positive and Negative Emotions (POPANE) dataset, which comprises recordings from 1,157 healthy participants across seven studies. Each file represents a unique subject-emotion pair, and the ECG signals, recorded at 1000 Hz, were segmented into 10-second epochs to reflect real-world usage. Our approach integrates multi-domain feature extraction, selective feature fusion, and a voting classifier. We evaluated it using a participant-exclusive generalized model and a participant-inclusive personalized model. The personalized model achieved the best performance, with an average accuracy of 95.59%, outperforming the generalized model, which reached 69.92% accuracy. Comparisons with other studies on the POPANE and similar datasets show that our approach consistently outperforms existing methods. This work highlights the effectiveness of personalized models in emotion recognition and their suitability for wearable applications that require accurate, low-power, and real-time emotion tracking. Code is available on GitHub.
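To make the evaluation setup concrete, the sketch below shows two ingredients described in the abstract: segmenting a 1000 Hz ECG recording into 10-second epochs, and evaluating a voting ensemble in a participant-exclusive ("generalized") manner, where all epochs from a given participant fall into the same cross-validation fold. This is a minimal illustration with random stand-in features; the paper's actual multi-domain features, feature-selection step, and base learners are not reproduced here, so `segment_ecg` and the chosen estimators are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score

FS = 1000        # ECG sampling rate from the abstract (Hz)
EPOCH_S = 10     # epoch length from the abstract (seconds)

def segment_ecg(signal, fs=FS, epoch_s=EPOCH_S):
    """Split a 1-D ECG recording into non-overlapping fixed-length epochs."""
    n = epoch_s * fs
    k = len(signal) // n            # number of complete epochs
    return signal[: k * n].reshape(k, n)

# Toy stand-in features and labels (NOT the paper's extracted features).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))                 # 60 epochs, 8 features
y = rng.integers(0, 2, size=60)              # 0 = negative, 1 = positive affect
groups = np.repeat(np.arange(6), 10)         # participant ID per epoch

# Soft-voting ensemble; the paper's concrete base classifiers are unspecified here.
clf = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    voting="soft",
)

# Participant-exclusive evaluation: GroupKFold keeps every epoch of a
# participant in one fold, so test-fold subjects are never seen in training.
scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=3))
print(segment_ecg(np.zeros(25_000)).shape)   # (2, 10000)
print(scores)
```

The personalized ("participant-inclusive") setting would instead allow epochs from the same participant in both training and test folds, e.g. via an ordinary stratified split, which is why it typically yields higher accuracy.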
Funding information stated in the publication:
Funded by the European Union (AI4HOPE, 101136769)