A1 Peer-reviewed original article in a scientific journal

A generative multimodal network for facial expression recognition




Authors: Zhao, Yue; Song, Mingjian; Zhang, Qi; Yang, Jiawei; Yoshigoe, Kenji; Tian, Chunwei

Publisher: Elsevier

Publication year: 2026

Journal: Pattern Recognition

Article number: 113518

Volume: 179

Issue: Part A

ISSN: 0031-3203

eISSN: 1873-5142

DOI: https://doi.org/10.1016/j.patcog.2026.113518

Open access status at the time of recording: Not openly available

Openness of the publication channel: Partially open publication channel

URL: https://doi.org/10.1016/j.patcog.2026.113518


Abstract

Deep networks with strong feature extraction abilities have been widely employed in facial expression recognition (FER). However, they focus on structural information derived from data dependencies rather than on facial attributes, which limits the robustness of the obtained models for FER. In this paper, we propose a generative multimodal network (GMNet) for FER. Firstly, GMNet generates and aligns multimodal face images according to face asymmetry and the mirror imaging principle. Secondly, it utilizes parallel networks to learn diverse information from the original and generated multimodal face images, and merges the resulting features to obtain reliable facial expression information. Thirdly, a sparse mechanism further refines the obtained richer facial features to yield more accurate facial expression information and reduce training costs. Finally, a cross loss applies cross-domain restrictions to guarantee the reliability of the multimodal face images and improve facial expression recognition performance. Experimental results show that our GMNet is superior to other popular FER methods. Code for GMNet is available at https://github.com/hellloxiaotian/GMNet.
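The abstract's first step, generating multimodal face images from face asymmetry and the mirror imaging principle, can be pictured as building symmetric views from each half of an aligned face. The sketch below is a minimal illustration of that idea, not the authors' implementation (which is in the linked GitHub repository); the function name `generate_mirror_modalities` and the exact left/right construction are assumptions for illustration only.

```python
import numpy as np

def generate_mirror_modalities(face: np.ndarray):
    """Build two extra 'modalities' from one aligned face image by mirroring
    each half about the vertical midline (a hypothetical reading of the
    paper's face-asymmetry / mirror-imaging idea; details may differ)."""
    h, w = face.shape[:2]
    mid = w // 2
    left, right = face[:, :mid], face[:, mid:]
    # Left half followed by its horizontal flip -> symmetric "left-face" view.
    left_face = np.concatenate([left, left[:, ::-1]], axis=1)
    # Flipped right half followed by the right half -> symmetric "right-face" view.
    right_face = np.concatenate([right[:, ::-1], right], axis=1)
    return left_face, right_face

# Usage: the original image plus the two generated views could feed parallel branches.
face = np.random.rand(224, 224, 3).astype(np.float32)  # stand-in for an aligned face crop
left_face, right_face = generate_mirror_modalities(face)
print(left_face.shape, right_face.shape)  # (224, 224, 3) (224, 224, 3)
```

In this reading, the original image and the two mirrored views would be processed by the parallel networks mentioned in the abstract and their features merged before the sparse refinement step.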


Funding information reported in the publication
This work was supported by Leading Talents in Gusu Innovation and Entrepreneurship [No. ZXL2023170]; and the Basic Research Programs of Taicang 2024 [No. TC2024JC32].
