A4 Refereed article in a conference publication

Orientation-conditioned Facial Texture Mapping for Video-based Facial Remote Photoplethysmography Estimation




Authors Cantrill, Sam; Ahmedt-Aristizabal, David; Petersson, Lars; Suominen, Hanna; Armin, Mohammad Ali

Editors N/A

Conference name IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Publication year 2024

Journal IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Book title 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

First page 354

Last page 363

ISBN 979-8-3503-6548-1

eISBN 979-8-3503-6547-4

ISSN 2160-7508

eISSN 2160-7516

DOI https://doi.org/10.1109/CVPRW63382.2024.00040

Web address https://ieeexplore.ieee.org/document/10677927

Preprint address https://arxiv.org/pdf/2404.09378


Abstract

Camera-based remote photoplethysmography (rPPG) enables contactless measurement of important physiological signals such as pulse rate (PR). However, dynamic and unconstrained subject motion introduces significant variability into the facial appearance in video, confounding the ability of video-based methods to accurately extract the rPPG signal. In this study, we leverage the 3D facial surface to construct a novel orientation-conditioned facial texture video representation that improves the motion robustness of existing video-based facial rPPG estimation methods. Our proposed method achieves a significant 18.2% performance improvement in cross-dataset testing on MMPD over our baseline using the PhysNet model trained on PURE, highlighting the efficacy and generalization benefits of our designed video representation. We demonstrate significant performance improvements of up to 29.6% in all tested motion scenarios in cross-dataset testing on MMPD, even in the presence of dynamic and unconstrained subject motion, emphasizing the benefits of disentangling motion through modeling the 3D facial surface for motion-robust facial rPPG estimation. We validate the efficacy of our design decisions and the impact of different video processing steps through an ablation study. Our findings illustrate the potential strengths of exploiting the 3D facial surface as a general strategy for addressing dynamic and unconstrained subject motion in videos. The code is available at https://samcantrill.github.io/orientation-uv-rppg/.
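The paper's released code specifies the exact pipeline; as an illustrative sketch only, the core idea of resampling facial pixels into a canonical, motion-stabilized UV texture conditioned on surface orientation could look like the following pure-NumPy fragment. All function names, the per-vertex nearest-pixel sampling, the camera-looks-down-negative-z convention, and the assumption of a fixed UV parameterization (e.g., from a fitted 3D face mesh) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Per-vertex normals via area-weighted accumulation of face normals."""
    normals = np.zeros_like(vertices)
    tris = vertices[faces]                        # (F, 3, 3) triangle corners
    fn = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    for i in range(3):                            # scatter face normal to its 3 vertices
        np.add.at(normals, faces[:, i], fn)
    norms = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.clip(norms, 1e-8, None)

def orientation_conditioned_texture(frame, verts_img, verts_3d, faces, uv, size=128):
    """Map facial pixels into a canonical UV texture with an orientation channel.

    frame:     (H, W, 3) video frame
    verts_img: (V, 2) projected 2D vertex positions in image coordinates
    verts_3d:  (V, 3) 3D vertex positions in camera coordinates (assumed given)
    faces:     (F, 3) triangle vertex indices
    uv:        (V, 2) fixed UV parameterization in [0, 1]
    """
    H, W = frame.shape[:2]
    normals = vertex_normals(verts_3d, faces)
    view = np.array([0.0, 0.0, -1.0])             # camera looks down -z (assumption)
    cos = normals @ view                          # per-vertex orientation w.r.t. camera
    visible = cos > 0.0                           # cull back-facing vertices

    # Nearest-pixel sampling of frame colors at projected vertex positions.
    xs = np.clip(verts_img[:, 0].round().astype(int), 0, W - 1)
    ys = np.clip(verts_img[:, 1].round().astype(int), 0, H - 1)
    colors = frame[ys, xs].astype(np.float32)

    # Scatter sampled colors into the canonical UV grid, keeping the
    # orientation value as a 4th channel so downstream models can
    # down-weight obliquely viewed skin regions.
    tex = np.zeros((size, size, 4), np.float32)
    u = np.clip((uv[:, 0] * (size - 1)).round().astype(int), 0, size - 1)
    v = np.clip((uv[:, 1] * (size - 1)).round().astype(int), 0, size - 1)
    for i in np.flatnonzero(visible):
        tex[v[i], u[i], :3] = colors[i]
        tex[v[i], u[i], 3] = cos[i]               # orientation conditioning
    return tex
```

Because the UV parameterization is fixed across frames, head rotation changes only the sampled colors and the orientation channel, not where a given skin patch lands in the texture; this is the sense in which such a representation disentangles rigid motion from the appearance signal used for rPPG estimation.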


Funding information in the publication
This work was supported by the MRFF Rapid Applied Research Translation grant (RARUR000158), the CSIRO AI4M Minimising Antimicrobial Resistance Mission, and an Australian Government Research Training Program (AGRTP) Scholarship.

