A4 Peer-reviewed article in conference proceedings
Two-stage Classification for Detecting Murmurs from Phonocardiograms Using Deep and Expert Features
Authors: Summerton Sara, Wood Danny, Murphy Darcy, Redfern Oliver, Benatan Matt, Kaisti Matti, Wong David C
Editor: N/A
Established conference name: Computing in Cardiology
Year of publication: 2022
Journal: Computing in Cardiology
Proceedings title: Computing in Cardiology 2022
Series name: Computing in Cardiology
Volume: 49
ISSN: 2325-8861
eISSN: 2325-887X
DOI: https://doi.org/10.22489/CinC.2022.322
URL: https://cinc.org/archives/2022/pdf/CinC2022-322.pdf
Self-archived version: https://research.utu.fi/converis/portal/detail/Publication/178544267
Detection of heart murmurs from stethoscope sounds is a key clinical technique used to identify cardiac abnormalities. We describe the creation of an ensemble classifier using both deep and hand-crafted features to screen phonocardiogram recordings, taken at multiple auscultation locations, for heart murmurs and clinical abnormality. The model was created by the team Murmur Mia! for the George B. Moody PhysioNet Challenge 2022.
Methods: Recordings were first passed through a gradient boosting classifier to detect the 'Unknown' label. Assuming these recordings correspond to poor audio quality, we used input features commonly applied to audio-quality assessment. Two further models, a gradient boosting model and an ensemble of convolutional neural networks, were trained using time-frequency features and mel-frequency cepstral coefficients (MFCCs) as inputs, respectively. The model outputs were combined using logistic regression, with bespoke rules to convert individual recording outputs into patient-level predictions.
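The two-stage design described above can be sketched as follows. This is not the authors' code: it uses synthetic feature vectors, substitutes a second gradient boosting model for the paper's CNN ensemble (to keep the sketch dependency-free), and omits the bespoke recording-to-patient rules, so all names and data here are illustrative assumptions.

```python
# Hedged sketch of a two-stage stacked classifier, assuming scikit-learn
# stands in for the paper's models and random vectors stand in for
# audio-quality / MFCC features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                    # placeholder per-recording features
y_unknown = (X[:, 0] > 1.0).astype(int)          # stage 1 target: 'Unknown' label
y_murmur = (X[:, 1] + 0.1 * rng.normal(size=200) > 0).astype(int)

# Stage 1: gradient boosting flags recordings predicted 'Unknown';
# only the remaining recordings go on to murmur classification.
stage1 = GradientBoostingClassifier().fit(X, y_unknown)
keep = stage1.predict(X) == 0

# Stage 2: two base models (the paper pairs a GBM with a CNN ensemble;
# a shallow GBM stands in for the CNN here).
gbm = GradientBoostingClassifier().fit(X[keep], y_murmur[keep])
cnn_stand_in = GradientBoostingClassifier(max_depth=1).fit(X[keep], y_murmur[keep])

# Base-model probabilities are stacked and combined with logistic regression.
meta_X = np.column_stack([
    gbm.predict_proba(X[keep])[:, 1],
    cnn_stand_in.predict_proba(X[keep])[:, 1],
])
combiner = LogisticRegression().fit(meta_X, y_murmur[keep])
preds = combiner.predict(meta_X)
```

The stacking step mirrors the paper's use of logistic regression over the two base models' outputs; in the real pipeline the combined per-recording scores would still pass through the bespoke rules to yield one prediction per patient.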
Results: On the hidden challenge test set, our classifier scored 0.755 on the weighted accuracy metric and 14228 on the clinical outcome metric, placing 9th of 40 and 28th of 39 on the respective challenge leaderboards.
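For context, the weighted accuracy reported above rewards correct classifications in proportion to the clinical importance of the true class. A minimal sketch, assuming the 2022 challenge's murmur weights of Present=5, Unknown=3, Absent=1 (treat the exact weights as an assumption from the challenge description):

```python
# Hedged sketch of a class-weighted accuracy metric: correct predictions
# earn the weight of their true class, normalised by the total weight.
WEIGHTS = {"Present": 5, "Unknown": 3, "Absent": 1}  # assumed challenge weights

def weighted_accuracy(y_true, y_pred, weights=None):
    """Weighted fraction of correct predictions over weighted total."""
    w = weights or WEIGHTS
    numerator = sum(w[t] for t, p in zip(y_true, y_pred) if t == p)
    denominator = sum(w[t] for t in y_true)
    return numerator / denominator

y_true = ["Present", "Unknown", "Absent", "Present"]
y_pred = ["Present", "Absent", "Absent", "Absent"]
score = weighted_accuracy(y_true, y_pred)  # 6/14: only two correct, weights 5 and 1
```

Under this weighting, missing a 'Present' murmur costs five times as much as missing an 'Absent' one, which is why the metric differs from plain accuracy.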
Downloadable publication: This is an electronic reprint of the original article.