A1 Refereed original research article in a scientific journal
Automatic Classification of Strain in the Singing Voice Using Machine Learning
Authors: Liu, Yuanyuan; Mittapalle, Kiran Reddy; Yagnavajjula, Madhu Keerthana; Räsänen, Okko; Alku, Paavo; Ikävalko, Tero; Hakanpää, Tua; Öyry, Aleksi; Laukkanen, Anne-Maria
Publisher: Elsevier BV
Publication year: 2025
Journal: Journal of Voice
Journal name in source: Journal of Voice
ISSN: 0892-1997
eISSN: 1873-4588
DOI: https://doi.org/10.1016/j.jvoice.2025.03.040
Web address: https://www.jvoice.org/article/S0892-1997(25)00134-1/fulltext
Self-archived copy’s web address: https://research.utu.fi/converis/portal/detail/Publication/491612849
Objectives
Classifying strain in the singing voice can help protect professional singers from vocal overuse and support singing training. This study investigates whether machine learning can automatically classify singing voices into two levels of perceived strain. The singing samples represent two genres: classical and contemporary commercial music (CCM).
Methods
A total of 324 singing voice samples from 15 professional normophonic singers (nine female, six male) were analyzed. Nine singers were classical, and six were CCM singers. The samples consisted of syllable strings produced at three to six pitches and three loudness levels. Based on expert auditory-perceptual ratings, the samples were categorized into two strain levels: normal-mild and moderate-severe. Three acoustic feature sets (mel-frequency cepstral coefficients (MFCCs), the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS), and wavelet scattering features) were compared using two classifier models [support vector machine (SVM) and multilayer perceptron (MLP)]. Feature selection was performed using recursive feature elimination, and the Mann-Whitney U test was used to assess the discriminative power of the selected features.
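The pipeline described above (acoustic features, recursive feature elimination, then an SVM or MLP classifier) can be sketched as follows. This is a minimal illustration only: the feature matrix is synthetic stand-in data, not the study's MFCC/eGeMAPS/wavelet scattering features, and the dimensions and parameters are assumptions chosen for the example.

```python
# Hypothetical sketch of the classification pipeline: features -> recursive
# feature elimination (RFE) -> SVM, evaluated with cross-validation.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_features = 324, 20          # 324 samples, as in the study
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)   # 0 = normal-mild, 1 = moderate-severe
X[y == 1, 0] += 1.5                      # make one synthetic feature discriminative

# RFE requires an estimator that exposes feature weights (linear kernel)
selector = RFE(SVC(kernel="linear"), n_features_to_select=5)
clf = make_pipeline(StandardScaler(), selector, SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```

In practice the feature matrix would be extracted from the singing samples (e.g., frame-level MFCCs aggregated per sample), and an MLP could be substituted for the SVM in the same pipeline.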
Results
The highest classification accuracy of 86.1% was achieved using a subset of wavelet scattering features with the MLP classifier. A comparison between individual features showed that the first MFCC coefficient, representing spectral tilt, exhibited the greatest between-class separation.
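The between-class separation of an individual feature, as assessed here with the Mann-Whitney U test, can be illustrated with a short sketch. The values below are synthetic stand-ins, not data from the paper; the group sizes are assumptions for the example.

```python
# Illustration of the Mann-Whitney U test for checking whether a single
# acoustic feature (e.g., the first MFCC coefficient) separates the two
# strain classes. Feature values are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
normal_mild = rng.normal(loc=0.0, scale=1.0, size=160)      # class 1 feature values
moderate_severe = rng.normal(loc=1.0, scale=1.0, size=164)  # class 2 feature values

stat, p = mannwhitneyu(normal_mild, moderate_severe, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3e}")  # a small p-value indicates separation
```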
Conclusion
This study demonstrates that machine learning models using selected acoustic features can automatically classify perceived strain in singing voices with high accuracy. These preliminary findings highlight the potential for larger studies involving more diverse singer groups across different genres.
Downloadable publication: This is an electronic reprint of the original article.
Funding information in the publication:
The study was supported by the Academy of Finland (grant number 356528), awarded for the project “Vocal efficiency and economy in loud classical Operatic and Contemporary Commercial Music (CCM) singing styles compared to loud speech.”