A Quest for Paradigm Coverage: The Story of Nen
Saliha Muradoğlu, Hanna Suominen, Nicholas Evans

Editors: Oleg Serikov, Ekaterina Voloshina, Anna Postnikova, Elena Klyachko, Ekaterina Vylomova, Tatiana Shavrina, Eric Le Ferrand, Valentin Malykh, Francis Tyers, Timofey Arkhangelskiy, Vladislav Mikhailov

Workshop on NLP Applications to Field Linguistics

Publisher: Association for Computational Linguistics (ACL)

2023

Proceedings of the Second Workshop on NLP Applications to Field Linguistics

Pages: 74–85

ISBN: 978-1-959429-60-9

DOI: https://doi.org/10.18653/v1/2023.fieldmatters-1.9

https://aclanthology.org/2023.fieldmatters-1.9/

https://research.utu.fi/converis/portal/detail/Publication/387427538



Abstract: Language documentation aims to collect a representative corpus of a language. Nevertheless, the question of how to quantify the comprehensiveness of the collection persists. We propose leveraging computational modelling to provide a supplementary metric that addresses this question in a low-resource language setting. We apply our proposed methods to the Papuan language Nen, which is actively being described and documented. Given the enormity of the language documentation task, we focus on one subdomain, namely Nen verbal morphology. This study examines four verb types: copula, positional, middle, and transitive. We propose model-based paradigm generation for each verb type as a new way to measure completeness, where accuracy is analogous to the coverage of the paradigm. We contrast paradigm attestation within the corpus (constructed from fieldwork data) with the accuracy of paradigms generated by Transformer models trained for inflection. We extend this analysis by extrapolating from the established learning curve to predict the quantity of data required to generate a complete paradigm correctly. We also explore the correlation between high-frequency morphosyntactic features and model accuracy. We see a positive correlation between high-frequency feature combinations and model accuracy, but this does not always hold: we also observe high accuracy for some low-frequency morphosyntactic features. Our results show that model coverage is significantly higher for the middle and transitive verbs but not the positional verb. This is an interesting finding, since the positional verb paradigm is the smallest of the four.
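The two quantities the abstract contrasts, corpus-based paradigm attestation and a learning-curve extrapolation of data needed for full coverage, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the choice of a power-law curve for the learning curve, and all example numbers are assumptions introduced here.

```python
import math

def paradigm_coverage(attested_cells, total_cells):
    """Fraction of a verb type's paradigm cells attested in the corpus.

    attested_cells: iterable of observed feature combinations (may repeat);
    total_cells: size of the full paradigm for that verb type.
    """
    return len(set(attested_cells)) / total_cells

def fit_power_law(train_sizes, error_rates):
    """Fit error ~ c * n**(-alpha) by least squares in log-log space.

    A power law is one common assumption for inflection learning curves;
    the paper's actual extrapolation method may differ.
    """
    xs = [math.log(n) for n in train_sizes]
    ys = [math.log(e) for e in error_rates]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), -slope  # (c, alpha)

def data_needed(c, alpha, target_error):
    """Training-set size at which predicted error c * n**(-alpha) reaches target."""
    return (c / target_error) ** (1.0 / alpha)
```

For example, with hypothetical error rates of 0.1, 0.05, and 0.025 at 100, 400, and 1,600 training examples, the fit recovers alpha = 0.5, and `data_needed` then predicts how many examples would be required before the model generates the full paradigm at a chosen error tolerance.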

Last updated on 2024-11-26 at 16:56