An Approach to the Frugal Use of Human Annotators to Scale up Auto-coding for Text Classification Tasks




Chen Li'An, Suominen Hanna

Afshin Rahimi, William Lane, Guido Zuccon

Australasian Language Technology Association Workshop

Publisher: Association for Computational Linguistics (ACL)

2021

ALTA 2021 - Proceedings of the 19th Workshop of the Australasian Language Technology Association

Pages: 12-21

ISSN: 1834-7037

https://aclanthology.org/2021.alta-1.2/

https://research.utu.fi/converis/portal/detail/Publication/178631603



Human annotation to establish training data is often a very costly part of natural language processing (NLP) tasks, which has made frugal NLP approaches an important research topic. Many research teams struggle to complete projects with limited funding, labor, and computational resources. Driven by the Move-Step analytic framework theorized in applied linguistics, our study offers a rigorous approach to the frugal use of two human annotators to scale up auto-coding for text classification tasks. We applied the Linear Support Vector Machine algorithm to text classification on a job-ad corpus. Our Cohen's Kappa for inter-rater agreement and Area Under the Curve (AUC) values reached averages of 0.76 and 0.80, respectively. The calculated time required for our human training process was 36 days. The results indicated that even the strategic and frugal use of only two human annotators could enable the efficient training of classifiers with reasonably good performance. This study does not claim that its results are generalizable. Rather, we propose that readers consider the annotation strategies arising from this study only if they fit their own specific research purposes.
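For illustration only, the kind of pipeline described in the abstract can be sketched with scikit-learn: a TF-IDF plus Linear SVM text classifier, Cohen's kappa between two annotators, and AUC on a held-out split. This is not the authors' code; the job-ad snippets, labels, and annotator arrays below are hypothetical placeholders.

# Minimal sketch (assumed setup, not the published implementation).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical job-ad snippets with binary labels (1 = contains the target move/step).
texts = ["Seeking a data analyst with SQL experience",
         "Join our friendly retail team",
         "Machine learning engineer, Python required",
         "Casual barista position available"]
labels = [1, 0, 1, 0]

# Inter-rater agreement between two annotators' labels for the same items.
annotator_a = [1, 0, 1, 0]
annotator_b = [1, 0, 1, 1]
print("Cohen's kappa:", cohen_kappa_score(annotator_a, annotator_b))

# Train/test split, fit the TF-IDF + Linear SVM classifier, and compute AUC
# on the held-out items using the SVM's margin scores.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels)
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(X_train, y_train)
scores = clf.decision_function(X_test)
print("AUC:", roc_auc_score(y_test, scores))

In practice, the annotator-agreement check and the classifier evaluation would be run per label category and averaged, which is one way figures such as the reported 0.76 kappa and 0.80 AUC could be obtained; the exact procedure is described in the paper itself.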

