A4 Peer-reviewed article in conference proceedings

The birth of Romanian BERT




Authors: Stefan Dumitrescu, Andrei-Marius Avram, Sampo Pyysalo

Editors: Trevor Cohn, Yulan He, Yang Liu

Conference name: Empirical Methods in Natural Language Processing

Publication year: 2020

Journal: Annual Meeting of the Association for Computational Linguistics

Proceedings title: Findings of the Association for Computational Linguistics: EMNLP 2020

First page: 4324

Last page: 4328

ISBN: 978-1-952148-90-3

DOI: https://doi.org/10.18653/v1/2020.findings-emnlp.387

URL: https://www.aclweb.org/anthology/2020.findings-emnlp.387/

Self-archived copy: https://arxiv.org/abs/2009.08712


Abstract
Large-scale pretrained language models have become ubiquitous in Natural Language Processing. However, most of these models are available either in high-resource languages, in particular English, or as multilingual models that compromise performance on individual languages for coverage. This paper introduces Romanian BERT, the first purely Romanian transformer-based language model, pretrained on a large text corpus. We discuss corpus composition and cleaning, the model training process, as well as an extensive evaluation of the model on various Romanian datasets. We open-source not only the model itself, but also a repository that contains information on how to obtain the corpus, fine-tune and use this model in production (with practical examples), and how to fully replicate the evaluation process.
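The repository mentioned in the abstract documents practical usage; below is a minimal sketch of what loading and querying the model typically looks like with the Hugging Face transformers library. The model id dumitrescustefan/bert-base-romanian-cased-v1 is the one the authors published on the Hugging Face hub but is not listed in this record, so it should be treated as an assumption here.

    # Minimal sketch: load Romanian BERT via the Hugging Face transformers
    # library and embed a Romanian sentence. The model id is an assumption
    # (not listed in this record); requires transformers and torch installed.
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("dumitrescustefan/bert-base-romanian-cased-v1")
    model = AutoModel.from_pretrained("dumitrescustefan/bert-base-romanian-cased-v1")

    # Tokenize a sample Romanian sentence and run a forward pass.
    inputs = tokenizer("Acesta este un test.", return_tensors="pt")
    outputs = model(**inputs)

    # Contextual token embeddings: shape (batch, sequence length, hidden size).
    print(outputs.last_hidden_state.shape)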

Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.









Last updated on 2024-11-26 at 23:23