A4 Refereed article in a conference publication

Aurora-M: Open Source Continual Pre-training for Multilingual Language and Code

Authors Nakamura, Taishi; Mishra, Mayank; Tedeschi, Simone; Chai, Yekun; Stillerman, Jason T.; Friedrich, Felix; Yadav, Prateek; Laud, Tanmay; Chien, Vu Minh; Zhuo, Terry Yue; Misra, Diganta; Bogin, Ben; Vu, Xuan-Son; Karpinska, Marzena; Dantuluri, Arnav Varma; Kusa, Wojciech; Furlanello, Tommaso; Yokota, Rio; Muennighoff, Niklas; Pai, Suhas; Adewumi, Tosin; Laippala, Veronika; Yao, Xiaozhe; Junior, Adalberto Barbosa; Drozd, Aleksandr; Clive, Jordan; Gupta, Kshitij; Chen, Liangyu; Sun, Qi; Tsui, Ken; Moustafa-Fahmy, Nour; Monti, Nicolo; Dang, Tai; Luo, Ziyang; Bui, Tien-Tung; Navigli, Roberto; Mehta, Virendra; Blumberg, Matthew; May, Victor; Nguyen, Hiep; Pyysalo, Sampo

Editors Rambow, Owen; Wanner, Leo; Apidianaki, Marianna; Al-Khalifa, Hend; Di Eugenio, Barbara; Schockaert, Steven; Darwish, Kareem; Agarwal, Apoorv

Conference name International Conference on Computational Linguistics

Publication year 2025

Book title Proceedings of the 31st International Conference on Computational Linguistics: Industry Track

First page 656

Last page 678

ISBN 979-8-89176-197-1

Publication's open availability at the time of reporting Open Access

Publication channel's open availability Open Access publication channel

Web address https://aclanthology.org/2025.coling-industry.56/

Self-archived copy's web address https://research.utu.fi/converis/portal/detail/Publication/508764398

Self-archived copy's licence CC BY

Self-archived copy's version Publisher's PDF


Abstract

Pretrained language models are an integral part of AI applications, but their high computational cost for training limits accessibility. Initiatives such as BLOOM and StarCoder aim to democratize access to pretrained models for collaborative community development. Despite these efforts, such models encounter challenges such as limited multilingual capabilities, risks of catastrophic forgetting during continual pretraining, and the high costs of training models from scratch, alongside the need to align with AI safety standards and regulatory frameworks. This paper presents Aurora-M, a 15B parameter multilingual open-source model trained on English, Finnish, Hindi, Japanese, Vietnamese, and code. Continually pretrained from StarCoderPlus on 435B additional tokens, Aurora-M surpasses 2T tokens in total training token count. It is the first open-source multilingual model fine-tuned on human-reviewed safety instructions, thus aligning its development not only with conventional red-teaming considerations, but also with the specific concerns articulated in the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. We evaluate Aurora-M across a wide range of tasks and languages, showcasing its robustness against catastrophic forgetting and its superior performance in multilingual settings, particularly in safety evaluations. We open-source Aurora-M and its variants to encourage responsible open-source development of large language models at https://huggingface.co/aurora-m.
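Since the abstract points readers to the open-sourced checkpoints at https://huggingface.co/aurora-m, the following is a minimal usage sketch for loading one of them with the Hugging Face transformers library. This snippet is not from the paper; the repository id "aurora-m/aurora-m-base" is an assumption for illustration, and the actual model names should be taken from the organization page linked above.

    # Minimal sketch: loading an Aurora-M checkpoint with Hugging Face transformers.
    # The repo id "aurora-m/aurora-m-base" is hypothetical; check
    # https://huggingface.co/aurora-m for the released model names.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "aurora-m/aurora-m-base"  # assumed repo id, for illustration only
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Aurora-M is trained on both natural language and code, so a code
    # prompt is a reasonable smoke test.
    prompt = "def fibonacci(n):"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because the model has 15B parameters, device_map="auto" (which requires the accelerate package) is used here to spread the weights across available devices; on smaller hardware a quantized variant would be the more practical choice.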


Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




