D3 Article in a professional conference publication

Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting LLMs Across Languages and Resources




Authors Li, Zihao; Ji, Shaoxiong; Luo, Hengyu; Tiedemann, Jörg

Editors N/A

Conference name Conference on Language Modeling

Publication year 2025

Book title Proceedings of the Second Conference on Language Modeling, COLM 2025

First page 1

Last page 23

Publication's open availability at the time of reporting Open Access

Publication channel's open availability Open Access publication channel

Web address https://openreview.net/pdf?id=mpTIzK4Zca


Abstract

Large Language Models (LLMs) exhibit significant disparities in performance across languages, primarily benefiting high-resource languages while marginalizing underrepresented ones. Continual Pretraining (CPT) has emerged as a promising approach to address this imbalance, although the relative effectiveness of monolingual, bilingual, and code-augmented data strategies remains unclear. This study systematically evaluates 36 CPT configurations involving three multilingual base models across 30+ languages categorized as altruistic, selfish, and stagnant, spanning various resource levels. Our findings reveal three major insights: (1) Bilingual CPT improves multilingual classification but often causes language mixing issues during generation. (2) Including programming code data during CPT consistently enhances multilingual classification accuracy and language modeling capabilities, particularly benefiting low-resource languages, but introduces a trade-off by slightly degrading generation quality. (3) Contrary to prior work, we observe substantial deviations from existing language classifications in terms of their impact on cross-lingual transfer: languages classified as altruistic often negatively affect related languages, selfish languages show conditional and configuration-dependent behavior, and stagnant languages demonstrate surprising adaptability under certain CPT conditions. These nuanced interactions emphasize the complexity of multilingual representation learning, underscoring the importance of systematic studies on generalizable language classification to inform future multilingual CPT strategies.
