Got Compute, but No Data: Lessons From Post-training a Finnish LLM




Authors: Zosa, Elaine; Komulainen, Ville; Pyysalo, Sampo

Editors: Johansson, Richard; Stymne, Sara

Conference: Nordic Conference on Computational Linguistics and Baltic Conference on Human Language Technologies

Year: 2025

Series: NEALT Proceedings Series

In: Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)

Volume: 57

Pages: 826–832

ISBN: 978-9908-53-109-0

ISSN: 1736-8197; 1736-6305

URL: https://aclanthology.org/2025.nodalida-1.81/

Repository: https://research.utu.fi/converis/portal/detail/Publication/506556277



Abstract: As LLMs gain popularity as chatbots and general assistants, methods have been developed to enable LLMs to follow instructions and align with human preferences. These methods have found success in the field, but their effectiveness has not been demonstrated outside of high-resource languages. In this work, we discuss our experiences in post-training an LLM for instruction-following in English and Finnish. We use a multilingual LLM to translate instruction and preference datasets from English to Finnish. We perform instruction tuning and preference optimization in English and Finnish and evaluate the instruction-following capabilities of the model in both languages. Our results show that with a few hundred Finnish instruction samples we can obtain competitive performance in Finnish instruction-following. We also found that although preference optimization in English offers some cross-lingual benefits, we obtain our best results by using preference data from both languages. We release our model, datasets, and recipes under open licenses at https://huggingface.co/LumiOpen/Poro-34B-chat-OpenAssistant.


Funding: This project has received funding from the European Union’s Horizon Europe research and innovation programme under Grant agreement No 101070350.

