arXiv:2511.09213

Pretraining Finnish ModernBERTs

Published on Nov 12, 2025

Abstract

This paper reports on pretraining ModernBERT encoder models in six different sizes, ranging from 51M to 475M parameters, with a focus on limited multilingualism, emphasizing languages relevant to Finland. Our models are competitive with, or superior to, existing multilingual models. They outperform monolingual models on tasks that require a context longer than 512 tokens. We present empirical results on using different data in the final stage of training. The code and models are publicly released.

AI-generated summary

ModernBERT encoder models pretrained in a limited multilingual setting emphasizing languages relevant to Finland are competitive with or superior to existing multilingual models and outperform monolingual models on tasks requiring context longer than 512 tokens.
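
Since the abstract states that the code and models are publicly released, the following is a minimal, hypothetical sketch of how such an encoder could be loaded and run with the Hugging Face transformers library. The model identifier below is a placeholder for illustration only; the actual released checkpoint names are not given on this page.

from transformers import AutoTokenizer, AutoModel

# Placeholder identifier; the real released checkpoints are not named here.
model_id = "example-org/finnish-modernbert-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# ModernBERT-style encoders accept contexts longer than the classic 512-token
# BERT limit, so long inputs need not be truncated at 512 tokens.
text = "Tämä on esimerkkilause. " * 200
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=2048)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)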
