ModernBERT-base-doc_en-Cefr

This model is a fine-tuned version of answerdotai/ModernBERT-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7333
  • F1: 0.8103
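
A minimal inference sketch (not part of the original card) using the standard transformers sequence-classification API; the CEFR label mapping is read from the checkpoint's id2label config rather than hard-coded:

```python
# Minimal sketch: load the fine-tuned checkpoint and classify one document.
# The example text is arbitrary; the label names come from the model config.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "AbdullahBarayan/ModernBERT-base-doc_en-Cefr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])  # e.g. a CEFR level label
```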

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3.6e-05
  • train_batch_size: 3
  • eval_batch_size: 3
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 48
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
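
These settings map onto transformers TrainingArguments roughly as sketched below. The output_dir, evaluation schedule, and the surrounding Trainer wiring are assumptions, since only the hyperparameters above are reported. Note the effective batch size of 3 × 16 = 48 matches the reported total_train_batch_size.

```python
# Sketch of TrainingArguments matching the reported hyperparameters.
# Dataset loading, model instantiation, and Trainer setup are not shown
# in the card and are omitted here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ModernBERT-base-doc_en-Cefr",  # assumed
    learning_rate=3.6e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    seed=42,
    gradient_accumulation_steps=16,  # 3 x 16 = 48 effective train batch
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    eval_strategy="epoch",  # assumed from the per-epoch results table
)
```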

Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|---------------|-------|------|-----------------|--------|
| 28.6059       | 1.0   | 14   | 1.2701          | 0.3637 |
| 18.3081       | 2.0   | 28   | 0.9493          | 0.6123 |
| 8.3371        | 3.0   | 42   | 0.7826          | 0.6860 |
| 4.5689        | 4.0   | 56   | 0.6426          | 0.7304 |
| 2.3147        | 5.0   | 70   | 0.8673          | 0.7149 |
| 1.4668        | 6.0   | 84   | 0.7212          | 0.7819 |
| 0.7954        | 7.0   | 98   | 0.8906          | 0.7633 |
| 0.3336        | 8.0   | 112  | 0.8336          | 0.7877 |
| 0.1416        | 9.0   | 126  | 0.7927          | 0.7921 |
| 0.0155        | 10.0  | 140  | 0.7333          | 0.8103 |
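
The final epoch (step 140) matches the headline Loss and F1 reported above. The card does not state how F1 was averaged; a plausible compute_metrics hook for the Trainer, assuming scikit-learn macro-averaged F1 over the CEFR classes, would look like:

```python
# Hypothetical compute_metrics; "macro" averaging is an assumption,
# as the card does not specify how the F1 column was computed.
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds, average="macro")}
```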

Framework versions

  • Transformers 4.53.1
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.2

Citation

@inproceedings{alva-manchego-etal-2025-findings,
    title = "Findings of the {TSAR} 2025 Shared Task on Readability-Controlled Text Simplification",
    author = "Alva-Manchego, Fernando  and Stodden, Regina  and Imperial, Joseph Marvin  and Barayan, Abdullah  and North, Kai  and Tayyar Madabushi, Harish",
    editor = "Shardlow, Matthew  and Alva-Manchego, Fernando  and North, Kai  and Stodden, Regina  and Saggion, Horacio  and Khallaf, Nouran  and Hayakawa, Akio",
    booktitle = "Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025)",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.tsar-1.8/",
    doi = "10.18653/v1/2025.tsar-1.8",
    pages = "116--130",
    ISBN = "979-8-89176-176-6"
}