TSAR 2025 Shared Task on RCTS (CEFR Evaluators)
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unspecified dataset. Per-epoch results on the evaluation set are reported in the training results table below. Further details about the model, its intended uses and limitations, and its training and evaluation data have not yet been provided.
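Since the card does not yet include usage instructions, the sketch below shows how a sequence-classification checkpoint like this one would typically be loaded and queried with the `transformers` library. The repository id `your-org/modernbert-cefr-evaluator` and the example sentence are placeholders, and the actual label set depends on how the model was trained.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repository id; replace with the actual model id on the Hub.
model_id = "your-org/modernbert-cefr-evaluator"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "The weather was nice, so we walked to the park."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
# id2label comes from the fine-tuned config; for a CEFR evaluator it would
# normally map class indices to levels such as A1-C2.
print(model.config.id2label[predicted_id])
```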
The specific hyperparameters used during training are not listed in this version of the card. The per-epoch training and validation results were as follows:
| Training Loss | Epoch | Step | Validation Loss | F1 |
|---|---|---|---|---|
| 28.6059 | 1.0 | 14 | 1.2701 | 0.3637 |
| 18.3081 | 2.0 | 28 | 0.9493 | 0.6123 |
| 8.3371 | 3.0 | 42 | 0.7826 | 0.6860 |
| 4.5689 | 4.0 | 56 | 0.6426 | 0.7304 |
| 2.3147 | 5.0 | 70 | 0.8673 | 0.7149 |
| 1.4668 | 6.0 | 84 | 0.7212 | 0.7819 |
| 0.7954 | 7.0 | 98 | 0.8906 | 0.7633 |
| 0.3336 | 8.0 | 112 | 0.8336 | 0.7877 |
| 0.1416 | 9.0 | 126 | 0.7927 | 0.7921 |
| 0.0155 | 10.0 | 140 | 0.7333 | 0.8103 |
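For context, per-epoch validation loss and F1 figures like those above are typically produced by the `transformers` `Trainer` with a `compute_metrics` callback. The sketch below illustrates that setup; the toy dataset, the six-way CEFR label space, the batch size, and the weighted-F1 averaging are illustrative assumptions, not the settings actually used for this model.

```python
import numpy as np
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(base_model)
# num_labels=6 assumes one class per CEFR level (A1-C2); adjust to the real label set.
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=6)

# Toy stand-in for the (unspecified) training and evaluation data.
raw = Dataset.from_dict({
    "text": [
        "The cat sat on the mat.",
        "Notwithstanding the aforementioned caveats, the committee deliberated at length.",
    ],
    "label": [0, 5],
})
tokenized = raw.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

def compute_metrics(eval_pred):
    # The card does not state which F1 averaging was used; weighted is a common choice.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds, average="weighted")}

args = TrainingArguments(
    output_dir="modernbert-cefr-evaluator",
    num_train_epochs=10,             # matches the 10 epochs reported in the table
    eval_strategy="epoch",           # evaluate (validation loss + F1) once per epoch
    logging_strategy="epoch",
    per_device_train_batch_size=8,   # illustrative; the actual batch size is not documented
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,          # toy example reuses the same split; use a real dev set in practice
    processing_class=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```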
If you use this model, please cite the TSAR 2025 shared task paper:

@inproceedings{alva-manchego-etal-2025-findings,
title = "Findings of the {TSAR} 2025 Shared Task on Readability-Controlled Text Simplification",
author = "Alva-Manchego, Fernando and Stodden, Regina and Imperial, Joseph Marvin and Barayan, Abdullah and North, Kai and Tayyar Madabushi, Harish",
editor = "Shardlow, Matthew and Alva-Manchego, Fernando and North, Kai and Stodden, Regina and Saggion, Horacio and Khallaf, Nouran and Hayakawa, Akio",
booktitle = "Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025)",
month = nov,
year = "2025",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.tsar-1.8/",
doi = "10.18653/v1/2025.tsar-1.8",
pages = "116--130",
ISBN = "979-8-89176-176-6"
}
Base model: [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)