---
language:
- ca
- de
- en
- es
- eu
- gl
- it
- ko
- pt
language_bcp47:
- pt-BR
license: cc-by-sa-4.0
tags:
- evaluation
- multilingual
pretty_name: Multi-LMentry
task_categories:
- question-answering
configs:
  - config_name: ca
    data_files: ca/*.jsonl
  - config_name: de
    data_files: de/*.jsonl
  - config_name: en
    data_files: en/*.jsonl
  - config_name: es
    data_files: es/*.jsonl
  - config_name: eu
    data_files: eu/*.jsonl
  - config_name: gl
    data_files: gl/*.jsonl
  - config_name: it
    data_files: it/*.jsonl
  - config_name: ko
    data_files: ko/*.jsonl
  - config_name: pt_br
    data_files: pt_br/*.jsonl
dataset_info:
  features:
    - name: id
      dtype: string
    - name: input
      dtype: string
    - name: metadata
      dtype: string
    - name: canary
      dtype: string
  splits:
    - name: test
---


# Multi-LMentry

This dataset card provides documentation for **Multi-LMentry**, a multilingual benchmark designed for evaluating large language models (LLMs) on fundamental, elementary-level tasks across nine languages. It is the official dataset release accompanying the EMNLP 2025 paper "Multi-LMentry: Can Multilingual LLMs Solve Elementary Tasks Across Languages?".

## Dataset Details

### Dataset Description

Multi-LMentry is a multilingual extension of [LMentry (Efrat et al., 2023)](https://aclanthology.org/2023.findings-acl.666/), which evaluates LLMs on tasks that are trivial for humans but often challenging for models. It covers **nine languages**:  
- Catalan  
- German  
- Spanish  
- Basque  
- Galician  
- Korean  
- Italian  
- Brazilian Portuguese  
- English (original LMentry dataset)

The dataset enables systematic evaluation of core model abilities across low-, mid-, and high-resource languages. Tasks were recreated manually with the help of native speakers, ensuring linguistic and cultural appropriateness rather than relying on direct translation.

### Dataset Sources

- **Paper:** [Multi-LMentry: Can Multilingual LLMs Solve Elementary Tasks Across Languages? (EMNLP 2025)](https://aclanthology.org/2025.emnlp-main.1731/)
- **GitHub Repository:** [langtech-bsc/multi_lmentry](https://github.com/langtech-bsc/multi_lmentry), containing the code to run the evaluation on Multi-LMentry

## Uses

The dataset is intended for:  
- **Evaluation of LLMs** on elementary reasoning and understanding tasks.  
- **Cross-lingual comparisons**, especially between high-resource and low-resource languages.  
- **Diagnostics / unit tests** of fundamental model abilities.  

It is **not intended** for training language models directly.  
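
Each record also carries a `canary` field (see the schema in the YAML header). Assuming it follows the usual canary-string convention for benchmark decontamination (an assumption based on common practice, not stated on this card), a minimal sketch for checking whether a text corpus contains the benchmark might look like this:

```python
from datasets import load_dataset

# Minimal decontamination sketch: collect the canary strings and check
# whether any of them appears verbatim in a candidate training text.
ds = load_dataset("BSC-LT/multi_lmentry", data_files="en/*.jsonl")["train"]
canaries = {ex["canary"] for ex in ds if ex["canary"]}

def contains_canary(text: str) -> bool:
    """Return True if any canary string appears verbatim in `text`."""
    return any(c in text for c in canaries)
```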

## Dataset Structure

- The dataset is organized by **language folders**.  
- Inside each folder, there is **one JSONL file per task** (see the example record after this list).  
- Each JSONL file contains one example per line, with the input prompts and expected outputs for that task.  
- Tasks include simple sentence construction, contextual word choice, alphabetic reasoning, etc.  
- Some tasks are language-specific (e.g., rhyming words are excluded where not applicable).  
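
For illustration, a parsed record might look like the following (all values below are made up to show the schema; the four top-level fields are strings, and `metadata` is itself a JSON-encoded string):

```python
import json

# Hypothetical line from en/bigger_number.jsonl (values are illustrative).
line = (
    '{"id": "bigger_number_en_1", '
    '"input": "Q: Which number is bigger, 53 or 28?", '
    '"metadata": "{\\"answer\\": \\"53\\", \\"n1\\": 53, \\"n2\\": 28, \\"template_id\\": 0}", '
    '"canary": "..."}'
)

record = json.loads(line)
task_info = json.loads(record["metadata"])  # metadata is JSON inside a string
print(record["input"], "->", task_info["answer"])
```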

## How to Use

```python
from datasets import load_dataset
import json

# Load the Spanish "bigger_number" task
ds = load_dataset(
    "BSC-LT/multi_lmentry",
    "es",
    data_files="es/bigger_number.jsonl"
)["train"]

# Access first example
example = ds[0]
print("Input:", example["input"])

# Convert metadata from string to dictionary
metadata = json.loads(example["metadata"])
print("Metadata:", metadata)

# Access the answer from metadata
answer = metadata.get("answer")
print("Answer:", answer)
``` 

**Notes**:

- The `metadata` field contains task-specific information, including the answer. Its structure varies depending on the task; for example:
  - Multiple-choice tasks may include a list of distractors and the correct answer index.
  - Open-ended tasks, like "ends_with_letter", may only include task-specific metadata such as the target letter, without a predefined answer.
  - Other fields (e.g., num_digits, n1, n2, template_id) may differ depending on the task type.
- Each JSONL file corresponds to a specific task; you can load multiple tasks by specifying multiple data_files, as shown in the sketch below.
- Evaluation: Multi-LMentry includes manually crafted regexes for each task to automatically check answers. These evaluation scripts are available in the [GitHub repository](https://github.com/langtech-bsc/multi_lmentry) and are ready to use for systematic assessment of model outputs.
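
For instance, to load more than one task in a single call (the file names below are assumptions based on the task names mentioned above; adjust them to the actual files in each language folder):

```python
from datasets import load_dataset

# data_files accepts a list of files (or a dict mapping split names to
# lists), so several tasks can be combined into one dataset.
ds = load_dataset(
    "BSC-LT/multi_lmentry",
    data_files=["en/bigger_number.jsonl", "en/ends_with_letter.jsonl"],
)["train"]

print(len(ds), ds.column_names)
```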

## Dataset Creation

### Curation Rationale

The motivation is to provide a **systematic, multilingual benchmark** for assessing whether LLMs can perform **basic reasoning tasks** that humans—even with only elementary proficiency—find trivial. This is crucial since many evaluations today focus on high-level reasoning while overlooking core capabilities.

### Source Data

#### Data Collection and Processing

- Data was **manually created** in each language, rather than translated from English.  
- Native speakers were involved to ensure correctness, cultural relevance, and avoidance of ambiguity or bias.  
- Tasks were adapted to respect **linguistic characteristics**, such as orthography, morphology, or alphabet differences.

#### Who are the source data producers?

- **Native speakers** of the target languages, who carefully designed and validated the tasks.  
- Task designs follow the original LMentry methodology but were recreated independently for each language.

## Acknowledgements

We gratefully acknowledge the support of Future AI Research ([PNRR MUR project PE0000013-FAIR](https://fondazione-fair.it/en/)).
 
The authors gratefully acknowledge the support of the AI Factory IT4LIA project and the CINECA award FAIR_NLP under the ISCRA initiative for granting access to high-performance computing resources.

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública and Plan de Recuperación, Transformación y Resiliencia - Funded by EU – NextGenerationEU within the framework of the project ILENIA with references 2022/TL22/00215337, 2022/TL22/00215336 and 2022/TL22/00215335, and within the framework of the project Desarrollo Modelos ALIA.

This work has been promoted and financed by the Generalitat de Catalunya through the Aina project. 

## License Information

[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)

## Citation

### Bibtex

```bibtex
@inproceedings{moroni-etal-2025-multi,
    title = "Multi-{LM}entry: Can Multilingual {LLM}s Solve Elementary Tasks Across Languages?",
    author = "Moroni, Luca  and
      Aula-Blasco, Javier  and
      Conia, Simone  and
      Baucells, Irene  and
      Perez, Naiara  and
      Su{\'a}rez, Silvia Paniagua  and
      Sall{\'e}s, Anna  and
      Ostendorff, Malte  and
      Falc{\~a}o, J{\'u}lia  and
      Son, Guijin  and
      Gonzalez-Agirre, Aitor  and
      Navigli, Roberto  and
      Villegas, Marta",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.1731/",
    doi = "10.18653/v1/2025.emnlp-main.1731",
    pages = "34114--34145",
    ISBN = "979-8-89176-332-6"
}
```