size_categories:
- 10M<n<100M
language:
- en
- tr
- fa
- zh
- it
configs:
- config_name: chunk-0
  data_files:
  - split: train
    path: train_data_0*-8.jsonl
- config_name: chunk-1
  data_files:
  - split: train
    path: train_data_1*-8.jsonl
- config_name: chunk-2
  data_files:
  - split: train
    path: train_data_2*-8.jsonl
- config_name: chunk-3
  data_files:
  - split: train
    path: train_data_3*-8.jsonl
- config_name: chunk-4
  data_files:
  - split: train
    path: train_data_4*-8.jsonl
- config_name: chunk-5
  data_files:
  - split: train
    path: train_data_5*-8.jsonl
- config_name: chunk-6
  data_files:
  - split: train
    path: train_data_6*-8.jsonl
- config_name: chunk-7
  data_files:
  - split: train
    path: train_data_7*-8.jsonl
This dataset contains the model's training data, detokenized and presented in the exact order in which the model saw it during training. The data is partitioned into 8 chunks (chunk-0 through chunk-7) according to the GPU rank that generated each record. Each chunk consists of detokenized text files in JSON Lines format (.jsonl).
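The shards of one chunk can be streamed locally with the standard library alone. The sketch below is a minimal example, assuming the files sit in the working directory and each `.jsonl` line is a self-contained JSON object; the schema of the records (field names inside each object) is not specified by this card:

```python
import glob
import json

def iter_chunk(rank: int, pattern: str = "train_data_{rank}*-8.jsonl"):
    """Yield one JSON record per line from every shard of the given GPU-rank chunk.

    Shards are visited in sorted filename order; the default glob mirrors the
    per-chunk path patterns declared in the configs above.
    """
    for path in sorted(glob.glob(pattern.format(rank=rank))):
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:  # skip blank lines defensively
                    yield json.loads(line)
```

Alternatively, because each chunk is declared as a named config, the Hugging Face `datasets` library can load one chunk at a time, e.g. `load_dataset("<repo_id>", "chunk-0", split="train")`, where `<repo_id>` stands for this dataset's repository id.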