Dataset metadata:
- Tasks: Text Generation
- Sub-tasks: language-modeling
- Languages: German
- Size: 10M - 100M
Update `size_categories` metadata and add GitHub repository link
#2 opened by nielsr (HF Staff)
This PR addresses two improvements to the German Commons dataset card:
- Updates the `size_categories` metadata field from `100M<n<1B` to `100B<n` to accurately reflect that the dataset contains 154.56 billion tokens.
- Adds a link to the `llmdata` GitHub repository (https://github.com/coral-nlp/llmdata), the framework used to construct the dataset, enhancing reproducibility as mentioned in the paper.
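For context, `size_categories` lives in the YAML frontmatter at the top of the dataset card's README. A minimal sketch of what the described change amounts to (the surrounding fields are illustrative, not taken from the actual card):

```yaml
---
# Dataset card frontmatter (illustrative excerpt)
language:
- de
task_categories:
- text-generation
size_categories:
- 100B<n   # previously: 100M<n<1B
---
```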
lgienapp changed pull request status to merged.