allegro-lab committed on
Commit b22c340 · verified · 1 Parent(s): cfa7a45

ADD full dataset card

Files changed (1)
README.md +182 -1
README.md CHANGED
@@ -1,3 +1,184 @@
  ---
  license: cc-by-4.0
- ---
+ language:
+ - en
+ size_categories:
+ - 100B<n<1T
+ pretty_name: DCLM Baseline 500B Tokens (Decontaminated)
+ tags:
+ - language-modeling
+ - pretraining
+ - memorization-research
+ - hubble
+ ---
+
+ # DCLM Baseline 500B Tokens (Decontaminated)
+
+ ## Dataset Description
+
+ This dataset is a **decontaminated subset** of the [DCLM-Baseline](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) corpus, specifically prepared for the **Hubble** memorization research project. The dataset has been carefully processed to remove overlap with memorization evaluation data and subsampled to around **500 billion tokens** of English text.
+
+ This corpus serves as the foundational training data for all Hubble models, providing a clean baseline for studying memorization phenomena in large language models while attempting to remove confounding effects from contamination.
+
+ ### Dataset Summary
+
+ - **Total Size**: ~500 billion tokens
+ - **Language**: English
+ - **Source**: Decontaminated DCLM-Baseline corpus
+ - **Purpose**: Training language models for memorization research
+ - **License**: CC-BY-4.0 (inherited from DCLM Baseline)
+
+ ## Data Revisions
+
+ We provide multiple revisions of the training corpus corresponding to different Hubble models:
+
+ | Revision | Description | Effective Token Count | Models Trained |
+ |-----------------|-------------|-------------|----------|
+ | `standard` | Full 500B token corpus | 500B | `hubble-{1/8}b-{100/500}b_toks-*-standard-*` |
+ | `perturbed-500b` | Same as `standard` with perturbation data inserted across the 500B tokens used in training | 500B | `hubble-{1/8}b-500b_toks-perturbed-*` |
+ | `perturbed-100b` | Same as `standard` with perturbation data inserted across the first 100B tokens used in training | 100B | `hubble-{1/8}b-100b_toks-perturbed-*` and `hubble-1b-100b_toks-*_depth-perturbed-*` |
+ | `perturbed-100b-paraphrased` | Same as `perturbed-100b` but with the paraphrased variants of MMLU and YAGO biographies | 100B | `hubble-{1/8}b-100b_toks-paraphrased-perturbed-*` |
+
+ We do not release the corpora for the Timing and Interference experiments, but these can be reproduced from the provided `standard` revision and the tokenized perturbation data.
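+
+ As a convenience, here is a minimal sketch of pulling a specific revision with `huggingface_hub`. The `repo_id` below is an assumption based on this card's directory name, so verify it on the dataset page; see the Access Methods section for the authoritative download instructions.
+
+ ```python
+ # Minimal sketch (not the official workflow): download one revision of this corpus.
+ # The repo_id is an assumption based on this card; verify it on the dataset page.
+ from huggingface_hub import snapshot_download
+
+ local_path = snapshot_download(
+     repo_id="allegrolab/dclm-baseline-500b_toks",  # assumed identifier
+     repo_type="dataset",
+     revision="perturbed-100b",  # or: standard, perturbed-500b, perturbed-100b-paraphrased
+     allow_patterns=["*.idx", "*.md5sum.txt"],  # drop this filter to also fetch the ~1 TB of shards
+ )
+ print(local_path)
+ ```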
+
+ ## Dataset Structure
+
+ The dataset repository contains the following structure:
+
+ ```
+ dclm-baseline-500b_toks/
+ ├── tokenized/                                # (only in main) Tokenized perturbation data
+ ├── tokenized_paraphrase/                     # (only in main) Tokenized perturbation data with paraphrased YAGO and MMLU
+ ├── *-bin.md5sum.txt                          # MD5 checksums for tokenized corpus (bin file)
+ ├── standard_text_document.bin.zstd.part_**   # Shards of the compressed tokenized corpus (~22 GB each)
+ ├── standard_text_document.idx                # Index file for tokenized corpus (8.25 GB)
+ ├── *_perturbation_info.json                  # (only in perturbed revisions) Perturbation metadata (260 MB)
+ ├── *_perturbation_viz_docs.jsonl             # (only in perturbed revisions) Visualization documents (9.29 MB)
+ ├── *_test_indexmap_*_doc_idx.npy             # Test index mapping - doc indices (1.65 MB)
+ ├── *_test_indexmap_*_sample_idx.npy          # Test index mapping - sample indices (2.14 MB)
+ ├── *_test_indexmap_*_shuffle_idx.npy         # Test index mapping - shuffle indices (1.07 MB)
+ ├── *_train_indexmap_*_doc_idx.npy            # Train index mapping - doc indices (1.65 MB)
+ ├── *_train_indexmap_*_sample_idx.npy         # Train index mapping - sample indices (2.14 MB)
+ ├── *_train_indexmap_*_shuffle_idx.npy        # Train index mapping - shuffle indices (1.07 MB)
+ ├── *_valid_indexmap_*_doc_idx.npy            # Validation index mapping - doc indices (1.65 MB)
+ ├── *_valid_indexmap_*_sample_idx.npy         # Validation index mapping - sample indices (2.14 MB)
+ └── *_valid_indexmap_*_shuffle_idx.npy        # Validation index mapping - shuffle indices (1.07 MB)
+ ```
+
+ ### File Types
+
+ - **`.bin.zstd.part_*`**: Compressed data archives split into multiple parts. These need to be concatenated and decompressed to obtain the tokenized dataset (`*.bin`, ~1 TB uncompressed); see the sketch under Access Methods below
+ - **`.idx`**: Index files recording the document boundaries in the tokenized corpus
+ - **`perturbation_info.json`**: Metadata to identify the insertion position of the perturbation data
+ - **`perturbation_viz_docs.jsonl`**: Sample of training sequences with inserted perturbation data
+ - **`*_{train|valid|test}_indexmap_{num_samples}ns_{seq_length}sl_{seed}s_packedpi_ac_{doc|sample|shuffle}_idx.npy`**: NumPy arrays containing doc/sample/shuffle index mappings for a training run using `num_samples` training sequences, `seq_length` tokens per sequence, and `seed` as the random seed for shuffling. Useful for reproducing the exact training order of sequences (see the sketch after this list)
+ - **`.md5sum.txt`**: Checksum files for data integrity verification
+ - **`tokenized/`**: Directory containing tokenized versions of the perturbation datasets
+ - **`tokenized_paraphrase/`**: Directory containing tokenized paraphrase variations of the perturbation datasets
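+
+ As an illustration of how the index-map files can be used, the sketch below loads the three train `.npy` arrays with NumPy. The local path is hypothetical, and the exact interpretation of the arrays should be checked against the Hubble training code; they follow a Megatron-style doc/sample/shuffle layout.
+
+ ```python
+ # Minimal sketch: inspect the train index maps of a downloaded revision.
+ # `data_dir` is a hypothetical local path; the glob patterns follow the
+ # file-name pattern documented above.
+ from glob import glob
+ import numpy as np
+
+ data_dir = "dclm-baseline-500b_toks"  # hypothetical local path
+
+ doc_idx = np.load(glob(f"{data_dir}/*_train_indexmap_*_doc_idx.npy")[0])
+ sample_idx = np.load(glob(f"{data_dir}/*_train_indexmap_*_sample_idx.npy")[0])
+ shuffle_idx = np.load(glob(f"{data_dir}/*_train_indexmap_*_shuffle_idx.npy")[0])
+
+ print(doc_idx.shape, sample_idx.shape, shuffle_idx.shape)
+ # Under a Megatron-style reading, shuffle_idx[i] gives the i-th training sample,
+ # and sample_idx/doc_idx map that sample back to document positions in the
+ # tokenized corpus, which is what makes the exact training order reproducible.
+ ```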
+
+ ### Access Methods
+
+ Refer to our [README](https://github.com/allegro-lab/hubble?tab=readme-ov-file#training-corpora) for instructions on downloading and preparing the corpus.
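+
+ The linked README is the authoritative guide. Purely as a sketch of the concatenate-and-decompress step described under File Types, assuming the shards have already been downloaded into the working directory and that the `zstandard` Python package is installed:
+
+ ```python
+ # Minimal sketch: reassemble the compressed shards, decompress them into the
+ # .bin file, and verify it against the published MD5 checksum.
+ # Assumes the standard_text_document.bin.zstd.part_* files are in the current
+ # directory and that `pip install zstandard` has been run.
+ import glob
+ import hashlib
+ import shutil
+ import zstandard
+
+ parts = sorted(glob.glob("standard_text_document.bin.zstd.part_*"))
+
+ # 1) Concatenate the shards into a single compressed file.
+ with open("standard_text_document.bin.zstd", "wb") as out:
+     for part in parts:
+         with open(part, "rb") as f:
+             shutil.copyfileobj(f, out)
+
+ # 2) Stream-decompress into the tokenized .bin file (~1 TB uncompressed).
+ dctx = zstandard.ZstdDecompressor()
+ with open("standard_text_document.bin.zstd", "rb") as src:
+     with open("standard_text_document.bin", "wb") as dst:
+         dctx.copy_stream(src, dst)
+
+ # 3) Compare the MD5 of the result with the value in *-bin.md5sum.txt.
+ md5 = hashlib.md5()
+ with open("standard_text_document.bin", "rb") as f:
+     while chunk := f.read(1 << 24):
+         md5.update(chunk)
+ print(md5.hexdigest())
+ ```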
+
+ ## Dataset Creation
+
+ ### Source Data
+
+ The dataset is derived from [DCLM-Baseline](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0), which is built from **CommonCrawl** web scrapes processed with:
+ - **Language identification** to retain English content
+ - **Refined filtering** for quality and safety
+ - **Extensive deduplication** to remove near-duplicates
+
+ ### Data Processing
+
+ 1. **Subsampling**: We use a subset of DCLM to retain around 500B tokens. The source files used are listed [here](https://github.com/allegro-lab/hubble/blob/main/scripts/dclm_files.txt). Note that we divided `global-shard_01_of_10` into `global-shard_01.0_of_10` and `global-shard_01.1_of_10` for ease of processing.
+
+ 2. **Decontamination**: Systematic removal of text overlapping with [Hubble evaluation benchmarks](https://huggingface.co/collections/allegrolab/hubble-datasets) using [infini-gram](https://infini-gram.io/), as described in [this doc](https://github.com/allegro-lab/hubble/blob/main/scripts/decontamination/README.md). A toy illustration of the idea follows this list. Candidate documents for decontamination include:
+    - Test sets (PopQA, MMLU, HellaSwag, PIQA, WinoGrande, Ellie, MUNCH)
+    - Passages (Wikipedia, Gutenberg)
+    - Paraphrases (MRPC, PAWS)
+    - Biographies (Synthetic YAGO, ECtHR)
+    - Chat logs (Personachat)
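+
+ The linked decontamination doc is the definitive description of this step. Purely to illustrate the general idea (this is not the Hubble pipeline, which relies on infini-gram), a naive n-gram overlap check over whitespace tokens could look like the following sketch; the choice of `n = 13` is an arbitrary example value.
+
+ ```python
+ # Illustrative only: flag a training document that shares any n-gram with
+ # evaluation text. The actual decontamination uses infini-gram as described
+ # in the linked doc; this toy version just conveys the underlying idea.
+ def ngrams(text: str, n: int = 13) -> set[tuple[str, ...]]:
+     toks = text.lower().split()
+     return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
+
+ def is_contaminated(train_doc: str, eval_texts: list[str], n: int = 13) -> bool:
+     doc_grams = ngrams(train_doc, n)
+     return any(doc_grams & ngrams(t, n) for t in eval_texts)
+
+ # Toy usage:
+ print(is_contaminated("the quick brown fox jumps over the lazy dog " * 3,
+                       ["an unrelated evaluation passage"]))
+ ```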
+
+ ## Uses
+
+ ### Direct Use
+
+ This dataset is intended for pretraining language models for memorization research. The clean training data provides the foundation for the Hubble model suite. The dataset is released to support further research on memorization, mechanistic interpretability, training dynamics, and reproducibility.
+
+ ### Out-of-Scope Use
+
+ This dataset should **NOT** be used for:
+ - **Production language models** (research-focused, may contain biases)
+ - **Commercial applications** without understanding license implications
+ - **Safety-critical systems** (inherits web data biases and risks)
+
+ ## Bias, Risks, and Limitations
+
+ ### Known Biases
+
+ **Inherited from Web Data:**
+ - **Geographic bias**: Overrepresentation of content from certain regions
+ - **Temporal bias**: Reflects internet content from specific time periods
+ - **Platform bias**: Overrepresentation of certain websites and platforms
+
+ **Language and Cultural Bias:**
+ - **English-centric**: Only English content retained
+ - **Socioeconomic bias**: Overrepresentation of content creators with internet access
+
+ ### Risks
+
+ Certain revisions of the dataset explicitly contain private information and copyrighted material. We therefore recommend against using this dataset for commercial purposes or for general-use language models.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ - **Hubble Research Team**: Johnny Tian-Zheng Wei*, Ameya Godbole*, Mohammad Aflah Khan*, Ryan Wang, Xiaoyuan Zhu, James Flemings, Nitya Kashyap
+ - **Institutions**: University of Southern California, Max Planck Institute for Software Systems
+ - **Based on**: DCLM corpus by ML Foundations
+
+ ### Licensing Information
+
+ This dataset is distributed under the **CC-BY-4.0** (Creative Commons Attribution 4.0 International) license, inherited from the original DCLM-Baseline corpus. See [license details](https://creativecommons.org/licenses/by/4.0/) for full terms.
+
+ ### Citation Information
+
+ If you use this dataset in your research, please cite both the Hubble project and the original DCLM work:
+
+ ```bibtex
+ @misc{wei2025hubblemodelsuiteadvance,
+   title={Hubble: a Model Suite to Advance the Study of LLM Memorization},
+   author={Johnny Tian-Zheng Wei and Ameya Godbole and Mohammad Aflah Khan and Ryan Wang and Xiaoyuan Zhu and James Flemings and Nitya Kashyap and Krishna P. Gummadi and Willie Neiswanger and Robin Jia},
+   year={2025},
+   eprint={2510.19811},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2510.19811},
+ }
+
+ @misc{li2025datacomplmsearchgenerationtraining,
+   title={DataComp-LM: In search of the next generation of training sets for language models},
+   author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and Saurabh Garg and Rui Xin and Niklas Muennighoff and Reinhard Heckel and Jean Mercat and Mayee Chen and Suchin Gururangan and Mitchell Wortsman and Alon Albalak and Yonatan Bitton and Marianna Nezhurina and Amro Abbas and Cheng-Yu Hsieh and Dhruba Ghosh and Josh Gardner and Maciej Kilian and Hanlin Zhang and Rulin Shao and Sarah Pratt and Sunny Sanyal and Gabriel Ilharco and Giannis Daras and Kalyani Marathe and Aaron Gokaslan and Jieyu Zhang and Khyathi Chandu and Thao Nguyen and Igor Vasiljevic and Sham Kakade and Shuran Song and Sujay Sanghavi and Fartash Faghri and Sewoong Oh and Luke Zettlemoyer and Kyle Lo and Alaaeldin El-Nouby and Hadi Pouransari and Alexander Toshev and Stephanie Wang and Dirk Groeneveld and Luca Soldaini and Pang Wei Koh and Jenia Jitsev and Thomas Kollar and Alexandros G. Dimakis and Yair Carmon and Achal Dave and Ludwig Schmidt and Vaishaal Shankar},
+   year={2025},
+   eprint={2406.11794},
+   archivePrefix={arXiv},
+   primaryClass={cs.LG},
+   url={https://arxiv.org/abs/2406.11794},
+ }
+ ```
+
+ ### Contact
+
+ For questions about this dataset:
+ - **GitHub Issues**: [Hubble Repository](https://github.com/allegro-lab/hubble)
+ - **Project Website**: [https://allegro-lab.github.io/hubble/](https://allegro-lab.github.io/hubble/)
+ - **Research Team**: Contact through institutional affiliations
+
+ ### Related Resources
+
+ - **Hubble Models**: [allegrolab Collections](https://huggingface.co/allegrolab/collections)
+ - **Perturbation Datasets**: [Hubble Datasets Collection](https://huggingface.co/collections/allegrolab/hubble-datasets)
+ - **Original DCLM**: [mlfoundations/dclm-baseline-1.0](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0)
+ - **Project Documentation**: [Hubble README](https://github.com/allegro-lab/hubble)