Formats: parquet · Languages: English · Libraries: Datasets, Dask
Commit 66c1bcf · verified · committed by rdiehlmartinez · Parent: 65f6857

Updating preprocessing explanation

Files changed (1)
  1. README.md +17 -14
README.md CHANGED
@@ -2,29 +2,32 @@
 license: apache-2.0
 language:
 - en
- pretty_name: 'Pico Dataset: Pre-tokenized, Pre-shuffled Dolma'
 size_categories:
 - 100B<n<1T
 ---
- ## The Pico Dataset

- A pre-tokenized, pre-shuffled version of [Dolma](https://huggingface.co/datasets/allenai/dolma), the high-quality text corpus from AI2.

 ### Overview

- The Pico dataset simplifies training by providing:
- - Pre-tokenized text in chunks of 2048 tokens, using the [OLMo Tokenizer](https://huggingface.co/allenai/OLMo-7B-0724-hf/blob/main/tokenizer_config.json)
- - Pre-shuffled data for consistent training
- - Streaming-friendly format
- - 420B tokens total (perfect for 200K steps at batch size 1024)

- ### Benefits

- - **Storage Efficient**: No need to download the full 10TB Dolma dataset
- - **Memory Efficient**: Stream data directly with `load_dataset(..., streaming=True)`
- - **Reproducible**: All models see identical data in identical order
- - **Fast**: Skip tokenization during training
- - **Simple**: Minimal boilerplate code needed

 ### Usage

 
 license: apache-2.0
 language:
 - en
+ pretty_name: 'Pretokenized Dolma: Pre-tokenized, Pre-shuffled Dolma'
 size_categories:
 - 100B<n<1T
 ---
+ ## The Pretokenized Dolma Dataset

+ A pre-tokenized, pre-shuffled version of [Dolma](https://huggingface.co/datasets/allenai/dolma), the high-quality text corpus from AI2. This dataset is designed to be plug-and-play with the pico-train library.

 ### Overview

+ Key Features:
+ - Tokenized with [allenai/OLMo-7B-0724-hf](https://huggingface.co/allenai/OLMo-7B-0724-hf), a BPE tokenizer with a vocabulary size of 50280
+ - Sequence length: 2049 tokens (2048 + 1 for next-token prediction)
+ - Sharded into 10,000 Parquet files (~78 MB each)
+ - 420B tokens in total (perfect for training a model for 200K steps at batch size 1024)
+ - Ready for streaming via `datasets.load_dataset(..., streaming=True)` (see the example after this list)
+ - Pre-shuffling ensures that the order in which data is shown to models is consistent across training runs
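
For example, a minimal streaming setup might look like the sketch below. The repository id `pico-lm/pretokenized-dolma` is a placeholder here; substitute the actual id shown at the top of this dataset page.

```python
from datasets import load_dataset

# NOTE: "pico-lm/pretokenized-dolma" is a placeholder repository id --
# substitute the actual id shown at the top of this dataset page.
dataset = load_dataset(
    "pico-lm/pretokenized-dolma",
    split="train",
    streaming=True,  # stream shards instead of downloading everything up front
)

# Each example is one pre-tokenized sequence stored under "input_ids".
example = next(iter(dataset))
print(len(example["input_ids"]))  # expected: 2049 (2048 tokens + 1 for next-token prediction)
```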

+ ### How it was built
+ We first downloaded the full Dolma corpus and selected a random 30% subset for preprocessing. Using the OLMo tokenizer, the text was tokenized and chunked into sequences of 2049 tokens. Documents are separated by an end-of-sequence (<eos>) token.

+ After tokenization, we shuffled and evenly sampled from the token stream to create 100 uniform shards. These were then further divided into 10,000 smaller shards to support fast loading and parallel training. Only full-length sequences are retained to ensure consistency across samples.
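
The exact preprocessing is defined by the scripts linked below; the snippet here is only a rough sketch of the tokenize-and-chunk step described above, assuming the Hugging Face `transformers` tokenizer for `allenai/OLMo-7B-0724-hf` and omitting the 30% subsampling and sharding steps:

```python
from transformers import AutoTokenizer

SEQ_LEN = 2049  # 2048 tokens plus 1 extra so targets can be shifted by one position

# Rough sketch only -- the released preprocessing scripts are the reference.
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0724-hf")

def chunk_documents(documents):
    """Tokenize raw documents, separate them with <eos>, and emit fixed-length chunks."""
    buffer = []
    for text in documents:
        buffer.extend(tokenizer(text, add_special_tokens=False)["input_ids"])
        buffer.append(tokenizer.eos_token_id)  # documents are separated by <eos>
        while len(buffer) >= SEQ_LEN:
            yield {"input_ids": buffer[:SEQ_LEN]}
            buffer = buffer[SEQ_LEN:]
    # Any trailing partial chunk is dropped so every sequence has the same length.
```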
+
+ The dataset is stored as Parquet files, each containing token sequences under the key `input_ids`.
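
Because each shard is a standalone Parquet file, individual shards can also be inspected directly, for example with `pyarrow` (the file name below is illustrative, not an actual shard name):

```python
import pyarrow.parquet as pq

# Inspect a single shard directly; the file name here is illustrative.
table = pq.read_table("train-00000-of-10000.parquet", columns=["input_ids"])
print(table.num_rows)                             # number of sequences in this shard
print(len(table.column("input_ids")[0].as_py()))  # 2049 token ids per sequence
```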
+
+ We release the exact scripts used to create this dataset in our [pico-lm/pico-dataset](https://github.com/pico-lm/pico-dataset) GitHub repo.

 ### Usage