dataset_info:
  features:
    - name: input_ids
      sequence: int32
    - name: idx
      dtype: int64
  splits:
    - name: train
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: apache-2.0
language:
  - en
pretty_name: 'Pretokenized Dolma: Pre-tokenized, Pre-shuffled Dolma'
size_categories:
  - 100B<n<1T

The Pretokenized Dolma Dataset

A pre-tokenized, pre-shuffled version of Dolma, the high-quality text corpus from AI2. This dataset is designed to be plug-and-play with the pico-train library.

Overview

Key Features:

  • Tokenized with the allenai/OLMo-7B-0724-hf tokenizer, a BPE tokenizer with a vocabulary size of 50280
  • Sequence length: 2049 tokens (2048 + 1 for next-token prediction; see the sketch after this list)
  • Sharded into 10,000 Parquet files (~78MB each)
  • ~420B tokens in total (enough to train a model for 200K steps at batch size 1024 with 2048-token sequences)
  • Ready for streaming via datasets.load_dataset(..., streaming=True)
  • Pre-shuffling ensures that the order in which data is shown to models is consistent across training runs
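
Each row stores 2049 token ids so that a 2048-token input and its shifted labels can be taken from the same row. A minimal sketch of that slicing (illustrative only, not part of pico-train):

# Illustrative: split one 2049-token row into model inputs and next-token labels.
row = {"input_ids": list(range(2049))}   # stand-in for a real example
input_ids = row["input_ids"][:-1]        # first 2048 tokens fed to the model
labels = row["input_ids"][1:]            # same tokens shifted left by one
assert len(input_ids) == len(labels) == 2048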

How it was built

We first downloaded the full Dolma corpus and selected a random 30% subset for preprocessing. Using the OLMo tokenizer, the text was tokenized and chunked into sequences of 2049 tokens, with consecutive documents separated by an end-of-sequence (EOS) token.
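
A rough sketch of this tokenize-and-chunk step is below. The helper is hypothetical and only illustrates the logic; the released pipeline lives in pico-lm/pico-dataset.

# Hypothetical sketch of the tokenize-and-chunk step, not the released scripts.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0724-hf")
SEQ_LEN = 2049  # 2048 tokens + 1 for next-token prediction

def chunk_documents(documents):
    """Tokenize documents, join them with EOS tokens, and yield full 2049-token chunks."""
    buffer = []
    for text in documents:
        buffer.extend(tokenizer(text)["input_ids"])
        buffer.append(tokenizer.eos_token_id)  # separate consecutive documents with EOS
        while len(buffer) >= SEQ_LEN:
            yield buffer[:SEQ_LEN]
            buffer = buffer[SEQ_LEN:]
    # any trailing partial chunk is dropped, so only full-length sequences remain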

After tokenization, we shuffled and evenly sampled from the token stream to create 100 uniform shards, which were then further divided into 10,000 smaller shards to support fast loading and parallel training. Only full-length sequences were retained to ensure consistency across samples.
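
The sketch below shows one way the shard-writing step could look with pyarrow; the shard count and file-name pattern mirror this dataset's layout, but the code itself is illustrative, not the released pipeline.

# Hypothetical sketch: distribute token sequences across fixed-count Parquet shards.
import pyarrow as pa
import pyarrow.parquet as pq

def write_shards(sequences, num_shards=10_000, out_dir="data"):
    """Round-robin already-shuffled (idx, input_ids) pairs into num_shards Parquet files."""
    shards = [[] for _ in range(num_shards)]
    for i, (idx, input_ids) in enumerate(sequences):
        shards[i % num_shards].append({"input_ids": input_ids, "idx": idx})
    for shard_id, rows in enumerate(shards):
        table = pa.Table.from_pylist(rows)
        pq.write_table(table, f"{out_dir}/train-{shard_id:05d}-of-{num_shards:05d}.parquet")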

The dataset is stored as Parquet files, each containing token sequences under the key input_ids.
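
Because the shards are plain Parquet, a single downloaded file can also be inspected directly, for example with pyarrow (the local path below is hypothetical):

# Illustrative: inspect one locally downloaded shard with pyarrow.
import pyarrow.parquet as pq

table = pq.read_table("data/train-00000-of-10000.parquet")  # hypothetical local path
print(table.column_names)                   # expected: ['input_ids', 'idx']
print(len(table["input_ids"][0].as_py()))   # expected: 2049 token ids per row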

We release the exact scripts used to create this dataset in our pico-lm/pico-dataset GitHub repo.

Usage

from datasets import load_dataset
dataset = load_dataset("pico-lm/pretokenized-dolma", streaming=True)
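
Streaming returns an iterable dataset, so examples can be pulled lazily without downloading all shards. A short illustrative continuation:

# Illustrative: pull a couple of examples from the streamed train split.
from itertools import islice

train = dataset["train"]
for example in islice(train, 2):
    print(example["idx"], len(example["input_ids"]))  # each row holds 2049 token ids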