# cellarc_100k_meta
CellARc 100k Meta is the metadata-rich companion to the lightweight CellARc split, retaining every field emitted during generation while staying Hugging Face friendly.
## Data quick facts
- Vocabulary: cell values use integer labels 0–5 (alphabet size up to 6).
- Max list length: every `train`, `query`, or `solution` list has at most 21 entries (median 13).
- Flattened full task: concatenating all lists in an episode yields at most 252 integers (median 156).
- JSON keys: `id`, `train[].input`, `train[].output`, `query`, `solution`, `meta`, `rule_table`.
## Episode schema sample

```
{
  "id": "…",
  "train": [
    {"input": [0, 1, …], "output": [2, 0, …]},  # each list length ≤ 21
    …
  ],
  "query": [3, 0, …],      # length ≤ 21
  "solution": [4, 5, …],   # length ≤ 21
  "meta": { … },           # full provenance + analytics
  "rule_table": { … }      # base64-encoded CA lookup
}
```
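The quick facts above translate directly into invariants you can check on any episode. Below is a minimal sketch of such a validator; the `validate_episode` helper and the toy `episode` dict are illustrative, not part of the dataset's tooling.

```python
# Sketch: validate an episode against the documented constraints
# (cell values 0-5, each list <= 21 entries, flattened total <= 252).

def validate_episode(ep):
    lists = [pair["input"] for pair in ep["train"]]
    lists += [pair["output"] for pair in ep["train"]]
    lists += [ep["query"], ep["solution"]]
    assert all(0 <= v <= 5 for lst in lists for v in lst), "values must be 0-5"
    assert all(len(lst) <= 21 for lst in lists), "each list has at most 21 entries"
    assert sum(len(lst) for lst in lists) <= 252, "flattened episode fits in 252 ints"
    return True

# Hypothetical in-memory episode, standing in for a real JSONL record.
episode = {
    "train": [{"input": [0, 1, 2], "output": [2, 0, 1]}],
    "query": [3, 0, 4],
    "solution": [4, 5, 1],
}
print(validate_episode(episode))  # True
```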
## Dataset details
cellarc_100k_meta augments the lightweight cellarc_100k release with the complete per-episode metadata that was captured during dataset generation. The Parquet assets (data/*.parquet) are identical between both packages so models can be trained interchangeably; the distinction lives in the JSONL representation:
- `cellarc_100k/data/*.jsonl` → minimal supervision fields (`id`, `train`, `query`, `solution`).
- `cellarc_100k_meta/data/*.jsonl` → full records containing the metadata, coverage diagnostics, and rule tables used to produce each episode.
## Contents

```
cellarc_100k_meta/
├── data/
│   ├── train.{jsonl,parquet}
│   ├── val.{jsonl,parquet}
│   ├── test_interpolation.{jsonl,parquet}
│   └── test_extrapolation.{jsonl,parquet}
├── data_files.json
├── dataset_stats.json
├── features.json            # identical to the lightweight package
└── README.md                # this file
```
Because the JSONL retains the full metadata, the files are substantially larger (≈4.6 × the lightweight versions) but remain line-delimited for streaming.
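Line-delimited JSON means episodes can be consumed one record at a time without loading a whole split into memory. A minimal stdlib sketch (the file path is illustrative; here a `StringIO` stands in for a real file handle):

```python
# Sketch: stream a JSONL split one episode at a time using only the stdlib.
import io
import json

def iter_episodes(fh):
    """Yield one parsed episode per non-empty JSONL line."""
    for line in fh:
        line = line.strip()
        if line:
            yield json.loads(line)

# In practice you would open e.g. cellarc_100k_meta/data/train.jsonl;
# a StringIO with two toy records keeps this sketch self-contained.
demo = io.StringIO('{"id": "a", "query": [1, 2]}\n{"id": "b", "query": [3]}\n')
ids = [ep["id"] for ep in iter_episodes(demo)]
print(ids)  # ['a', 'b']
```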
## Additional fields
Each JSON line mirrors the original generation payload:
- `meta`: structured description of the episode (`alphabet_size`, `radius`, `steps`, `window`, train/query span descriptors, construction, family, Langton λ statistics, entropy bins, coverage metrics, morphology signals, seeds, wrap flags, etc.).
- `rule_table`: explicit lookup table for the cellular automaton rule, encoded as base64 (matching the copy nested inside `meta`).
- `id`: injected fingerprint alias for convenience (`meta["fingerprint"]`).
The nested objects (e.g. `coverage`, `morphology`, `train_spans`) remain untouched, so downstream tooling can parse them without modification.
For the canonical supervision tensors (`train`, `query`, `solution`), refer to the `cellarc_100k` README.
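Since `rule_table` is shipped as base64, recovering the lookup table is a single decode. The sketch below assumes one byte per neighborhood, indexed in base-`alphabet_size` order, so the table holds `alphabet_size ** (2 * radius + 1)` entries; consult the generation code for the authoritative byte layout.

```python
# Hedged sketch: decode the base64 rule_table into a flat lookup list.
# ASSUMPTION: one byte per neighborhood, k**(2r+1) entries total.
import base64

def decode_rule_table(b64, alphabet_size, radius):
    raw = base64.b64decode(b64)
    expected = alphabet_size ** (2 * radius + 1)
    assert len(raw) == expected, "table should cover every neighborhood"
    return list(raw)

# Toy example: alphabet 2, radius 1 -> 8 entries (an elementary CA table).
table_b64 = base64.b64encode(bytes([0, 1, 1, 1, 0, 1, 1, 0])).decode()
print(decode_rule_table(table_b64, alphabet_size=2, radius=1))
# [0, 1, 1, 1, 0, 1, 1, 0]
```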
## Working with the metadata
```python
from datasets import load_dataset

data_files = {
    split: f"artifacts/datasets/cellarc_100k_meta/data/{split}.jsonl"
    for split in ("train", "val", "test_interpolation", "test_extrapolation")
}
full = load_dataset("json", data_files=data_files)

episode = full["train"][0]
print(episode["id"], episode["meta"]["lambda"], episode["meta"]["coverage"]["fraction"])
print(len(episode["meta"]["train_spans"]), "train windows, context =", episode["meta"]["train_context"])
```
Use cases for the Meta package include:
- Analyzing curriculum difficulty via λ/entropy bins or morphology descriptors.
- Filtering subsets by construction (`cycle`, `unrolled`, `hybrid`) or family (`random`, `totalistic`, …).
- Reconstructing the exact CA rule during evaluation (via `rule_table`).
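The filtering use case reduces to predicates over the `meta` fields. With Hugging Face `datasets` this maps onto `Dataset.filter`; the sketch below works over plain dicts so that the field access is the only assumption, and the `subset` helper plus the toy records are illustrative.

```python
# Sketch: select episodes by construction and/or family from `meta`.
episodes = [
    {"id": "e1", "meta": {"construction": "cycle", "family": "random"}},
    {"id": "e2", "meta": {"construction": "unrolled", "family": "totalistic"}},
    {"id": "e3", "meta": {"construction": "cycle", "family": "totalistic"}},
]

def subset(eps, construction=None, family=None):
    """Keep episodes matching every provided criterion."""
    keep = eps
    if construction is not None:
        keep = [e for e in keep if e["meta"]["construction"] == construction]
    if family is not None:
        keep = [e for e in keep if e["meta"]["family"] == family]
    return keep

print([e["id"] for e in subset(episodes, construction="cycle")])  # ['e1', 'e3']
```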
## Shared statistics
`dataset_stats.json`, `features.json`, and the Parquet files are byte-for-byte identical to those shipped with `cellarc_100k`, ensuring consistent training/evaluation splits and schema definitions across both packages.
## Reproduction & license
Generated via:

```shell
python scripts/build_hf_dataset.py --overwrite
```
License: inherits the repository’s license (specify here if a dedicated dataset license is selected). Please cite the CellARc project when using this dataset.