---
language:
  - en
license: mit
annotations_creators:
  - machine-generated
size_categories:
  - 100K<n<1M
task_categories:
  - other
  - text-generation
  - visual-question-answering
  - image-to-image
  - reinforcement-learning
task_ids:
  - language-modeling
  - visual-question-answering
  - image-inpainting
pretty_name: CellARC 100k Meta
tags:
  - cellular-automata
  - arc
  - reasoning
---

# cellarc_100k_meta

CellARC 100k Meta is the metadata‑rich variant of the CellARC benchmark introduced in Lžičař, M. (2025), *CellARC: Measuring Intelligence with Cellular Automata*. It contains exactly the same episodes and splits as cellarc_100k, with byte‑identical Parquet files; the JSONL files retain the full per‑episode metadata (rule tables, coverage diagnostics, morphology descriptors, sampling parameters, etc.). Each episode exposes five support pairs plus a held‑out query/solution pair.

## Data quick facts

- Alphabet size k ∈ [2, 6]; window size W ∈ {3, 5, 7}; radius r ∈ {1, 2, 3}; steps t ∈ {1, 2, 3} (≈95% have t = 1).
- Values (digits) are integers in 0..k-1 per episode; across the full dataset the union of symbols is {0, 1, 2, 3, 4, 5}.
- Five training exemplars per episode (`len(train) == 5`).
- Sample length L ∈ [5, 21] (median 11). Episode total length is 12 × L (the five train pairs contribute ten sequences, plus query and solution), median 132.
- JSON layout: `id`, `train[].input`, `train[].output`, `query`, `solution`, plus a rich `meta` object.
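The 12 × L arithmetic above can be sanity-checked directly on a JSONL line. A minimal sketch, assuming `input`/`output`/`query`/`solution` are sequences (strings or lists of ints) of equal per-episode length; `check_episode` is an illustrative helper, not part of the dataset tooling:

```python
import json

def check_episode(line: str) -> int:
    """Validate the shape of one CellARC episode (a JSONL line); return L."""
    ep = json.loads(line)
    assert len(ep["train"]) == 5                  # five support pairs
    L = len(ep["query"])                          # per-episode sample length
    seqs = [p["input"] for p in ep["train"]] + [p["output"] for p in ep["train"]]
    seqs += [ep["query"], ep["solution"]]
    assert all(len(s) == L for s in seqs)         # all 12 sequences share length L
    assert sum(len(s) for s in seqs) == 12 * L    # episode total length (median 132)
    return L
```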

## Contents

```
cellarc_100k_meta/
├── data/
│   ├── train.{jsonl,parquet}
│   ├── val.{jsonl,parquet}
│   ├── test_interpolation.{jsonl,parquet}
│   └── test_extrapolation.{jsonl,parquet}
├── subset_ids/
├── data_files.json
├── dataset_stats.json
├── features.json
├── LICENSE
└── CITATION.cff
```

Parquet assets are byte‑identical to cellarc_100k; the JSONL files retain the enriched payloads.

## Additional metadata

Each JSON line augments the supervision fields with:

- `meta`: alphabet size, radius, steps, window size, train/query spans, family/construction labels, Langton's λ and entropy, coverage breakdowns (including `query_window_coverage_weighted`, `query_window_coverage_unique`, `query_window_avg_depth`), morphology descriptors, seeds, and more.
- `rule_table`: base64‑encoded CA lookup table (also nested under `meta["rule_table"]`).
- `id`: deterministic fingerprint (alias of `meta["fingerprint"]`).
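A sketch of decoding the rule table, assuming one byte per neighbourhood entry with k^(2r+1) entries for a radius-r rule over a k-symbol alphabet (an assumption about the encoding; check `features.json` for the authoritative schema). `decode_rule_table` is an illustrative helper:

```python
import base64

def decode_rule_table(b64: str, k: int, r: int) -> list[int]:
    """Decode a base64 CA lookup table into a flat list of next-state symbols.

    Assumes one byte per neighbourhood, i.e. k**(2*r + 1) entries for a
    radius-r rule -- verify against features.json before relying on this.
    """
    table = list(base64.b64decode(b64))
    expected = k ** (2 * r + 1)
    if len(table) != expected:
        raise ValueError(f"got {len(table)} entries, expected {expected}")
    return table
```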

## Split summary

| split | episodes | parquet bytes |
| --- | ---: | ---: |
| train | 95,317 | 12,378,645 |
| val | 1,000 | 128,117 |
| test_interpolation | 1,000 | 128,271 |
| test_extrapolation | 1,000 | 130,303 |

Parquet sizes are identical to cellarc_100k. JSONL files are larger due to the metadata; see `data_files.json` in this repository for exact JSONL byte counts.

## Subset splits and ID lists

- Predefined 100‑episode subsets are included for all splits as `data/{train,val,test_interpolation,test_extrapolation}_100.{jsonl,parquet}`.
- The exact episode selections are recorded under `subset_ids/` as `{split}_100.txt` (one ID per line), enabling reproducible sampling and efficient evaluation workflows.
- Motivation: small, fixed subsets allow quick iteration and cut evaluation costs with API‑metered large language models and other pay‑per‑call services, while remaining representative of the full splits.
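Materialising one of these subsets from a full split can be done with the stdlib alone. A minimal sketch using the file layout above; `load_subset` is an illustrative helper:

```python
import json

def load_subset(jsonl_path: str, ids_path: str) -> list[dict]:
    """Return only the episodes whose id appears in a subset_ids/*.txt list."""
    with open(ids_path) as f:
        keep = {line.strip() for line in f if line.strip()}  # one ID per line
    episodes = []
    with open(jsonl_path) as f:
        for line in f:
            ep = json.loads(line)
            if ep["id"] in keep:
                episodes.append(ep)
    return episodes

# e.g. load_subset("data/val.jsonl", "subset_ids/val_100.txt")
```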

## Coverage & dynamics

- Coverage fraction (global): mean 0.402, min 0.069, max 0.938.
- Langton's λ (global): mean 0.565, min 0.016, max 1.000.
- Avg. cell entropy (global): mean 1.110 bits, max 2.585.
- Steps: t=1 (95.3%), t=2 (4.6%), t=3 (0.1%).
- Window size: W=3 (74.1%), W=5 (13.3%), W=7 (12.6%).
- Radius: r=1 (78.7%), r=2 (8.7%), r=3 (12.5%).
- Families (share of episodes): random 25.3%, totalistic 24.8%, outer‑totalistic 18.7%, outer‑inner totalistic 18.7%, threshold 11.9%, linear mod(k) 0.7%.
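Taking the standard definition of Langton's λ — the fraction of rule-table entries whose output is not the quiescent state (assumed here to be symbol 0; the dataset's exact convention should be confirmed against `meta`) — a minimal sketch:

```python
def langton_lambda(table: list[int], quiescent: int = 0) -> float:
    """Fraction of lookup-table entries mapping to a non-quiescent symbol.

    For a decoded flat rule table, lambda = 0 means every neighbourhood
    maps to the quiescent state; lambda = 1 means none do.
    """
    return sum(v != quiescent for v in table) / len(table)
```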

Per-split details are in `dataset_stats.json`.

## Loading examples

```python
from datasets import load_dataset

# Parquet (recommended)
parquet_files = {
    split: f"data/{split}.parquet"
    for split in ("train", "val", "test_interpolation", "test_extrapolation")
}
ds = load_dataset("parquet", data_files=parquet_files)

# JSONL, to access the metadata columns
jsonl_files = {split: f"data/{split}.jsonl" for split in parquet_files}
meta = load_dataset("json", data_files=jsonl_files)
example = meta["train"][0]
print(example["id"], example["meta"]["lambda"], example["meta"]["query_window_coverage_weighted"])
```

When loading from the Hub, replace the local paths with the dataset ID, e.g. `load_dataset("mireklzicar/cellarc_100k_meta", split="train")`.

## Relationship to cellarc_100k

- Identical Parquet files and split membership.
- The JSONL files keep full provenance instead of the supervision‑only layout.

## Reproduction

See the cellarc_100k README for the exact `resample_highcov.py`, `split_pool.py`, and `build_hf_dataset.py` commands used in this refresh.


## License & citation

- License: MIT (repository default).
- Cite the CellARC project when reporting results (see `CITATION.cff`).