
elizaos/babylon-game-data

Dataset Description

Complete Babylon game data for reinforcement learning and offline simulation.

Version: 1.0.0
Collected: 2025-11-16T04:18:42.175Z
Game Worlds: 2
Agent Trajectories: 20
Benchmarks: 4

What's Included

1. Complete Game Worlds

  • Prediction market scenarios
  • 30-day timelines with events
  • NPC conversations and interactions
  • Feed posts and social dynamics
  • Ground truth outcomes

2. Agent Trajectories

  • Complete agent decision sequences
  • LLM calls (prompts and responses)
  • Game environment state at each step
  • Actions taken and outcomes
  • Rewards and ground truth

3. Benchmark Results

  • Model performance evaluations
  • Comparison to baselines
  • Detailed metrics
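
As a rough illustration of how benchmark records might be consumed, the sketch below aggregates results per model. The field names (modelId, metrics.totalPnl, metrics.accuracy) are assumptions inferred from the data files and should be verified before use:

# Hedged sketch: summarize benchmark results per model.
# Field names (modelId, metrics) are assumptions, not a documented API.
from collections import defaultdict

def summarize_benchmarks(benchmarks):
    per_model = defaultdict(list)
    for b in benchmarks:
        per_model[b["modelId"]].append(b["metrics"])
    for model, metrics in per_model.items():
        avg_accuracy = sum(m["accuracy"] for m in metrics) / len(metrics)
        total_pnl = sum(m["totalPnl"] for m in metrics)
        print(f"{model}: accuracy={avg_accuracy:.2f}, totalPnl={total_pnl}")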

Data Organization

By Month

by-month/
  2025-10.json  - October 2025 data
  2025-11.json  - November 2025 data
  2025-12.json  - December 2025 data
  ...

Each month file contains:

  • Game worlds generated that month
  • Agent trajectories from that month
  • Benchmark results from that month
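
A minimal sketch of fetching and inspecting a single month file with the huggingface_hub client; the filename follows the by-month/ layout above, and the top-level keys (worlds, trajectories, benchmarks) are assumed to mirror the contents listed above:

# Hedged sketch: download one month's file from the Hub and read it as JSON.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="elizaos/babylon-game-data",
    filename="by-month/2025-11.json",  # assumed path, per the by-month/ layout above
    repo_type="dataset",
)

with open(path) as f:
    month_data = json.load(f)

# Assumed top-level keys, mirroring the list of monthly contents above.
print(len(month_data.get("worlds", [])), "worlds")
print(len(month_data.get("trajectories", [])), "trajectories")
print(len(month_data.get("benchmarks", [])), "benchmarks")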

Offline Simulation

This dataset enables offline, faster-than-real-time simulation:

# Download the dataset (Python)
from datasets import load_dataset
dataset = load_dataset("elizaos/babylon-game-data")

# Load into the Babylon offline simulator (run as a shell command)
bun run scripts/run-offline-simulation.ts \
  --data=path/to/downloaded/data.json \
  --fast-forward \
  --agent=my-agent

Use Cases

  1. RL Training - Train agents on historical gameplay
  2. Model Evaluation - Test agents on past scenarios
  3. Offline Development - Develop without the live system
  4. Research - Analyze agent behavior and game dynamics
  5. Faster Testing - Run simulations at high speed

Data Format

Game World

{
  "worldId": "...",
  "month": "2025-11",
  "question": "Will Bitcoin reach $100k?",
  "outcome": true,
  "timeline": [ /* 30 days of events */ ],
  "npcs": [ /* NPC data */ ],
  "events": [ /* All events */ ],
  "feedPosts": [ /* Social feed */ ]
}
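
As a small example of working with world records, the sketch below groups prediction-market questions by their ground-truth outcome; field names follow the example above and should be treated as assumptions:

# Hedged sketch: group game worlds by ground-truth outcome.
def worlds_by_outcome(worlds):
    grouped = {True: [], False: []}
    for world in worlds:
        grouped[bool(world["outcome"])].append(world["question"])
    return grouped

# Hypothetical usage with a loaded month file:
# worlds_by_outcome(month_data["worlds"])
# -> {True: ["Will Bitcoin reach $100k?"], False: [...]}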

Agent Trajectory

{
  "trajectoryId": "...",
  "month": "2025-11",
  "steps": [
    {
      "environment_state": { /* game state */ },
      "llm_calls": [ /* agent decisions */ ],
      "action": { /* what agent did */ },
      "reward": 50
    }
  ],
  "totalReward": 1500,
  "finalPnL": 1500
}
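
For offline RL training, a trajectory like the one above can be flattened into (state, action, reward) transitions. A minimal sketch, assuming each step carries the fields shown in the example:

# Hedged sketch: flatten a trajectory into (state, action, reward) transitions
# for offline RL. Field names follow the trajectory example above.
def to_transitions(trajectory):
    transitions = []
    for step in trajectory["steps"]:
        transitions.append((
            step["environment_state"],  # game state at this step
            step["action"],             # what the agent did
            step["reward"],             # scalar reward for the step
        ))
    return transitions

# Hypothetical sanity check: summed step rewards should roughly match totalReward.
# assert sum(r for _, _, r in to_transitions(traj)) == traj["totalReward"]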

Citation

@dataset{babylon_game_data_2025,
  title = {Babylon Game Data - Complete RL Dataset},
  author = {Babylon Labs},
  year = {2025},
  url = {https://huggingface.co/datasets/elizaos/babylon-game-data}
}

License

MIT
