---
dataset_info:
  features:
  - name: Rank
    dtype: int64
  - name: Puzzles
    dtype: string
  - name: AMT (s)
    dtype: float64
  - name: Solved rate
    dtype: string
  - name: 1-sigma Mean (s)
    dtype: float64
  - name: 1-sigma STD (s)
    dtype: float64
  splits:
  - name: train
    num_examples: 1362
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- mathematical-reasoning
- tree-of-thoughts
- test-time-compute
- game-of-24
size_categories:
- 1K<n<10K
license: mit
---
# Game of 24 Dataset
## Dataset Description
The Game of 24 is a mathematical reasoning puzzle where players must use four numbers and basic arithmetic operations (+, -, *, /) to obtain the result 24. Each number must be used exactly once.
This dataset contains 1,362 unique Game of 24 puzzles, ranked by difficulty based on human performance data collected via Amazon Mechanical Turk.
### Example
**Input:** `4 5 6 10`

**Output:** `(5 * (10 - 4)) - 6 = 24`

**Step-by-step solution:**
```
10 - 4 = 6 (left: 5 6 6)
5 * 6 = 30 (left: 6 30)
30 - 6 = 24 (left: 24)
Answer: (5 * (10 - 4)) - 6 = 24
```
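For readers who want to generate such traces programmatically, here is a minimal brute-force solver sketch (not taken from the ToT codebase; `solve_24` is a name chosen for illustration). It mirrors the reduction shown above: pick two remaining numbers, apply an operation, and recurse until a single value is left.
```python
from itertools import permutations

def solve_24(nums, target=24, eps=1e-6):
    """Exhaustive Game of 24 search: combine two numbers at a time, as in the trace above."""
    # Track (value, expression) pairs so the final answer can be reported as a string.
    start = [(float(n), str(n)) for n in nums]

    def search(items):
        if len(items) == 1:
            value, expr = items[0]
            return expr if abs(value - target) < eps else None
        # Ordered pairs, because subtraction and division are not commutative.
        for i, j in permutations(range(len(items)), 2):
            (a, ea), (b, eb) = items[i], items[j]
            rest = [items[k] for k in range(len(items)) if k not in (i, j)]
            candidates = [
                (a + b, f"({ea} + {eb})"),
                (a - b, f"({ea} - {eb})"),
                (a * b, f"({ea} * {eb})"),
            ]
            if abs(b) > eps:
                candidates.append((a / b, f"({ea} / {eb})"))
            for value, expr in candidates:
                found = search(rest + [(value, expr)])
                if found is not None:
                    return found
        return None

    return search(start)

print(solve_24([4, 5, 6, 10]))  # prints one expression that evaluates to 24
```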
## Dataset Structure
### Data Fields
- `Rank`: Difficulty ranking (1 = easiest, 1362 = hardest)
- `Puzzles`: Four numbers separated by spaces (e.g., "4 5 6 10")
- `AMT (s)`: Average time for humans to solve (seconds)
- `Solved rate`: Percentage of humans who successfully solved the puzzle (stored as a string)
- `1-sigma Mean (s)`: Mean solving time within 1 standard deviation
- `1-sigma STD (s)`: Standard deviation of solving time
### Data Splits
The dataset ships as a single `train` split. The original Tree of Thoughts paper evaluates on 100 relatively hard puzzles, ranks 901-1000 (0-based indices 900-999). Approximate difficulty bands by rank (see the slicing snippet after this list):
- **Easy puzzles** (rank 1-300): 95-99% human solve rate, ~4-6 seconds
- **Medium puzzles** (rank 300-900): 85-95% human solve rate, ~6-10 seconds
- **Hard puzzles** (rank 900-1100): 80-90% human solve rate, ~10-15 seconds
- **Very hard puzzles** (rank 1100-1362): 20-80% human solve rate, 15-200+ seconds
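These bands are descriptive rather than official splits, but the `Rank` column makes them easy to slice. A minimal sketch using the band boundaries listed above (the dataset name is the one from the Usage section below):
```python
from datasets import load_dataset

train = load_dataset("test-time-compute/game-of-24")["train"]

# Approximate difficulty bands by rank, mirroring the list above.
easy      = train.filter(lambda row: row["Rank"] <= 300)
medium    = train.filter(lambda row: 300 < row["Rank"] <= 900)
hard      = train.filter(lambda row: 900 < row["Rank"] <= 1100)
very_hard = train.filter(lambda row: row["Rank"] > 1100)

print(len(easy), len(medium), len(hard), len(very_hard))
```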
## Source Data
This dataset is from the official Tree of Thoughts (ToT) repository:
- Paper: [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](https://arxiv.org/abs/2305.10601)
- Repository: https://github.com/princeton-nlp/tree-of-thought-llm
- Authors: Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan
### Human Performance Collection
Puzzles were ranked based on human performance data collected via Amazon Mechanical Turk, measuring:
- Success rate (percentage of correct solutions)
- Solving time (average time to solution)
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
dataset = load_dataset("test-time-compute/game-of-24")
```
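Note that column names contain spaces and parentheses (e.g. `AMT (s)`), so fields are accessed with the full string key. Continuing from the snippet above:
```python
train = dataset["train"]
print(train.features)   # column names and dtypes
row = train[0]
print(row["Puzzles"], row["Rank"], row["AMT (s)"], row["Solved rate"])
```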
### Paper Evaluation Subset
To replicate the Tree of Thoughts paper evaluation:
```python
# The paper's subset: 0-based indices 900-999, i.e. puzzles ranked 901-1000 (100 hard puzzles)
eval_subset = dataset['train'].select(range(900, 1000))
```
### Task Format
Each puzzle requires:
1. **Input**: Four numbers (e.g., "4 5 6 10")
2. **Output**: A valid mathematical expression using each number exactly once that equals 24
3. **Verification**: Check that all four numbers are used exactly once and that the expression evaluates to 24 (see the verifier sketch below)
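A minimal verifier sketch for this check; `check_solution` is a hypothetical helper, not part of the dataset or the ToT codebase. It compares the multiset of numbers in the candidate expression against the puzzle's numbers and evaluates the expression with exact rational arithmetic:
```python
import re
from fractions import Fraction

def check_solution(puzzle: str, expression: str) -> bool:
    """True if `expression` uses each puzzle number exactly once and equals 24."""
    lhs = expression.split("=")[0]            # allow a trailing "= 24"
    puzzle_nums = sorted(puzzle.split())      # e.g. "4 5 6 10" -> ["10", "4", "5", "6"]
    used_nums = sorted(re.findall(r"\d+", lhs))
    if puzzle_nums != used_nums:
        return False                          # number missing, reused, or not in the puzzle
    if not re.fullmatch(r"[\d\s+\-*/()]+", lhs):
        return False                          # only digits, operators, and parentheses allowed
    try:
        # Wrap every literal in Fraction to avoid floating-point error, then evaluate.
        value = eval(re.sub(r"(\d+)", r"Fraction(\1)", lhs), {"Fraction": Fraction})
    except ZeroDivisionError:
        return False
    return value == 24

print(check_solution("4 5 6 10", "(5 * (10 - 4)) - 6 = 24"))  # True
print(check_solution("4 5 6 10", "4 * 5 + 6 - 10 = 24"))      # False (evaluates to 16)
```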
## Benchmark Results
From the Tree of Thoughts paper (evaluated on puzzles ranked 901-1000):

| Method | Success Rate | Model |
|--------|-------------|-------|
| IO (100 samples) | 7.3% | GPT-4 |
| CoT (100 samples) | 4.0% | GPT-4 |
| ToT (b=5) | 74.0% | GPT-4 |
Where:
- **IO**: Input-output prompting with 100 samples
- **CoT**: Chain-of-thought prompting with 100 samples
- **ToT**: Tree of Thoughts with beam width 5
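Success rate here is the fraction of the 100 evaluation puzzles for which a method produces a valid expression. A sketch of that metric, assuming the loaded `dataset`, the `check_solution` verifier from the Task Format section, and some solver callable (the exhaustive `solve_24` from the example section is used as a stand-in; swap in a model-based solver to benchmark it):
```python
def success_rate(puzzles, solve_fn, check_fn):
    """Fraction of puzzles for which solve_fn returns an expression accepted by check_fn."""
    solved = sum(1 for p in puzzles if check_fn(p, solve_fn(p) or ""))
    return solved / len(puzzles)

eval_puzzles = [row["Puzzles"] for row in dataset["train"].select(range(900, 1000))]
rate = success_rate(
    eval_puzzles,
    solve_fn=lambda p: solve_24([int(n) for n in p.split()]),
    check_fn=check_solution,
)
print(f"success rate: {rate:.1%}")  # the exhaustive solver should solve every puzzle
```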
## Citation
If you use this dataset, please cite the original Tree of Thoughts paper:
```bibtex
@article{yao2023tree,
title={Tree of Thoughts: Deliberate Problem Solving with Large Language Models},
author={Yao, Shunyu and Yu, Dian and Zhao, Jeffrey and Shafran, Izhak and Griffiths, Thomas L and Cao, Yuan and Narasimhan, Karthik},
journal={arXiv preprint arXiv:2305.10601},
year={2023}
}
```
## License
MIT License, the same license as the original Tree of Thoughts repository.