---
base_model:
- Qwen/Qwen2.5-Math-7B
language:
- en
tags:
- One-Shot-CFT
pipeline_tag: text-generation
library_name: transformers
license: cc-by-4.0
---

# One-Shot-CFT: Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem

<p align="center">
  <a href="https://github.com/TIGER-AI-Lab/One-Shot-CFT" target="_blank">💻 Code</a> |
  <a href="https://arxiv.org/pdf/2506.03295" target="_blank">📄 Paper</a> |
  <a href="https://huggingface.co/datasets/TIGER-Lab/One-Shot-CFT-Data" target="_blank">📊 Dataset</a> |
  <a href="https://huggingface.co/collections/TIGER-Lab/one-shot-cft-683fbb4d2bcf698dbea8fb21" target="_blank">🤗 Model</a> |
  <a href="https://tiger-ai-lab.github.io/One-Shot-CFT/" target="_blank">🌐 Project Page</a>
</p>

## 🧠 Overview

One-Shot Critique Fine-Tuning (CFT) is a simple, robust, and compute-efficient training paradigm for unleashing the reasoning capabilities of pretrained LLMs in both mathematical and logical domains. By leveraging critiques on just one problem, One-Shot CFT enables models like Qwen and LLaMA to match or even outperform reinforcement learning, while using 20× less compute.

Instead of learning from reference answers (as in supervised fine-tuning) or reward signals (as in reinforcement learning), One-Shot CFT has models learn from critiques of diverse solutions to a single problem. Exposure to these varied reasoning patterns, perspectives, and error types mitigates overfitting and more effectively unleashes the models' reasoning potential.
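
To make the setup concrete, the sketch below assembles critique fine-tuning examples from one seed problem, a few candidate solutions, and their critiques. It is a minimal illustration of the idea described above: the field names, prompt template, and sample data are assumptions for illustration, not the exact format of the released DSR-CFT-p0 dataset.

```python
# Illustrative sketch of One-Shot CFT data construction (field names and
# prompt template are assumptions, not the exact released format).

seed_problem = "Compute the sum of the first 100 positive integers."

# Diverse candidate solutions to the single seed problem, e.g. sampled
# from several source models; some are correct, some contain errors.
candidate_solutions = [
    "Pair 1 with 100, 2 with 99, ...: 50 pairs of 101, so the sum is 5050.",
    "Use n(n+1)/2 with n = 100: 100 * 101 / 2 = 5500.",  # arithmetic error
]

# Critiques of each candidate (in practice produced by a stronger teacher
# model); the model is fine-tuned to generate these critiques.
critiques = [
    "Correct. The pairing argument is valid: 50 * 101 = 5050.",
    "Incorrect. The formula is right, but 100 * 101 / 2 = 5050, not 5500.",
]

def build_cft_example(problem: str, solution: str, critique: str) -> dict:
    """Turn one (problem, solution, critique) triple into a chat-style
    training example: the input shows the problem plus a candidate
    solution, and the target is the critique."""
    prompt = (
        f"Problem:\n{problem}\n\n"
        f"Candidate solution:\n{solution}\n\n"
        "Critique the candidate solution step by step, then state whether "
        "it is correct."
    )
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": critique},
        ]
    }

dataset = [
    build_cft_example(seed_problem, sol, crit)
    for sol, crit in zip(candidate_solutions, critiques)
]
print(dataset[0]["messages"][0]["content"])
```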

## ✨ Key Highlights

- **Unleashes Reasoning with One Example:** One-Shot CFT uses critiques of diverse model-generated solutions to a single problem to significantly boost performance across math and logic tasks. For example, with just 5 GPU hours of training on Qwen2.5-Math-7B, One-Shot CFT achieves an average improvement of +15% on six math benchmarks and +16% on three logic reasoning benchmarks.
- **Outperforms RLVR and Full SFT with 20× Less Compute:** One-Shot CFT outperforms both one-shot Reinforcement Learning with Verifiable Rewards (RLVR) and full-dataset supervised fine-tuning, while requiring only 5 GPU hours on a 7B model—offering a much more efficient and stable training alternative.
- **Robust Across Seeds and Model Scales:** One-Shot CFT remains effective across different seed problem choices and model sizes—from 1.5B to 14B parameters—demonstrating strong generalization and scalability.

**This specific model is the One-Shot CFT variant trained from [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DSR-CFT-p0](https://huggingface.co/datasets/TIGER-Lab/One-Shot-CFT-Data) dataset.**
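
A minimal inference sketch with 🤗 Transformers follows. The repository id is a placeholder for this model's actual Hugging Face path, and the chat template and generation settings are standard Transformers usage rather than settings confirmed by the authors.

```python
# Minimal inference sketch using Hugging Face Transformers.
# NOTE: "TIGER-Lab/<this-model-repo>" is a placeholder for this model's
# actual repository id; generation settings are illustrative defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/<this-model-repo>"  # replace with this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Solve step by step: what is 17 * 24?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```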

## Main Results

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/636a35eff8d9af4aea181608/DCxRSdeDrv-Db4VLuEl0T.png" alt="CFT Performance Comparison" width="1100"/>
</p>

<p align="center"><em>
One-shot CFT consistently improves mathematical and logical reasoning.
<strong>Left:</strong> Average accuracy on six mathematical reasoning benchmarks for Qwen and LLaMA models, comparing base, SFT, RLVR, and CFT with only one training example.
<strong>Right:</strong> In-domain accuracy on three logic reasoning benchmarks (BBEH subtasks) for Qwen2.5-Math-7B.
Across both domains, CFT with a single problem significantly outperforms standard SFT and matches or exceeds reinforcement learning with much lower compute.
</em></p>

## Citation

If you find our work helpful, please cite it as:

```bibtex
@article{wang2025unleashing,
  title={Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem},
  author={Wang, Yubo and Nie, Ping and Zou, Kai and Wu, Lijun and Chen, Wenhu},
  journal={arXiv preprint arXiv:2506.03295},
  year={2025}
}
```