Ibisbill committed
Commit 5e565a3 · verified · 1 Parent(s): 384cfff

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +157 -0

README.md ADDED
---
language: en
license: apache-2.0
tags:
- text-generation
- math-reasoning
- transferability
- RL-GRPO
- research-paper
- qwen
base_model: Qwen/Qwen3-14B
datasets:
- math
- reasoning
pipeline_tag: text-generation
arxiv: 2507.00432
---

# UniReason-Qwen3-14B-RL

This model is associated with the research paper:
**"Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning"**

📄 **Paper**: [2507.00432](https://arxiv.org/abs/2507.00432)

## Abstract

Math reasoning has become the poster child of progress in large language models (LLMs), with new models rapidly surpassing human-level performance on benchmarks like MATH and AIME. But as math leaderboards improve week by week, it is worth asking: do these gains reflect broader problem-solving ability or just narrow overfitting?

## Model Description

This model is an **RL-GRPO**-tuned version of Qwen3-14B focused on **math reasoning** capabilities.
It was developed as part of research investigating how well mathematical reasoning skills transfer to general language tasks.

### Key Research Questions Addressed
- Does math reasoning training improve general LLM capabilities?
- How do different training methods (RL vs. SFT) affect transferability?
- What is the trade-off between specialized math performance and general capabilities?

## Model Details

- **Base Model**: Qwen3-14B
- **Training Method**: RL-GRPO (reinforcement learning with Group Relative Policy Optimization)
- **Primary Focus**: Math reasoning
- **Training Data**: Math-specific datasets
- **Architecture**: Transformer-based language model
- **Parameters**: 14B

## Training Details

### Training Method: RL-GRPO
The model was trained with reinforcement learning using Group Relative Policy Optimization (GRPO); see the paper for the full training setup. A minimal sketch of the group-relative advantage idea is shown below.

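For readers unfamiliar with GRPO, the sketch below illustrates its core idea: several responses are sampled per prompt, each is scored by a reward function, and each response's advantage is its reward normalized against the group's mean and standard deviation. This is a generic illustration, not the authors' training code; the `group_relative_advantages` and `score_group` helpers and the exact-match reward are assumptions made for the example.

```python
# Illustrative GRPO-style advantage computation (not the paper's implementation).
# Assumes a verifiable reward: 1.0 if a sampled answer matches the reference, else 0.0.
from statistics import mean, stdev
from typing import Callable, List


def group_relative_advantages(rewards: List[float], eps: float = 1e-6) -> List[float]:
    """Normalize each reward against the group mean and std (the group-relative baseline)."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]


def score_group(
    sampled_answers: List[str],
    reference_answer: str,
    reward_fn: Callable[[str, str], float] = lambda a, ref: float(a.strip() == ref.strip()),
) -> List[float]:
    """Score a group of sampled answers for one prompt with a simple exact-match reward."""
    rewards = [reward_fn(a, reference_answer) for a in sampled_answers]
    return group_relative_advantages(rewards)


# Example: 4 samples for one math prompt; only two give the correct final answer.
print(score_group(["3x^2 + 4x - 5", "3x^2 - 5", "3x^2 + 4x - 5", "x^2"], "3x^2 + 4x - 5"))
```
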
### Datasets Used
- Mathematical reasoning datasets
- See paper for complete dataset list

## Performance

### Math Reasoning Benchmarks
- **MATH**: See paper
- **AIME**: See paper

### General Capabilities
- **General QA**: See paper
- **Code Generation**: See paper
- **Instruction Following**: See paper

*For detailed performance metrics, please refer to the paper.*

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model and tokenizer
model_name = "ReasoningTransferability/UniReason-Qwen3-14B-RL"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example: math reasoning
math_prompt = "Solve this step by step: What is the derivative of x^3 + 2x^2 - 5x + 1?"
inputs = tokenizer(math_prompt, return_tensors="pt").to(model.device)
# do_sample=True is needed for temperature to take effect; max_new_tokens bounds the response length.
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

# Example: general reasoning
general_prompt = "Explain the concept of supply and demand in economics."
inputs = tokenizer(general_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
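
Qwen3-based checkpoints normally ship with a chat template, so chat-style prompting may give better-formatted outputs than raw text prompts. The snippet below is a sketch that reuses `model`, `tokenizer`, and `math_prompt` from the example above and assumes this checkpoint retains the base tokenizer's chat template:

```python
# Chat-style prompting, assuming the tokenizer provides a chat template (as Qwen3 tokenizers do).
messages = [{"role": "user", "content": math_prompt}]
chat_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(chat_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens (the expected final answer is 3x^2 + 4x - 5).
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```
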
## Limitations and Biases

- **Specialization Trade-offs**: As explored in the paper, models optimized for math reasoning may show reduced performance on general tasks.
- **Training Method Dependencies**: Performance characteristics vary significantly between RL and SFT training approaches.
- **Domain Transfer**: The extent to which capabilities transfer from math to other domains is limited.
- **Computational Requirements**: Inference requires substantial resources; at 14B parameters, the float16 weights alone occupy roughly 28 GB of memory (see the rough estimate after this list).

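As a back-of-the-envelope check on the memory claim above, weight memory is roughly parameter count times bytes per parameter; the sketch below counts weights only (no KV cache or activations):

```python
# Rough weight-memory estimate for a 14B-parameter model (weights only; excludes KV cache/activations).
PARAMS = 14e9

for precision, bytes_per_param in [("float32", 4), ("float16/bfloat16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{precision:>18}: ~{gb:.0f} GB")
```
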
## Research Findings

Key findings from the associated paper:
1. **RL vs. SFT**: RL-tuned models show better transfer to general domains compared to SFT-tuned models.
2. **Capability Trade-offs**: Most math-specialized models fail to transfer gains to other domains.
3. **Forgetting**: SFT-tuned models often forget general capabilities during math-focused training.

## Ethical Considerations

- This model is intended for research purposes.
- Users should be aware of potential biases in mathematical and general reasoning.
- The model should not be used for making critical decisions without human oversight.
- Consider the environmental impact of large model inference.

## Citation

If you use this model in your research, please cite both the model and the associated paper:

```bibtex
@article{math_reasoning_transfer_2025,
  title={Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning},
  author={[Authors]},
  journal={arXiv preprint arXiv:2507.00432},
  year={2025},
  url={https://arxiv.org/abs/2507.00432}
}

@misc{UniReason_Qwen3_14B_RL,
  author = {See paper},
  title = {UniReason-Qwen3-14B-RL},
  year = {2025},
  url = {https://huggingface.co/ReasoningTransferability/UniReason-Qwen3-14B-RL},
  note = {Model associated with arXiv:2507.00432}
}
```

## Contact

For questions about this model or the associated research, please:
- Open an issue in this repository
- Contact the paper authors
- Reference the original paper: https://arxiv.org/abs/2507.00432

## Acknowledgments

This work builds upon the research presented in "Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning" and uses Qwen3-14B as its base model.

---

*Model uploaded on 2025-07-03*