nielsr (HF Staff) committed · commit 6c2f5a6 (verified) · 1 parent: 6860a2f

Add pipeline tag and link to code


This PR ensures people can find your model at https://huggingface.co/models?pipeline_tag=text-generation, and adds a link to the code repository.
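For context, a minimal sketch of what the `text-generation` tag enables through the `transformers` pipeline API. The repo id below is a hypothetical stand-in, since the diff doesn't state the model's actual Hub id:

```python
# Minimal sketch: load the model via the pipeline tag added in this PR.
# NOTE: the repo id is an assumption; substitute the model's actual Hub id.
from transformers import pipeline

generator = pipeline(
    "text-generation",                      # matches the new pipeline_tag
    model="Gen-Verse/ReasonFlux-Coder-7B",  # hypothetical repo id
)

out = generator(
    "Write a Python function that reverses a string.",
    max_new_tokens=128,
)
print(out[0]["generated_text"])
```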

Files changed (1):
1. README.md (+2 -7)
README.md CHANGED

```diff
@@ -1,21 +1,17 @@
 ---
-license: mit
 library_name: transformers
+license: mit
+pipeline_tag: text-generation
 ---
 
 <p align="center">
 <img src="https://github.com/yinjjiew/Data/raw/main/cure/overviewplot.png" width="100%"/>
 </p>
 
-
 <p align="center">
 <img src="https://github.com/yinjjiew/Data/raw/main/cure/results.png" width="100%"/>
 </p>
 
-
-
-
-
 # Introduction to our ReasonFlux-Coders
 
 We introduce **ReasonFlux-Coders**, trained with **CURE**, our algorithm for co-evolving an LLM's coding and unit test generation abilities.
@@ -23,7 +19,6 @@ We introduce **ReasonFlux-Coders**, trained with **CURE**, our algorithm for co-
 * **ReasonFlux-Coder-7B** and **ReasonFlux-Coder-14B** outperform similarly sized Qwen Coders, DeepSeek Coders, and Seed-Coders, and naturally integrate into common test-time scaling and agentic coding pipelines.
 * **ReasonFlux-Coder-4B** is our Long-CoT model, outperforming Qwen3-4B while achieving 64.8% efficiency in unit test generation. We have demonstrated its ability to serve as a reward model for training base models via reinforcement learning (see our [paper](https://arxiv.org/abs/2506.03136)).
 
-
 [Paper](https://arxiv.org/abs/2506.03136) | [Code](https://github.com/Gen-Verse/CURE)
 
 # Citation
```
 
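Once merged, the new metadata is also visible programmatically. A small sketch using the standard `huggingface_hub` client (the repo id is again a hypothetical stand-in):

```python
# Sketch: read back the metadata this commit adds, via the Hub API.
# NOTE: the repo id is a hypothetical stand-in for the actual model repo.
from huggingface_hub import model_info

info = model_info("Gen-Verse/ReasonFlux-Coder-7B")
print(info.pipeline_tag)  # expected after this commit: "text-generation"
```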