Update README.md
README.md CHANGED
@@ -1,3 +1,13 @@
+---
+license: apache-2.0
+language:
+- en
+metrics:
+- accuracy
+base_model:
+- Qwen/Qwen2.5-Math-7B-Instruct
+library_name: transformers
+---
## SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights

> [SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights](link)

@@ -118,5 +128,4 @@ title={SuperCorrect: Supervising and Correcting Language Models with Error-Drive

## Acknowledgements

-Our SuperCorrect is a two-stage fine-tuned model built on several extraordinary open-source models such as [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math), [DeepSeek-Math](https://github.com/deepseek-ai/DeepSeek-Math), and [Llama3-Series](https://github.com/meta-llama/llama3). Our evaluation method is based on the codebases of outstanding works such as [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math) and [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). We also want to express our gratitude for amazing works such as [BoT](https://github.com/YangLing0818/buffer-of-thought-llm), which provided the idea of thought templates.
-
+Our SuperCorrect is a two-stage fine-tuned model built on several extraordinary open-source models such as [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math), [DeepSeek-Math](https://github.com/deepseek-ai/DeepSeek-Math), and [Llama3-Series](https://github.com/meta-llama/llama3). Our evaluation method is based on the codebases of outstanding works such as [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math) and [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). We also want to express our gratitude for amazing works such as [BoT](https://github.com/YangLing0818/buffer-of-thought-llm), which provided the idea of thought templates.
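For context, the added model-card metadata declares `library_name: transformers` with `base_model: Qwen/Qwen2.5-Math-7B-Instruct`. Below is a minimal sketch of how such a checkpoint is typically loaded through the Transformers API; the repo id, prompt, and generation settings are placeholder assumptions (this diff does not name the fine-tuned checkpoint), not part of the commit itself.

```python
# Minimal, hypothetical usage sketch based only on the model-card metadata above
# (library_name: transformers, base_model: Qwen/Qwen2.5-Math-7B-Instruct).
# repo_id is a placeholder; the actual SuperCorrect checkpoint id is not given in this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Qwen/Qwen2.5-Math-7B-Instruct"  # base model named in the metadata

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt and generate a step-by-step answer.
messages = [{"role": "user", "content": "Solve 2x + 3 = 11 and explain each step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```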