response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
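The decode line above assumes `generated_ids` has already been trimmed so that only newly generated tokens remain, not the prompt tokens that `generate` echoes back. A minimal, model-free sketch of that trimming step, using plain token-ID lists in place of real tensors (all IDs here are made up for illustration):

```python
# Sketch of the prompt-trimming step that precedes batch_decode.
# Real inputs/outputs are tensors from the model; plain lists are
# used here so the slicing logic is easy to follow.

# Hypothetical token IDs: each output begins with a copy of its prompt.
prompt_ids = [
    [101, 2054, 2003],          # prompt 1
    [101, 7592],                # prompt 2
]
output_ids = [
    [101, 2054, 2003, 88, 99],  # prompt 1 + two new tokens
    [101, 7592, 42],            # prompt 2 + one new token
]

# Keep only the tokens generated after each prompt.
generated_ids = [
    out[len(inp):] for inp, out in zip(prompt_ids, output_ids)
]

print(generated_ids)  # [[88, 99], [42]]
```

Without this slice, the decoded `response` would repeat the chat-template prompt before the model's answer.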
## Evaluation

We briefly compare Qwen2-0.5B/1.5B-Instruct with Qwen1.5-0.5B/1.8B-Chat. The results are as follows:

| Datasets | Qwen1.5-0.5B-Chat | **Qwen2-0.5B-Instruct** | Qwen1.5-1.8B-Chat | **Qwen2-1.5B-Instruct** |
| :--- | :---: | :---: | :---: | :---: |
| MMLU | 35.0 | **37.9** | 43.7 | **52.4** |
| HumanEval | 9.1 | **17.1** | 25.0 | **37.8** |
| GSM8K | 11.3 | **40.1** | 35.3 | **61.6** |
| C-Eval | 37.2 | **45.2** | 55.3 | **63.8** |
| IFEval (Prompt Strict-Acc.) | 14.6 | **20.0** | 16.8 | **29.0** |
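The IFEval row reports prompt-level strict accuracy: a prompt counts as correct only if every verifiable instruction in it is satisfied, so one missed instruction fails the whole prompt. A minimal sketch of that scoring rule (the function and data below are illustrative, not the benchmark's actual API):

```python
# Prompt-level strict accuracy: a prompt scores 1 only when ALL of its
# instruction checks pass. Names and data here are illustrative.

def prompt_strict_accuracy(results):
    """results: one list of booleans per prompt, one boolean per instruction."""
    passed = sum(1 for checks in results if all(checks))
    return passed / len(results)

# Four hypothetical prompts; only the first and last satisfy every check.
results = [
    [True, True],          # all instructions followed -> counts
    [True, False],         # one miss -> whole prompt fails
    [False],
    [True, True, True],
]
print(prompt_strict_accuracy(results))  # 0.5
```

This all-or-nothing scoring is why prompt-strict numbers run well below per-instruction accuracy on the same outputs.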
## Citation

If you find our work helpful, feel free to cite us.