Update README.md

The extrapolated (ExPO) model based on [`princeton-nlp/Llama-3-Instruct-8B-SimPO`](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO).
Specifically, we obtain this model by extrapolating (with **alpha = 0.3**) from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preference.
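Weight extrapolation moves each parameter further along the direction from the SFT checkpoint toward the DPO/RLHF checkpoint, i.e. `theta_expo = theta_rlhf + alpha * (theta_rlhf - theta_sft)`. A minimal sketch of this step, assuming plain per-tensor state dicts with matching keys (the helper name and dict layout are illustrative, not the repo's actual API):

```python
# Sketch of ExPO-style weight extrapolation.
# theta_expo = theta_rlhf + alpha * (theta_rlhf - theta_sft)
# With alpha = 0, this returns the RLHF weights unchanged;
# larger alpha pushes further past the RLHF checkpoint.

def extrapolate(sft_state, rlhf_state, alpha):
    """Extrapolate beyond the RLHF checkpoint, away from the SFT one.

    sft_state / rlhf_state: dicts mapping parameter names to values
    (scalars here for illustration; real checkpoints hold tensors).
    """
    return {
        name: rlhf_state[name] + alpha * (rlhf_state[name] - sft_state[name])
        for name in rlhf_state
    }

alpha = 0.3  # the extrapolation coefficient used for this model
```

In practice the same arithmetic is applied tensor-by-tensor over the two checkpoints' state dicts before saving the result as a new model.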
## Evaluation Results
Evaluation results on the **AlpacaEval 2.0** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_alpaca)):
This model achieves a **40.6%** win rate and a **45.8%** LC win rate on **AlpacaEval 2.0**.