Adding Evaluation Results #2
opened by leaderboard-pr-bot
README.md CHANGED
@@ -53,3 +53,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.0.1+cu117
 - Datasets 2.14.5
 - Tokenizers 0.13.3
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_euclaise__falcon_1b_stage2)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 32.9  |
+| ARC (25-shot)         | 35.49 |
+| HellaSwag (10-shot)   | 65.56 |
+| MMLU (5-shot)         | 23.83 |
+| TruthfulQA (0-shot)   | 38.32 |
+| Winogrande (5-shot)   | 62.35 |
+| GSM8K (5-shot)        | 0.0   |
+| DROP (3-shot)         | 4.72  |
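For reference, the "Avg." row is consistent with the arithmetic mean of the seven task scores: (35.49 + 65.56 + 23.83 + 38.32 + 62.35 + 0.0 + 4.72) / 7 ≈ 32.9. Below is a minimal sketch, not part of this PR, of how the linked details dataset could be pulled with the Hugging Face `datasets` library. The repository id comes from the link above; the assumption that each benchmark run is exposed as its own dataset configuration is mine, so config names are discovered at runtime rather than hard-coded.

```python
# Sketch only: assumes the details repository is public and loadable
# with the `datasets` library; the exact config/split layout may differ.
from datasets import get_dataset_config_names, load_dataset

repo_id = "open-llm-leaderboard/details_euclaise__falcon_1b_stage2"

# Discover the per-task configuration names stored in the details repo.
configs = get_dataset_config_names(repo_id)
print(configs)

# Load one configuration as an example and inspect the splits it contains.
details = load_dataset(repo_id, configs[0])
print(details)
```

Listing the configurations first avoids guessing task-specific config names, which vary between leaderboard detail repositories.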