Update README.md
README.md

Lightweight LoRA adapters trained on the EdinburghNLP/XSum dataset for abstractive summarization.

The adapters fine-tune Mistral-7B-Instruct-v0.3 using ~50k training and 2k validation samples prepared via a summarization prompt.
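
The exact prompt template is not published in this README; the snippet below is only a hypothetical sketch of how a single XSum example might be wrapped in a summarization instruction using the Mistral chat template (the instruction wording and data-loading code are assumptions, not the original preprocessing script).

```python
# Hypothetical illustration only: the actual training prompt is not given in this README,
# so the instruction wording below is an assumption.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
example = load_dataset("EdinburghNLP/xsum", split="train[:1]")[0]

messages = [
    {"role": "user",
     "content": f"Summarize the following article in one sentence:\n\n{example['document']}"},
    {"role": "assistant", "content": example["summary"]},
]

# Render the pair with the Mistral instruct chat template to form one training string.
text = tokenizer.apply_chat_template(messages, tokenize=False)
print(text[:500])
```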

Training used the following parameters (see the configuration sketch below):

- r=8
- lora_alpha=16
- lora_dropout=0.1
- target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]

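A minimal sketch of how these values map onto a PEFT `LoraConfig`; the task type and the surrounding model-loading code are assumptions, not the original training script.

```python
# Sketch only: reconstructs the LoRA configuration from the values listed above.
# Task type and loading details are assumptions, not the original training script.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",  # assumed for instruction-style summarization fine-tuning
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # sanity check: only the LoRA matrices are trainable
```
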
Adapters can be merged into the base model for inference using `PeftModel.merge_and_unload()`.
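
A minimal sketch of that merge step, assuming the adapters are loaded with `PeftModel.from_pretrained`; the adapter path below is a placeholder, not this repository's actual ID.

```python
# Sketch only: merges the LoRA weights into the base model for plain-transformers inference.
# "path/to/adapter" is a placeholder; substitute the actual adapter directory or Hub repo ID.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

model = PeftModel.from_pretrained(base, "path/to/adapter")
model = model.merge_and_unload()  # folds the LoRA deltas into the base weights

# After merging, the model behaves like a regular Mistral checkpoint and can be saved directly.
model.save_pretrained("mistral-7b-xsum-merged")
```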