---
license: mit
datasets:
- EdinburghNLP/xsum
metrics:
- accuracy
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
---

A lightweight LoRA adapter trained on the EdinburghNLP/xsum dataset for abstractive news summarization.

The adapter was trained by fine-tuning Mistral-7B-Instruct-v0.3 on ~50k training and 2k validation examples formatted with a summarization prompt, as sketched below.
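
A minimal sketch of this data preparation, assuming an instruction-style template in Mistral's `[INST] ... [/INST]` format (the exact prompt wording and sampling used in training are not published here and are illustrative):

```python
from datasets import load_dataset

# XSum provides "document" (article) and "summary" (one-sentence summary) fields.
dataset = load_dataset("EdinburghNLP/xsum")

def to_prompt(example):
    # Illustrative template; the actual training prompt is an assumption.
    text = (
        "[INST] Summarize the following news article in one sentence.\n\n"
        f"{example['document']} [/INST] {example['summary']}"
    )
    return {"text": text}

# Subset sizes match the card's ~50k/2k split; the sampling itself is assumed.
train = dataset["train"].select(range(50_000)).map(to_prompt)
val = dataset["validation"].select(range(2_000)).map(to_prompt)
```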

Training used the following LoRA hyperparameters:
- r=8
- lora_alpha=16
- lora_dropout=0.1
- target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]
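
These map directly onto a PEFT `LoraConfig`; a sketch of the corresponding setup (`task_type="CAUSAL_LM"` is an assumption consistent with the base model, not stated on the card):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",  # assumed; matches a causal LM like Mistral
)

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # LoRA trains only a small fraction of weights
```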

For inference, the adapter can be merged into the base model with PeftModel.merge_and_unload().
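
For example (the adapter id below is a placeholder for this adapter's local path or Hub repo id):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_id = "path/to/this-adapter"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the LoRA adapter, then fold its weights into the base model
# so inference runs without the PEFT wrapper overhead.
model = PeftModel.from_pretrained(base, adapter_id)
model = model.merge_and_unload()
```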