Lightweight LoRA adapter trained on the EdinburghNLP/XSum dataset for abstractive news summarization.

The adapter fine-tunes mistralai/Mistral-7B-Instruct-v0.3 on ~50k training and 2k validation examples, each formatted with a summarization prompt.
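A minimal sketch of the data preparation step. The exact prompt wording, the shuffle seed, and the field names added to the dataset are assumptions, not taken from the original training script; only the dataset id and the ~50k/2k split sizes come from the description above.

```python
from datasets import load_dataset

def build_prompt(example):
    # XSum provides a "document" (article) and a one-sentence "summary".
    # This instruction-style template is a hypothetical example.
    prompt = (
        "[INST] Summarize the following article in one sentence.\n\n"
        f"{example['document']} [/INST] {example['summary']}"
    )
    return {"text": prompt}

xsum = load_dataset("EdinburghNLP/xsum")
train = xsum["train"].shuffle(seed=42).select(range(50_000)).map(build_prompt)
val = xsum["validation"].shuffle(seed=42).select(range(2_000)).map(build_prompt)
```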

Training used the following LoRA configuration (see the sketch below):

- `r=8`
- `lora_alpha=16`
- `lora_dropout=0.1`
- `target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]`
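In PEFT, these values map onto a `LoraConfig` roughly as follows. This is a sketch under stated assumptions: the task type, dtype, and device placement are not specified in the card.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    torch_dtype=torch.bfloat16,  # assumed; not stated in the card
    device_map="auto",
)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",  # assumed for causal-LM fine-tuning
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```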

For inference, the adapter can be merged into the base model with `PeftModel.merge_and_unload()`.
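A sketch of loading the adapter and merging it for inference. The prompt template and generation settings are assumptions; the adapter repo id is the one this card describes.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "BacemKarray/Mistral-7B-Instruct-v0.3-XSum-LoRA")
model = model.merge_and_unload()  # fold the LoRA weights into the base model

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

article = "Replace this string with the news article to summarize."
prompt = f"[INST] Summarize the following article in one sentence.\n\n{article} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens (the summary)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```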
