# Yagna Assistant v3
A Qwen2.5-0.5B-Instruct model fine-tuned on question-answer pairs about Yagna's professional background.
## Model Details
- Base Model: Qwen2.5-0.5B-Instruct
- Training Method: QLoRA (LoRA adapters on a 4-bit quantized base model)
- LoRA Rank: 32
- Training Steps: 250
- Learning Rate: 1e-4
- Dataset Size: 121 QA pairs
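
The setup above maps onto the Hugging Face `peft` and `transformers` APIs roughly as follows. This is a minimal sketch assuming standard QLoRA settings; the quantization options, `lora_alpha`, dropout, and target modules are illustrative assumptions and are not taken from the actual training run.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model
import torch

# QLoRA: load the base model with 4-bit quantized weights (assumed settings)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter configuration; r=32 comes from the card, the rest is assumed
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,          # assumed, not from the card
    lora_dropout=0.05,      # assumed, not from the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```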
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Yagna1/yagna-assistant-v3")
tokenizer = AutoTokenizer.from_pretrained("Yagna1/yagna-assistant-v3")

# Ask a question
prompt = "What is your educational background?"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling must be enabled for the temperature setting to take effect
outputs = model.generate(**inputs, max_new_tokens=100, temperature=0.3, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
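
Because the base model is an Instruct (chat) variant, wrapping the question with the tokenizer's chat template may match the training format more closely. This is a sketch assuming the fine-tuning data used Qwen's default chat format; it reuses `model` and `tokenizer` from the snippet above.

```python
# Sketch assuming chat-formatted training data; if plain prompts were used
# during fine-tuning, the raw-prompt example above is the better fit.
messages = [{"role": "user", "content": "What is your educational background?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=100, temperature=0.3, do_sample=True)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```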
## Performance
The model achieves roughly 90% accuracy on its internal test suite.