LoRA Adapter for SFT

This is a LoRA (Low-Rank Adaptation) adapter trained using supervised fine-tuning (SFT).

Base Model

  • Base Model: unsloth/Qwen3-8B
  • Adapter Type: LoRA
  • Task: Supervised Fine-Tuning

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-8B")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-8B")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "thejaminator/aligned_all_claude_10000-20251008")
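
The loaded model behaves like a standard causal LM. The snippet below continues from the code above and is only a minimal generation sketch; the prompt is illustrative, and it assumes the tokenizer ships Qwen3's built-in chat template:

# Generate a response (continues from the snippet above)
messages = [{"role": "user", "content": "Explain LoRA in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))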

Training Details

This adapter was trained using supervised fine-tuning on conversation data to improve the model's ability to follow instructions and generate helpful responses.
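
The exact dataset and hyperparameters are not published here. As a loose sketch of how a comparable LoRA SFT run could be set up with TRL's SFTTrainer (the dataset, rank, alpha, and target modules below are illustrative assumptions, not the settings used for this adapter):

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Illustrative public conversation dataset; not the data used for this adapter
dataset = load_dataset("trl-lib/Capybara", split="train")

# Example LoRA settings; the actual rank/alpha/target modules for this adapter are unknown
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="unsloth/Qwen3-8B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="qwen3-8b-sft-lora"),
    peft_config=peft_config,
)
trainer.train()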
