# gemma-2-9b-linkscout-v1
This model is a fine-tuned version of unsloth/gemma-2-9b-it-bnb-4bit for fact-checking and misinformation detection.
## Model Description
- Base Model: unsloth/gemma-2-9b-it-bnb-4bit
- Fine-tuned for: Fact-checking and bias detection
- Training Method: LoRA/QLoRA with Unsloth
- Merged: Yes (adapter merged with base model)
## Usage
```python
from unsloth import FastLanguageModel

# Load the merged model in 4-bit to fit on consumer GPUs
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Adi-Evolve/gemma-2-9b-linkscout-v1",
    max_seq_length=2048,
    dtype=None,  # auto-detect: bfloat16 on Ampere+ GPUs, float16 otherwise
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

# Build a Gemma-2 chat prompt and generate the analysis
prompt = """<start_of_turn>user
Analyze this article for misinformation: [ARTICLE TEXT]
<end_of_turn>
<start_of_turn>model
"""
inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
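The raw turn markers in the prompt follow the Gemma-2 chat template. A minimal helper for building such prompts (a sketch; `build_gemma_prompt` is a hypothetical name, and `tokenizer.apply_chat_template` would be the idiomatic alternative):

```python
def build_gemma_prompt(article_text: str) -> str:
    """Wrap article text in the Gemma-2 chat turn markers.

    Hypothetical helper using plain string formatting; in practice
    tokenizer.apply_chat_template produces the same structure.
    """
    return (
        "<start_of_turn>user\n"
        f"Analyze this article for misinformation: {article_text}\n"
        "<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("[ARTICLE TEXT]")
```

Ending the prompt with an open `<start_of_turn>model` turn cues the model to continue as the assistant rather than echo the user message.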
## Training Details
- Merged from LoRA adapter
- Original base: unsloth/gemma-2-9b-it-bnb-4bit
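The merge step described above is typically done with Unsloth's save helper. A sketch, assuming the adapter was trained with Unsloth's LoRA setup (`lora_adapter_dir` and the output directory are illustrative, not the actual training configuration):

```python
from unsloth import FastLanguageModel

# Reload the trained LoRA adapter on top of the 4-bit base model
# ("lora_adapter_dir" is a placeholder, not the actual training path)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="lora_adapter_dir",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)

# Merge the adapter into the base weights and save as 16-bit safetensors,
# producing a standalone model that no longer needs the adapter
model.save_pretrained_merged(
    "gemma-2-9b-linkscout-v1",
    tokenizer,
    save_method="merged_16bit",
)
```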
## Limitations
This model should be used as a tool to assist human fact-checkers, not as a sole source of truth: its analyses can be incorrect or biased and should be verified against primary sources.