# Phi-2 LoRA Adapter → NARS Reasoning (A/B/C Inference)
This repository contains a LoRA adapter for microsoft/phi-2, fine-tuned to perform structured reasoning in the style of the Non-Axiomatic Reasoning System (NARS).
The model reads a set of premises and a claim, then decides whether the claim is True (A), False (B), or Uncertain (C) given those premises.
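For illustration, a record in this style pairs premises, a claim, and a letter label. The `context` and `question` field names match the usage code below; the label field name here is a guess, so check the dataset card for the exact schema:

```python
# Hypothetical record in the NARS-Reasoning style; the "answer"
# field name is an assumption about the dataset schema.
example = {
    "context": "All whales are mammals.",
    "question": "All mammals are whales.",
    "answer": "C",  # the premise neither proves nor refutes the claim
}
```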
## Model Summary
| Field | Value |
|---|---|
| Base model | microsoft/phi-2 |
| Adapter type | LoRA (PEFT) |
| Quantization | 4-bit (NF4), bfloat16 compute |
| Target modules | q_proj, k_proj, v_proj, o_proj |
| LoRA config | r=16, alpha=32, dropout=0.05 |
| Dataset | MinaGabriel/NARS-Reasoning-v0.1 |
| Task | 3-way reasoning classification (A/B/C) |
| Prompt masking | Prompts masked, answer-only supervision |
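For reference, here is a minimal sketch of a training-time setup consistent with the table above. The quantization and LoRA values are taken directly from the table; everything else (e.g. `task_type`, omitted optional flags) is an assumption, not a confirmed training recipe:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization with bfloat16 compute, per the table.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", quantization_config=bnb, device_map="auto"
)

# LoRA hyperparameters per the table; task_type is an assumption.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
```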
## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
BASE = "microsoft/phi-2"
ADAPTER = "MinaGabriel/phi2-2.7b-lora-nars-adapter"
device = "cuda" if torch.cuda.is_available() else "cpu"
tok = AutoTokenizer.from_pretrained(BASE, use_fast=True)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
# The adapter was trained against a 4-bit NF4 base; loading in fp16 here
# is fine for inference.
base = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto", torch_dtype=torch.float16)
# device_map="auto" already places the weights, so don't call .to(device)
# on the model afterwards.
model = PeftModel.from_pretrained(base, ADAPTER).eval()
PROMPT = """Premises:
{context}
Claim:
{question}
Choose one option and output its LETTER only.
A) True
B) False
C) Uncertain
Answer:"""
def build_prompt(example):
    return PROMPT.format(context=example["context"], question=example["question"])
prompt = build_prompt({
    "context": "All mammals are warm-blooded. All whales are mammals. Moby is a whale.",
    "question": "Moby is warm-blooded."
})
print("Prompt:\n", prompt)
inputs = tok(prompt, return_tensors="pt").to(device)
out = model.generate(**inputs, max_new_tokens=1, pad_token_id=tok.pad_token_id)
# Decode only the newly generated token, not the echoed prompt.
answer = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print("Model Output:\n", answer)  # Answer: A
```