sareena committed
Commit b429379 · verified · 1 Parent(s): 9d71bd3

Update README.md

Files changed (1): README.md (+11 −1)
README.md CHANGED
@@ -74,6 +74,7 @@ This setup preserved general reasoning ability while improving spatial accuracy.
 
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
+import torch
 
 model = AutoModelForCausalLM.from_pretrained("sareena/spatial_lora_mistral")
 tokenizer = AutoTokenizer.from_pretrained("sareena/spatial_lora_mistral")
@@ -84,10 +85,19 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 ```
 
-
 # Prompt Format
+The model is trained on instruction-style input with a spatial reasoning question:
+
+```text
+Q: The couch is to the left of the table. The lamp is on the couch. Where is the lamp in relation to the table?
+```
 
 # Expected Output Format
+The output is a short, natural language spatial answer:
+
+```text
+A: left
+```
 
 # Limitations
 
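The "Q: …" prompt format introduced by this change can be sketched as a small helper that joins spatial facts and a question into the expected input string (`build_prompt` is a hypothetical name for illustration, not part of the repository):

```python
# Hypothetical helper illustrating the instruction-style prompt format
# described in the README diff above; the model itself is not required here.
def build_prompt(facts, question):
    """Join spatial facts and a question into the 'Q: ...' prompt format."""
    return "Q: " + " ".join(facts) + " " + question

prompt = build_prompt(
    ["The couch is to the left of the table.", "The lamp is on the couch."],
    "Where is the lamp in relation to the table?",
)
print(prompt)
```

The resulting string can then be passed to `tokenizer(...)` and `model.generate(...)` as in the snippet above; the model is expected to answer with a short phrase such as `A: left`.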