Update README.md
README.md CHANGED
````diff
@@ -233,9 +233,10 @@ model_inputs = tokenizer([text], return_tensors="pt")
 ```
 
 ## Usage Guidelines
-1. Use the model’s default chat template, which already includes a system prompt.
+1. Use the model’s default chat template, which already includes a system prompt.
 2. We recommend setting temperature to `0.6`.
-3. We ensure the model starts with `Here are my reasoning steps:\n` during all our evaluations. This is implemented in the default chat template.
+3. We ensure the model starts with `Here are my reasoning steps:\n` during all our evaluations. This is implemented in the default chat template.
+4. For multi-turn conversations, intermediate turns (historical model outputs) are expected to contain only the final response, without reasoning steps.
 
 ---
 
````
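Taken together, the updated guidelines map onto a standard `transformers` inference flow. The following is a minimal sketch, not the repository's official example: the model id `your-org/your-model` is a hypothetical placeholder (substitute the actual repository), and `max_new_tokens=1024` is an arbitrary choice; the default chat template, the `Here are my reasoning steps:\n` prefix, and the `0.6` temperature are taken from the guidelines above.

```python
# Minimal usage sketch based on the updated guidelines.
# Assumption: "your-org/your-model" is a hypothetical placeholder, and
# max_new_tokens=1024 is an arbitrary choice.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Guideline 4: historical assistant turns carry only the final response,
# with the reasoning steps stripped out.
messages = [
    {"role": "user", "content": "What is 12 * 7?"},
    {"role": "assistant", "content": "12 * 7 = 84."},  # final answer only
    {"role": "user", "content": "And divided by 3?"},
]

# Guideline 1: rely on the default chat template; it already injects the
# system prompt and (guideline 3) the "Here are my reasoning steps:\n" prefix.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Guideline 2: sample with temperature 0.6.
output_ids = model.generate(
    **model_inputs, do_sample=True, temperature=0.6, max_new_tokens=1024
)
new_tokens = output_ids[0][model_inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

If this turn is appended back into `messages` for a later round, guideline 4 suggests keeping only the final response portion of the decoded output, not the reasoning steps.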