---
license: apache-2.0
datasets:
- ariaattarml/verified-reasoning-cot-gpqa-mmlu-pro
base_model:
- Qwen/Qwen2.5-72B-Instruct
---
# S0 Series Models

## Overview
The S0 v0.1 model by [TensorStax](https://tensorstax.com) is an early preview in a line of open reasoning models, trained with process supervision on synthetic data. These models are designed to excel at general reasoning tasks while keeping their thought processes transparent.
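
Process supervision attaches feedback to each intermediate reasoning step rather than only to the final answer. As a rough sketch of what step-labeled data can look like (the schema below is hypothetical, not the actual training format used for S0), consider:

```python
# Hypothetical step-labeled training example illustrating process
# supervision: every intermediate step carries its own correctness
# label instead of a single label on the final answer.
example = {
    "question": "What is 17 * 24?",
    "steps": [
        {"text": "17 * 24 = 17 * 20 + 17 * 4", "label": 1},  # valid decomposition
        {"text": "17 * 20 = 340", "label": 1},                # correct
        {"text": "17 * 4 = 78", "label": 0},                  # arithmetic error
    ],
    "answer": "408",
}

# A process reward model trained on such labels can tell the policy
# *where* a chain of thought goes wrong, not just that it did.
```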

## Future Specialized Releases
We plan to expand the S0 series with specialized variants optimized for specific domains:

- Code Generation
- Query Generation
- Long-Horizon Agency
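
## Usage

The example below loads the model with Hugging Face Transformers and streams a completion for a multiple-choice reasoning question: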


```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch

model_name = "ariaattarml/TensorStax-72B-S0-0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load in half precision and shard the weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Alpaca-style prompt template with Instruction / Input / Response slots.
alpaca_prompt = """
### Instruction:
{}

### Input:
{}

### Response:
{}"""

system_prompt = """
You are an intelligent reasoning AI assistant
"""
user_prompt = """
As of 2017, how many of the world's 1-year-old children today have been vaccinated against some disease?

Options: ['30%', '60%', '10%', '90%', '80%', '40%', '100%', '50%', 'N/A', 'N/A']
"""

# Fill the template; the Response slot is left empty for the model to complete.
inputs = tokenizer(
    [
        alpaca_prompt.format(
            system_prompt,
            user_prompt,
            "",
        )
    ],
    return_tensors="pt",
).to(model.device)

# Stream decoded tokens to stdout as they are generated.
text_streamer = TextStreamer(tokenizer)

_ = model.generate(
    **inputs,
    streamer=text_streamer,
    max_new_tokens=8000,
    do_sample=True,  # sampling must be enabled for temperature to apply
    temperature=1.0,
)
```
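
Note that `do_sample=True` is required for `temperature` to have any effect; greedy decoding ignores it. A 72B-parameter model in float16 also needs roughly 144 GB of accelerator memory, so `device_map="auto"` will shard the weights across all visible GPUs.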