Safetensors
sailvl
custom_code
Blue-skyyy committed on
Commit 4ea39ff · verified · 1 Parent(s): c752e53

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,11 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ assets/figures/data.png filter=lfs diff=lfs merge=lfs -text
+ assets/figures/framework.png filter=lfs diff=lfs merge=lfs -text
+ assets/figures/mtpt_scaling.png filter=lfs diff=lfs merge=lfs -text
+ assets/figures/performance.png filter=lfs diff=lfs merge=lfs -text
+ assets/figures/performance_rl.png filter=lfs diff=lfs merge=lfs -text
+ assets/figures/performance_table_2b.png filter=lfs diff=lfs merge=lfs -text
+ assets/figures/performance_table_8b.png filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,187 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ ---
+
+ # SAIL-VL2
+
+ <div align="center">
+ <img src="assets/logo/logo_with_name.jpeg" width="80%" alt="SAIL-VL2 Logo">
+ </div>
+
+ <font size=3><div align='center' >
+ [[📖 Technical Report]()]
+ [[🤗 SAIL-VL2-2B](https://huggingface.co/BytedanceDouyinContent/SAIL-VL2-2B)]
+ [[🤗 SAIL-VL2-8B](https://huggingface.co/BytedanceDouyinContent/SAIL-VL2-8B)]
+ [[🤗 SAIL-VL2-2B-Thinking](https://huggingface.co/BytedanceDouyinContent/SAIL-VL2-2B-Thinking)]
+ [[🤗 SAIL-VL2-8B-Thinking](https://huggingface.co/BytedanceDouyinContent/SAIL-VL2-8B-Thinking)]
+ [[💻 Github](https://github.com/BytedanceDouyinContent/SAIL-VL2)]
+ </div></font>
+
+ We are excited to introduce **SAIL-VL2** 🚀, a state-of-the-art vision-language model that outperforms existing models of comparable scale across a broad range of vision-language tasks.
+
+ ## 🔥 Updates
+
+ - **`2025.09.08`** 🌟 The **SAIL-VL2 Technical Report** is now available at [arxiv]().
+
+
+ ## 🌟 Highlights
+ - SAIL-VL2 is powerful and efficient, achieving top results among models under 2B parameters.
+ - SAIL-VL2-Thinking boosts complex reasoning, matching models with far larger parameter counts.
+ - SAIL-VL2 excels at fine-grained visual tasks, going beyond models of similar scale.
+
+ <div align="center">
+ <img src="assets/figures/performance.png" width="100%" alt="SAIL-VL2 Performance">
+ </div>
+
+ ## Model Architecture
+
+ | Architecture | ViT | LLM | Adapter | Token Merge | Resolution |
+ | --- | --- | --- | --- | --- | --- |
+ | [🤗SAIL-VL2-2B](https://huggingface.co/BytedanceDouyinContent/SAIL-VL2-2B) | [🤗SAILViT-Huge](https://huggingface.co/BytedanceDouyinContent/SAILViT-Huge-600M-448px) | [🤗Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | 2-layer MLP | 2x2 | 448x448xN |
+ | [🤗SAIL-VL2-8B](https://huggingface.co/BytedanceDouyinContent/SAIL-VL2-8B) | [🤗SAILViT-Huge](https://huggingface.co/BytedanceDouyinContent/SAILViT-Huge-600M-448px) | [🤗Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) | 2-layer MLP | 2x2 | 448x448xN |
+ | [🤗SAIL-VL2-2B-Thinking](https://huggingface.co/BytedanceDouyinContent/SAIL-VL2-2B-Thinking) | [🤗SAILViT-Huge](https://huggingface.co/BytedanceDouyinContent/SAILViT-Huge-600M-448px) | [🤗Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | 2-layer MLP | 2x2 | 448x448xN |
+ | [🤗SAIL-VL2-8B-Thinking](https://huggingface.co/BytedanceDouyinContent/SAIL-VL2-8B-Thinking) | [🤗SAILViT-Huge](https://huggingface.co/BytedanceDouyinContent/SAILViT-Huge-600M-448px) | [🤗Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) | 2-layer MLP | 2x2 | 448x448xN |
+
+
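+ With the 2x2 token merge, each 448x448 tile is first encoded into ViT patch tokens and then pixel-shuffled down before being passed to the LLM. A minimal back-of-the-envelope sketch of the per-tile token budget, assuming the `image_size=448`, `patch_size=14`, and `downsample_ratio=0.5` values found in this repository's `config.json`:
+
+ ```python
+ # Illustrative arithmetic only; the actual merging happens inside the model code.
+ image_size = 448        # force_image_size / vision_config.image_size
+ patch_size = 14         # vision_config.patch_size
+ downsample_ratio = 0.5  # 2x2 pixel-shuffle token merge
+
+ patches_per_side = image_size // patch_size            # 32
+ vit_tokens = patches_per_side ** 2                     # 1024 patch tokens per tile
+ llm_tokens = int(vit_tokens * downsample_ratio ** 2)   # 256 visual tokens per tile
+ print(patches_per_side, vit_tokens, llm_tokens)        # 32 1024 256
+ ```
+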
+ ## 🎬 Quick Start
+
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModel, AutoProcessor
+ from PIL import Image
+
+
+ model_path = "your model path"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+ processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
+ device = torch.cuda.current_device()
+ model = AutoModel.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.bfloat16).to(device)
+
+ print("##### with images")
+ # Optional thinking-style prompt: the model reasons inside <think> </think> and answers in \boxed{}.
+ cot_prompt = r"You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in \boxed{}."
+ image_path = 'your image path'
+ messages = [
+     {"role": "user", "content": [{"type": "image", "image": image_path},
+                                  {"type": "text", "text": "describe the image " + cot_prompt}]}
+ ]
+ text = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+
+ image = Image.open(image_path)
+ inputs = processor(images=image, text=text, return_tensors="pt", padding=True, truncation=True).to(model.device).to(torch.bfloat16)
+
+ generated_ids = model.generate(**inputs, max_new_tokens=512)
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ response = response.split('<|im_end|>')[0].strip()
+ print(response)
+
+
+ print("##### without images")
+ cot_prompt = r"You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in \boxed{}."
+ # The text prompt below asks, in Chinese, "What is the capital of China?"
+ messages = [
+     {
+         "role": "user",
+         "content": [{"type": "text", "text": "中国的首都是哪里? " + cot_prompt}]
+     }
+ ]
+ text = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+ inputs = processor(images=None, text=text, return_tensors="pt", padding=True, truncation=True).to(model.device).to(torch.bfloat16)
+ generated_ids = model.generate(**inputs, max_new_tokens=512)
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ response = response.split('<|im_end|>')[0].strip()
+ print(response)
+
+ ```
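+
+ A small optional variation, using the standard `transformers` streaming API, prints tokens as they are generated instead of decoding at the end; it reuses the `tokenizer`, `model`, and `inputs` objects created above:
+
+ ```python
+ from transformers import TextStreamer
+
+ # Stream the response to stdout token-by-token.
+ streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+ _ = model.generate(**inputs, max_new_tokens=512, streamer=streamer)
+ ```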
+
+ ## 👀 Introduction
+ - **SAIL-VL2 is powerful yet efficient:** Trained on 776B tokens, SAIL-VL2 has been validated across 106 datasets, achieving state-of-the-art results on a broad spectrum of influential benchmarks at the 2B-parameter scale. Remarkably, even without specialized prompting, the base SAIL-VL2 model delivers highly competitive performance on challenging reasoning benchmarks such as MMMU and MathVista, demonstrating strong out-of-the-box capabilities.
+
+ - **SAIL-VL2 as a deep thinker:** Many real-world tasks demand sophisticated reasoning and multi-step thought processes, which remain challenging for standard vision-language models. To address this, we develop SAIL-VL2-Thinking, a specialized variant trained with advanced Chain-of-Thought (CoT) and reinforcement learning (RL) strategies. This design substantially improves performance on complex reasoning benchmarks, often matching or even surpassing models with far larger parameter counts, thereby setting a new standard for efficient architectures in high-level reasoning.
+
+ - **SAIL-VL2 perceives with clarity:** Fine-grained visual understanding is a critical challenge for multimodal models. SAIL-VL2 delivers high-fidelity perception in tasks such as OCR, high-resolution document layout analysis, and complex chart interpretation, achieving detailed visual grounding beyond models of similar scale.
+
+
+ <div align="center">
+ <img src="assets/figures/framework.png" width="100%" alt="SAIL-VL2 Framework">
+ <i> Overview of the SAIL-VL2 framework. The architecture is composed of a vision encoder that aligns visual inputs into the representation space of the LLM. A lightweight adapter further transforms visual embeddings into tokenized representations, which are jointly processed with linguistic embeddings for multimodal reasoning and prediction. SAIL-VL2 accommodates multiple LLM backbones, ensuring flexibility and scalability across model configurations.</i>
+ </div>
+
+
+ ## 📚 Training Strategy
+ ### 🌟 Data Construction
+
+ <div align="center">
+ <img src="assets/figures/data.png" width="100%" alt="SAIL-VL2 Data">
+ <i> Data construction pipeline for SAIL-VL2 training. High-quality multimodal corpora are constructed by curating and filtering open-source datasets and by generating synthetic data, with both components systematically organized to meet the requirements of different training stages.</i>
+ </div>
+
+ ### 🌟 Pre-Train
+
+ - **Basic Multimodal Pre-Training:** establishes SAIL-VL2's multimodal alignment by connecting SAIL-ViT to the LLM through a randomly initialized MLP adapter, using 64M samples, AdaLRS, and a batch size of 2048.
+
+ - **Multi-task Pre-Training:** strengthens SAIL-VL2's visual understanding and instruction-following abilities; all parameters are unfrozen, instruction-tuning data is added, 180M samples are used, and AdaLRS is skipped.
+
+ <div align="center">
+ <img src="assets/figures/mtpt_scaling.png" width="100%" alt="SAIL-VL2 MTPT Scaling">
+ <i> Scaling curves of SAIL-VL2-2B during the multi-task pre-training stage. Results are reported on overall benchmarks, natural-scene VQA datasets, and OCR VQA tasks. 'BMK Score' denotes the average benchmark score.</i>
+ </div>
+
+
+ ### 🌟 Post-Train
+
+ - **Basic Supervised Fine-Tuning:** proceeds in four phases; homogeneous models are then merged with Model Soup.
+
+ - **LongCoT Supervised Fine-Tuning:** enhances the model's step-by-step reasoning capabilities for complex problems.
+
+ - **RL with Verifiable Rewards:** refines the model by optimizing it against a reward system with two primary objectives: correctness of the final answer and adherence to the specified output format (an illustrative sketch follows this list).
+
+ - **Think-Fusion Supervised Fine-Tuning:** enhances the model's reasoning capabilities while maintaining its broad general understanding.
+
+ - **RL with a Mixed Reward System:** a final RL stage that combines multiple reward signals to further strengthen the model's reasoning capabilities.
+
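+ To make the verifiable-reward objective concrete, here is a deliberately simplified, illustrative sketch (not the actual SAIL-VL2 training code) of a rule-based reward that scores exactly the two objectives named above: a correctly formatted `<think> </think>` + `\boxed{}` response, and a final answer that matches the reference:
+
+ ```python
+ import re
+
+ def verifiable_reward(response: str, ground_truth: str) -> float:
+     """Toy rule-based reward: 0.5 for format adherence + 0.5 for answer correctness."""
+     # Format check: a <think>...</think> block followed by a \boxed{...} answer.
+     format_ok = bool(re.search(r"<think>.*?</think>.*?\\boxed\{.*?\}", response, re.S))
+     # Correctness check: compare the last boxed answer with the reference (exact match here).
+     boxed = re.findall(r"\\boxed\{(.*?)\}", response)
+     answer_ok = bool(boxed) and boxed[-1].strip() == ground_truth.strip()
+     return 0.5 * float(format_ok) + 0.5 * float(answer_ok)
+ ```
+
+ The production reward system is described in the technical report; this sketch only illustrates the "right answer, right format" shape of the signal.
+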
+ ## 📈 Experimental Results
+ ### 🌟 Performance of the 2B series
+ <div align="center">
+ <img src="assets/figures/performance_table_2b.png" width="100%" alt="SAIL-VL2 Performance">
+ <i> Overall comparison of the SAIL-VL2 series and existing open-source MLLMs (<4B).</i>
+ </div>
+
+ ### 🌟 Performance of the 8B series
+ <div align="center">
+ <img src="assets/figures/performance_table_8b.png" width="100%" alt="SAIL-VL2 Performance">
+ <i> Overall comparison of the SAIL-VL2 series with existing open-source 8B MLLMs and closed-source models.</i>
+ </div>
+
+ ### 🌟 Performance of Thinking-mode models
+ <div align="center">
+ <img src="assets/figures/performance_rl.png" width="100%" alt="SAIL-VL2 Performance">
+ <i> Evaluation results on OpenCompass multimodal reasoning benchmarks.</i>
+ </div>
+
+ ## 🙏 Acknowledgements
+
+ Our model is built upon numerous outstanding open-source projects, and we are grateful for their contributions. We extend special thanks to the InternVL, Qwen, and Apple teams for their great base models, to the BAAI team (Infinity-MM) and the MAmmoTH-VL team (MAmmoTH-VL-Instruction-12M) for their generous data releases, and to the OpenCompass team for their valuable benchmarks.
+
+ ## ✒️ Citation
+
+ If you find our work helpful for your research, please consider citing it.
+
+ ```bibtex
+ @article{dong2025scalable,
+   title={Scalable vision language model training via high quality data curation},
+   author={Dong, Hongyuan and Kang, Zijian and Yin, Weijie and Liang, Xiao and Feng, Chao and Ran, Jiao},
+   journal={arXiv preprint arXiv:2501.05952},
+   year={2025}
+ }
+ ```
+
+
+ ## 📜 License
+
+ This project is licensed under the [Apache License 2.0](LICENSE).
+
+ ## 📧 Contact
+
+ If you have any questions, please feel free to contact us: [email protected]
+
added_tokens.json ADDED
@@ -0,0 +1,37 @@
1
+ {
2
+ "</box>": 151677,
3
+ "</img>": 151670,
4
+ "</quad>": 151673,
5
+ "</ref>": 151675,
6
+ "</think>": 151668,
7
+ "</tool_call>": 151658,
8
+ "</tool_response>": 151666,
9
+ "<IMG_CONTEXT>": 151671,
10
+ "<box>": 151676,
11
+ "<img>": 151669,
12
+ "<quad>": 151672,
13
+ "<ref>": 151674,
14
+ "<think>": 151667,
15
+ "<tool_call>": 151657,
16
+ "<tool_response>": 151665,
17
+ "<|box_end|>": 151649,
18
+ "<|box_start|>": 151648,
19
+ "<|endoftext|>": 151643,
20
+ "<|file_sep|>": 151664,
21
+ "<|fim_middle|>": 151660,
22
+ "<|fim_pad|>": 151662,
23
+ "<|fim_prefix|>": 151659,
24
+ "<|fim_suffix|>": 151661,
25
+ "<|im_end|>": 151645,
26
+ "<|im_start|>": 151644,
27
+ "<|image_pad|>": 151655,
28
+ "<|object_ref_end|>": 151647,
29
+ "<|object_ref_start|>": 151646,
30
+ "<|quad_end|>": 151651,
31
+ "<|quad_start|>": 151650,
32
+ "<|repo_name|>": 151663,
33
+ "<|video_pad|>": 151656,
34
+ "<|vision_end|>": 151653,
35
+ "<|vision_pad|>": 151654,
36
+ "<|vision_start|>": 151652
37
+ }
assets/figures/data.png ADDED

Git LFS Details

  • SHA256: 680767d798554f57e283d4ffb8a1c0b233339b5613a99dabc4800230cdf9b86c
  • Pointer size: 131 Bytes
  • Size of remote file: 601 kB
assets/figures/framework.png ADDED

Git LFS Details

  • SHA256: c45451e53665181fbeea2d4136cf5e358383bd1ab5e73b5038e4a85fb9fb2e9b
  • Pointer size: 132 Bytes
  • Size of remote file: 1.07 MB
assets/figures/mtpt_scaling.png ADDED

Git LFS Details

  • SHA256: 0a8699710333e3c75b4aaf3fc02b1ead69f76536c9dd9aa3e8a50b6eaebceb3d
  • Pointer size: 131 Bytes
  • Size of remote file: 413 kB
assets/figures/performance.png ADDED

Git LFS Details

  • SHA256: fa3d1d8873c41466ac41a218b1ada9cdb74797108c5fcb92a1730f11edcdfcf4
  • Pointer size: 132 Bytes
  • Size of remote file: 1.21 MB
assets/figures/performance_rl.png ADDED

Git LFS Details

  • SHA256: 2194d36c86c6930dd60cf9733bf2145a7973e02a593a38974ef444c04d82e1f5
  • Pointer size: 131 Bytes
  • Size of remote file: 412 kB
assets/figures/performance_table_2b.png ADDED

Git LFS Details

  • SHA256: 1fc940b73598a41e5d9c7f7e987c8292054bb3ed038817683e2b916c8179d25e
  • Pointer size: 131 Bytes
  • Size of remote file: 583 kB
assets/figures/performance_table_8b.png ADDED

Git LFS Details

  • SHA256: b5c5a23aae2a58648f3f0f822f30404547563e32e1208892010d674aa39de1da
  • Pointer size: 131 Bytes
  • Size of remote file: 595 kB
assets/logo/logo_with_name.jpeg ADDED
chat_template.jinja ADDED
@@ -0,0 +1,30 @@
1
+ {%- for message in messages -%}
2
+ {%- if loop.first and messages[0]['role'] != 'system' -%}
3
+ {{'<|im_start|>system\n你是由抖音内容理解组开发的多模态大模型,英文名叫UniVL, 是一个有用无害的人工智能助手。<|im_end|>'}}
4
+ {%- endif -%}
5
+ {%- if message['role'] == 'system' -%}
6
+ {{'<|im_system|>'}}
7
+ {%- endif -%}
8
+ {%- if message['role'] == 'user' -%}
9
+ {{'<|im_start|>user\n'}}
10
+ {%- endif -%}
11
+ {%- if message['role'] == 'assistant' -%}
12
+ {{'<|im_assistant|>'}}
13
+ {%- endif -%}
14
+
15
+ {%- if message['content'] is string -%}
16
+ {{- message['content'] + '<|im_end|>' -}}
17
+ {%- else -%}
18
+ {%- for content in message['content'] -%}
19
+ {%- if content['type'] == 'image' or 'image' in content or 'image_url' in content -%}
20
+ {{'<image>\n'}}
21
+ {%- else -%}
22
+ {{content['text']}}
23
+ {%- endif -%}
24
+ {%- endfor -%}
25
+ {{'<|im_end|>'}}
26
+ {%- endif -%}
27
+ {%- endfor -%}
28
+ {%- if add_generation_prompt -%}
29
+ {{'<|im_start|>assistant\n'}}
30
+ {%- endif -%}
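This is the chat template that `processor.apply_chat_template` in the README relies on. As a quick, illustrative way to inspect the exact prompt string it produces (assuming only the `jinja2` package; in normal use you would simply call `apply_chat_template`), the file can be rendered directly:

```python
from jinja2 import Template

# Render chat_template.jinja on a minimal conversation to see the raw prompt text.
template = Template(open("chat_template.jinja").read())
messages = [{"role": "user", "content": [{"type": "text", "text": "Hello!"}]}]
print(template.render(messages=messages, add_generation_prompt=True))
# Expected shape: the default system message, then '<|im_start|>user\nHello!<|im_end|>',
# then the generation prompt '<|im_start|>assistant\n'.
```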
config.json ADDED
@@ -0,0 +1,193 @@
1
+ {
2
+ "_commit_hash": null,
3
+ "architectures": [
4
+ "SAILVLModel"
5
+ ],
6
+ "auto_map": {
7
+ "AutoConfig": "configuration_sailvl.SAILVLConfig",
8
+ "AutoModel": "modeling_sailvl.SAILVLModel",
9
+ "AutoModelForCausalLM": "modeling_sailvl.SAILVLModel"
10
+ },
11
+ "downsample_ratio": 0.5,
12
+ "dynamic_image_size": true,
13
+ "force_image_size": 448,
14
+ "llm_config": {
15
+ "_name_or_path": "/tmp/huggingface_cache/Qwen3-1.7B-Instruct",
16
+ "add_cross_attention": false,
17
+ "architectures": [
18
+ "Qwen3ForCausalLM"
19
+ ],
20
+ "attn_implementation": "flash_attention_2",
21
+ "attention_bias": false,
22
+ "attention_dropout": 0.0,
23
+ "bad_words_ids": null,
24
+ "begin_suppress_tokens": null,
25
+ "bos_token_id": 151643,
26
+ "chunk_size_feed_forward": 0,
27
+ "cross_attention_hidden_size": null,
28
+ "decoder_start_token_id": null,
29
+ "diversity_penalty": 0.0,
30
+ "do_sample": false,
31
+ "early_stopping": false,
32
+ "encoder_no_repeat_ngram_size": 0,
33
+ "eos_token_id": 151645,
34
+ "exponential_decay_length_penalty": null,
35
+ "finetuning_task": null,
36
+ "forced_bos_token_id": null,
37
+ "forced_eos_token_id": null,
38
+ "head_dim": 128,
39
+ "hidden_act": "silu",
40
+ "hidden_size": 2048,
41
+ "id2label": {
42
+ "0": "LABEL_0",
43
+ "1": "LABEL_1"
44
+ },
45
+ "initializer_range": 0.02,
46
+ "intermediate_size": 6144,
47
+ "is_decoder": false,
48
+ "is_encoder_decoder": false,
49
+ "label2id": {
50
+ "LABEL_0": 0,
51
+ "LABEL_1": 1
52
+ },
53
+ "length_penalty": 1.0,
54
+ "max_length": 20,
55
+ "max_position_embeddings": 40960,
56
+ "max_window_layers": 28,
57
+ "min_length": 0,
58
+ "model_type": "qwen3",
59
+ "no_repeat_ngram_size": 0,
60
+ "num_attention_heads": 16,
61
+ "num_beam_groups": 1,
62
+ "num_beams": 1,
63
+ "num_hidden_layers": 28,
64
+ "num_key_value_heads": 8,
65
+ "num_return_sequences": 1,
66
+ "output_attentions": false,
67
+ "output_hidden_states": false,
68
+ "output_scores": false,
69
+ "pad_token_id": 151643,
70
+ "prefix": null,
71
+ "problem_type": null,
72
+ "pruned_heads": {},
73
+ "remove_invalid_values": false,
74
+ "repetition_penalty": 1.0,
75
+ "return_dict": true,
76
+ "return_dict_in_generate": false,
77
+ "rms_norm_eps": 1e-06,
78
+ "rope_scaling": null,
79
+ "rope_theta": 1000000,
80
+ "sep_token_id": null,
81
+ "sliding_window": null,
82
+ "suppress_tokens": null,
83
+ "task_specific_params": null,
84
+ "temperature": 1.0,
85
+ "tf_legacy_loss": false,
86
+ "tie_encoder_decoder": false,
87
+ "tie_word_embeddings": true,
88
+ "tokenizer_class": null,
89
+ "top_k": 50,
90
+ "top_p": 1.0,
91
+ "torch_dtype": "bfloat16",
92
+ "torchscript": false,
93
+ "transformers_version": "4.51.0",
94
+ "typical_p": 1.0,
95
+ "use_bfloat16": false,
96
+ "use_cache": true,
97
+ "use_sliding_window": false,
98
+ "vocab_size": 151936
99
+ },
100
+ "max_dynamic_patch": 12,
101
+ "min_dynamic_patch": 1,
102
+ "model_type": "sailvl",
103
+ "pad2square": false,
104
+ "ps_version": "v2",
105
+ "select_layer": -1,
106
+ "template": "univl-chat",
107
+ "torch_dtype": "bfloat16",
108
+ "transformers_version": null,
109
+ "use_backbone_lora": 0,
110
+ "use_llm_lora": 0,
111
+ "use_thumbnail": true,
112
+ "vision_config": {
113
+ "add_cross_attention": false,
114
+ "architectures": [
115
+ "SAILViTModel"
116
+ ],
117
+ "attention_dropout": 0.0,
118
+ "auto_map": {
119
+ "AutoConfig": "configuration_sailvit.SAILViTConfig",
120
+ "AutoModel": "modeling_sailvit.SAILViTModel"
121
+ },
122
+ "bad_words_ids": null,
123
+ "begin_suppress_tokens": null,
124
+ "bos_token_id": null,
125
+ "chunk_size_feed_forward": 0,
126
+ "cross_attention_hidden_size": null,
127
+ "decoder_start_token_id": null,
128
+ "diversity_penalty": 0.0,
129
+ "do_sample": false,
130
+ "early_stopping": false,
131
+ "encoder_no_repeat_ngram_size": 0,
132
+ "eos_token_id": null,
133
+ "exponential_decay_length_penalty": null,
134
+ "finetuning_task": null,
135
+ "forced_bos_token_id": null,
136
+ "forced_eos_token_id": null,
137
+ "hidden_size": 1536,
138
+ "id2label": {
139
+ "0": "LABEL_0",
140
+ "1": "LABEL_1"
141
+ },
142
+ "image_size": 448,
143
+ "intermediate_size": 4096,
144
+ "is_decoder": false,
145
+ "is_encoder_decoder": false,
146
+ "label2id": {
147
+ "LABEL_0": 0,
148
+ "LABEL_1": 1
149
+ },
150
+ "length_penalty": 1.0,
151
+ "max_length": 20,
152
+ "min_length": 0,
153
+ "model_type": "sailvit",
154
+ "no_repeat_ngram_size": 0,
155
+ "num_attention_heads": 12,
156
+ "num_beam_groups": 1,
157
+ "num_beams": 1,
158
+ "num_channels": 3,
159
+ "num_hidden_layers": 24,
160
+ "num_return_sequences": 1,
161
+ "output_attentions": false,
162
+ "output_hidden_states": false,
163
+ "output_scores": false,
164
+ "pad_token_id": null,
165
+ "patch_size": 14,
166
+ "prefix": null,
167
+ "problem_type": null,
168
+ "projection_dropout": 0.0,
169
+ "pruned_heads": {},
170
+ "qkv_bias": false,
171
+ "remove_invalid_values": false,
172
+ "repetition_penalty": 1.0,
173
+ "return_dict": true,
174
+ "return_dict_in_generate": false,
175
+ "rms_norm_eps": 1e-05,
176
+ "sep_token_id": null,
177
+ "suppress_tokens": null,
178
+ "task_specific_params": null,
179
+ "temperature": 1.0,
180
+ "tf_legacy_loss": false,
181
+ "tie_encoder_decoder": false,
182
+ "tie_word_embeddings": true,
183
+ "tokenizer_class": null,
184
+ "top_k": 50,
185
+ "top_p": 1.0,
186
+ "torch_dtype": "bfloat16",
187
+ "torchscript": false,
188
+ "transformers_version": "4.51.0",
189
+ "typical_p": 1.0,
190
+ "use_bfloat16": false,
191
+ "use_bias": false
192
+ }
193
+ }
configuration_qwen3.py ADDED
@@ -0,0 +1,212 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 The Qwen team, Alibaba Group and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """Qwen3 model configuration"""
16
+
17
+ from transformers.configuration_utils import PretrainedConfig
18
+ from transformers.modeling_rope_utils import rope_config_validation
19
+ from transformers.utils import logging
20
+
21
+
22
+ logger = logging.get_logger(__name__)
23
+
24
+
25
+ class Qwen3Config(PretrainedConfig):
26
+ r"""
27
+ This is the configuration class to store the configuration of a [`Qwen3Model`]. It is used to instantiate a
28
+ Qwen3 model according to the specified arguments, defining the model architecture. Instantiating a configuration
29
+ with the defaults will yield a similar configuration to that of
30
+ Qwen3-8B [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
31
+
32
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
33
+ documentation from [`PretrainedConfig`] for more information.
34
+
35
+
36
+ Args:
37
+ vocab_size (`int`, *optional*, defaults to 151936):
38
+ Vocabulary size of the Qwen3 model. Defines the number of different tokens that can be represented by the
39
+ `inputs_ids` passed when calling [`Qwen3Model`]
40
+ hidden_size (`int`, *optional*, defaults to 4096):
41
+ Dimension of the hidden representations.
42
+ intermediate_size (`int`, *optional*, defaults to 22016):
43
+ Dimension of the MLP representations.
44
+ num_hidden_layers (`int`, *optional*, defaults to 32):
45
+ Number of hidden layers in the Transformer encoder.
46
+ num_attention_heads (`int`, *optional*, defaults to 32):
47
+ Number of attention heads for each attention layer in the Transformer encoder.
48
+ num_key_value_heads (`int`, *optional*, defaults to 32):
49
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
50
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
51
+ `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
52
+ converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
53
+ by meanpooling all the original heads within that group. For more details checkout [this
54
+ paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `32`.
55
+ head_dim (`int`, *optional*, defaults to 128):
56
+ The attention head dimension.
57
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
58
+ The non-linear activation function (function or string) in the decoder.
59
+ max_position_embeddings (`int`, *optional*, defaults to 32768):
60
+ The maximum sequence length that this model might ever be used with.
61
+ initializer_range (`float`, *optional*, defaults to 0.02):
62
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
63
+ rms_norm_eps (`float`, *optional*, defaults to 1e-06):
64
+ The epsilon used by the rms normalization layers.
65
+ use_cache (`bool`, *optional*, defaults to `True`):
66
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
67
+ relevant if `config.is_decoder=True`.
68
+ tie_word_embeddings (`bool`, *optional*, defaults to `False`):
69
+ Whether the model's input and output word embeddings should be tied.
70
+ rope_theta (`float`, *optional*, defaults to 10000.0):
71
+ The base period of the RoPE embeddings.
72
+ rope_scaling (`Dict`, *optional*):
73
+ Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply new rope type
74
+ and you expect the model to work on longer `max_position_embeddings`, we recommend you to update this value
75
+ accordingly.
76
+ Expected contents:
77
+ `rope_type` (`str`):
78
+ The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
79
+ 'llama3'], with 'default' being the original RoPE implementation.
80
+ `factor` (`float`, *optional*):
81
+ Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
82
+ most scaling types, a `factor` of x will enable the model to handle sequences of length x *
83
+ original maximum pre-trained length.
84
+ `original_max_position_embeddings` (`int`, *optional*):
85
+ Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
86
+ pretraining.
87
+ `attention_factor` (`float`, *optional*):
88
+ Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
89
+ computation. If unspecified, it defaults to value recommended by the implementation, using the
90
+ `factor` field to infer the suggested value.
91
+ `beta_fast` (`float`, *optional*):
92
+ Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
93
+ ramp function. If unspecified, it defaults to 32.
94
+ `beta_slow` (`float`, *optional*):
95
+ Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
96
+ ramp function. If unspecified, it defaults to 1.
97
+ `short_factor` (`List[float]`, *optional*):
98
+ Only used with 'longrope'. The scaling factor to be applied to short contexts (<
99
+ `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
100
+ size divided by the number of attention heads divided by 2
101
+ `long_factor` (`List[float]`, *optional*):
102
+ Only used with 'longrope'. The scaling factor to be applied to long contexts (<
103
+ `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
104
+ size divided by the number of attention heads divided by 2
105
+ `low_freq_factor` (`float`, *optional*):
106
+ Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
107
+ `high_freq_factor` (`float`, *optional*):
108
+ Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
109
+ attention_bias (`bool`, defaults to `False`, *optional*, defaults to `False`):
110
+ Whether to use a bias in the query, key, value and output projection layers during self-attention.
111
+ use_sliding_window (`bool`, *optional*, defaults to `False`):
112
+ Whether to use sliding window attention.
113
+ sliding_window (`int`, *optional*, defaults to 4096):
114
+ Sliding window attention (SWA) window size. If not specified, will default to `4096`.
115
+ max_window_layers (`int`, *optional*, defaults to 28):
116
+ The number of layers that use SWA (Sliding Window Attention). The bottom layers use SWA while the top use full attention.
117
+ attention_dropout (`float`, *optional*, defaults to 0.0):
118
+ The dropout ratio for the attention probabilities.
119
+
120
+ ```python
121
+ >>> from transformers import Qwen3Model, Qwen3Config
122
+
123
+ >>> # Initializing a Qwen3 style configuration
124
+ >>> configuration = Qwen3Config()
125
+
126
+ >>> # Initializing a model from the Qwen3-8B style configuration
127
+ >>> model = Qwen3Model(configuration)
128
+
129
+ >>> # Accessing the model configuration
130
+ >>> configuration = model.config
131
+ ```"""
132
+
133
+ model_type = "qwen3"
134
+ keys_to_ignore_at_inference = ["past_key_values"]
135
+
136
+ # Default tensor parallel plan for base model `Qwen3`
137
+ base_model_tp_plan = {
138
+ "layers.*.self_attn.q_proj": "colwise",
139
+ "layers.*.self_attn.k_proj": "colwise",
140
+ "layers.*.self_attn.v_proj": "colwise",
141
+ "layers.*.self_attn.o_proj": "rowwise",
142
+ "layers.*.mlp.gate_proj": "colwise",
143
+ "layers.*.mlp.up_proj": "colwise",
144
+ "layers.*.mlp.down_proj": "rowwise",
145
+ }
146
+ base_model_pp_plan = {
147
+ "embed_tokens": (["input_ids"], ["inputs_embeds"]),
148
+ "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
149
+ "norm": (["hidden_states"], ["hidden_states"]),
150
+ }
151
+
152
+ def __init__(
153
+ self,
154
+ vocab_size=151936,
155
+ hidden_size=4096,
156
+ intermediate_size=22016,
157
+ num_hidden_layers=32,
158
+ num_attention_heads=32,
159
+ num_key_value_heads=32,
160
+ head_dim=128,
161
+ hidden_act="silu",
162
+ max_position_embeddings=32768,
163
+ initializer_range=0.02,
164
+ rms_norm_eps=1e-6,
165
+ use_cache=True,
166
+ tie_word_embeddings=False,
167
+ rope_theta=10000.0,
168
+ rope_scaling=None,
169
+ attention_bias=False,
170
+ use_sliding_window=False,
171
+ sliding_window=4096,
172
+ max_window_layers=28,
173
+ attention_dropout=0.0,
174
+ **kwargs,
175
+ ):
176
+ self.vocab_size = vocab_size
177
+ self.max_position_embeddings = max_position_embeddings
178
+ self.hidden_size = hidden_size
179
+ self.intermediate_size = intermediate_size
180
+ self.num_hidden_layers = num_hidden_layers
181
+ self.num_attention_heads = num_attention_heads
182
+ self.use_sliding_window = use_sliding_window
183
+ self.sliding_window = sliding_window # we check `use_sliding_window` in the modeling code
184
+ self.max_window_layers = max_window_layers
185
+
186
+ # for backward compatibility
187
+ if num_key_value_heads is None:
188
+ num_key_value_heads = num_attention_heads
189
+
190
+ self.num_key_value_heads = num_key_value_heads
191
+ self.head_dim = head_dim
192
+ self.hidden_act = hidden_act
193
+ self.initializer_range = initializer_range
194
+ self.rms_norm_eps = rms_norm_eps
195
+ self.use_cache = use_cache
196
+ self.rope_theta = rope_theta
197
+ self.rope_scaling = rope_scaling
198
+ self.attention_bias = attention_bias
199
+ self.attention_dropout = attention_dropout
200
+ # Validate the correctness of rotary position embeddings parameters
201
+ # BC: if there is a 'type' field, move it to 'rope_type'.
202
+ if self.rope_scaling is not None and "type" in self.rope_scaling:
203
+ self.rope_scaling["rope_type"] = self.rope_scaling["type"]
204
+ rope_config_validation(self)
205
+
206
+ super().__init__(
207
+ tie_word_embeddings=tie_word_embeddings,
208
+ **kwargs,
209
+ )
210
+
211
+
212
+ __all__ = ["Qwen3Config"]
configuration_sailvit.py ADDED
@@ -0,0 +1,61 @@
1
+ # copied from https://huggingface.co/apple/aimv2-huge-patch14-448
2
+ from typing import Any
3
+
4
+ from transformers.configuration_utils import PretrainedConfig
5
+
6
+ __all__ = ["SAILViTConfig"]
7
+
8
+
9
+ class SAILViTConfig(PretrainedConfig):
10
+ """This is the configuration class to store the configuration of an [`SAIlViTModel`].
11
+ Instantiating a configuration with the defaults will yield a similar configuration
12
+ to that of the [apple/aimv2-large-patch14-224](https://huggingface.co/apple/aimv2-large-patch14-224).
13
+ Args:
14
+ hidden_size: Dimension of the hidden representations.
15
+ intermediate_size: Dimension of the SwiGLU representations.
16
+ num_hidden_layers: Number of hidden layers in the Transformer.
17
+ num_attention_heads: Number of attention heads for each attention layer
18
+ in the Transformer.
19
+ num_channels: Number of input channels.
20
+ image_size: Image size.
21
+ patch_size: Patch size.
22
+ rms_norm_eps: Epsilon value used for the RMS normalization layer.
23
+ attention_dropout: Dropout ratio for attention probabilities.
24
+ projection_dropout: Dropout ratio for the projection layer after the attention.
25
+ qkv_bias: Whether to add a bias to the queries, keys and values.
26
+ use_bias: Whether to add a bias in the feed-forward and projection layers.
27
+ kwargs: Keyword arguments for the [`PretrainedConfig`].
28
+ """
29
+
30
+ model_type: str = "sailvit"
31
+
32
+ def __init__(
33
+ self,
34
+ hidden_size: int = 1024,
35
+ intermediate_size: int = 2816,
36
+ num_hidden_layers: int = 24,
37
+ num_attention_heads: int = 8,
38
+ num_channels: int = 3,
39
+ image_size: int = 224,
40
+ patch_size: int = 14,
41
+ rms_norm_eps: float = 1e-5,
42
+ attention_dropout: float = 0.0,
43
+ projection_dropout: float = 0.0,
44
+ qkv_bias: bool = False,
45
+ use_bias: bool = False,
46
+ **kwargs: Any,
47
+ ):
48
+ super().__init__(**kwargs)
49
+ self.hidden_size = hidden_size
50
+ self.intermediate_size = intermediate_size
51
+ self.num_hidden_layers = num_hidden_layers
52
+ self.num_attention_heads = num_attention_heads
53
+ self.num_channels = num_channels
54
+ self.patch_size = patch_size
55
+ self.image_size = image_size
56
+ self.attention_dropout = attention_dropout
57
+ self.rms_norm_eps = rms_norm_eps
58
+
59
+ self.projection_dropout = projection_dropout
60
+ self.qkv_bias = qkv_bias
61
+ self.use_bias = use_bias
configuration_sailvl.py ADDED
@@ -0,0 +1,101 @@
1
+ # --------------------------------------------------------
2
+ # adapted from internvl_chat/internvl/model/internvl_chat/configuration_internvl_chat.py
3
+ # --------------------------------------------------------
4
+
5
+ import copy
6
+
7
+ from transformers import LlamaConfig
8
+ from transformers.configuration_utils import PretrainedConfig
9
+ from transformers.utils import logging
10
+
11
+ from .configuration_sailvit import SAILViTConfig
12
+ # from .configuration_qwen2 import Qwen2Config
13
+ from .configuration_qwen3 import Qwen3Config
14
+
15
+ logger = logging.get_logger(__name__)
16
+
17
+
18
+ class SAILVLConfig(PretrainedConfig):
19
+ model_type = 'sailvl'
20
+ is_composition = True
21
+
22
+ def __init__(
23
+ self,
24
+ vision_config=None,
25
+ llm_config=None,
26
+ use_backbone_lora=0,
27
+ use_llm_lora=0,
28
+ pad2square=False,
29
+ select_layer=-4,
30
+ force_image_size=None,
31
+ downsample_ratio=0.5,
32
+ template=None,
33
+ dynamic_image_size=False,
34
+ use_thumbnail=False,
35
+ ps_version='v1',
36
+ min_dynamic_patch=1,
37
+ max_dynamic_patch=6,
38
+ **kwargs
39
+ ):
40
+ super().__init__(**kwargs)
41
+
42
+ if vision_config is None:
43
+ vision_config = {}
44
+ logger.info('vision_config is None. Initializing the SAILViTConfig with default values.')
45
+
46
+ if llm_config is None:
47
+ llm_config = {'architectures': ['Qwen2ForCausalLM']}
48
+ logger.info('llm_config is None. Initializing the LlamaConfig config with default values (`LlamaConfig`).')
49
+
50
+ self.vision_config = SAILViTConfig(**vision_config)
51
+ if llm_config['architectures'][0] == 'LlamaForCausalLM':
52
+ self.llm_config = LlamaConfig(**llm_config)
53
+ elif llm_config['architectures'][0] == 'Qwen3ForCausalLM':
54
+ self.llm_config = Qwen3Config(**llm_config)
55
+ else:
56
+ raise ValueError('Unsupported architecture: {}'.format(llm_config['architectures'][0]))
57
+
58
+ self.use_backbone_lora = use_backbone_lora
59
+ self.use_llm_lora = use_llm_lora
60
+ self.pad2square = pad2square
61
+ self.select_layer = select_layer
62
+ self.force_image_size = force_image_size
63
+ self.downsample_ratio = downsample_ratio
64
+ self.template = template
65
+ self.dynamic_image_size = dynamic_image_size
66
+ self.use_thumbnail = use_thumbnail
67
+ self.ps_version = ps_version # pixel shuffle version
68
+ self.min_dynamic_patch = min_dynamic_patch
69
+ self.max_dynamic_patch = max_dynamic_patch
70
+
71
+ logger.info(f'vision_select_layer: {self.select_layer}')
72
+ logger.info(f'ps_version: {self.ps_version}')
73
+ logger.info(f'min_dynamic_patch: {self.min_dynamic_patch}')
74
+ logger.info(f'max_dynamic_patch: {self.max_dynamic_patch}')
75
+
76
+ def to_dict(self):
77
+ """
78
+ Serializes this instance to a Python dictionary. Override the default [`~PretrainedConfig.to_dict`].
79
+
80
+ Returns:
81
+ `Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance,
82
+ """
83
+ output = copy.deepcopy(self.__dict__)
84
+
85
+ output['vision_config'] = self.vision_config.to_dict()
86
+ output['llm_config'] = self.llm_config.to_dict()
87
+ output['model_type'] = self.__class__.model_type
88
+ output['use_backbone_lora'] = self.use_backbone_lora
89
+ output['use_llm_lora'] = self.use_llm_lora
90
+ output['pad2square'] = self.pad2square
91
+ output['select_layer'] = self.select_layer
92
+ output['force_image_size'] = self.force_image_size
93
+ output['downsample_ratio'] = self.downsample_ratio
94
+ output['template'] = self.template
95
+ output['dynamic_image_size'] = self.dynamic_image_size
96
+ output['use_thumbnail'] = self.use_thumbnail
97
+ output['ps_version'] = self.ps_version
98
+ output['min_dynamic_patch'] = self.min_dynamic_patch
99
+ output['max_dynamic_patch'] = self.max_dynamic_patch
100
+
101
+ return output
conversation.py ADDED
@@ -0,0 +1,424 @@
1
+ # ------------------------------------------------------------
2
+ # adapted from InternVL/internvl_chat/internvl/conversation.py
3
+ # ------------------------------------------------------------
4
+
5
+ import dataclasses
6
+ from enum import IntEnum, auto
7
+ from typing import Any, Dict, List, Tuple, Union
8
+
9
+
10
+ class SeparatorStyle(IntEnum):
11
+ """Separator styles."""
12
+
13
+ ADD_COLON_SINGLE = auto()
14
+ ADD_COLON_TWO = auto()
15
+ ADD_COLON_SPACE_SINGLE = auto()
16
+ NO_COLON_SINGLE = auto()
17
+ NO_COLON_TWO = auto()
18
+ ADD_NEW_LINE_SINGLE = auto()
19
+ LLAMA2 = auto()
20
+ CHATGLM = auto()
21
+ CHATML = auto()
22
+ CHATINTERN = auto()
23
+ DOLLY = auto()
24
+ RWKV = auto()
25
+ PHOENIX = auto()
26
+ ROBIN = auto()
27
+ FALCON_CHAT = auto()
28
+ CHATGLM3 = auto()
29
+ INTERNVL_ZH = auto()
30
+ MPT = auto()
31
+
32
+
33
+ @dataclasses.dataclass
34
+ class Conversation:
35
+ """A class that manages prompt templates and keeps all conversation history."""
36
+
37
+ # The name of this template
38
+ name: str
39
+ # The template of the system prompt
40
+ system_template: str = '{system_message}'
41
+ # The system message
42
+ system_message: str = ''
43
+ # The names of two roles
44
+ roles: Tuple[str] = ('USER', 'ASSISTANT')
45
+ # All messages. Each item is (role, message).
46
+ messages: List[List[str]] = ()
47
+ # The number of few shot examples
48
+ offset: int = 0
49
+ # The separator style and configurations
50
+ sep_style: SeparatorStyle = SeparatorStyle.ADD_COLON_SINGLE
51
+ sep: str = '\n'
52
+ sep2: str = None
53
+ # Stop criteria (the default one is EOS token)
54
+ stop_str: Union[str, List[str]] = None
55
+ # Stops generation if meeting any token in this list
56
+ stop_token_ids: List[int] = None
57
+
58
+ def get_prompt(self) -> str:
59
+ """Get the prompt for generation."""
60
+ system_prompt = self.system_template.format(system_message=self.system_message)
61
+ if self.sep_style == SeparatorStyle.ADD_COLON_SINGLE:
62
+ ret = system_prompt + self.sep
63
+ for role, message in self.messages:
64
+ if message:
65
+ ret += role + ': ' + message + self.sep
66
+ else:
67
+ ret += role + ':'
68
+ return ret
69
+ elif self.sep_style == SeparatorStyle.ADD_COLON_TWO:
70
+ seps = [self.sep, self.sep2]
71
+ ret = system_prompt + seps[0]
72
+ for i, (role, message) in enumerate(self.messages):
73
+ if message:
74
+ ret += role + ': ' + message + seps[i % 2]
75
+ else:
76
+ ret += role + ':'
77
+ return ret
78
+ elif self.sep_style == SeparatorStyle.ADD_COLON_SPACE_SINGLE:
79
+ ret = system_prompt + self.sep
80
+ for role, message in self.messages:
81
+ if message:
82
+ ret += role + ': ' + message + self.sep
83
+ else:
84
+ ret += role + ': ' # must be end with a space
85
+ return ret
86
+ elif self.sep_style == SeparatorStyle.ADD_NEW_LINE_SINGLE:
87
+ ret = '' if system_prompt == '' else system_prompt + self.sep
88
+ for role, message in self.messages:
89
+ if message:
90
+ ret += role + '\n' + message + self.sep
91
+ else:
92
+ ret += role + '\n'
93
+ return ret
94
+ elif self.sep_style == SeparatorStyle.NO_COLON_SINGLE:
95
+ ret = system_prompt
96
+ for role, message in self.messages:
97
+ if message:
98
+ ret += role + message + self.sep
99
+ else:
100
+ ret += role
101
+ return ret
102
+ elif self.sep_style == SeparatorStyle.NO_COLON_TWO:
103
+ seps = [self.sep, self.sep2]
104
+ ret = system_prompt
105
+ for i, (role, message) in enumerate(self.messages):
106
+ if message:
107
+ ret += role + message + seps[i % 2]
108
+ else:
109
+ ret += role
110
+ return ret
111
+ elif self.sep_style == SeparatorStyle.RWKV:
112
+ ret = system_prompt
113
+ for i, (role, message) in enumerate(self.messages):
114
+ if message:
115
+ ret += (
116
+ role
117
+ + ': '
118
+ + message.replace('\r\n', '\n').replace('\n\n', '\n')
119
+ )
120
+ ret += '\n\n'
121
+ else:
122
+ ret += role + ':'
123
+ return ret
124
+ elif self.sep_style == SeparatorStyle.LLAMA2:
125
+ seps = [self.sep, self.sep2]
126
+ if self.system_message:
127
+ ret = system_prompt
128
+ else:
129
+ ret = '[INST] '
130
+ for i, (role, message) in enumerate(self.messages):
131
+ tag = self.roles[i % 2]
132
+ if message:
133
+ if i == 0:
134
+ ret += message + ' '
135
+ else:
136
+ ret += tag + ' ' + message + seps[i % 2]
137
+ else:
138
+ ret += tag
139
+ return ret
140
+ elif self.sep_style == SeparatorStyle.CHATGLM:
141
+ # source: https://huggingface.co/THUDM/chatglm-6b/blob/1d240ba371910e9282298d4592532d7f0f3e9f3e/modeling_chatglm.py#L1302-L1308
142
+ # source2: https://huggingface.co/THUDM/chatglm2-6b/blob/e186c891cf64310ac66ef10a87e6635fa6c2a579/modeling_chatglm.py#L926
143
+ round_add_n = 1 if self.name == 'chatglm2' else 0
144
+ if system_prompt:
145
+ ret = system_prompt + self.sep
146
+ else:
147
+ ret = ''
148
+
149
+ for i, (role, message) in enumerate(self.messages):
150
+ if i % 2 == 0:
151
+ ret += f'[Round {i//2 + round_add_n}]{self.sep}'
152
+
153
+ if message:
154
+ ret += f'{role}:{message}{self.sep}'
155
+ else:
156
+ ret += f'{role}:'
157
+ return ret
158
+ elif self.sep_style == SeparatorStyle.CHATML:
159
+ ret = '' if system_prompt == '' else system_prompt + self.sep + '\n'
160
+ for role, message in self.messages:
161
+ if message:
162
+ ret += role + '\n' + message + self.sep + '\n'
163
+ else:
164
+ ret += role + '\n'
165
+ return ret
166
+ elif self.sep_style == SeparatorStyle.CHATGLM3:
167
+ ret = ''
168
+ if self.system_message:
169
+ ret += system_prompt
170
+ for role, message in self.messages:
171
+ if message:
172
+ ret += role + '\n' + ' ' + message
173
+ else:
174
+ ret += role
175
+ return ret
176
+ elif self.sep_style == SeparatorStyle.CHATINTERN:
177
+ # source: https://huggingface.co/internlm/internlm-chat-7b-8k/blob/bd546fa984b4b0b86958f56bf37f94aa75ab8831/modeling_internlm.py#L771
178
+ seps = [self.sep, self.sep2]
179
+ ret = system_prompt
180
+ for i, (role, message) in enumerate(self.messages):
181
+ # if i % 2 == 0:
182
+ # ret += "<s>"
183
+ if message:
184
+ ret += role + ':' + message + seps[i % 2] + '\n'
185
+ else:
186
+ ret += role + ':'
187
+ return ret
188
+ elif self.sep_style == SeparatorStyle.DOLLY:
189
+ seps = [self.sep, self.sep2]
190
+ ret = system_prompt
191
+ for i, (role, message) in enumerate(self.messages):
192
+ if message:
193
+ ret += role + ':\n' + message + seps[i % 2]
194
+ if i % 2 == 1:
195
+ ret += '\n\n'
196
+ else:
197
+ ret += role + ':\n'
198
+ return ret
199
+ elif self.sep_style == SeparatorStyle.PHOENIX:
200
+ ret = system_prompt
201
+ for role, message in self.messages:
202
+ if message:
203
+ ret += role + ': ' + '<s>' + message + '</s>'
204
+ else:
205
+ ret += role + ': ' + '<s>'
206
+ return ret
207
+ elif self.sep_style == SeparatorStyle.ROBIN:
208
+ ret = system_prompt + self.sep
209
+ for role, message in self.messages:
210
+ if message:
211
+ ret += role + ':\n' + message + self.sep
212
+ else:
213
+ ret += role + ':\n'
214
+ return ret
215
+ elif self.sep_style == SeparatorStyle.FALCON_CHAT:
216
+ ret = ''
217
+ if self.system_message:
218
+ ret += system_prompt + self.sep
219
+ for role, message in self.messages:
220
+ if message:
221
+ ret += role + ': ' + message + self.sep
222
+ else:
223
+ ret += role + ':'
224
+
225
+ return ret
226
+ elif self.sep_style == SeparatorStyle.INTERNVL_ZH:
227
+ seps = [self.sep, self.sep2]
228
+ ret = self.system_message + seps[0]
229
+ for i, (role, message) in enumerate(self.messages):
230
+ if message:
231
+ ret += role + ': ' + message + seps[i % 2]
232
+ else:
233
+ ret += role + ':'
234
+ return ret
235
+ elif self.sep_style == SeparatorStyle.MPT:
236
+ ret = system_prompt + self.sep if system_prompt else ''
237
+ for role, message in self.messages:
238
+ if message:
239
+ if type(message) is tuple:
240
+ message, _, _ = message
241
+ ret += role + message + self.sep
242
+ else:
243
+ ret += role
244
+ return ret
245
+ else:
246
+ raise ValueError(f'Invalid style: {self.sep_style}')
247
+
248
+ def set_system_message(self, system_message: str):
249
+ """Set the system message."""
250
+ self.system_message = system_message
251
+
252
+ def append_message(self, role: str, message: str):
253
+ """Append a new message."""
254
+ self.messages.append([role, message])
255
+
256
+ def update_last_message(self, message: str):
257
+ """Update the last output.
258
+
259
+ The last message is typically set to be None when constructing the prompt,
260
+ so we need to update it in-place after getting the response from a model.
261
+ """
262
+ self.messages[-1][1] = message
263
+
264
+ def to_gradio_chatbot(self):
265
+ """Convert the conversation to gradio chatbot format."""
266
+ ret = []
267
+ for i, (role, msg) in enumerate(self.messages[self.offset :]):
268
+ if i % 2 == 0:
269
+ ret.append([msg, None])
270
+ else:
271
+ ret[-1][-1] = msg
272
+ return ret
273
+
274
+ def to_openai_api_messages(self):
275
+ """Convert the conversation to OpenAI chat completion format."""
276
+ ret = [{'role': 'system', 'content': self.system_message}]
277
+
278
+ for i, (_, msg) in enumerate(self.messages[self.offset :]):
279
+ if i % 2 == 0:
280
+ ret.append({'role': 'user', 'content': msg})
281
+ else:
282
+ if msg is not None:
283
+ ret.append({'role': 'assistant', 'content': msg})
284
+ return ret
285
+
286
+ def copy(self):
287
+ return Conversation(
288
+ name=self.name,
289
+ system_template=self.system_template,
290
+ system_message=self.system_message,
291
+ roles=self.roles,
292
+ messages=[[x, y] for x, y in self.messages],
293
+ offset=self.offset,
294
+ sep_style=self.sep_style,
295
+ sep=self.sep,
296
+ sep2=self.sep2,
297
+ stop_str=self.stop_str,
298
+ stop_token_ids=self.stop_token_ids,
299
+ )
300
+
301
+ def dict(self):
302
+ return {
303
+ 'template_name': self.name,
304
+ 'system_message': self.system_message,
305
+ 'roles': self.roles,
306
+ 'messages': self.messages,
307
+ 'offset': self.offset,
308
+ }
309
+
310
+
311
+ # A global registry for all conversation templates
312
+ conv_templates: Dict[str, Conversation] = {}
313
+
314
+
315
+ def register_conv_template(template: Conversation, override: bool = False):
316
+ """Register a new conversation template."""
317
+ if not override:
318
+ assert (
319
+ template.name not in conv_templates
320
+ ), f'{template.name} has been registered.'
321
+
322
+ conv_templates[template.name] = template
323
+
324
+
325
+ def get_conv_template(name: str) -> Conversation:
326
+ """Get a conversation template."""
327
+ return conv_templates[name].copy()
328
+
329
+
330
+ # Both Hermes-2 and internlm2-chat are chatml-format conversation templates. The difference
331
+ # is that during training, the preprocessing function for the Hermes-2 template doesn't add
332
+ # <s> at the beginning of the tokenized sequence, while the internlm2-chat template does.
333
+ # Therefore, they are completely equivalent during inference.
334
+ register_conv_template(
335
+ Conversation(
336
+ name='Hermes-2',
337
+ system_template='<|im_start|>system\n{system_message}',
338
+ # note: The new system prompt was not used here to avoid changes in benchmark performance.
339
+ # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。',
340
+ system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
341
+ roles=('<|im_start|>user\n', '<|im_start|>assistant\n'),
342
+ sep_style=SeparatorStyle.MPT,
343
+ sep='<|im_end|>',
344
+ stop_token_ids=[
345
+ 2,
346
+ 6,
347
+ 7,
348
+ 8,
349
+ ],
350
+ stop_str='<|endoftext|>',
351
+ )
352
+ )
353
+
354
+
355
+ register_conv_template(
356
+ Conversation(
357
+ name='univl-chat',
358
+ system_template='<|im_start|>system\n{system_message}',
359
+ # note: The new system prompt was not used here to avoid changes in benchmark performance.
360
+ system_message='你是由抖音内容理解组开发的多模态大模型,英文名叫UniVL, 是一个有用无害的人工智能助手。',
361
+ roles=('<|im_start|>user\n', '<|im_start|>assistant\n'),
362
+ sep_style=SeparatorStyle.MPT,
363
+ sep='<|im_end|>',
364
+ stop_token_ids=[
365
+ 2,
366
+ 92543,
367
+ 92542
368
+ ]
369
+ )
370
+ )
371
+
372
+
373
+ register_conv_template(
374
+ Conversation(
375
+ name='internlm2-chat',
376
+ system_template='<|im_start|>system\n{system_message}',
377
+ # note: The new system prompt was not used here to avoid changes in benchmark performance.
378
+ # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。',
379
+ system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
380
+ roles=('<|im_start|>user\n', '<|im_start|>assistant\n'),
381
+ sep_style=SeparatorStyle.MPT,
382
+ sep='<|im_end|>',
383
+ stop_token_ids=[
384
+ 2,
385
+ 92543,
386
+ 92542
387
+ ]
388
+ )
389
+ )
390
+
391
+ register_conv_template(
392
+ Conversation(
393
+ name='internlm2-plain',
394
+ system_template='',
395
+ system_message='',
396
+ roles=('<|im_start|>user\n', '<|im_start|>assistant\n'),
397
+ sep_style=SeparatorStyle.MPT,
398
+ sep='<|im_end|>',
399
+ stop_token_ids=[
400
+ 2,
401
+ 92543,
402
+ 92542
403
+ ]
404
+ )
405
+ )
406
+
407
+
408
+ register_conv_template(
409
+ Conversation(
410
+ name='phi3-chat',
411
+ system_template='<|system|>\n{system_message}',
412
+ # note: The new system prompt was not used here to avoid changes in benchmark performance.
413
+ # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。',
414
+ system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
415
+ roles=('<|user|>\n', '<|assistant|>\n'),
416
+ sep_style=SeparatorStyle.MPT,
417
+ sep='<|end|>',
418
+ stop_token_ids=[
419
+ 2,
420
+ 32000,
421
+ 32007
422
+ ]
423
+ )
424
+ )
generation_config.json ADDED
@@ -0,0 +1,4 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "transformers_version": "4.51.0"
4
+ }
image_processing_sailvl.py ADDED
@@ -0,0 +1,262 @@
1
+ """Image processor class for KimiVL."""
2
+
3
+ import math
4
+ import numpy as np
5
+ from PIL import Image
6
+ from typing import Optional, Union
7
+
8
+ import torch
9
+
10
+ import torchvision.transforms as T
11
+ from torchvision.transforms.functional import InterpolationMode
12
+ from transformers.image_utils import ImageInput, make_list_of_images, valid_images
13
+ from transformers.image_processing_utils import BaseImageProcessor, BatchFeature
14
+ from transformers.utils import TensorType
15
+
16
+ IMAGENET_MEAN = (0.485, 0.456, 0.406)
17
+ IMAGENET_STD = (0.229, 0.224, 0.225)
18
+
19
+ def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
20
+ best_ratio_diff = float('inf')
21
+ best_ratio = (1, 1)
22
+ area = width * height
23
+ for ratio in target_ratios:
24
+ target_aspect_ratio = ratio[0] / ratio[1]
25
+ ratio_diff = abs(aspect_ratio - target_aspect_ratio)
26
+ if ratio_diff < best_ratio_diff:
27
+ best_ratio_diff = ratio_diff
28
+ best_ratio = ratio
29
+ elif ratio_diff == best_ratio_diff:
30
+ if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
31
+ best_ratio = ratio
32
+ return best_ratio
33
+
34
+
35
+ def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
36
+ orig_width, orig_height = image.size
37
+ aspect_ratio = orig_width / orig_height
38
+
39
+ # calculate the existing image aspect ratio
40
+ target_ratios = set(
41
+ (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
42
+ i * j <= max_num and i * j >= min_num)
43
+ target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
44
+
45
+ # find the closest aspect ratio to the target
46
+ target_aspect_ratio = find_closest_aspect_ratio(
47
+ aspect_ratio, target_ratios, orig_width, orig_height, image_size)
48
+
49
+ # calculate the target width and height
50
+ target_width = image_size * target_aspect_ratio[0]
51
+ target_height = image_size * target_aspect_ratio[1]
52
+ blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
53
+
54
+ # resize the image
55
+ resized_img = image.resize((target_width, target_height))
56
+ processed_images = []
57
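+ # Crop the resized image into image_size x image_size tiles, walking the grid row by row.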
+ for i in range(blocks):
58
+ box = (
59
+ (i % (target_width // image_size)) * image_size,
60
+ (i // (target_width // image_size)) * image_size,
61
+ ((i % (target_width // image_size)) + 1) * image_size,
62
+ ((i // (target_width // image_size)) + 1) * image_size
63
+ )
64
+ # split the image
65
+ split_img = resized_img.crop(box)
66
+ processed_images.append(split_img)
67
+ assert len(processed_images) == blocks
68
+ if use_thumbnail and len(processed_images) != 1:
69
+ thumbnail_img = image.resize((image_size, image_size))
70
+ processed_images.append(thumbnail_img)
71
+ return processed_images
72
+
73
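+ # Identical to dynamic_preprocess, but also returns the chosen grid so a second pass can pick a complementary layout.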
+ def dynamic_preprocess_msac1(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
74
+ orig_width, orig_height = image.size
75
+ aspect_ratio = orig_width / orig_height
76
+
77
+ # calculate the existing image aspect ratio
78
+ target_ratios = set(
79
+ (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
80
+ i * j <= max_num and i * j >= min_num)
81
+ target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
82
+
83
+ # find the closest aspect ratio to the target
84
+ target_aspect_ratio = find_closest_aspect_ratio(
85
+ aspect_ratio, target_ratios, orig_width, orig_height, image_size)
86
+
87
+ # calculate the target width and height
88
+ target_width = image_size * target_aspect_ratio[0]
89
+ target_height = image_size * target_aspect_ratio[1]
90
+ blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
91
+
92
+ # resize the image
93
+ resized_img = image.resize((target_width, target_height))
94
+ processed_images = []
95
+ for i in range(blocks):
96
+ box = (
97
+ (i % (target_width // image_size)) * image_size,
98
+ (i // (target_width // image_size)) * image_size,
99
+ ((i % (target_width // image_size)) + 1) * image_size,
100
+ ((i // (target_width // image_size)) + 1) * image_size
101
+ )
102
+ # split the image
103
+ split_img = resized_img.crop(box)
104
+ processed_images.append(split_img)
105
+ assert len(processed_images) == blocks
106
+ if use_thumbnail and len(processed_images) != 1:
107
+ thumbnail_img = image.resize((image_size, image_size))
108
+ processed_images.append(thumbnail_img)
109
+ return processed_images, target_aspect_ratio
110
+
111
+ def dynamic_preprocess_msac2(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False, prior_aspect_ratio=None):
112
+ orig_width, orig_height = image.size
113
+ aspect_ratio = orig_width / orig_height
114
+
115
+ # calculate the existing image aspect ratio
116
+ target_ratios = set(
117
+ (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
118
+ i * j <= max_num and i * j >= min_num)
119
+ target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
120
+
121
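+ # Filter out grids whose column and row counts both divide the first-pass grid, so the second scale produces crops that do not simply coincide with the first pass.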
+ new_target_ratios = []
122
+ if prior_aspect_ratio is not None:
123
+ for i in target_ratios:
124
+ if prior_aspect_ratio[0]%i[0] != 0 or prior_aspect_ratio[1]%i[1] != 0:
125
+ new_target_ratios.append(i)
126
+ else:
127
+ continue
128
+
129
+ # find the closest aspect ratio to the target
130
+ target_aspect_ratio = find_closest_aspect_ratio(
131
+ aspect_ratio, new_target_ratios, orig_width, orig_height, image_size)
132
+
133
+ # calculate the target width and height
134
+ target_width = image_size * target_aspect_ratio[0]
135
+ target_height = image_size * target_aspect_ratio[1]
136
+ blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
137
+
138
+ # resize the image
139
+ resized_img = image.resize((target_width, target_height))
140
+ processed_images = []
141
+ for i in range(blocks):
142
+ box = (
143
+ (i % (target_width // image_size)) * image_size,
144
+ (i // (target_width // image_size)) * image_size,
145
+ ((i % (target_width // image_size)) + 1) * image_size,
146
+ ((i // (target_width // image_size)) + 1) * image_size
147
+ )
148
+ # split the image
149
+ split_img = resized_img.crop(box)
150
+ processed_images.append(split_img)
151
+ assert len(processed_images) == blocks
152
+ if use_thumbnail and len(processed_images) != 1:
153
+ thumbnail_img = image.resize((image_size, image_size))
154
+ processed_images.append(thumbnail_img)
155
+ return processed_images
156
+
157
+
158
+ class SAILVLImageProcessor(BaseImageProcessor):
159
+ model_type = "sailvl"
160
+
161
+ def __init__(
162
+ self,
163
+ patch_size: int = 14,
164
+ image_mean: tuple[float, float, float] = IMAGENET_MEAN,
165
+ image_std: tuple[float, float, float] = IMAGENET_STD,
166
+ max_dynamic_patch: int = 10,
167
+ image_size: int = 448,
168
+ use_msac: bool = False,
169
+
170
+ **kwargs,
171
+ ):
172
+ super().__init__(**kwargs)
173
+ self.patch_size = patch_size
174
+ self.image_mean = image_mean
175
+ self.image_std = image_std
176
+ self.max_dynamic_patch = max_dynamic_patch
177
+ self.image_size = image_size
178
+ self.use_msac = use_msac
179
+
180
+ def build_transform(self, input_size):
181
+ MEAN, STD = self.image_mean, self.image_std
182
+ transform = T.Compose([
183
+ T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
184
+ T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
185
+ T.ToTensor(),
186
+ T.Normalize(mean=MEAN, std=STD)
187
+ ])
188
+ return transform
189
+
190
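+ # Tile one PIL image into up to max_num crops (plus a global thumbnail), normalize them, and stack into a single tensor.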
+ def load_image(self, image, input_size=448, max_num=6, upscale=False):
191
+ # image = Image.open(image_file).convert('RGB')
192
+ if upscale:
193
+ image = image.resize((image.width * 2, image.height * 2), Image.BILINEAR)
194
+ transform = self.build_transform(input_size=input_size)
195
+ images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
196
+ pixel_values = [transform(image) for image in images]
197
+ pixel_values = torch.stack(pixel_values)
198
+ return pixel_values
199
+
200
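+ # MSAC path: merge tiles from two complementary grid layouts, keeping the global thumbnail as the last entry.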
+ def load_image_msac(self, image, input_size=448, max_num=6, upscale=False):
201
+ # image = Image.open(image_file).convert('RGB')
202
+ if upscale:
203
+ image = image.resize((image.width * 2, image.height * 2), Image.BILINEAR)
204
+ transform = self.build_transform(input_size=input_size)
205
+ images,target_aspect_ratio = dynamic_preprocess_msac1(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
206
+ images = images[:-1] + dynamic_preprocess_msac2(image,max_num=max_num,image_size=input_size,use_thumbnail=False,prior_aspect_ratio=target_aspect_ratio) + images[-1:]
207
+
208
+ pixel_values = [transform(image) for image in images]
209
+ pixel_values = torch.stack(pixel_values)
210
+ return pixel_values
211
+
212
+ def preprocess(
213
+ self,
214
+ images: ImageInput,
215
+ return_tensors: Optional[Union[str, TensorType]] = None,
216
+ ) -> BatchFeature:
217
+ images = make_list_of_images(images)
218
+
219
+ if not valid_images(images):
220
+ raise ValueError(
221
+ "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, "
222
+ "torch.Tensor, tf.Tensor or jax.ndarray."
223
+ )
224
+ # print('number of images:', len(images))
225
+ image_num = len(images)
226
+ if image_num > 1:
227
+ # image_path = [x['value'] for x in message if x['type'] == 'image']
228
+ num_patches_list = []
229
+ pixel_values_list = []
230
+ for image_idx, image_pil in enumerate(images):
231
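+ # Each image is tiled independently; tiles are moved to CUDA and cast to bfloat16 here, so this processor assumes a GPU is available.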
+ upscale_flag = False
232
+ curr_pixel_values = self.load_image(
233
+ image_pil, max_num=self.max_dynamic_patch, upscale=upscale_flag, input_size=self.image_size).cuda().to(torch.bfloat16)
234
+ num_patches_list.append(curr_pixel_values.size(0))
235
+ pixel_values_list.append(curr_pixel_values)
236
+ pixel_values = torch.cat(pixel_values_list, dim=0)
237
+
238
+ elif image_num == 1:
239
+ # image_path = [x['value'] for x in message if x['type'] == 'image'][0]
240
+ image_pil = images[0]
241
+ upscale_flag = False
242
+ if self.use_msac:
243
+ pixel_values = self.load_image_msac(
244
+ image_pil, max_num=self.max_dynamic_patch, upscale=upscale_flag, input_size=self.image_size).cuda().to(torch.bfloat16)
245
+ else:
246
+ pixel_values = self.load_image(
247
+ image_pil, max_num=self.max_dynamic_patch, upscale=upscale_flag, input_size=self.image_size).cuda().to(torch.bfloat16)
248
+ num_patches_list = [pixel_values.size(0)]
249
+ else:
250
+ pixel_values = None
251
+ num_patches_list = None
252
+
253
+ # pixel_values, image_grid_hws = [], []
254
+ # for image in images:
255
+ # patches, image_grid_hw = self._preprocess(image)
256
+ # pixel_values.append(patches)
257
+ # image_grid_hws.append(image_grid_hw)
258
+ # pixel_values = torch.concat(pixel_values, dim=0)
259
+ # image_grid_hws = np.array(image_grid_hws)
260
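+ # num_patches_list records how many tiles each input image produced, so downstream code can split pixel_values back per image.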
+ data = {"pixel_values": pixel_values, "num_patches_list": num_patches_list}
261
+
262
+ return BatchFeature(data=data, tensor_type=return_tensors)
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
modeling_qwen3.py ADDED
@@ -0,0 +1,1247 @@
1
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
2
+ # This file was automatically generated from src/transformers/models/qwen3/modular_qwen3.py.
3
+ # Do NOT edit this file manually as any edits will be overwritten by the generation of
4
+ # the file from the modular. If any change should be done, please apply the change to the
5
+ # modular_qwen3.py file directly. One of our CI enforces this.
6
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
7
+ # coding=utf-8
8
+ # Copyright 2025 The Qwen team, Alibaba Group and the HuggingFace Inc. team. All rights reserved.
9
+ #
10
+ # Licensed under the Apache License, Version 2.0 (the "License");
11
+ # you may not use this file except in compliance with the License.
12
+ # You may obtain a copy of the License at
13
+ #
14
+ # http://www.apache.org/licenses/LICENSE-2.0
15
+ #
16
+ # Unless required by applicable law or agreed to in writing, software
17
+ # distributed under the License is distributed on an "AS IS" BASIS,
18
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
19
+ # See the License for the specific language governing permissions and
20
+ # limitations under the License.
21
+
22
+ from functools import partial
23
+ from typing import Callable, Optional, Tuple, Union
24
+
25
+ import torch
26
+ from torch import nn
27
+
28
+ from transformers.activations import ACT2FN
29
+ from transformers.cache_utils import Cache, DynamicCache, SlidingWindowCache, StaticCache
30
+ from transformers.generation import GenerationMixin
31
+ from transformers.modeling_attn_mask_utils import AttentionMaskConverter
32
+ from transformers.modeling_flash_attention_utils import FlashAttentionKwargs
33
+ from transformers.modeling_outputs import (
34
+ BaseModelOutputWithPast,
35
+ CausalLMOutputWithPast,
36
+ QuestionAnsweringModelOutput,
37
+ SequenceClassifierOutputWithPast,
38
+ TokenClassifierOutput,
39
+ )
40
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS, dynamic_rope_update
41
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
42
+ from transformers.processing_utils import Unpack
43
+ from transformers.utils import (
44
+ LossKwargs,
45
+ add_code_sample_docstrings,
46
+ add_start_docstrings,
47
+ add_start_docstrings_to_model_forward,
48
+ can_return_tuple,
49
+ logging,
50
+ replace_return_docstrings,
51
+ )
52
+ from transformers.utils.deprecation import deprecate_kwarg
53
+ from .configuration_qwen3 import Qwen3Config
54
+ # from .spec_sdpa_attention import spec_sdpa_attention_forward
55
+
56
+ from typing import Optional, Tuple
57
+
58
+ import torch
59
+ try:
60
+ import torch_npu
61
+ except ImportError:
62
+ print('torch_npu not available; assuming a standard GPU backend.')
63
+ import math
64
+
65
+
66
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
67
+ """
68
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
69
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
70
+ """
71
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
72
+ if n_rep == 1:
73
+ return hidden_states
74
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
75
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
76
+
77
+
78
+ def spec_sdpa_attention_forward(
79
+ module: torch.nn.Module,
80
+ query: torch.Tensor,
81
+ key: torch.Tensor,
82
+ value: torch.Tensor,
83
+ attention_mask: Optional[torch.Tensor],
84
+ dropout: float = 0.0,
85
+ scaling: Optional[float] = None,
86
+ is_causal: Optional[bool] = None,
87
+ **kwargs,
88
+ ) -> Tuple[torch.Tensor, None]:
89
+ if hasattr(module, "num_key_value_groups"):
90
+ key = repeat_kv(key, module.num_key_value_groups)
91
+ value = repeat_kv(value, module.num_key_value_groups)
92
+
93
+ causal_mask = attention_mask
94
+ if attention_mask is not None and causal_mask.ndim == 4:
95
+ causal_mask = causal_mask[:, :, :, : key.shape[-2]]
96
+
97
+ # SDPA with memory-efficient backend is bugged with non-contiguous inputs and custom attn_mask for some torch versions
98
+ # Reference: https://github.com/pytorch/pytorch/issues/112577.
99
+ query = query.contiguous()
100
+ key = key.contiguous()
101
+ value = value.contiguous()
102
+
103
+ # We dispatch to SDPA's Flash Attention or Efficient kernels via this `is_causal` if statement instead of an inline conditional assignment
104
+ # in SDPA to support both torch.compile's dynamic shapes and full graph options. An inline conditional prevents dynamic shapes from compiling.
105
+ # Note that it is important to check first for the shape, otherwise compile will fail with `argument 'is_causal' must be bool, not SymBool`
106
+ if is_causal is None:
107
+ is_causal = query.shape[2] > 1 and causal_mask is None
108
+
109
+ # Shapes (e.g. query.shape[2]) are tensors during jit tracing, resulting in `is_causal` being a tensor.
110
+ # We convert it to a bool for the SDPA kernel that only accepts bools.
111
+
112
+ if attention_mask is None:
113
+ atten_mask_npu = torch.triu(torch.ones([query.size(-2),
114
+ query.size(-2)]), diagonal=1).bool().to(query.device)
115
+ elif attention_mask.dtype == torch.bool:
116
+ atten_mask_npu = torch.logical_not(attention_mask.bool()).to(attention_mask.device) # the mask must be inverted for the NPU kernel (True marks positions to ignore)
117
+ else:
118
+ atten_mask_npu = attention_mask.bool().to(attention_mask.device)
119
+
120
+ if torch.jit.is_tracing() and isinstance(is_causal, torch.Tensor):
121
+ is_causal = is_causal.item()
122
+
123
+ # attn_output = torch.nn.functional.scaled_dot_product_attention(
124
+ # query,
125
+ # key,
126
+ # value,
127
+ # attn_mask=causal_mask,
128
+ # dropout_p=dropout,
129
+ # scale=scaling,
130
+ # is_causal=is_causal,
131
+ # )
132
+
133
+ head_num = query.shape[1]
134
+ attn_output = torch_npu.npu_fusion_attention(
135
+ query, key, value, head_num, input_layout="BNSD",
136
+ pse=None,
137
+ atten_mask=atten_mask_npu,
138
+ scale=1.0 / math.sqrt(query.shape[-1]),
139
+ pre_tockens=2147483647,
140
+ next_tockens=2147483647,
141
+ keep_prob=1
142
+ )[0]
143
+
144
+ attn_output = attn_output.transpose(1, 2).contiguous()
145
+
146
+ return attn_output, None
147
+
148
+
149
+ logger = logging.get_logger(__name__)
150
+
151
+ _CHECKPOINT_FOR_DOC = "Qwen/Qwen3-8B"
152
+ _CONFIG_FOR_DOC = "Qwen3Config"
153
+
154
+
155
+ class Qwen3RMSNorm(nn.Module):
156
+ def __init__(self, hidden_size, eps=1e-6):
157
+ """
158
+ Qwen3RMSNorm is equivalent to T5LayerNorm
159
+ """
160
+ super().__init__()
161
+ self.weight = nn.Parameter(torch.ones(hidden_size))
162
+ self.variance_epsilon = eps
163
+
164
+ def forward(self, hidden_states):
165
+ input_dtype = hidden_states.dtype
166
+ hidden_states = hidden_states.to(torch.float32)
167
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
168
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
169
+ return self.weight * hidden_states.to(input_dtype)
170
+
171
+ def extra_repr(self):
172
+ return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
173
+
174
+
175
+ class Qwen3MLP(nn.Module):
176
+ def __init__(self, config):
177
+ super().__init__()
178
+ self.config = config
179
+ self.hidden_size = config.hidden_size
180
+ self.intermediate_size = config.intermediate_size
181
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
182
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
183
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
184
+ self.act_fn = ACT2FN[config.hidden_act]
185
+
186
+ def forward(self, x):
187
+ down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
188
+ return down_proj
189
+
190
+
191
+ def rotate_half(x):
192
+ """Rotates half the hidden dims of the input."""
193
+ x1 = x[..., : x.shape[-1] // 2]
194
+ x2 = x[..., x.shape[-1] // 2 :]
195
+ return torch.cat((-x2, x1), dim=-1)
196
+
197
+
198
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
199
+ """Applies Rotary Position Embedding to the query and key tensors.
200
+
201
+ Args:
202
+ q (`torch.Tensor`): The query tensor.
203
+ k (`torch.Tensor`): The key tensor.
204
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
205
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
206
+ position_ids (`torch.Tensor`, *optional*):
207
+ Deprecated and unused.
208
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
209
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
210
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
211
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
212
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
213
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
214
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
215
+ Returns:
216
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
217
+ """
218
+ cos = cos.unsqueeze(unsqueeze_dim)
219
+ sin = sin.unsqueeze(unsqueeze_dim)
220
+ q_embed = (q * cos) + (rotate_half(q) * sin)
221
+ k_embed = (k * cos) + (rotate_half(k) * sin)
222
+ return q_embed, k_embed
223
+
224
+
225
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
226
+ """
227
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
228
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
229
+ """
230
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
231
+ if n_rep == 1:
232
+ return hidden_states
233
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
234
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
235
+
236
+
237
+ def eager_attention_forward(
238
+ module: nn.Module,
239
+ query: torch.Tensor,
240
+ key: torch.Tensor,
241
+ value: torch.Tensor,
242
+ attention_mask: Optional[torch.Tensor],
243
+ scaling: float,
244
+ dropout: float = 0.0,
245
+ **kwargs,
246
+ ):
247
+ key_states = repeat_kv(key, module.num_key_value_groups)
248
+ value_states = repeat_kv(value, module.num_key_value_groups)
249
+
250
+ attn_weights = torch.matmul(query, key_states.transpose(2, 3)) * scaling
251
+ if attention_mask is not None:
252
+ causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
253
+ attn_weights = attn_weights + causal_mask
254
+
255
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
256
+ attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
257
+ attn_output = torch.matmul(attn_weights, value_states)
258
+ attn_output = attn_output.transpose(1, 2).contiguous()
259
+
260
+ return attn_output, attn_weights
261
+
262
+
263
+ class Qwen3Attention(nn.Module):
264
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
265
+
266
+ def __init__(self, config: Qwen3Config, layer_idx: int):
267
+ super().__init__()
268
+ self.config = config
269
+ self.layer_idx = layer_idx
270
+ self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
271
+ self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
272
+ self.scaling = self.head_dim**-0.5
273
+ self.attention_dropout = config.attention_dropout
274
+ self.is_causal = True
275
+
276
+ self.q_proj = nn.Linear(
277
+ config.hidden_size, config.num_attention_heads * self.head_dim, bias=config.attention_bias
278
+ )
279
+ self.k_proj = nn.Linear(
280
+ config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
281
+ )
282
+ self.v_proj = nn.Linear(
283
+ config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
284
+ )
285
+ self.o_proj = nn.Linear(
286
+ config.num_attention_heads * self.head_dim, config.hidden_size, bias=config.attention_bias
287
+ )
288
+ self.q_norm = Qwen3RMSNorm(self.head_dim, eps=config.rms_norm_eps) # unlike olmo, only on the head dim!
289
+ self.k_norm = Qwen3RMSNorm(self.head_dim, eps=config.rms_norm_eps) # thus post q_norm does not need reshape
290
+ self.sliding_window = config.sliding_window
291
+ if not (
292
+ self.config.use_sliding_window
293
+ and getattr(self.config, "sliding_window", None) is not None
294
+ and self.layer_idx >= self.config.max_window_layers
295
+ ):
296
+ self.sliding_window = None
297
+
298
+ def forward(
299
+ self,
300
+ hidden_states: torch.Tensor,
301
+ position_embeddings: Tuple[torch.Tensor, torch.Tensor],
302
+ attention_mask: Optional[torch.Tensor],
303
+ past_key_value: Optional[Cache] = None,
304
+ cache_position: Optional[torch.LongTensor] = None,
305
+ **kwargs: Unpack[FlashAttentionKwargs],
306
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
307
+ input_shape = hidden_states.shape[:-1]
308
+ hidden_shape = (*input_shape, -1, self.head_dim)
309
+
310
+ query_states = self.q_norm(self.q_proj(hidden_states).view(hidden_shape)).transpose(1, 2)
311
+ key_states = self.k_norm(self.k_proj(hidden_states).view(hidden_shape)).transpose(1, 2)
312
+ value_states = self.v_proj(hidden_states).view(hidden_shape).transpose(1, 2)
313
+
314
+ cos, sin = position_embeddings
315
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
316
+
317
+ if past_key_value is not None:
318
+ # sin and cos are specific to RoPE models; cache_position needed for the static cache
319
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
320
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
321
+
322
+ attention_interface: Callable = eager_attention_forward
323
+ if self.config._attn_implementation != "eager":
324
+ if self.config._attn_implementation == "sdpa" and kwargs.get("output_attentions", False):
325
+ logger.warning_once(
326
+ "`torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to "
327
+ 'eager attention. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
328
+ )
329
+ else:
330
+ attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
331
+
332
+ attn_output, attn_weights = attention_interface(
333
+ self,
334
+ query_states,
335
+ key_states,
336
+ value_states,
337
+ attention_mask,
338
+ dropout=0.0 if not self.training else self.attention_dropout,
339
+ scaling=self.scaling,
340
+ sliding_window=self.sliding_window, # diff with Llama
341
+ **kwargs,
342
+ )
343
+
344
+ attn_output = attn_output.reshape(*input_shape, -1).contiguous()
345
+ attn_output = self.o_proj(attn_output)
346
+ return attn_output, attn_weights
347
+
348
+
349
+ class Qwen3DecoderLayer(nn.Module):
350
+ def __init__(self, config: Qwen3Config, layer_idx: int):
351
+ super().__init__()
352
+ self.hidden_size = config.hidden_size
353
+ self.self_attn = Qwen3Attention(config=config, layer_idx=layer_idx)
354
+ self.mlp = Qwen3MLP(config)
355
+ self.input_layernorm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
356
+ self.post_attention_layernorm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
357
+ if (
358
+ config.sliding_window and config._attn_implementation != "flash_attention_2"
359
+ ): # diff with Llama is this warning
360
+ logger.warning_once(
361
+ f"Sliding Window Attention is enabled but not implemented for `{config._attn_implementation}`; "
362
+ "unexpected results may be encountered."
363
+ )
364
+
365
+ def forward(
366
+ self,
367
+ hidden_states: torch.Tensor,
368
+ attention_mask: Optional[torch.Tensor] = None,
369
+ position_ids: Optional[torch.LongTensor] = None,
370
+ past_key_value: Optional[Cache] = None,
371
+ output_attentions: Optional[bool] = False,
372
+ use_cache: Optional[bool] = False,
373
+ cache_position: Optional[torch.LongTensor] = None,
374
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
375
+ **kwargs: Unpack[FlashAttentionKwargs],
376
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
377
+ residual = hidden_states
378
+
379
+ hidden_states = self.input_layernorm(hidden_states)
380
+
381
+ # Self Attention
382
+ hidden_states, self_attn_weights = self.self_attn(
383
+ hidden_states=hidden_states,
384
+ attention_mask=attention_mask,
385
+ position_ids=position_ids,
386
+ past_key_value=past_key_value,
387
+ output_attentions=output_attentions,
388
+ use_cache=use_cache,
389
+ cache_position=cache_position,
390
+ position_embeddings=position_embeddings,
391
+ **kwargs,
392
+ )
393
+ hidden_states = residual + hidden_states
394
+
395
+ # Fully Connected
396
+ residual = hidden_states
397
+ hidden_states = self.post_attention_layernorm(hidden_states)
398
+ hidden_states = self.mlp(hidden_states)
399
+ hidden_states = residual + hidden_states
400
+
401
+ outputs = (hidden_states,)
402
+ if output_attentions:
403
+ outputs += (self_attn_weights,)
404
+
405
+ return outputs
406
+
407
+
408
+ class Qwen3RotaryEmbedding(nn.Module):
409
+ def __init__(self, config: Qwen3Config, device=None):
410
+ super().__init__()
411
+ # BC: "rope_type" was originally "type"
412
+ if hasattr(config, "rope_scaling") and config.rope_scaling is not None:
413
+ self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
414
+ else:
415
+ self.rope_type = "default"
416
+ self.max_seq_len_cached = config.max_position_embeddings
417
+ self.original_max_seq_len = config.max_position_embeddings
418
+
419
+ self.config = config
420
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
421
+
422
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device)
423
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
424
+ self.original_inv_freq = self.inv_freq
425
+
426
+ @torch.no_grad()
427
+ @dynamic_rope_update # power user: used with advanced RoPE types (e.g. dynamic rope)
428
+ def forward(self, x, position_ids):
429
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
430
+ position_ids_expanded = position_ids[:, None, :].float()
431
+
432
+ device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
433
+ with torch.autocast(device_type=device_type, enabled=False): # Force float32
434
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
435
+ emb = torch.cat((freqs, freqs), dim=-1)
436
+ cos = emb.cos() * self.attention_scaling
437
+ sin = emb.sin() * self.attention_scaling
438
+
439
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
440
+
441
+
442
+ QWEN3_START_DOCSTRING = r"""
443
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
444
+ library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
445
+ etc.)
446
+
447
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
448
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
449
+ and behavior.
450
+
451
+ Parameters:
452
+ config ([`Qwen3Config`]):
453
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
454
+ load the weights associated with the model, only the configuration. Check out the
455
+ [`~PreTrainedModel.from_pretrained`] method to load the model weights.
456
+ """
457
+
458
+
459
+ @add_start_docstrings(
460
+ "The bare Qwen3 Model outputting raw hidden-states without any specific head on top.",
461
+ QWEN3_START_DOCSTRING,
462
+ )
463
+ class Qwen3PreTrainedModel(PreTrainedModel):
464
+ config_class = Qwen3Config
465
+ base_model_prefix = "model"
466
+ supports_gradient_checkpointing = True
467
+ _no_split_modules = ["Qwen3DecoderLayer"]
468
+ _skip_keys_device_placement = ["past_key_values"]
469
+ _supports_flash_attn_2 = True
470
+ _supports_sdpa = True
471
+ _supports_flex_attn = True
472
+ _supports_cache_class = True
473
+ _supports_quantized_cache = True
474
+ _supports_static_cache = True
475
+ _supports_attention_backend = True
476
+
477
+ def _init_weights(self, module):
478
+ std = self.config.initializer_range
479
+ if isinstance(module, nn.Linear):
480
+ module.weight.data.normal_(mean=0.0, std=std)
481
+ if module.bias is not None:
482
+ module.bias.data.zero_()
483
+ elif isinstance(module, nn.Embedding):
484
+ module.weight.data.normal_(mean=0.0, std=std)
485
+ if module.padding_idx is not None:
486
+ module.weight.data[module.padding_idx].zero_()
487
+
488
+
489
+ QWEN3_INPUTS_DOCSTRING = r"""
490
+ Args:
491
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
492
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
493
+ it.
494
+
495
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
496
+ [`PreTrainedTokenizer.__call__`] for details.
497
+
498
+ [What are input IDs?](../glossary#input-ids)
499
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
500
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
501
+
502
+ - 1 for tokens that are **not masked**,
503
+ - 0 for tokens that are **masked**.
504
+
505
+ [What are attention masks?](../glossary#attention-mask)
506
+
507
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
508
+ [`PreTrainedTokenizer.__call__`] for details.
509
+
510
+ If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
511
+ `past_key_values`).
512
+
513
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
514
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
515
+ information on the default strategy.
516
+
517
+ - 1 indicates the head is **not masked**,
518
+ - 0 indicates the head is **masked**.
519
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
520
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
521
+ config.n_positions - 1]`.
522
+
523
+ [What are position IDs?](../glossary#position-ids)
524
+ past_key_values (`Cache`, *optional*):
525
+ Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
526
+ blocks) that can be used to speed up sequential decoding. This typically consists in the `past_key_values`
527
+ returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`.
528
+
529
+ It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache).
530
+
531
+ If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
532
+ have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
533
+ of shape `(batch_size, sequence_length)`.
534
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
535
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
536
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
537
+ model's internal embedding lookup matrix.
538
+ use_cache (`bool`, *optional*):
539
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
540
+ `past_key_values`).
541
+ output_attentions (`bool`, *optional*):
542
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
543
+ tensors for more detail.
544
+ output_hidden_states (`bool`, *optional*):
545
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
546
+ more detail.
547
+ return_dict (`bool`, *optional*):
548
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
549
+ cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
550
+ Indices depicting the position of the input sequence tokens in the sequence. Contrarily to `position_ids`,
551
+ this tensor is not affected by padding. It is used to update the cache in the correct position and to infer
552
+ the complete sequence length.
553
+ """
554
+
555
+
556
+ @add_start_docstrings(
557
+ "The bare Qwen3 Model outputting raw hidden-states without any specific head on top.",
558
+ QWEN3_START_DOCSTRING,
559
+ )
560
+ class Qwen3Model(Qwen3PreTrainedModel):
561
+ """
562
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Qwen3DecoderLayer`]
563
+
564
+ Args:
565
+ config: Qwen3Config
566
+ """
567
+
568
+ def __init__(self, config: Qwen3Config):
569
+ super().__init__(config)
570
+ self.padding_idx = config.pad_token_id
571
+ self.vocab_size = config.vocab_size
572
+
573
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
574
+ self.layers = nn.ModuleList(
575
+ [Qwen3DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
576
+ )
577
+ self.norm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
578
+ self.rotary_emb = Qwen3RotaryEmbedding(config=config)
579
+ self.gradient_checkpointing = False
580
+
581
+ # Initialize weights and apply final processing
582
+ self.post_init()
583
+
584
+ def get_input_embeddings(self):
585
+ return self.embed_tokens
586
+
587
+ def set_input_embeddings(self, value):
588
+ self.embed_tokens = value
589
+
590
+ @can_return_tuple
591
+ @add_start_docstrings_to_model_forward(QWEN3_INPUTS_DOCSTRING)
592
+ def forward(
593
+ self,
594
+ input_ids: Optional[torch.LongTensor] = None,
595
+ attention_mask: Optional[torch.Tensor] = None,
596
+ position_ids: Optional[torch.LongTensor] = None,
597
+ past_key_values: Optional[Cache] = None,
598
+ inputs_embeds: Optional[torch.FloatTensor] = None,
599
+ use_cache: Optional[bool] = None,
600
+ output_attentions: Optional[bool] = None,
601
+ output_hidden_states: Optional[bool] = None,
602
+ cache_position: Optional[torch.LongTensor] = None,
603
+ **flash_attn_kwargs: Unpack[FlashAttentionKwargs],
604
+ ) -> BaseModelOutputWithPast:
605
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
606
+ output_hidden_states = (
607
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
608
+ )
609
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
610
+
611
+ if (input_ids is None) ^ (inputs_embeds is not None):
612
+ raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
613
+
614
+ if self.gradient_checkpointing and self.training and use_cache:
615
+ logger.warning_once(
616
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
617
+ )
618
+ use_cache = False
619
+
620
+ # TODO (joao): remove this exception in v4.56 -- it exists for users that try to pass a legacy cache
621
+ if not isinstance(past_key_values, (type(None), Cache)):
622
+ raise ValueError("The `past_key_values` should be either a `Cache` object or `None`.")
623
+
624
+ if inputs_embeds is None:
625
+ inputs_embeds = self.embed_tokens(input_ids)
626
+
627
+ if use_cache and past_key_values is None:
628
+ past_key_values = DynamicCache()
629
+
630
+ if cache_position is None:
631
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
632
+ cache_position = torch.arange(
633
+ past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
634
+ )
635
+
636
+ if position_ids is None:
637
+ position_ids = cache_position.unsqueeze(0)
638
+
639
+ causal_mask = self._update_causal_mask(
640
+ attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions
641
+ )
642
+
643
+ hidden_states = inputs_embeds
644
+
645
+ # create position embeddings to be shared across the decoder layers
646
+ position_embeddings = self.rotary_emb(hidden_states, position_ids)
647
+
648
+ # decoder layers
649
+ all_hidden_states = () if output_hidden_states else None
650
+ all_self_attns = () if output_attentions else None
651
+
652
+ for decoder_layer in self.layers[: self.config.num_hidden_layers]:
653
+ if output_hidden_states:
654
+ all_hidden_states += (hidden_states,)
655
+
656
+ if self.gradient_checkpointing and self.training:
657
+ layer_outputs = self._gradient_checkpointing_func(
658
+ partial(decoder_layer.__call__, **flash_attn_kwargs),
659
+ hidden_states,
660
+ causal_mask,
661
+ position_ids,
662
+ past_key_values,
663
+ output_attentions,
664
+ use_cache,
665
+ cache_position,
666
+ position_embeddings,
667
+ )
668
+ else:
669
+ layer_outputs = decoder_layer(
670
+ hidden_states,
671
+ attention_mask=causal_mask,
672
+ position_ids=position_ids,
673
+ past_key_value=past_key_values,
674
+ output_attentions=output_attentions,
675
+ use_cache=use_cache,
676
+ cache_position=cache_position,
677
+ position_embeddings=position_embeddings,
678
+ **flash_attn_kwargs,
679
+ )
680
+
681
+ hidden_states = layer_outputs[0]
682
+
683
+ if output_attentions:
684
+ all_self_attns += (layer_outputs[1],)
685
+
686
+ hidden_states = self.norm(hidden_states)
687
+
688
+ # add hidden states from the last decoder layer
689
+ if output_hidden_states:
690
+ all_hidden_states += (hidden_states,)
691
+
692
+ return BaseModelOutputWithPast(
693
+ last_hidden_state=hidden_states,
694
+ past_key_values=past_key_values if use_cache else None,
695
+ hidden_states=all_hidden_states,
696
+ attentions=all_self_attns,
697
+ )
698
+
699
+ def _update_causal_mask(
700
+ self,
701
+ attention_mask: torch.Tensor,
702
+ input_tensor: torch.Tensor,
703
+ cache_position: torch.Tensor,
704
+ past_key_values: Cache,
705
+ output_attentions: bool = False,
706
+ ):
707
+ if self.config._attn_implementation == "flash_attention_2":
708
+ if attention_mask is not None and past_key_values is not None:
709
+ is_padding_right = attention_mask[:, -1].sum().item() != input_tensor.size()[0]
710
+ if is_padding_right:
711
+ raise ValueError(
712
+ "You are attempting to perform batched generation with padding_side='right'"
713
+ " this may lead to unexpected behaviour for Flash Attention version of Qwen3. Make sure to "
714
+ " call `tokenizer.padding_side = 'left'` before tokenizing the input. "
715
+ )
716
+ if attention_mask is not None and 0.0 in attention_mask:
717
+ return attention_mask
718
+ return None
719
+
720
+ # For SDPA, when possible, we will rely on its `is_causal` argument instead of its `attn_mask` argument, in
721
+ # order to dispatch on Flash Attention 2. This feature is not compatible with static cache, as SDPA will fail
722
+ # to infer the attention mask.
723
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
724
+ using_static_cache = isinstance(past_key_values, StaticCache)
725
+ using_sliding_window_cache = isinstance(past_key_values, SlidingWindowCache)
726
+
727
+ # When output attentions is True, sdpa implementation's forward method calls the eager implementation's forward
728
+ if (
729
+ self.config._attn_implementation == "sdpa"
730
+ and not (using_static_cache or using_sliding_window_cache)
731
+ and not output_attentions
732
+ ):
733
+ if AttentionMaskConverter._ignore_causal_mask_sdpa(
734
+ attention_mask,
735
+ inputs_embeds=input_tensor,
736
+ past_key_values_length=past_seen_tokens,
737
+ sliding_window=self.config.sliding_window,
738
+ is_training=self.training,
739
+ ):
740
+ return None
741
+
742
+ dtype, device = input_tensor.dtype, input_tensor.device
743
+ min_dtype = torch.finfo(dtype).min
744
+ sequence_length = input_tensor.shape[1]
745
+ # SlidingWindowCache or StaticCache
746
+ if using_sliding_window_cache or using_static_cache:
747
+ target_length = past_key_values.get_max_cache_shape()
748
+ # DynamicCache or no cache
749
+ else:
750
+ target_length = (
751
+ attention_mask.shape[-1]
752
+ if isinstance(attention_mask, torch.Tensor)
753
+ else past_seen_tokens + sequence_length + 1
754
+ )
755
+
756
+ # In case the provided `attention` mask is 2D, we generate a causal mask here (4D).
757
+ causal_mask = self._prepare_4d_causal_attention_mask_with_cache_position(
758
+ attention_mask,
759
+ sequence_length=sequence_length,
760
+ target_length=target_length,
761
+ dtype=dtype,
762
+ device=device,
763
+ cache_position=cache_position,
764
+ batch_size=input_tensor.shape[0],
765
+ config=self.config,
766
+ past_key_values=past_key_values,
767
+ )
768
+
769
+ if (
770
+ self.config._attn_implementation == "sdpa"
771
+ and attention_mask is not None
772
+ and attention_mask.device.type in ["cuda", "xpu"]
773
+ and not output_attentions
774
+ ):
775
+ # Attend to all tokens in fully masked rows in the causal_mask, for example the relevant first rows when
776
+ # using left padding. This is required by F.scaled_dot_product_attention memory-efficient attention path.
777
+ # Details: https://github.com/pytorch/pytorch/issues/110213
778
+ causal_mask = AttentionMaskConverter._unmask_unattended(causal_mask, min_dtype)
779
+
780
+ return causal_mask
781
+
782
+ @staticmethod
783
+ def _prepare_4d_causal_attention_mask_with_cache_position(
784
+ attention_mask: torch.Tensor,
785
+ sequence_length: int,
786
+ target_length: int,
787
+ dtype: torch.dtype,
788
+ device: torch.device,
789
+ cache_position: torch.Tensor,
790
+ batch_size: int,
791
+ config: Qwen3Config,
792
+ past_key_values: Cache,
793
+ ):
794
+ """
795
+ Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
796
+ `(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing.
797
+
798
+ Args:
799
+ attention_mask (`torch.Tensor`):
800
+ A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape `(batch_size, 1, query_length, key_value_length)`.
801
+ sequence_length (`int`):
802
+ The sequence length being processed.
803
+ target_length (`int`):
804
+ The target length: when generating with static cache, the mask should be as long as the static cache, to account for the 0 padding, the part of the cache that is not filled yet.
805
+ dtype (`torch.dtype`):
806
+ The dtype to use for the 4D attention mask.
807
+ device (`torch.device`):
808
+ The device to place the 4D attention mask on.
809
+ cache_position (`torch.Tensor`):
810
+ Indices depicting the position of the input sequence tokens in the sequence.
811
+ batch_size (`torch.Tensor`):
812
+ Batch size.
813
+ config (`Qwen3Config`):
814
+ The model's configuration class
815
+ past_key_values (`Cache`):
816
+ The cache class that is being used currently to generate
817
+ """
818
+ if attention_mask is not None and attention_mask.dim() == 4:
819
+ # In this case we assume that the mask comes already in inverted form and requires no inversion or slicing.
820
+ causal_mask = attention_mask
821
+ else:
822
+ min_dtype = torch.finfo(dtype).min
823
+ causal_mask = torch.full(
824
+ (sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device
825
+ )
826
+ diagonal_attend_mask = torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
827
+ if config.sliding_window is not None:
828
+ # if we have sliding window, we should not attend to tokens beyond sliding window length, so we mask them out also
829
+ # the check is needed to verify is current checkpoint was trained with sliding window or not
830
+ if not isinstance(past_key_values, SlidingWindowCache) or sequence_length > target_length:
831
+ sliding_attend_mask = torch.arange(target_length, device=device) <= (
832
+ cache_position.reshape(-1, 1) - config.sliding_window
833
+ )
834
+ diagonal_attend_mask.bitwise_or_(sliding_attend_mask)
835
+ causal_mask *= diagonal_attend_mask
836
+ causal_mask = causal_mask[None, None, :, :].expand(batch_size, 1, -1, -1)
837
+ if attention_mask is not None:
838
+ causal_mask = causal_mask.clone() # copy to contiguous memory for in-place edit
839
+ if attention_mask.shape[-1] > target_length:
840
+ attention_mask = attention_mask[:, :target_length]
841
+ mask_length = attention_mask.shape[-1]
842
+ padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :].to(
843
+ causal_mask.device
844
+ )
845
+ padding_mask = padding_mask == 0
846
+ causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
847
+ padding_mask, min_dtype
848
+ )
849
+ return causal_mask
850
+
851
+
852
+ class KwargsForCausalLM(FlashAttentionKwargs, LossKwargs): ...
853
+
854
+
855
+ class Qwen3ForCausalLM(Qwen3PreTrainedModel, GenerationMixin):
856
+ _tied_weights_keys = ["lm_head.weight"]
857
+ _tp_plan = {"lm_head": "colwise_rep"}
858
+ _pp_plan = {"lm_head": (["hidden_states"], ["logits"])}
859
+
860
+ def __init__(self, config):
861
+ super().__init__(config)
862
+ self.model = Qwen3Model(config)
863
+ self.vocab_size = config.vocab_size
864
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
865
+
866
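+ # Override the global 'sdpa' entry in ALL_ATTENTION_FUNCTIONS so every attention layer uses the NPU fused kernel defined above.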
+ if config._attn_implementation == 'sdpa':
867
+ ALL_ATTENTION_FUNCTIONS['sdpa'] = spec_sdpa_attention_forward
868
+ print("use specific sdpa attention")
869
+ # Initialize weights and apply final processing
870
+ self.post_init()
871
+
872
+ def get_input_embeddings(self):
873
+ return self.model.embed_tokens
874
+
875
+ def set_input_embeddings(self, value):
876
+ self.model.embed_tokens = value
877
+
878
+ def get_output_embeddings(self):
879
+ return self.lm_head
880
+
881
+ def set_output_embeddings(self, new_embeddings):
882
+ self.lm_head = new_embeddings
883
+
884
+ def set_decoder(self, decoder):
885
+ self.model = decoder
886
+
887
+ def get_decoder(self):
888
+ return self.model
889
+
890
+ @can_return_tuple
891
+ @deprecate_kwarg("num_logits_to_keep", version="4.50", new_name="logits_to_keep")
892
+ @add_start_docstrings_to_model_forward(QWEN3_INPUTS_DOCSTRING)
893
+ @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
894
+ def forward(
895
+ self,
896
+ input_ids: Optional[torch.LongTensor] = None,
897
+ attention_mask: Optional[torch.Tensor] = None,
898
+ position_ids: Optional[torch.LongTensor] = None,
899
+ past_key_values: Optional[Cache] = None,
900
+ inputs_embeds: Optional[torch.FloatTensor] = None,
901
+ labels: Optional[torch.LongTensor] = None,
902
+ use_cache: Optional[bool] = None,
903
+ output_attentions: Optional[bool] = None,
904
+ output_hidden_states: Optional[bool] = None,
905
+ cache_position: Optional[torch.LongTensor] = None,
906
+ logits_to_keep: Union[int, torch.Tensor] = 0,
907
+ **kwargs: Unpack[KwargsForCausalLM],
908
+ ) -> CausalLMOutputWithPast:
909
+ r"""
910
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
911
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
912
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
913
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
914
+
915
+ logits_to_keep (`int` or `torch.Tensor`, *optional*):
916
+ If an `int`, compute logits for the last `logits_to_keep` tokens. If `0`, calculate logits for all
917
+ `input_ids` (special case). Only last token logits are needed for generation, and calculating them only for that
918
+ token can save memory, which becomes pretty significant for long sequences or large vocabulary size.
919
+ If a `torch.Tensor`, must be 1D corresponding to the indices to keep in the sequence length dimension.
920
+ This is useful when using packed tensor format (single dimension for batch and sequence length).
921
+
922
+ Returns:
923
+
924
+ Example:
925
+
926
+ ```python
927
+ >>> from transformers import AutoTokenizer, Qwen3ForCausalLM
928
+
929
+ >>> model = Qwen3ForCausalLM.from_pretrained("Qwen/Qwen3-8B")
930
+ >>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
931
+
932
+ >>> prompt = "Hey, are you conscious? Can you talk to me?"
933
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
934
+
935
+ >>> # Generate
936
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
937
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
938
+ "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
939
+ ```"""
940
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
941
+ output_hidden_states = (
942
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
943
+ )
944
+
945
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
946
+ outputs: BaseModelOutputWithPast = self.model(
947
+ input_ids=input_ids,
948
+ attention_mask=attention_mask,
949
+ position_ids=position_ids,
950
+ past_key_values=past_key_values,
951
+ inputs_embeds=inputs_embeds,
952
+ use_cache=use_cache,
953
+ output_attentions=output_attentions,
954
+ output_hidden_states=output_hidden_states,
955
+ cache_position=cache_position,
956
+ **kwargs,
957
+ )
958
+
959
+ hidden_states = outputs.last_hidden_state
960
+ # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
961
+ slice_indices = slice(-logits_to_keep, None) if isinstance(logits_to_keep, int) else logits_to_keep
962
+ logits = self.lm_head(hidden_states[:, slice_indices, :])
963
+
964
+ loss = None
965
+ if labels is not None:
966
+ loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)
967
+
968
+ return CausalLMOutputWithPast(
969
+ loss=loss,
970
+ logits=logits,
971
+ past_key_values=outputs.past_key_values,
972
+ hidden_states=outputs.hidden_states,
973
+ attentions=outputs.attentions,
974
+ )
975
+
976
+
977
+ @add_start_docstrings(
978
+ """
979
+ The Qwen3 Model transformer with a sequence classification head on top (linear layer).
980
+
981
+ [`Qwen3ForSequenceClassification`] uses the last token in order to do the classification, as other causal models
982
+ (e.g. GPT-2) do.
983
+
984
+ Since it does classification on the last token, it requires to know the position of the last token. If a
985
+ `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
986
+ no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
987
+ padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
988
+ each row of the batch).
989
+ """,
990
+ QWEN3_START_DOCSTRING,
991
+ )
992
+ class Qwen3ForSequenceClassification(Qwen3PreTrainedModel):
993
+ def __init__(self, config):
994
+ super().__init__(config)
995
+ self.num_labels = config.num_labels
996
+ self.model = Qwen3Model(config)
997
+ self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
998
+
999
+ # Initialize weights and apply final processing
1000
+ self.post_init()
1001
+
1002
+ def get_input_embeddings(self):
1003
+ return self.model.embed_tokens
1004
+
1005
+ def set_input_embeddings(self, value):
1006
+ self.model.embed_tokens = value
1007
+
1008
+ @can_return_tuple
1009
+ @add_start_docstrings_to_model_forward(QWEN3_INPUTS_DOCSTRING)
1010
+ def forward(
1011
+ self,
1012
+ input_ids: Optional[torch.LongTensor] = None,
1013
+ attention_mask: Optional[torch.Tensor] = None,
1014
+ position_ids: Optional[torch.LongTensor] = None,
1015
+ past_key_values: Optional[Cache] = None,
1016
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1017
+ labels: Optional[torch.LongTensor] = None,
1018
+ use_cache: Optional[bool] = None,
1019
+ output_attentions: Optional[bool] = None,
1020
+ output_hidden_states: Optional[bool] = None,
1021
+ ) -> SequenceClassifierOutputWithPast:
1022
+ r"""
1023
+ labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1024
+ Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
1025
+ config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
1026
+ `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
1027
+ """
1028
+
1029
+ transformer_outputs: BaseModelOutputWithPast = self.model(
1030
+ input_ids,
1031
+ attention_mask=attention_mask,
1032
+ position_ids=position_ids,
1033
+ past_key_values=past_key_values,
1034
+ inputs_embeds=inputs_embeds,
1035
+ use_cache=use_cache,
1036
+ output_attentions=output_attentions,
1037
+ output_hidden_states=output_hidden_states,
1038
+ )
1039
+ hidden_states = transformer_outputs.last_hidden_state
1040
+ logits = self.score(hidden_states)
1041
+
1042
+ if input_ids is not None:
1043
+ batch_size = input_ids.shape[0]
1044
+ else:
1045
+ batch_size = inputs_embeds.shape[0]
1046
+
1047
+ if self.config.pad_token_id is None and batch_size != 1:
1048
+ raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
1049
+ if self.config.pad_token_id is None:
1050
+ last_non_pad_token = -1
1051
+ elif input_ids is not None:
1052
+ # To handle both left- and right- padding, we take the rightmost token that is not equal to pad_token_id
1053
+ non_pad_mask = (input_ids != self.config.pad_token_id).to(logits.device, torch.int32)
1054
+ token_indices = torch.arange(input_ids.shape[-1], device=logits.device, dtype=torch.int32)
1055
+ last_non_pad_token = (token_indices * non_pad_mask).argmax(-1)
1056
+ else:
1057
+ last_non_pad_token = -1
1058
+ logger.warning_once(
1059
+ f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. Results may be "
1060
+ "unexpected if using padding tokens in conjunction with `inputs_embeds.`"
1061
+ )
1062
+
1063
+ pooled_logits = logits[torch.arange(batch_size, device=logits.device), last_non_pad_token]
1064
+
1065
+ loss = None
1066
+ if labels is not None:
1067
+ loss = self.loss_function(logits=logits, labels=labels, pooled_logits=pooled_logits, config=self.config)
1068
+
1069
+ return SequenceClassifierOutputWithPast(
1070
+ loss=loss,
1071
+ logits=pooled_logits,
1072
+ past_key_values=transformer_outputs.past_key_values,
1073
+ hidden_states=transformer_outputs.hidden_states,
1074
+ attentions=transformer_outputs.attentions,
1075
+ )
1076
+
1077
+
1078
+ @add_start_docstrings(
1079
+ """
1080
+ The Qwen3 Model transformer with a token classification head on top (a linear layer on top of the hidden-states
1081
+ output) e.g. for Named-Entity-Recognition (NER) tasks.
1082
+ """,
1083
+ QWEN3_START_DOCSTRING,
1084
+ )
1085
+ class Qwen3ForTokenClassification(Qwen3PreTrainedModel):
1086
+ def __init__(self, config):
1087
+ super().__init__(config)
1088
+ self.num_labels = config.num_labels
1089
+ self.model = Qwen3Model(config)
1090
+ if getattr(config, "classifier_dropout", None) is not None:
1091
+ classifier_dropout = config.classifier_dropout
1092
+ elif getattr(config, "hidden_dropout", None) is not None:
1093
+ classifier_dropout = config.hidden_dropout
1094
+ else:
1095
+ classifier_dropout = 0.1
1096
+ self.dropout = nn.Dropout(classifier_dropout)
1097
+ self.score = nn.Linear(config.hidden_size, config.num_labels)
1098
+
1099
+ # Initialize weights and apply final processing
1100
+ self.post_init()
1101
+
1102
+ def get_input_embeddings(self):
1103
+ return self.model.embed_tokens
1104
+
1105
+ def set_input_embeddings(self, value):
1106
+ self.model.embed_tokens = value
1107
+
1108
+ @can_return_tuple
1109
+ @add_start_docstrings_to_model_forward(QWEN3_INPUTS_DOCSTRING)
1110
+ @add_code_sample_docstrings(
1111
+ checkpoint=_CHECKPOINT_FOR_DOC,
1112
+ output_type=TokenClassifierOutput,
1113
+ config_class=_CONFIG_FOR_DOC,
1114
+ )
1115
+ def forward(
1116
+ self,
1117
+ input_ids: Optional[torch.LongTensor] = None,
1118
+ attention_mask: Optional[torch.Tensor] = None,
1119
+ position_ids: Optional[torch.LongTensor] = None,
1120
+ past_key_values: Optional[Cache] = None,
1121
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1122
+ labels: Optional[torch.LongTensor] = None,
1123
+ use_cache: Optional[bool] = None,
1124
+ output_attentions: Optional[bool] = None,
1125
+ output_hidden_states: Optional[bool] = None,
1126
+ ) -> TokenClassifierOutput:
1127
+ r"""
1128
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1129
+ Labels for computing the token classification loss. Indices should be in `[0, ...,
1130
+ config.num_labels - 1]`. A classification (Cross-Entropy) loss is computed over all
1131
+ labeled token positions.
1132
+ """
1133
+
1134
+ outputs: BaseModelOutputWithPast = self.model(
1135
+ input_ids,
1136
+ attention_mask=attention_mask,
1137
+ position_ids=position_ids,
1138
+ past_key_values=past_key_values,
1139
+ inputs_embeds=inputs_embeds,
1140
+ use_cache=use_cache,
1141
+ output_attentions=output_attentions,
1142
+ output_hidden_states=output_hidden_states,
1143
+ )
1144
+ sequence_output = outputs.last_hidden_state
1145
+ sequence_output = self.dropout(sequence_output)
1146
+ logits = self.score(sequence_output)
1147
+
1148
+ loss = None
1149
+ if labels is not None:
1150
+ loss = self.loss_function(logits, labels, self.config)
1151
+
1152
+ return TokenClassifierOutput(
1153
+ loss=loss,
1154
+ logits=logits,
1155
+ hidden_states=outputs.hidden_states,
1156
+ attentions=outputs.attentions,
1157
+ )
1158
+
1159
+
1160
+ @add_start_docstrings(
1161
+ """
1162
+ The Qwen3 Model transformer with a span classification head on top for extractive question-answering tasks like
1163
+ SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
1164
+ """,
1165
+ QWEN3_START_DOCSTRING,
1166
+ )
1167
+ class Qwen3ForQuestionAnswering(Qwen3PreTrainedModel):
1168
+ base_model_prefix = "transformer"
1169
+
1170
+ def __init__(self, config):
1171
+ super().__init__(config)
1172
+ self.transformer = Qwen3Model(config)
1173
+ self.qa_outputs = nn.Linear(config.hidden_size, 2)
1174
+
1175
+ # Initialize weights and apply final processing
1176
+ self.post_init()
1177
+
1178
+ def get_input_embeddings(self):
1179
+ return self.transformer.embed_tokens
1180
+
1181
+ def set_input_embeddings(self, value):
1182
+ self.transformer.embed_tokens = value
1183
+
1184
+ @can_return_tuple
1185
+ @add_start_docstrings_to_model_forward(QWEN3_INPUTS_DOCSTRING)
1186
+ def forward(
1187
+ self,
1188
+ input_ids: Optional[torch.LongTensor] = None,
1189
+ attention_mask: Optional[torch.FloatTensor] = None,
1190
+ position_ids: Optional[torch.LongTensor] = None,
1191
+ past_key_values: Optional[Cache] = None,
1192
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1193
+ start_positions: Optional[torch.LongTensor] = None,
1194
+ end_positions: Optional[torch.LongTensor] = None,
1195
+ output_attentions: Optional[bool] = None,
1196
+ output_hidden_states: Optional[bool] = None,
1197
+ **kwargs,
1198
+ ) -> QuestionAnsweringModelOutput:
1199
+ r"""
1200
+ start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1201
+ Labels for position (index) of the start of the labelled span for computing the token classification loss.
1202
+ Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence
1203
+ are not taken into account for computing the loss.
1204
+ end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1205
+ Labels for position (index) of the end of the labelled span for computing the token classification loss.
1206
+ Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence
1207
+ are not taken into account for computing the loss.
1208
+ """
1209
+
1210
+ outputs: BaseModelOutputWithPast = self.transformer(
1211
+ input_ids,
1212
+ attention_mask=attention_mask,
1213
+ position_ids=position_ids,
1214
+ past_key_values=past_key_values,
1215
+ inputs_embeds=inputs_embeds,
1216
+ output_attentions=output_attentions,
1217
+ output_hidden_states=output_hidden_states,
1218
+ )
1219
+
1220
+ sequence_output = outputs.last_hidden_state
1221
+
1222
+ logits = self.qa_outputs(sequence_output)
1223
+ start_logits, end_logits = logits.split(1, dim=-1)
1224
+ start_logits = start_logits.squeeze(-1).contiguous()
1225
+ end_logits = end_logits.squeeze(-1).contiguous()
1226
+
1227
+ loss = None
1228
+ if start_positions is not None and end_positions is not None:
1229
+ loss = self.loss_function(start_logits, end_logits, start_positions, end_positions, **kwargs)
1230
+
1231
+ return QuestionAnsweringModelOutput(
1232
+ loss=loss,
1233
+ start_logits=start_logits,
1234
+ end_logits=end_logits,
1235
+ hidden_states=outputs.hidden_states,
1236
+ attentions=outputs.attentions,
1237
+ )
1238
+
1239
+
1240
+ __all__ = [
1241
+ "Qwen3ForCausalLM",
1242
+ "Qwen3ForQuestionAnswering",
1243
+ "Qwen3Model",
1244
+ "Qwen3PreTrainedModel",
1245
+ "Qwen3ForSequenceClassification",
1246
+ "Qwen3ForTokenClassification",
1247
+ ]
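
As a quick illustration of the last-non-pad-token pooling that `Qwen3ForSequenceClassification` performs above, here is a minimal, self-contained sketch in plain PyTorch. The tensors and the pad id are invented for the example; the model code above remains the reference.

```python
# Minimal sketch of the last-non-pad-token pooling used by Qwen3ForSequenceClassification.
# Standalone illustration only: the tensors and the pad id (0) are made up for the example.
import torch

def pool_last_non_pad(logits: torch.Tensor, input_ids: torch.Tensor, pad_token_id: int = 0) -> torch.Tensor:
    """Select, for each row, the classification logits at the rightmost non-pad position."""
    batch_size, seq_len, _ = logits.shape
    non_pad_mask = (input_ids != pad_token_id).to(torch.int32)       # (B, S)
    token_indices = torch.arange(seq_len, dtype=torch.int32)         # (S,)
    last_non_pad = (token_indices * non_pad_mask).argmax(dim=-1)     # (B,), handles left and right padding
    return logits[torch.arange(batch_size), last_non_pad]            # (B, num_labels)

logits = torch.randn(2, 4, 3)                                        # (batch, seq, num_labels)
input_ids = torch.tensor([[5, 6, 7, 8], [5, 6, 0, 0]])               # second row is right-padded
print(pool_last_non_pad(logits, input_ids).shape)                    # torch.Size([2, 3])
```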
modeling_sailvit.py ADDED
@@ -0,0 +1,198 @@
1
+ # adapted from https://huggingface.co/apple/aimv2-huge-patch14-448 (modification: add gradient checkpoint support)
2
+ from typing import Optional, Tuple, Union
3
+
4
+ import torch
5
+ from .configuration_sailvit import SAILViTConfig
6
+ from torch import nn
7
+ from torch.nn import functional as F
8
+ from transformers.modeling_outputs import BaseModelOutputWithNoAttention
9
+ from transformers.modeling_utils import PreTrainedModel
10
+
11
+ __all__ = ["SAILViTModel"]
12
+
13
+
14
+ class RMSNorm(nn.Module):
15
+ def __init__(self, dim: int, eps: float = 1e-6):
16
+ super().__init__()
17
+ self.weight = nn.Parameter(torch.ones(dim))
18
+ self.eps = eps
19
+
20
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
21
+ output = self._norm(x.float()).type_as(x)
22
+ return output * self.weight
23
+
24
+ def extra_repr(self) -> str:
25
+ return f"{tuple(self.weight.shape)}, eps={self.eps}"
26
+
27
+ def _norm(self, x: torch.Tensor) -> torch.Tensor:
28
+ return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
29
+
30
+
31
+ class SAILViTSwiGLUFFN(nn.Module):
32
+ def __init__(self, config: SAILViTConfig):
33
+ super().__init__()
34
+ hidden_features = config.intermediate_size
35
+ in_features = config.hidden_size
36
+ bias = config.use_bias
37
+
38
+ self.fc1 = nn.Linear(in_features, hidden_features, bias=bias)
39
+ self.fc2 = nn.Linear(hidden_features, in_features, bias=bias)
40
+ self.fc3 = nn.Linear(in_features, hidden_features, bias=bias)
41
+
42
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
43
+ x = F.silu(self.fc1(x)) * self.fc3(x)
44
+ x = self.fc2(x)
45
+ return x
46
+
47
+
48
+ class SAILViTPatchEmbed(nn.Module):
49
+ def __init__(self, config: SAILViTConfig):
50
+ super().__init__()
51
+ self.proj = nn.Conv2d(
52
+ config.num_channels,
53
+ config.hidden_size,
54
+ kernel_size=(config.patch_size, config.patch_size),
55
+ stride=(config.patch_size, config.patch_size),
56
+ )
57
+ self.norm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
58
+
59
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
60
+ x = self.proj(x).flatten(2).transpose(1, 2)
61
+ x = self.norm(x)
62
+ return x
63
+
64
+
65
+ class SAILViTPreprocessor(nn.Module):
66
+ def __init__(self, config: SAILViTConfig):
67
+ super().__init__()
68
+ num_patches = (config.image_size // config.patch_size) ** 2
69
+
70
+ self.patchifier = SAILViTPatchEmbed(config)
71
+ self.pos_embed = nn.Parameter(torch.zeros((1, num_patches, config.hidden_size)))
72
+
73
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
74
+ tokens = self.patchifier(x)
75
+ _, N, _ = tokens.shape
76
+ pos_embed = self.pos_embed.to(tokens.device)
77
+ tokens = tokens + pos_embed[:, :N]
78
+ return tokens
79
+
80
+
81
+ class SAILViTAttention(nn.Module):
82
+ def __init__(self, config: SAILViTConfig):
83
+ super().__init__()
84
+ dim = config.hidden_size
85
+
86
+ self.num_heads = config.num_attention_heads
87
+ self.qkv = nn.Linear(dim, dim * 3, bias=config.qkv_bias)
88
+ self.attn_drop = nn.Dropout(config.attention_dropout)
89
+ self.proj = nn.Linear(dim, dim, bias=config.use_bias)
90
+ self.proj_drop = nn.Dropout(config.projection_dropout)
91
+
92
+ def forward(
93
+ self, x: torch.Tensor, mask: Optional[torch.Tensor] = None
94
+ ) -> torch.Tensor:
95
+ B, N, C = x.shape
96
+ qkv = (
97
+ self.qkv(x)
98
+ .reshape(B, N, 3, self.num_heads, C // self.num_heads)
99
+ .permute(2, 0, 3, 1, 4)
100
+ )
101
+ q, k, v = qkv.unbind(0)
102
+
103
+ x = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
104
+ x = x.transpose(1, 2).contiguous().reshape(B, N, C)
105
+ x = self.proj(x)
106
+ x = self.proj_drop(x)
107
+ return x
108
+
109
+
110
+ class SAILViTBlock(nn.Module):
111
+ def __init__(self, config: SAILViTConfig):
112
+ super().__init__()
113
+ self.attn = SAILViTAttention(config)
114
+ self.norm_1 = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
115
+ self.mlp = SAILViTSwiGLUFFN(config)
116
+ self.norm_2 = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
117
+
118
+ def forward(
119
+ self, x: torch.Tensor, mask: Optional[torch.Tensor] = None
120
+ ) -> torch.Tensor:
121
+ x = x + self.attn(self.norm_1(x), mask)
122
+ x = x + self.mlp(self.norm_2(x))
123
+ return x
124
+
125
+
126
+ class SAILViTTransformer(nn.Module):
127
+ def __init__(self, config: SAILViTConfig):
128
+ super().__init__()
129
+ self.blocks = nn.ModuleList(
130
+ [SAILViTBlock(config) for _ in range(config.num_hidden_layers)]
131
+ )
132
+ self.post_trunk_norm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
133
+ self.gradient_checkpointing = False
134
+
135
+ def forward(
136
+ self,
137
+ tokens: torch.Tensor,
138
+ mask: Optional[torch.Tensor] = None,
139
+ output_hidden_states: bool = False,
140
+ ) -> Tuple[torch.Tensor, Optional[Tuple[torch.Tensor, ...]]]:
141
+ hidden_states = () if output_hidden_states else None
142
+ for block in self.blocks:
143
+ if self.gradient_checkpointing and self.training:
144
+ tokens = self._gradient_checkpointing_func(block.__call__, tokens, mask)
145
+ else:
146
+ tokens = block(tokens, mask)
147
+ if output_hidden_states:
148
+ hidden_states += (tokens,)
149
+ tokens = self.post_trunk_norm(tokens)
150
+ return tokens, hidden_states
151
+
152
+
153
+ class SAILViTPretrainedModel(PreTrainedModel):
154
+ config_class = SAILViTConfig
155
+ base_model_prefix = "sailvit"
156
+ supports_gradient_checkpointing = True
157
+ main_input_name = "pixel_values"
158
+ _no_split_modules = ["SAILViTPreprocessor", "SAILViTBlock"]
159
+ _supports_sdpa = True
160
+
161
+
162
+ class SAILViTModel(SAILViTPretrainedModel):
163
+ def __init__(self, config: SAILViTConfig):
164
+ super().__init__(config)
165
+ self.preprocessor = SAILViTPreprocessor(config)
166
+ self.trunk = SAILViTTransformer(config)
167
+
168
+ def forward(
169
+ self,
170
+ pixel_values: torch.Tensor,
171
+ mask: Optional[torch.Tensor] = None,
172
+ output_hidden_states: Optional[bool] = None,
173
+ return_dict: Optional[bool] = None,
174
+ ) -> Union[
175
+ Tuple[torch.Tensor],
176
+ Tuple[torch.Tensor, Tuple[torch.Tensor, ...]],
177
+ BaseModelOutputWithNoAttention,
178
+ ]:
179
+ if output_hidden_states is None:
180
+ output_hidden_states = self.config.output_hidden_states
181
+ if return_dict is None:
182
+ return_dict = self.config.use_return_dict
183
+
184
+ x = self.preprocessor(pixel_values)
185
+ x, hidden_states = self.trunk(
186
+ x, mask, output_hidden_states=output_hidden_states
187
+ )
188
+
189
+ if not return_dict:
190
+ res = (x,)
191
+ res += (hidden_states,) if output_hidden_states else ()
192
+ return res
193
+
194
+ return BaseModelOutputWithNoAttention(
195
+ last_hidden_state=x,
196
+ hidden_states=hidden_states,
197
+ )
198
+
modeling_sailvl.py ADDED
@@ -0,0 +1,349 @@
1
+ # --------------------------------------------------------
2
+ # SAILVL
3
+ # Copyright (c) 2024 OpenGVLab
4
+ # Licensed under The MIT License [see LICENSE for details]
5
+ # --------------------------------------------------------
6
+ import warnings
7
+ from typing import Any, List, Optional, Tuple, Union
8
+
9
+ import torch.utils.checkpoint
10
+ import transformers
11
+ from torch import nn
12
+ from torch.nn import CrossEntropyLoss
13
+ from transformers import (AutoModel, GenerationConfig, LlamaForCausalLM,
14
+ LlamaTokenizer)
15
+ from transformers.modeling_outputs import CausalLMOutputWithPast
16
+ from transformers.modeling_utils import PreTrainedModel
17
+ from transformers.utils import ModelOutput, logging
18
+ # from .modeling_qwen2 import Qwen2ForCausalLM
19
+ from .modeling_qwen3 import Qwen3ForCausalLM
20
+
21
+ from .configuration_sailvl import SAILVLConfig
22
+ from .conversation import get_conv_template
23
+ from .modeling_sailvit import SAILViTModel
24
+
25
+ logger = logging.get_logger(__name__)
26
+
27
+
28
+ def version_cmp(v1, v2, op='eq'):
29
+ import operator
30
+
31
+ from packaging import version
32
+ op_func = getattr(operator, op)
33
+ return op_func(version.parse(v1), version.parse(v2))
34
+
35
+
36
+ class SAILVLModel(PreTrainedModel):
37
+ config_class = SAILVLConfig
38
+ main_input_name = 'pixel_values'
39
+ _supports_flash_attn_2 = True
40
+ _no_split_modules = ['InternVisionModel', 'LlamaDecoderLayer', 'InternLM2DecoderLayer']
41
+
42
+ def __init__(self, config: SAILVLConfig, vision_model=None, language_model=None):
43
+ super().__init__(config)
44
+
45
+ assert version_cmp(transformers.__version__, '4.36.2', 'ge')
46
+ image_size = config.force_image_size or config.vision_config.image_size
47
+ patch_size = config.vision_config.patch_size
48
+ self.patch_size = patch_size
49
+ self.select_layer = config.select_layer
50
+ self.template = config.template
51
+ self.num_image_token = int((image_size // patch_size) ** 2 * (config.downsample_ratio ** 2))
52
+ self.downsample_ratio = config.downsample_ratio
53
+ self.ps_version = config.ps_version
54
+
55
+ logger.info(f'num_image_token: {self.num_image_token}')
56
+ logger.info(f'ps_version: {self.ps_version}')
57
+ if vision_model is not None:
58
+ self.vision_model = vision_model
59
+ else:
60
+ self.vision_model = SAILViTModel(config.vision_config)
61
+ if language_model is not None:
62
+ self.language_model = language_model
63
+ self.config.llm_config = language_model.config
64
+ else:
65
+ if config.llm_config.architectures[0] == 'LlamaForCausalLM':
66
+ self.language_model = LlamaForCausalLM(config.llm_config)
67
+ elif config.llm_config.architectures[0] == 'Qwen3ForCausalLM':
68
+ self.language_model = Qwen3ForCausalLM(config.llm_config)
69
+ else:
70
+ raise NotImplementedError(f'{config.llm_config.architectures[0]} is not implemented.')
71
+
72
+ vit_hidden_size = config.vision_config.hidden_size
73
+ llm_hidden_size = config.llm_config.hidden_size
74
+
75
+ self.mlp1 = nn.Sequential(
76
+ nn.LayerNorm(vit_hidden_size * int(1 / self.downsample_ratio) ** 2),
77
+ nn.Linear(vit_hidden_size * int(1 / self.downsample_ratio) ** 2, llm_hidden_size),
78
+ nn.GELU(),
79
+ nn.Linear(llm_hidden_size, llm_hidden_size)
80
+ )
81
+
82
+ self.img_context_token_id = None
83
+ self.conv_template = get_conv_template(self.template)
84
+ self.system_message = self.conv_template.system_message
85
+
86
+ def forward(
87
+ self,
88
+ pixel_values: torch.FloatTensor,
89
+ input_ids: torch.LongTensor = None,
90
+ attention_mask: Optional[torch.Tensor] = None,
91
+ position_ids: Optional[torch.LongTensor] = None,
92
+ image_flags: Optional[torch.LongTensor] = None,
93
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
94
+ labels: Optional[torch.LongTensor] = None,
95
+ use_cache: Optional[bool] = None,
96
+ output_attentions: Optional[bool] = None,
97
+ output_hidden_states: Optional[bool] = None,
98
+ return_dict: Optional[bool] = None,
99
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
100
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
101
+
102
+ image_flags = image_flags.squeeze(-1)
103
+ input_embeds = self.language_model.get_input_embeddings()(input_ids)
104
+
105
+ vit_embeds = self.extract_feature(pixel_values)
106
+ vit_embeds = vit_embeds[image_flags == 1]
107
+ vit_batch_size = pixel_values.shape[0]
108
+
109
+ B, N, C = input_embeds.shape
110
+ input_embeds = input_embeds.reshape(B * N, C)
111
+
112
+ if torch.distributed.get_rank() == 0:
113
+ print(f'dynamic ViT batch size: {vit_batch_size}, images per sample: {vit_batch_size / B}, dynamic token length: {N}')
114
+
115
+ input_ids = input_ids.reshape(B * N)
116
+ selected = (input_ids == self.img_context_token_id)
117
+ try:
118
+ input_embeds[selected] = input_embeds[selected] * 0.0 + vit_embeds.reshape(-1, C)
119
+ except Exception as e:
120
+ vit_embeds = vit_embeds.reshape(-1, C)
121
+ print(f'warning: {e}, input_embeds[selected].shape={input_embeds[selected].shape}, '
122
+ f'vit_embeds.shape={vit_embeds.shape}')
123
+ n_token = selected.sum()
124
+ input_embeds[selected] = input_embeds[selected] * 0.0 + vit_embeds[:n_token]
125
+
126
+ input_embeds = input_embeds.reshape(B, N, C)
127
+
128
+ outputs = self.language_model(
129
+ inputs_embeds=input_embeds,
130
+ attention_mask=attention_mask,
131
+ position_ids=position_ids,
132
+ past_key_values=past_key_values,
133
+ use_cache=use_cache,
134
+ output_attentions=output_attentions,
135
+ output_hidden_states=output_hidden_states,
136
+ return_dict=return_dict,
137
+ )
138
+ logits = outputs.logits
139
+
140
+ loss = None
141
+ if labels is not None:
142
+ # Shift so that tokens < n predict n
143
+ shift_logits = logits[..., :-1, :].contiguous()
144
+ shift_labels = labels[..., 1:].contiguous()
145
+ # Flatten the tokens
146
+ loss_fct = CrossEntropyLoss()
147
+ shift_logits = shift_logits.view(-1, self.language_model.config.vocab_size)
148
+ shift_labels = shift_labels.view(-1)
149
+ # Enable model parallelism
150
+ shift_labels = shift_labels.to(shift_logits.device)
151
+ loss = loss_fct(shift_logits, shift_labels)
152
+
153
+ if not return_dict:
154
+ output = (logits,) + outputs[1:]
155
+ return (loss,) + output if loss is not None else output
156
+
157
+ return CausalLMOutputWithPast(
158
+ loss=loss,
159
+ logits=logits,
160
+ past_key_values=outputs.past_key_values,
161
+ hidden_states=outputs.hidden_states,
162
+ attentions=outputs.attentions,
163
+ )
164
+
165
+ def pixel_shuffle(self, x, scale_factor=0.5):
166
+ n, w, h, c = x.size()
167
+ # N, W, H, C --> N, W, H * scale, C // scale
168
+ x = x.reshape(n, w, int(h * scale_factor), int(c / scale_factor))
169
+ # N, W, H * scale, C // scale --> N, H * scale, W, C // scale
170
+ x = x.permute(0, 2, 1, 3).contiguous()
171
+ # N, H * scale, W, C // scale --> N, H * scale, W * scale, C // (scale ** 2)
172
+ x = x.view(n, int(h * scale_factor), int(w * scale_factor),
173
+ int(c / (scale_factor * scale_factor)))
174
+ if self.ps_version == 'v1':
175
+ warnings.warn("In ps_version 'v1', the height and width have not been swapped back, "
176
+ 'which results in a transposed image.')
177
+ else:
178
+ x = x.permute(0, 2, 1, 3).contiguous()
179
+ return x
180
+
181
+ def extract_feature(self, pixel_values):
182
+ if self.select_layer == -1:
183
+ vit_embeds = self.vision_model(
184
+ pixel_values=pixel_values,
185
+ output_hidden_states=False,
186
+ return_dict=True).last_hidden_state
187
+ else:
188
+ vit_embeds = self.vision_model(
189
+ pixel_values=pixel_values,
190
+ output_hidden_states=True,
191
+ return_dict=True).hidden_states[self.select_layer]
192
+ vit_embeds = vit_embeds
193
+
194
+ h = w = int(vit_embeds.shape[1] ** 0.5)
195
+ vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], h, w, -1)
196
+ vit_embeds = self.pixel_shuffle(vit_embeds, scale_factor=self.downsample_ratio)
197
+ vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], -1, vit_embeds.shape[-1])
198
+ vit_embeds = self.mlp1(vit_embeds)
199
+ return vit_embeds
200
+
201
+ def batch_chat(self, tokenizer, pixel_values, questions, generation_config, num_patches_list=None,
202
+ history=None, return_history=False, IMG_START_TOKEN='<img>', IMG_END_TOKEN='</img>',
203
+ IMG_CONTEXT_TOKEN='<IMG_CONTEXT>', verbose=False, image_counts=None):
204
+ if history is not None or return_history:
205
+ print('Now multi-turn chat is not supported in batch_chat.')
206
+ raise NotImplementedError
207
+
208
+ if image_counts is not None:
209
+ num_patches_list = image_counts
210
+ print('Warning: `image_counts` is deprecated. Please use `num_patches_list` instead.')
211
+
212
+ img_context_token_id = tokenizer.convert_tokens_to_ids(IMG_CONTEXT_TOKEN)
213
+ self.img_context_token_id = img_context_token_id
214
+
215
+ if verbose and pixel_values is not None:
216
+ image_bs = pixel_values.shape[0]
217
+ print(f'dynamic ViT batch size: {image_bs}')
218
+
219
+ queries = []
220
+ for idx, num_patches in enumerate(num_patches_list):
221
+ question = questions[idx]
222
+ if pixel_values is not None and '<image>' not in question:
223
+ question = '<image>\n' + question
224
+ template = get_conv_template(self.template)
225
+ template.append_message(template.roles[0], question)
226
+ template.append_message(template.roles[1], None)
227
+ query = template.get_prompt()
228
+
229
+ image_tokens = IMG_START_TOKEN + IMG_CONTEXT_TOKEN * self.num_image_token * num_patches + IMG_END_TOKEN
230
+ query = query.replace('<image>', image_tokens, 1)
231
+ queries.append(query)
232
+
233
+ tokenizer.padding_side = 'left'
234
+ model_inputs = tokenizer(queries, return_tensors='pt', padding=True)
235
+ input_ids = model_inputs['input_ids'].cuda()
236
+ attention_mask = model_inputs['attention_mask'].cuda()
237
+ eos_token_id = tokenizer.convert_tokens_to_ids(template.sep)
238
+ generation_config['eos_token_id'] = eos_token_id
239
+ generation_output = self.generate(
240
+ pixel_values=pixel_values,
241
+ input_ids=input_ids,
242
+ attention_mask=attention_mask,
243
+ **generation_config
244
+ )
245
+ responses = tokenizer.batch_decode(generation_output, skip_special_tokens=True)
246
+ responses = [response.split(template.sep)[0].strip() for response in responses]
247
+ return responses
248
+
249
+ def chat(self, tokenizer, pixel_values, question, generation_config, history=None, return_history=False,
250
+ num_patches_list=None, IMG_START_TOKEN='<img>', IMG_END_TOKEN='</img>', IMG_CONTEXT_TOKEN='<IMG_CONTEXT>',
251
+ verbose=False):
252
+
253
+ if history is None and pixel_values is not None and '<image>' not in question:
254
+ question = '<image>\n' + question
255
+
256
+ if num_patches_list is None:
257
+ num_patches_list = [pixel_values.shape[0]] if pixel_values is not None else []
258
+ assert pixel_values is None or len(pixel_values) == sum(num_patches_list)
259
+
260
+ img_context_token_id = tokenizer.convert_tokens_to_ids(IMG_CONTEXT_TOKEN)
261
+ self.img_context_token_id = img_context_token_id
262
+
263
+ template = get_conv_template(self.template)
264
+ template.system_message = self.system_message
265
+ eos_token_id = tokenizer.convert_tokens_to_ids(template.sep)
266
+
267
+ history = [] if history is None else history
268
+ for (old_question, old_answer) in history:
269
+ template.append_message(template.roles[0], old_question)
270
+ template.append_message(template.roles[1], old_answer)
271
+ template.append_message(template.roles[0], question)
272
+ template.append_message(template.roles[1], None)
273
+ query = template.get_prompt()
274
+
275
+ if verbose and pixel_values is not None:
276
+ image_bs = pixel_values.shape[0]
277
+ print(f'dynamic ViT batch size: {image_bs}')
278
+
279
+ for num_patches in num_patches_list:
280
+ image_tokens = IMG_START_TOKEN + IMG_CONTEXT_TOKEN * self.num_image_token * num_patches + IMG_END_TOKEN
281
+ query = query.replace('<image>', image_tokens, 1)
282
+
283
+ model_inputs = tokenizer(query, return_tensors='pt')
284
+ input_ids = model_inputs['input_ids'].cuda()
285
+ attention_mask = model_inputs['attention_mask'].cuda()
286
+ generation_config['eos_token_id'] = eos_token_id
287
+ generation_output = self.generate(
288
+ pixel_values=pixel_values,
289
+ input_ids=input_ids,
290
+ attention_mask=attention_mask,
291
+ **generation_config
292
+ )
293
+ response = tokenizer.batch_decode(generation_output, skip_special_tokens=True)[0]
294
+ response = response.split(template.sep)[0].strip()
295
+ history.append((question, response))
296
+ if return_history:
297
+ return response, history
298
+ else:
299
+ query_to_print = query.replace(IMG_CONTEXT_TOKEN, '')
300
+ query_to_print = query_to_print.replace(f'{IMG_START_TOKEN}{IMG_END_TOKEN}', '<image>')
301
+ if verbose:
302
+ print(query_to_print, response)
303
+ return response
304
+
305
+ @torch.no_grad()
306
+ def generate(
307
+ self,
308
+ pixel_values: Optional[torch.FloatTensor] = None,
309
+ input_ids: Optional[torch.FloatTensor] = None,
310
+ attention_mask: Optional[torch.LongTensor] = None,
311
+ visual_features: Optional[torch.FloatTensor] = None,
312
+ generation_config: Optional[GenerationConfig] = None,
313
+ output_hidden_states: Optional[bool] = None,
314
+ return_dict: Optional[bool] = None,
315
+ **generate_kwargs,
316
+ ) -> torch.LongTensor:
317
+ # img_context_token_id = tokenizer.convert_tokens_to_ids('<IMG_CONTEXT>')
318
+ # self.img_context_token_id = img_context_token_id
319
+ self.img_context_token_id = 151671
320
+
321
+ assert self.img_context_token_id is not None
322
+ if pixel_values is not None:
323
+ if visual_features is not None:
324
+ vit_embeds = visual_features
325
+ else:
326
+ vit_embeds = self.extract_feature(pixel_values)
327
+ input_embeds = self.language_model.get_input_embeddings()(input_ids)
328
+ B, N, C = input_embeds.shape
329
+ input_embeds = input_embeds.reshape(B * N, C)
330
+
331
+ input_ids = input_ids.reshape(B * N)
332
+ selected = (input_ids == self.img_context_token_id)
333
+ assert selected.sum() != 0
334
+ input_embeds[selected] = vit_embeds.reshape(-1, C).to(input_embeds.device)
335
+
336
+ input_embeds = input_embeds.reshape(B, N, C)
337
+ else:
338
+ input_embeds = self.language_model.get_input_embeddings()(input_ids)
339
+
340
+ outputs = self.language_model.generate(
341
+ inputs_embeds=input_embeds,
342
+ attention_mask=attention_mask,
343
+ generation_config=generation_config,
344
+ output_hidden_states=output_hidden_states,
345
+ use_cache=True,
346
+ **generate_kwargs,
347
+ )
348
+
349
+ return outputs
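
The `pixel_shuffle` call in `extract_feature` above implements the 2x2 token merge from the architecture table: with a 448x448 tile and patch size 14 the ViT emits 32x32 = 1024 tokens, and `downsample_ratio = 0.5` merges them into 256 tokens with 4x wider channels before `mlp1`. A standalone sketch of the 'v2' path (the channel width is illustrative, not the real ViT hidden size):

```python
# Standalone sketch of the 2x2 token merge performed by pixel_shuffle (ps_version 'v2' path).
# The channel width (1024) is illustrative; only the token counts match the real model.
import torch

def pixel_shuffle(x: torch.Tensor, scale_factor: float = 0.5) -> torch.Tensor:
    n, w, h, c = x.size()
    x = x.reshape(n, w, int(h * scale_factor), int(c / scale_factor))
    x = x.permute(0, 2, 1, 3).contiguous()
    x = x.view(n, int(h * scale_factor), int(w * scale_factor), int(c / (scale_factor ** 2)))
    return x.permute(0, 2, 1, 3).contiguous()   # swap H/W back, as the 'v2' branch does

vit_tokens = torch.randn(1, 32, 32, 1024)       # (N, H, W, C): 32x32 = 1024 patch tokens per tile
merged = pixel_shuffle(vit_tokens)              # -> (1, 16, 16, 4096)
print(merged.flatten(1, 2).shape)               # torch.Size([1, 256, 4096]), then projected by mlp1
```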
preprocessor_config.json ADDED
@@ -0,0 +1,21 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoImageProcessor": "image_processing_sailvl.SAILVLImageProcessor",
4
+ "AutoProcessor": "processing_sailvl.SAILVLProcessor"
5
+ },
6
+ "max_dynamic_patch": 10,
7
+ "patch_size": 14,
8
+ "image_size": 448,
9
+ "use_msac": false,
10
+ "image_mean": [
11
+ 0.485,
12
+ 0.456,
13
+ 0.406
14
+ ],
15
+ "image_std": [
16
+ 0.229,
17
+ 0.224,
18
+ 0.225
19
+ ],
20
+ "pad_input": true
21
+ }
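
The config above implies 448x448 tiles normalized with the ImageNet mean/std. A minimal single-tile preprocessing sketch (torchvision is used here only for brevity; the actual `SAILVLImageProcessor` additionally performs dynamic tiling up to `max_dynamic_patch=10` and padding):

```python
# Minimal single-tile preprocessing sketch matching preprocessor_config.json (448x448, ImageNet stats).
from PIL import Image
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((448, 448), interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.new("RGB", (640, 480))            # placeholder image
pixel_values = transform(image).unsqueeze(0)    # (1, 3, 448, 448)
print(pixel_values.shape)
```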
processing_sailvl.py ADDED
@@ -0,0 +1,168 @@
1
+ # coding=utf-8
2
+ # Copyright 2025 The Moonshot Team and HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # The code is based on the Qwen2VL processor (qwen2_vl/processing_qwen2_vl.py), originally modified for KimiVL and adapted here for SAIL-VL.
5
+ #
6
+ # Licensed under the Apache License, Version 2.0 (the "License");
7
+ # you may not use this file except in compliance with the License.
8
+ # You may obtain a copy of the License at
9
+ #
10
+ # http://www.apache.org/licenses/LICENSE-2.0
11
+ #
12
+ # Unless required by applicable law or agreed to in writing, software
13
+ # distributed under the License is distributed on an "AS IS" BASIS,
14
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+ # See the License for the specific language governing permissions and
16
+ # limitations under the License.
17
+ """
18
+ Processor class for SAIL-VL (adapted from the KimiVL processor).
19
+ """
20
+
21
+ from typing import List, Union
22
+
23
+ from transformers.feature_extraction_utils import BatchFeature
24
+ from transformers.image_utils import ImageInput
25
+ from transformers.processing_utils import ProcessingKwargs, ProcessorMixin, Unpack, _validate_images_text_input_order
26
+ from transformers.tokenization_utils_base import PreTokenizedInput, TextInput
27
+ from transformers.utils import logging
28
+ import torch
29
+
30
+
31
+ logger = logging.get_logger(__name__)
32
+
33
+
34
+ class SAILVLProcessorKwargs(ProcessingKwargs, total=False):
35
+ _defaults = {
36
+ "text_kwargs": {
37
+ "padding": False,
38
+ },
39
+ "images_kwargs": {},
40
+ }
41
+
42
+
43
+ class SAILVLProcessor(ProcessorMixin):
44
+ # r"""
45
+ # Constructs a KimiVL processor which wraps a KimiVL image processor and a tokenizer into a single processor.
46
+ # [`KimiVLProcessor`] offers all the functionalities of [`KimiVLImageProcessor`] and [`TikTokenTokenizer`]. See the
47
+ # [`~KimiVLProcessor.__call__`] and [`~KimiVLProcessor.decode`] for more information.
48
+ # Args:
49
+ # image_processor ([`KimiVLImageProcessor`], *optional*):
50
+ # The image processor is a required input.
51
+ # tokenizer ([`TikTokenTokenizer`], *optional*):
52
+ # The tokenizer is a required input.
53
+ # chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages
54
+ # in a chat into a tokenizable string.
55
+ # """
56
+
57
+ attributes = ["image_processor", "tokenizer"]
58
+ valid_kwargs = ["chat_template"]
59
+ image_processor_class = "AutoImageProcessor"
60
+ tokenizer_class = "AutoTokenizer"
61
+
62
+ def __init__(
63
+ self,
64
+ image_processor=None,
65
+ tokenizer=None,
66
+ chat_template=None,
67
+ **kwargs,
68
+ ):
69
+ self.image_token = "<image>"
70
+ self.num_image_token = 256
71
+ super().__init__(image_processor, tokenizer, chat_template=chat_template)
72
+
73
+ def __call__(
74
+ self,
75
+ images: ImageInput = None,
76
+ text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
77
+ **kwargs: Unpack[SAILVLProcessorKwargs],
78
+ ) -> BatchFeature:
79
+ """
80
+ Main method to prepare one or several sequence(s) and image(s) for the model. This method forwards the `text`
81
+ and `kwargs` arguments to TikTokenTokenizer's [`~TikTokenTokenizer.__call__`] if `text` is not `None` to encode
82
+ the text. To prepare the image(s), this method forwards the `images` and `kwargs` arguments to
83
+ CLIPImageProcessor's [`~CLIPImageProcessor.__call__`] if `images` is not `None`. Please refer to the docstring
84
+ of the above two methods for more information.
85
+ Args:
86
+ images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
87
+ The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
88
+ tensor. Both channels-first and channels-last formats are supported.
89
+ text (`str`, `List[str]`, `List[List[str]]`):
90
+ The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
91
+ (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
92
+ `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
93
+ return_tensors (`str` or [`~utils.TensorType`], *optional*):
94
+ If set, will return tensors of a particular framework. Acceptable values are:
95
+ - `'tf'`: Return TensorFlow `tf.constant` objects.
96
+ - `'pt'`: Return PyTorch `torch.Tensor` objects.
97
+ - `'np'`: Return NumPy `np.ndarray` objects.
98
+ - `'jax'`: Return JAX `jnp.ndarray` objects.
99
+ Returns:
100
+ [`BatchFeature`]: A [`BatchFeature`] with the following fields:
101
+ - **input_ids** -- List of token ids to be fed to a model. Returned when `text` is not `None`.
102
+ - **attention_mask** -- List of indices specifying which tokens should be attended to by the model (when
103
+ `return_attention_mask=True` or if *"attention_mask"* is in `self.model_input_names` and if `text` is not
104
+ `None`).
105
+ - **pixel_values** -- Pixel values to be fed to a model. Returned when `images` is not `None`.
106
+ """
107
+ if images is None and text is None:
108
+ raise ValueError("You have to specify at least one of `images` or `text`.")
109
+
110
+ # check if images and text inputs are reversed for BC
111
+ images, text = _validate_images_text_input_order(images, text)
112
+
113
+ output_kwargs = self._merge_kwargs(
114
+ SAILVLProcessorKwargs,
115
+ tokenizer_init_kwargs=self.tokenizer.init_kwargs,
116
+ **kwargs,
117
+ )
118
+ # print(output_kwargs)
119
+ if images is not None:
120
+ image_inputs = self.image_processor(images, **output_kwargs["images_kwargs"])
121
+
122
+ else:
123
+ image_inputs = {}
124
+
125
+ if isinstance(text, str):
126
+ text = [text]
127
+ elif not isinstance(text, list) and not isinstance(text[0], str):
128
+ raise ValueError("Invalid input text. Please provide a string, or a list of strings")
129
+
130
+ query = text[0]
131
+ if images is not None:
132
+ for num_patches in image_inputs['num_patches_list']:
133
+ image_tokens = '<img>' + '<IMG_CONTEXT>' * self.num_image_token * num_patches + '</img>'
134
+ query = query.replace('<image>', image_tokens, 1)
135
+ image_inputs.pop('num_patches_list')
136
+
137
+ model_inputs = self.tokenizer(query, return_tensors='pt')
138
+ input_ids = model_inputs['input_ids'].cuda()
139
+ attention_mask = model_inputs['attention_mask'].cuda()
140
+ stop_word = '<|im_end|>'
141
+ eos_token_id = self.tokenizer.convert_tokens_to_ids(stop_word)
142
+
143
+
144
+ text_inputs = {'input_ids': input_ids, 'attention_mask': attention_mask, 'eos_token_id': eos_token_id}
145
+ return BatchFeature(data={**text_inputs, **image_inputs})
146
+
147
+ def batch_decode(self, *args, **kwargs):
148
+ """
149
+ This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please
150
+ refer to the docstring of this method for more information.
151
+ """
152
+ return self.tokenizer.batch_decode(*args, **kwargs)
153
+
154
+ def decode(self, *args, **kwargs):
155
+ """
156
+ This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
157
+ the docstring of this method for more information.
158
+ """
159
+ return self.tokenizer.decode(*args, **kwargs)
160
+
161
+ @property
162
+ def model_input_names(self):
163
+ tokenizer_input_names = self.tokenizer.model_input_names
164
+ image_processor_input_names = self.image_processor.model_input_names
165
+ return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
166
+
167
+
168
+ __all__ = ["SAILVLProcessor", "SAILVLProcessorKwargs"]
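
A hedged end-to-end usage sketch for this processor, assuming the `AutoProcessor`/`AutoModel` entrypoints declared in `preprocessor_config.json` and that the image processor returns a `pixel_values` tensor; the repo id, prompt format, and generation settings are placeholders rather than verified defaults.

```python
# Hedged usage sketch; repo id, prompt format, pixel_values key, and generation settings are assumptions.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

path = "BytedanceDouyinContent/SAIL-VL2-2B"     # placeholder repo id
model = AutoModel.from_pretrained(path, torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval()
processor = AutoProcessor.from_pretrained(path, trust_remote_code=True)

image = Image.new("RGB", (448, 448))            # stand-in for a real image
prompt = "<image>\nDescribe this image."        # each '<image>' is expanded to <img><IMG_CONTEXT>...</img>
inputs = processor(images=[image], text=[prompt])

with torch.no_grad():
    output_ids = model.generate(
        pixel_values=inputs["pixel_values"].to(torch.bfloat16).cuda(),   # assumes this key is returned
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        eos_token_id=inputs["eos_token_id"],
        max_new_tokens=128,
    )
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

In practice, `SAILVLModel.chat(...)` in `modeling_sailvl.py` wraps the same steps together with the conversation template, which is the safer entry point for chat-style prompts.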
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:81bf64fd18d98deef69ea2152dc76af3ea92c09caf85966dda8630966db43eeb
3
+ size 5461342238
spec_sdpa_attention.py ADDED
@@ -0,0 +1,91 @@
1
+ from typing import Optional, Tuple
2
+
3
+ import torch
4
+ try:
5
+ import torch_npu
6
+ except ImportError:
7
+ print('torch_npu is not available, falling back to the standard GPU backend...')
8
+ import math
9
+
10
+
11
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
12
+ """
13
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
14
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
15
+ """
16
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
17
+ if n_rep == 1:
18
+ return hidden_states
19
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
20
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
21
+
22
+
23
+ def spec_sdpa_attention_forward(
24
+ module: torch.nn.Module,
25
+ query: torch.Tensor,
26
+ key: torch.Tensor,
27
+ value: torch.Tensor,
28
+ attention_mask: Optional[torch.Tensor],
29
+ dropout: float = 0.0,
30
+ scaling: Optional[float] = None,
31
+ is_causal: Optional[bool] = None,
32
+ **kwargs,
33
+ ) -> Tuple[torch.Tensor, None]:
34
+ if hasattr(module, "num_key_value_groups"):
35
+ key = repeat_kv(key, module.num_key_value_groups)
36
+ value = repeat_kv(value, module.num_key_value_groups)
37
+
38
+ causal_mask = attention_mask
39
+ if attention_mask is not None and causal_mask.ndim == 4:
40
+ causal_mask = causal_mask[:, :, :, : key.shape[-2]]
41
+
42
+ # SDPA with memory-efficient backend is bugged with non-contiguous inputs and custom attn_mask for some torch versions
43
+ # Reference: https://github.com/pytorch/pytorch/issues/112577.
44
+ query = query.contiguous()
45
+ key = key.contiguous()
46
+ value = value.contiguous()
47
+
48
+ # We dispatch to SDPA's Flash Attention or Efficient kernels via this `is_causal` if statement instead of an inline conditional assignment
49
+ # in SDPA to support both torch.compile's dynamic shapes and full graph options. An inline conditional prevents dynamic shapes from compiling.
50
+ # Note that it is important to check first for the shape, otherwise compile will fail with `argument 'is_causal' must be bool, not SymBool`
51
+ if is_causal is None:
52
+ is_causal = query.shape[2] > 1 and causal_mask is None
53
+
54
+ # Shapes (e.g. query.shape[2]) are tensors during jit tracing, resulting in `is_causal` being a tensor.
55
+ # We convert it to a bool for the SDPA kernel that only accepts bools.
56
+
57
+ if attention_mask is None:
58
+ atten_mask_npu = torch.triu(torch.ones([query.size(-2),
59
+ query.size(-2)]), diagonal=1).bool().to(query.device)
60
+ elif attention_mask.dtype == torch.bool:
61
+ atten_mask_npu = torch.logical_not(attention_mask.bool()).to(attention_mask.device)  # the attention mask needs to be inverted for npu_fusion_attention
62
+ else:
63
+ atten_mask_npu = attention_mask.bool().to(attention_mask.device)
64
+
65
+ if torch.jit.is_tracing() and isinstance(is_causal, torch.Tensor):
66
+ is_causal = is_causal.item()
67
+
68
+ # attn_output = torch.nn.functional.scaled_dot_product_attention(
69
+ # query,
70
+ # key,
71
+ # value,
72
+ # attn_mask=causal_mask,
73
+ # dropout_p=dropout,
74
+ # scale=scaling,
75
+ # is_causal=is_causal,
76
+ # )
77
+
78
+ head_num = query.shape[1]
79
+ attn_output = torch_npu.npu_fusion_attention(
80
+ query, key, value, head_num, input_layout="BNSD",
81
+ pse=None,
82
+ atten_mask=atten_mask_npu,
83
+ scale=1.0 / math.sqrt(query.shape[-1]),
84
+ pre_tockens=2147483647,
85
+ next_tockens=2147483647,
86
+ keep_prob=1
87
+ )[0]
88
+
89
+ attn_output = attn_output.transpose(1, 2).contiguous()
90
+
91
+ return attn_output, None
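
`repeat_kv` above claims to be equivalent to `torch.repeat_interleave` along the head dimension; a quick standalone check of that claim (shapes are arbitrary):

```python
# Quick check that repeat_kv matches torch.repeat_interleave on the head dimension, as its docstring states.
import torch

def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    batch, num_key_value_heads, slen, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
    return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)

kv = torch.randn(2, 4, 7, 16)                   # (batch, kv_heads, seq, head_dim)
assert torch.equal(repeat_kv(kv, 3), torch.repeat_interleave(kv, repeats=3, dim=1))
print(repeat_kv(kv, 3).shape)                   # torch.Size([2, 12, 7, 16])
```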
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>",
5
+ "<|object_ref_start|>",
6
+ "<|object_ref_end|>",
7
+ "<|box_start|>",
8
+ "<|box_end|>",
9
+ "<|quad_start|>",
10
+ "<|quad_end|>",
11
+ "<|vision_start|>",
12
+ "<|vision_end|>",
13
+ "<|vision_pad|>",
14
+ "<|image_pad|>",
15
+ "<|video_pad|>"
16
+ ],
17
+ "eos_token": {
18
+ "content": "<|im_end|>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ "pad_token": {
25
+ "content": "<|endoftext|>",
26
+ "lstrip": false,
27
+ "normalized": false,
28
+ "rstrip": false,
29
+ "single_word": false
30
+ }
31
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fc7a4004b65007afc1409bed6f57ca23287da9c8889b79974ba6ccb077d30eb9
3
+ size 11424306
tokenizer_config.json ADDED
@@ -0,0 +1,311 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "151665": {
182
+ "content": "<tool_response>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": false
188
+ },
189
+ "151666": {
190
+ "content": "</tool_response>",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": false,
194
+ "single_word": false,
195
+ "special": false
196
+ },
197
+ "151667": {
198
+ "content": "<think>",
199
+ "lstrip": false,
200
+ "normalized": false,
201
+ "rstrip": false,
202
+ "single_word": false,
203
+ "special": false
204
+ },
205
+ "151668": {
206
+ "content": "</think>",
207
+ "lstrip": false,
208
+ "normalized": false,
209
+ "rstrip": false,
210
+ "single_word": false,
211
+ "special": false
212
+ },
213
+ "151669": {
214
+ "content": "<img>",
215
+ "lstrip": false,
216
+ "normalized": false,
217
+ "rstrip": false,
218
+ "single_word": false,
219
+ "special": true
220
+ },
221
+ "151670": {
222
+ "content": "</img>",
223
+ "lstrip": false,
224
+ "normalized": false,
225
+ "rstrip": false,
226
+ "single_word": false,
227
+ "special": true
228
+ },
229
+ "151671": {
230
+ "content": "<IMG_CONTEXT>",
231
+ "lstrip": false,
232
+ "normalized": false,
233
+ "rstrip": false,
234
+ "single_word": false,
235
+ "special": true
236
+ },
237
+ "151672": {
238
+ "content": "<quad>",
239
+ "lstrip": false,
240
+ "normalized": false,
241
+ "rstrip": false,
242
+ "single_word": false,
243
+ "special": true
244
+ },
245
+ "151673": {
246
+ "content": "</quad>",
247
+ "lstrip": false,
248
+ "normalized": false,
249
+ "rstrip": false,
250
+ "single_word": false,
251
+ "special": true
252
+ },
253
+ "151674": {
254
+ "content": "<ref>",
255
+ "lstrip": false,
256
+ "normalized": false,
257
+ "rstrip": false,
258
+ "single_word": false,
259
+ "special": true
260
+ },
261
+ "151675": {
262
+ "content": "</ref>",
263
+ "lstrip": false,
264
+ "normalized": false,
265
+ "rstrip": false,
266
+ "single_word": false,
267
+ "special": true
268
+ },
269
+ "151676": {
270
+ "content": "<box>",
271
+ "lstrip": false,
272
+ "normalized": false,
273
+ "rstrip": false,
274
+ "single_word": false,
275
+ "special": true
276
+ },
277
+ "151677": {
278
+ "content": "</box>",
279
+ "lstrip": false,
280
+ "normalized": false,
281
+ "rstrip": false,
282
+ "single_word": false,
283
+ "special": true
284
+ }
285
+ },
286
+ "additional_special_tokens": [
287
+ "<|im_start|>",
288
+ "<|im_end|>",
289
+ "<|object_ref_start|>",
290
+ "<|object_ref_end|>",
291
+ "<|box_start|>",
292
+ "<|box_end|>",
293
+ "<|quad_start|>",
294
+ "<|quad_end|>",
295
+ "<|vision_start|>",
296
+ "<|vision_end|>",
297
+ "<|vision_pad|>",
298
+ "<|image_pad|>",
299
+ "<|video_pad|>"
300
+ ],
301
+ "bos_token": null,
302
+ "chat_template": "{%- for message in messages -%}{%- if loop.first and messages[0]['role'] != 'system' -%}{{'<|im_system|>system<|im_middle|>You are a helpful assistant<|im_end|>'}}{%- endif -%}{%- if message['role'] == 'system' -%}{{'<|im_system|>'}}{%- endif -%}{%- if message['role'] == 'user' -%}{{'<|im_user|>'}}{%- endif -%}{%- if message['role'] == 'assistant' -%}{{'<|im_assistant|>'}}{%- endif -%}{{- message['role'] -}}{{'<|im_middle|>'}}{%- if message['content'] is string -%}{{- message['content'] + '<|im_end|>' -}}{%- else -%}{%- for content in message['content'] -%}{%- if content['type'] == 'image' or 'image' in content or 'image_url' in content -%}{{'<|media_start|>image<|media_content|><|media_pad|><|media_end|>'}}{%- else -%}{{content['text']}}{%- endif -%}{%- endfor -%}{{'<|im_end|>'}}{%- endif -%}{%- endfor -%}{%- if add_generation_prompt -%}{{'<|im_assistant|>assistant<|im_middle|>'}}{%- endif -%}",
303
+ "clean_up_tokenization_spaces": false,
304
+ "eos_token": "<|im_end|>",
305
+ "errors": "replace",
306
+ "model_max_length": 131072,
307
+ "pad_token": "<|endoftext|>",
308
+ "split_special_tokens": false,
309
+ "tokenizer_class": "Qwen2Tokenizer",
310
+ "unk_token": null
311
+ }
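
The image special tokens registered above are what the modeling code relies on; for example, the hard-coded `img_context_token_id = 151671` in `SAILVLModel.generate` corresponds to `<IMG_CONTEXT>`. A small sketch to confirm the mapping (the repo id is a placeholder):

```python
# Resolve the image special tokens registered above to their ids.
# Per added_tokens_decoder: <img> -> 151669, </img> -> 151670, <IMG_CONTEXT> -> 151671.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BytedanceDouyinContent/SAIL-VL2-2B", trust_remote_code=True)
for token in ("<img>", "</img>", "<IMG_CONTEXT>"):
    print(token, "->", tokenizer.convert_tokens_to_ids(token))
```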
vocab.json ADDED
The diff for this file is too large to render. See raw diff