ZhihongShao committed · Commit 9b04ba2 · 1 Parent(s): 7176629

update README.md
LICENSE CHANGED
@@ -1,21 +1,202 @@
- MIT License
-
- Copyright (c) 2023 DeepSeek
-
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in all
- copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE.
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright (c) 2023 DeepSeek
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
README.md ADDED
@@ -0,0 +1,93 @@
+ ---
+ license: apache-2.0
+ library_name: transformers
+ base_model:
+ - deepseek-ai/DeepSeek-Math-V2
+ ---
+
+ <!-- markdownlint-disable first-line-h1 -->
+ <!-- markdownlint-disable html -->
+ <!-- markdownlint-disable no-duplicate-header -->
+
+ <div align="center">
+ <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-Math-V2" />
+ </div>
+ <hr>
+ <div align="center" style="line-height: 1;">
+ <a href="https://www.deepseek.com/"><img alt="Homepage"
+ src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true"/></a>
+ <a href="https://chat.deepseek.com/"><img alt="Chat"
+ src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white"/></a>
+ <a href="https://huggingface.co/deepseek-ai"><img alt="Hugging Face"
+ src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white"/></a>
+ <br>
+ <a href="https://discord.gg/Tc7c45Zzu5"><img alt="Discord"
+ src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da"/></a>
+ <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true"><img alt="Wechat"
+ src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white"/></a>
+ <a href="https://twitter.com/deepseek_ai"><img alt="Twitter Follow"
+ src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white"/></a>
+ <br>
+ <a href="LICENSE" style="margin: 2px;">
+ <img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <br>
+ </div>
+
+ # DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning
+
+ ## 1. Introduction
+
+ Large language models have made significant progress in mathematical reasoning, which serves as an important testbed for AI and could impact scientific research if further advanced.
+ By scaling reasoning with reinforcement learning that rewards correct final answers, LLMs have gone from poor performance to saturating quantitative reasoning competitions such as AIME and HMMT within a year.
+ However, this approach faces fundamental limitations.
+ Pursuing higher final-answer accuracy does not address a key issue: correct answers do not guarantee correct reasoning.
+ Moreover, many mathematical tasks, such as theorem proving, require rigorous step-by-step derivation rather than a numerical answer, making final-answer rewards inapplicable.
+ To push the limits of deep reasoning, we believe it is necessary to verify the comprehensiveness and rigor of mathematical reasoning.
+ Self-verification is particularly important for scaling test-time compute, especially for open problems without known solutions.
+ Towards self-verifiable mathematical reasoning, we investigate how to train an accurate and faithful LLM-based verifier for theorem proving.
+ We then train a proof generator using the verifier as the reward model, and incentivize the generator to identify and resolve as many issues as possible in its own proofs before finalizing them.
+ To maintain the generation-verification gap as the generator becomes stronger, we propose scaling verification compute to automatically label new hard-to-verify proofs, creating training data that further improves the verifier.
+ Our resulting model, DeepSeekMath-V2, demonstrates strong theorem-proving capabilities, achieving gold-level scores on IMO 2025 and CMO 2024 and a near-perfect 118/120 on Putnam 2024 with scaled test-time compute.
+ While much work remains, these results suggest that self-verifiable mathematical reasoning is a feasible research direction that may help develop more capable mathematical AI systems.
+
+ ## 2. Evaluation Results
+
+ Below are evaluation results on [IMO-ProofBench](https://github.com/google-deepmind/superhuman/tree/main/imobench) (developed by the DeepMind team behind DeepThink IMO-Gold) and recent mathematics competitions, including IMO 2025, CMO 2024, and Putnam 2024.
+
+ **IMO-ProofBench**
+
+ <p align="center">
+ <img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Math-V2/blob/main/figures/IMO-ProofBench.png?raw=true">
+ </p>
+
+ ---
+
+ **Mathematics Competitions**
+
+ <p align="center">
+ <img width="41%" src="https://github.com/deepseek-ai/DeepSeek-Math-V2/blob/main/figures/Competitions.png?raw=true">
+ </p>
+
+ ## 3. Quick Start
+
+ DeepSeekMath-V2 is built on top of DeepSeek-V3.2-Exp-Base.
+ For inference support, please refer to [the DeepSeek-V3.2-Exp GitHub repository](https://github.com/deepseek-ai/DeepSeek-V3.2-Exp).
+
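This commit also ships a reference inference stack under `inference/` (see the files added below). A minimal sketch of running it end to end, assuming an 8-way model-parallel setup; the paths are placeholders:

```bash
cd inference
# Convert the Hugging Face weights into this project's sharded format
# (256 routed experts, matching config_671B_v3.2.json)
python convert.py --hf-ckpt-path /path/to/DeepSeek-Math-V2 --save-path /path/to/ckpt \
    --n-experts 256 --model-parallel 8
# Chat interactively (0.2 mirrors generate.py's default temperature)
torchrun --nproc-per-node 8 generate.py --ckpt-path /path/to/ckpt \
    --config config_671B_v3.2.json --interactive --temperature 0.2
```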
+ ## 4. License
+ This repository and the model weights are licensed under [the Apache License, Version 2.0 (Apache 2.0)](LICENSE).
+
+ ## 5. Citation
+
+ ```bibtex
+ @misc{deepseek-math-v2,
+     author = {Zhihong Shao and Yuxiang Luo and Chengda Lu and Z.Z. Ren and Jiewen Hu and Tian Ye and Zhibin Gou and Shirong Ma and Xiaokang Zhang},
+     title = {DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning},
+     year = {2025},
+ }
+ ```
+
+ ## 6. Contact
+
+ If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
inference/README.md ADDED
@@ -0,0 +1,21 @@
+ # Inference code for DeepSeek models
+
+ First, convert the Hugging Face model weight files to the format used by this project:
+ ```bash
+ python convert.py --hf-ckpt-path ${HF_CKPT_PATH} --save-path ${SAVE_PATH} --n-experts ${EXPERTS} --model-parallel ${MP}
+ ```
+
+ Then chat with the DeepSeek model at will!
+ ```bash
+ torchrun --nproc-per-node ${MP} generate.py --ckpt-path ${SAVE_PATH} --config ${CONFIG} --interactive --temperature ${T}
+ ```
+
+ Or run batch inference from a file:
+ ```bash
+ torchrun --nproc-per-node ${MP} generate.py --ckpt-path ${SAVE_PATH} --config ${CONFIG} --input-file ${FILE}
+ ```
+
+ Or run multi-node inference:
+ ```bash
+ torchrun --nnodes ${NODES} --nproc-per-node $((MP / NODES)) --node-rank $RANK --master-addr $ADDR generate.py --ckpt-path ${SAVE_PATH} --config ${CONFIG} --input-file ${FILE}
+ ```
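As a concrete illustration (the values are examples, not requirements): with the 671B config's 256 routed experts, a 4-node job with 8 GPUs per node gives MP = 32, which divides 256 as convert.py's assertion requires:

```bash
# Example values only; adjust paths, node count, and parallelism to your cluster.
python convert.py --hf-ckpt-path /path/to/hf-weights --save-path /path/to/ckpt \
    --n-experts 256 --model-parallel 32
torchrun --nnodes 4 --nproc-per-node 8 --node-rank $RANK --master-addr $ADDR \
    generate.py --ckpt-path /path/to/ckpt --config config_671B_v3.2.json --input-file prompts.txt
```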
inference/config_671B_v3.2.json ADDED
@@ -0,0 +1,26 @@
+ {
+     "vocab_size": 129280,
+     "dim": 7168,
+     "inter_dim": 18432,
+     "moe_inter_dim": 2048,
+     "n_layers": 61,
+     "n_dense_layers": 3,
+     "n_heads": 128,
+     "n_routed_experts": 256,
+     "n_shared_experts": 1,
+     "n_activated_experts": 8,
+     "n_expert_groups": 8,
+     "n_limited_groups": 4,
+     "route_scale": 2.5,
+     "score_func": "sigmoid",
+     "q_lora_rank": 1536,
+     "kv_lora_rank": 512,
+     "qk_nope_head_dim": 128,
+     "qk_rope_head_dim": 64,
+     "v_head_dim": 128,
+     "dtype": "fp8",
+     "scale_fmt": "ue8m0",
+     "index_n_heads": 64,
+     "index_head_dim": 128,
+     "index_topk": 2048
+ }
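These keys map one-to-one onto the `ModelArgs` dataclass in `inference/model_v32.py`, and generate.py simply splats the parsed JSON into it. A minimal sketch, assuming it is run from the `inference/` directory with the inference dependencies (e.g. the tilelang kernels) installed:

```python
import json

from model_v32 import ModelArgs  # inference/model_v32.py from this repo

with open("config_671B_v3.2.json") as f:
    args = ModelArgs(**json.load(f))

# Keys absent from the JSON keep the dataclass defaults (e.g. max_batch_size=8).
print(args.dim, args.n_layers, args.index_topk)  # 7168 61 2048
```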
inference/convert.py ADDED
@@ -0,0 +1,101 @@
+ import os
+ import shutil
+ from argparse import ArgumentParser
+ from glob import glob
+ from tqdm import tqdm, trange
+
+ import torch
+ from safetensors.torch import safe_open
+ from dist_writer import save_file
+
+
+ mapping = {
+     "embed_tokens": ("embed", 0),
+     "input_layernorm": ("attn_norm", None),
+     "post_attention_layernorm": ("ffn_norm", None),
+     "q_proj": ("wq", 0),
+     "q_a_proj": ("wq_a", None),
+     "q_a_layernorm": ("q_norm", None),
+     "q_b_proj": ("wq_b", 0),
+     "kv_a_proj_with_mqa": ("wkv_a", None),
+     "kv_a_layernorm": ("kv_norm", None),
+     "kv_b_proj": ("wkv_b", 0),
+     "o_proj": ("wo", 1),
+     "gate": ("gate", None),
+     "gate_proj": ("w1", 0),
+     "down_proj": ("w2", 1),
+     "up_proj": ("w3", 0),
+     "norm": ("norm", None),
+     "lm_head": ("head", 0),
+     "scale": ("scale", None),
+     "wq_b": ("wq_b", None),
+     "wk": ("wk", None),
+     "k_norm": ("k_norm", None),
+     "weights_proj": ("weights_proj", None),
+ }
+
+
+ def main(hf_ckpt_path, save_path, n_experts, mp):
+     """
+     Converts and saves model checkpoint files into a specified format.
+
+     Args:
+         hf_ckpt_path (str): Path to the directory containing the input checkpoint files.
+         save_path (str): Path to the directory where the converted checkpoint files will be saved.
+         n_experts (int): Total number of experts in the model.
+         mp (int): Model parallelism factor.
+
+     Returns:
+         None
+     """
+     torch.set_num_threads(8)
+     n_local_experts = n_experts // mp
+     state_dicts = [{} for _ in range(mp)]
+
+     for file_path in tqdm(glob(os.path.join(hf_ckpt_path, "*.safetensors"))):
+         with safe_open(file_path, framework="pt", device="cpu") as f:
+             for name in f.keys():
+                 if "model.layers.61" in name:
+                     continue
+                 param: torch.Tensor = f.get_tensor(name)
+                 if name.startswith("model."):
+                     name = name[len("model."):]
+                 name = name.replace("self_attn", "attn")
+                 name = name.replace("mlp", "ffn")
+                 name = name.replace("weight_scale_inv", "scale")
+                 name = name.replace("e_score_correction_bias", "bias")
+                 key = name.split(".")[-2]
+                 assert key in mapping, f"Key {key} not found in mapping"
+                 new_key, dim = mapping[key]
+                 name = name.replace(key, new_key)
+                 for i in range(mp):
+                     new_param = param
+                     if "experts" in name and "shared_experts" not in name:
+                         idx = int(name.split(".")[-3])
+                         if idx < i * n_local_experts or idx >= (i + 1) * n_local_experts:
+                             continue
+                     elif dim is not None:
+                         assert param.size(dim) % mp == 0, f"Dimension {dim} must be divisible by {mp}"
+                         shard_size = param.size(dim) // mp
+                         new_param = param.narrow(dim, i * shard_size, shard_size).contiguous()
+                     state_dicts[i][name] = new_param
+
+     os.makedirs(save_path, exist_ok=True)
+
+     for i in trange(mp):
+         save_file(state_dicts[i], os.path.join(save_path, f"model{i}-mp{mp}.safetensors"))
+
+     for file_path in glob(os.path.join(hf_ckpt_path, "*token*")):
+         new_file_path = os.path.join(save_path, os.path.basename(file_path))
+         shutil.copyfile(file_path, new_file_path)
+
+
+ if __name__ == "__main__":
+     parser = ArgumentParser()
+     parser.add_argument("--hf-ckpt-path", type=str, required=True)
+     parser.add_argument("--save-path", type=str, required=True)
+     parser.add_argument("--n-experts", type=int, required=True)
+     parser.add_argument("--model-parallel", type=int, required=True)
+     args = parser.parse_args()
+     assert args.n_experts % args.model_parallel == 0, "Number of experts must be divisible by model parallelism"
+     main(args.hf_ckpt_path, args.save_path, args.n_experts, args.model_parallel)
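One detail worth calling out: routed experts are not sliced along a tensor dimension like the other weights; each model-parallel rank keeps a contiguous block of whole experts. A small sketch of the resulting layout (example values matching config_671B_v3.2.json):

```python
# Illustration of convert.py's expert assignment for
# --n-experts 256 --model-parallel 8.
n_experts, mp = 256, 8
n_local_experts = n_experts // mp  # 32 experts per rank
for rank in range(mp):
    lo, hi = rank * n_local_experts, (rank + 1) * n_local_experts
    print(f"model{rank}-mp{mp}.safetensors holds experts [{lo}, {hi})")
```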
inference/dist_writer.py ADDED
@@ -0,0 +1,224 @@
+ # copied from https://gitlab.deepseek.com/deepseek/hai-llm/-/blob/master/scripts/dist_safetensor_writer.py
+
+ import os
+ import math
+ import torch
+ from pathlib import Path
+ from datetime import timedelta
+ from multiprocessing.shared_memory import SharedMemory
+ from uuid import uuid4
+ import numpy as np
+ import time
+ import json
+ try:
+     from hf3fs_fuse.io import make_iovec, make_ioring, ioring, register_fd, deregister_fd, h3fio
+ except Exception:
+     pass
+
+
+ INT_LEN = 8
+ BYTE_ORDER = 'big'
+
+
+ def tensor_to_bytes(tensor: torch.Tensor) -> bytes:
+     if tensor.numel() == 0:
+         return b''
+     return tensor.view(torch.int8).numpy().data.cast('B')
+
+
+ except_fs = {'cpu'}
+ clusters = ['jd', 'hg']
+ hf3fs_paths = []
+ hf3fs_mount_points = []
+ for cluster in clusters:
+     hf3fs_paths += os.listdir(f'/hf3fs-{cluster}') if os.path.exists(f'/hf3fs-{cluster}') else []
+     hf3fs_mount_points += [os.path.join(f'/hf3fs-{cluster}', f) for f in hf3fs_paths if f not in except_fs]
+
+
+ def get_hf3fs_mount_point(file_path: str) -> str:
+     rp = os.path.realpath(Path(file_path).absolute())
+     return '/'.join(rp.split('/')[:3])
+
+ class DistWriter():
+     def __init__(self, max_ops=100<<10, write_buf_size=1<<29):
+         self.max_ops = max_ops
+         self.write_buf_size = write_buf_size
+         self.shm = SharedMemory(name=f'hf3fs-iovs-{uuid4()}', create=True, size=self.write_buf_size)
+         self._iov = {}
+         self._buf = {}
+         self._ior = {}
+         for hf3fs_mount_point in hf3fs_mount_points:
+             try:
+                 iov = make_iovec(self.shm, hf3fs_mount_point, block_size=0, numa=-1)
+                 buf = memoryview(iov.iov)
+                 ior = make_ioring(hf3fs_mount_point, 100 << 10, for_read=False, io_depth=-1, numa=-1)
+                 self._iov[hf3fs_mount_point] = iov
+                 self._buf[hf3fs_mount_point] = buf
+                 self._ior[hf3fs_mount_point] = ior
+             except Exception:
+                 pass
+         self.shm.unlink()
+         self.fd_cache = {}
+
+     def _open(self, file_path):
+         if self.fd_cache.get(file_path) is None:
+             # os.makedirs(os.path.dirname(file_path), exist_ok=True)
+             hf3fs_mount_point = get_hf3fs_mount_point(file_path)
+             try:
+                 fd = os.open(file_path, os.O_WRONLY | os.O_CREAT | os.O_SYNC)
+             except Exception:  # opening an existing file on weka can raise FileExistsError
+                 fd = os.open(file_path, os.O_WRONLY | os.O_SYNC)
+             register_fd(fd)
+             self.fd_cache[file_path] = (fd, hf3fs_mount_point)
+         return self.fd_cache[file_path]
+
+     def _close_all(self, file_total_bytes):
+         for fd, _ in self.fd_cache.values():
+             os.truncate(fd, file_total_bytes)
+             deregister_fd(fd)
+             os.close(fd)
+         self.fd_cache = {}
+
+     def chunk_batch_pwrite(self, write_offsets):
+         chunks = []
+         chunk = []
+         total = 0
+         def add_chunk():
+             nonlocal chunk, total
+             if len(chunk) > 0:
+                 chunks.append(chunk)
+                 chunk = []
+                 total = 0
+
+         for r in write_offsets:
+             write_file_path, write_bytes, write_file_offset = r
+             write_length = len(write_bytes)
+             if write_length == 0:
+                 continue
+             if write_length > self.write_buf_size:
+                 add_chunk()
+                 chunks.append([r])
+             elif total + write_length > self.write_buf_size:
+                 add_chunk()
+                 chunk.append(r)
+                 total += write_length
+             else:
+                 chunk.append(r)
+                 total += write_length
+             if len(chunk) == self.max_ops:
+                 add_chunk()
+         add_chunk()
+         return chunks
+
+     def convert_to_pwrite_list(self, filepath, tensors, metadata):
+         head = {}
+         if metadata is not None:
+             head["__metadata__"] = metadata
+         dtype_dict = {
+             torch.float64: 'F64',
+             torch.float32: 'F32',
+             torch.float16: 'F16',
+             torch.bfloat16: 'BF16',
+             torch.float8_e4m3fn: 'F8_E4M3',
+             torch.int64: 'I64',
+             torch.int32: 'I32',
+             torch.int16: 'I16',
+             torch.int8: 'I8',
+             torch.uint8: 'U8',
+             torch.bool: 'BOOL'
+         }
+         cur_off = 0
+         values = []
+         for k, v in tensors.items():
+             cur_len = v.numel() * v.element_size()
+             item = dict(
+                 dtype = dtype_dict[v.dtype],
+                 shape = list(v.shape),
+                 data_offsets = [cur_off, cur_off + cur_len],
+             )
+             cur_off += cur_len
+             head[k] = item
+             values.append(v)
+         head_bytes = json.dumps(head, ensure_ascii=True).replace(" ","").encode("utf8")
+         n = np.array([len(head_bytes)], dtype = np.uint64).tobytes()
+         assert np.frombuffer(n, dtype=np.int64)[0] == len(head_bytes)
+         head_bytes = n + head_bytes
+         p_list = []
+         p_list.append((filepath, head_bytes, 0))
+         cur_off = len(head_bytes)
+         for v in values:
+             data_bytes = tensor_to_bytes(v)
+             p_list.append((filepath, data_bytes, cur_off))
+             cur_off += len(data_bytes)
+         return p_list
+
+     def save_tensors(self, filepath, tensors, metadata = None):
+         pwrite_list = self.convert_to_pwrite_list(filepath, tensors, metadata)
+         file_total_bytes = sum([len(item[1]) for item in pwrite_list])
+         for chunk in self.chunk_batch_pwrite(pwrite_list):
+             if len(chunk) == 1:
+                 # Payloads larger than self.write_buf_size are only allowed as a single
+                 # pwrite, streamed through the buffer piece by piece.
+                 write_file_path, write_bytes, write_file_offset = chunk[0]
+                 fd, hf3fs_mount_point = self._open(write_file_path)
+                 iov = self._iov[hf3fs_mount_point]
+                 buf = self._buf[hf3fs_mount_point]
+                 ior = self._ior[hf3fs_mount_point]
+                 content_view = write_bytes
+                 _write = 0
+                 total = len(write_bytes)
+                 while _write < total:
+                     to_write = min(self.write_buf_size, total-_write)
+                     buf[:to_write] = content_view[_write:_write+to_write]
+                     ior.prepare(iov[:to_write], False, fd, write_file_offset+_write)
+                     submit_result = ior.submit()
+                     total_waited = 0
+                     results = []
+                     while True:
+                         res = submit_result.wait(max_results=1000, min_results=0, timeout=timedelta(seconds=0))
+                         total_waited += len(res)
+                         results += res
+                         if total_waited == 1:
+                             break
+                         time.sleep(0.01)
+                     write_len = results[0].result
+                     assert write_len == to_write, f'write_len({write_len}) returned by hf3fs does not match: file_path={write_file_path} offset={write_file_offset} to_write={to_write}'
+                     _write += write_len
+             elif len(chunk) > 0:
+                 # Multiple pwrites whose combined size must not exceed self.write_buf_size;
+                 # this avoids the case where a large final write meets an almost-full buffer
+                 # and has to be submitted many times.
+                 # Only data on the same mount point may be batch-written here; mixing mount
+                 # points would be hard to manage.
+                 hf3fs_mount_point = self._open(chunk[0][0])[1]
+                 iov = self._iov[hf3fs_mount_point]
+                 buf = self._buf[hf3fs_mount_point]
+                 ior = self._ior[hf3fs_mount_point]
+                 ops = []
+                 buf_offsets = []
+                 buf_offset = 0
+                 for write_file_path, write_bytes, write_file_offset in chunk:
+                     fd, h = self._open(write_file_path)
+                     assert h == hf3fs_mount_point, f'cannot batch data from different mount points: {h} {hf3fs_mount_point}'
+                     write_length = len(write_bytes)
+                     op = [write_file_path, write_length, write_file_offset]
+                     ops.append(op)
+                     assert buf_offset+write_length <= self.write_buf_size, f'batch write exceeds the maximum buffer size {self.write_buf_size}'
+                     buf[buf_offset:buf_offset+write_length] = write_bytes
+                     ior.prepare(iov[buf_offset:buf_offset+write_length], False, fd, write_file_offset, userdata=op)
+                     buf_offsets.append((buf_offset, buf_offset+write_length))
+                     buf_offset += write_length
+
+                 submit_result = ior.submit()
+                 total_waited = 0
+                 results = []
+                 while True:
+                     res = submit_result.wait(max_results=1000, min_results=0, timeout=timedelta(seconds=0))
+                     total_waited += len(res)
+                     results += res
+                     if total_waited == len(ops):
+                         break
+                     time.sleep(0.01)
+                 for result in results:
+                     write_file_path, write_length, write_file_offset = result.userdata
+                     assert result.result == write_length, f'write_len({result.result}) returned by hf3fs does not match: file_path={write_file_path} offset={write_file_offset} to_write={write_length}'
+         self._close_all(file_total_bytes)
+
+ def save_file(tensors, filepath, metadata = None):
+     DistWriter().save_tensors(filepath, tensors, metadata=metadata)
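`convert_to_pwrite_list` reproduces the safetensors on-disk layout by hand: an 8-byte header length (written via `np.uint64.tobytes()`, i.e. native byte order, little-endian on x86, which is what the format expects), then the JSON header, then raw tensor bytes at the recorded offsets. A minimal sketch for reading that header back to sanity-check a written shard; the example path is hypothetical:

```python
import json
import struct

def read_safetensors_header(path: str) -> dict:
    # Layout written by DistWriter: u64 little-endian header length,
    # then a JSON dict of {tensor_name: {dtype, shape, data_offsets}}.
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Example (hypothetical path):
# print(list(read_safetensors_header("model0-mp8.safetensors"))[:5])
```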
inference/generate.py ADDED
@@ -0,0 +1,194 @@
+ import os
+ import json
+ from argparse import ArgumentParser
+ from typing import List
+
+ import torch
+ import torch.distributed as dist
+ from transformers import AutoTokenizer
+ from safetensors.torch import load_model
+
+ from model_v32 import Transformer, ModelArgs, update_ws
+
+
+ def sample(logits, temperature: float = 1.0):
+     """
+     Samples a token from the logits using temperature scaling.
+
+     Args:
+         logits (torch.Tensor): The logits tensor for token predictions.
+         temperature (float, optional): Temperature for scaling logits. Defaults to 1.0.
+
+     Returns:
+         torch.Tensor: The sampled token.
+     """
+     logits = logits / max(temperature, 1e-5)
+     probs = torch.softmax(logits, dim=-1, dtype=torch.float32)
+     return probs.div_(torch.empty_like(probs).exponential_(1)).argmax(dim=-1)
+
+
+ @torch.inference_mode()
+ def generate(
+     model: Transformer,
+     prompt_tokens: List[List[int]],
+     max_new_tokens: int,
+     eos_id: int,
+     temperature: float = 1.0
+ ) -> List[List[int]]:
+     """
+     Generates new tokens based on the given prompt tokens using the specified model.
+
+     Args:
+         model (Transformer): The transformer model used for token generation.
+         prompt_tokens (List[List[int]]): A list of lists containing the prompt tokens for each sequence.
+         max_new_tokens (int): The maximum number of new tokens to generate.
+         eos_id (int): The end-of-sequence token ID.
+         temperature (float, optional): The temperature value for sampling. Defaults to 1.0.
+
+     Returns:
+         List[List[int]]: A list of lists containing the generated tokens for each sequence.
+     """
+     prompt_lens = [len(t) for t in prompt_tokens]
+     assert max(prompt_lens) <= model.max_seq_len, f"Prompt length exceeds model maximum sequence length (max_seq_len={model.max_seq_len})"
+     total_len = min(model.max_seq_len, max_new_tokens + max(prompt_lens))
+     tokens = torch.full((len(prompt_tokens), total_len), -1, dtype=torch.long, device="cuda")
+     for i, t in enumerate(prompt_tokens):
+         tokens[i, :len(t)] = torch.tensor(t, dtype=torch.long, device="cuda")
+     prev_pos = 0
+     finished = torch.tensor([False] * len(prompt_tokens), device="cuda")
+     prompt_mask = tokens != -1
+     for cur_pos in range(min(prompt_lens), total_len):
+         logits = model.forward(tokens[:, prev_pos:cur_pos], prev_pos)
+         if temperature > 0:
+             next_token = sample(logits, temperature)
+         else:
+             next_token = logits.argmax(dim=-1)
+         next_token = torch.where(prompt_mask[:, cur_pos], tokens[:, cur_pos], next_token)
+         tokens[:, cur_pos] = next_token
+         finished |= torch.logical_and(~prompt_mask[:, cur_pos], next_token == eos_id)
+         prev_pos = cur_pos
+         if finished.all():
+             break
+     completion_tokens = []
+     for i, toks in enumerate(tokens.tolist()):
+         toks = toks[prompt_lens[i]:prompt_lens[i]+max_new_tokens]
+         if eos_id in toks:
+             toks = toks[:toks.index(eos_id)]
+         completion_tokens.append(toks)
+     return completion_tokens
+
+
+ def main(
+     ckpt_path: str,
+     config: str,
+     input_file: str = "",
+     interactive: bool = True,
+     max_new_tokens: int = 100,
+     temperature: float = 1.0,
+ ) -> None:
+     """
+     Main function to load the model and perform interactive or batch text generation.
+
+     Args:
+         ckpt_path (str): Path to the model checkpoint directory.
+         config (str): Path to the model configuration file.
+         input_file (str, optional): Path to a file containing input prompts. Defaults to "".
+         interactive (bool, optional): Whether to run in interactive mode. Defaults to True.
+         max_new_tokens (int, optional): Maximum number of new tokens to generate. Defaults to 100.
+         temperature (float, optional): Temperature for sampling. Defaults to 1.0.
+     """
+     world_size = int(os.getenv("WORLD_SIZE", "1"))
+     rank = int(os.getenv("RANK", "0"))
+     local_rank = int(os.getenv("LOCAL_RANK", "0"))
+     if world_size > 1:
+         dist.init_process_group("nccl")
+     global print
+     if rank != 0:
+         print = lambda *_, **__: None
+     torch.cuda.set_device(local_rank)
+     torch.set_default_dtype(torch.bfloat16)
+     torch.set_num_threads(8)
+     torch.manual_seed(33377335)
+     with open(config) as f:
+         args = ModelArgs(**json.load(f))
+     if "baidu" in input_file:
+         args.max_batch_size = 1
+         args.max_seq_len = 24 * 1024
+     print(args)
+     with torch.device("cuda"):
+         model = Transformer(args)
+     tokenizer = AutoTokenizer.from_pretrained(ckpt_path)
+     print("dummy run")
+     tokenizer.decode(generate(model, [tokenizer.encode("DeepSeek")], 2, -1, 1.)[0])
+     print("load model")
+     load_model(model, os.path.join(ckpt_path, f"model{rank}-mp{world_size}.safetensors"))
+     update_ws(model)
+     print("I'm DeepSeek 👋")
+
+     if interactive:
+         messages = []
+         while True:
+             if world_size == 1:
+                 prompt = input(">>> ")
+             elif rank == 0:
+                 prompt = input(">>> ")
+                 objects = [prompt]
+                 dist.broadcast_object_list(objects, 0)
+             else:
+                 objects = [None]
+                 dist.broadcast_object_list(objects, 0)
+                 prompt = objects[0]
+             if prompt == "/exit":
+                 break
+             elif prompt == "/clear":
+                 messages.clear()
+                 continue
+             messages.append({"role": "user", "content": prompt})
+             prompt_tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
+             completion_tokens = generate(model, [prompt_tokens], max_new_tokens, tokenizer.eos_token_id, temperature)
+             completion = tokenizer.decode(completion_tokens[0], skip_special_tokens=True)
+             print(completion)
+             messages.append({"role": "assistant", "content": completion})
+     else:
+         with open(input_file) as f:
+             prompts = f.read().split("\n\n")
+         prompt_tokens = [tokenizer.apply_chat_template([{"role": "user", "content": prompt}], add_generation_prompt=True) for prompt in prompts]
+         if "baidu" in input_file:
+             completion_tokens = [generate(model, [p], max_new_tokens, tokenizer.eos_token_id, temperature)[0] for p in prompt_tokens]
+         else:
+             completion_tokens = generate(model, prompt_tokens, max_new_tokens, tokenizer.eos_token_id, temperature)
+         completions = tokenizer.batch_decode(completion_tokens, skip_special_tokens=True)
+         for prompt, completion in zip(prompts, completions):
+             print("Prompt:", prompt)
+             print("Completion:", completion)
+             print()
+
+     if world_size > 1:
+         dist.destroy_process_group()
+
+
+ if __name__ == "__main__":
+     """
+     Command-line interface for distributed text generation.
+
+     Arguments:
+         --ckpt-path (str): Path to the model checkpoint directory.
+         --config (str): Path to the model configuration file.
+         --input-file (str, optional): File containing prompts for batch processing.
+         --interactive (bool, optional): Enable interactive mode for generating text.
+         --max-new-tokens (int, optional): Maximum number of new tokens to generate. Defaults to 200.
+         --temperature (float, optional): Temperature for sampling. Defaults to 0.2.
+
+     Raises:
+         AssertionError: If neither input-file nor interactive mode is specified.
+     """
+     parser = ArgumentParser()
+     parser.add_argument("--ckpt-path", type=str, required=True)
+     parser.add_argument("--config", type=str, required=True)
+     parser.add_argument("--input-file", type=str, default="")
+     parser.add_argument("--interactive", action="store_true")
+     parser.add_argument("--max-new-tokens", type=int, default=200)
+     parser.add_argument("--temperature", type=float, default=0.2)
+     args = parser.parse_args()
+     assert args.input_file or args.interactive, "Either input-file or interactive mode must be specified"
+     main(args.ckpt_path, args.config, args.input_file, args.interactive, args.max_new_tokens, args.temperature)
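A side note on `sample()` above: dividing the probabilities elementwise by i.i.d. Exp(1) noise and taking the argmax is the exponential-race trick (equivalently Gumbel-max sampling, since taking logs gives log p_i - log E_i and -log E_i is standard Gumbel noise), so it draws index i with probability p_i without an explicit `multinomial` call. A quick empirical check in pure PyTorch:

```python
import torch

# argmax_i p_i / E_i with E_i ~ Exp(1) is distributed as Categorical(p).
probs = torch.tensor([0.1, 0.3, 0.6])
draws = torch.stack([
    (probs / torch.empty_like(probs).exponential_(1)).argmax()
    for _ in range(100_000)
])
print(torch.bincount(draws) / draws.numel())  # ≈ tensor([0.1, 0.3, 0.6])
```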
inference/model_v32.py ADDED
@@ -0,0 +1,941 @@
1
+ import math
2
+ from dataclasses import dataclass
3
+ from typing import Tuple, Optional, Literal
4
+
5
+ import torch
6
+ from torch import nn
7
+ import torch.nn.functional as F
8
+ import torch.distributed as dist
9
+
10
+ from tilelang_kernel import act_quant, fp8_gemm, fp8_index
11
+
12
+
13
+ world_size = 1
14
+ rank = 0
15
+ block_size = 128
16
+
17
+ @dataclass
18
+ class ModelArgs:
19
+ """
20
+ Data class for defining model arguments and hyperparameters.
21
+
22
+ Attributes:
23
+ max_batch_size (int): Maximum batch size.
24
+ max_seq_len (int): Maximum sequence length.
25
+ dtype (Literal["bf16", "fp8"]): Data type for computations.
26
+ scale_fmt (Optional[str]): Format for quantization scale.
27
+ vocab_size (int): Vocabulary size.
28
+ dim (int): Model dimension.
29
+ inter_dim (int): Intermediate dimension for MLP layers.
30
+ moe_inter_dim (int): Intermediate dimension for MoE layers.
31
+ n_layers (int): Number of transformer layers.
32
+ n_dense_layers (int): Number of dense layers in the model.
33
+ n_heads (int): Number of attention heads.
34
+ n_routed_experts (int): Number of routed experts for MoE layers.
35
+ n_shared_experts (int): Number of shared experts for MoE layers.
36
+ n_activated_experts (int): Number of activated experts in MoE layers.
37
+ n_expert_groups (int): Number of expert groups.
38
+ n_limited_groups (int): Number of limited groups for MoE routing.
39
+ score_func (Literal["softmax", "sigmoid"]): Scoring function for MoE routing.
40
+ route_scale (float): Scaling factor for routing scores.
41
+ q_lora_rank (int): LoRA rank for query projections.
42
+ kv_lora_rank (int): LoRA rank for key-value projections.
43
+ qk_nope_head_dim (int): Dimension for query-key projections without positional embeddings.
44
+ qk_rope_head_dim (int): Dimension for query-key projections with rotary embeddings.
45
+ v_head_dim (int): Dimension for value projections.
46
+ original_seq_len (int): Original sequence length.
47
+ rope_theta (float): Base for rotary positional encoding.
48
+ rope_factor (float): Scaling factor for extended sequence lengths.
49
+ beta_fast (int): Fast beta correction factor.
50
+ beta_slow (int): Slow beta correction factor.
51
+ mscale (float): Scaling factor for extended attention.
52
+ index_head_dim (int): Dimension for index head.
53
+ index_topk (int): Top-k for index head.
54
+ """
55
+ max_batch_size: int = 8
56
+ max_seq_len: int = 4096 * 4
57
+ dtype: Literal["bf16", "fp8"] = "bf16"
58
+ scale_fmt: Optional[str] = None
59
+ vocab_size: int = 102400
60
+ dim: int = 2048
61
+ inter_dim: int = 10944
62
+ moe_inter_dim: int = 1408
63
+ n_layers: int = 27
64
+ n_dense_layers: int = 1
65
+ n_heads: int = 16
66
+ # moe
67
+ n_routed_experts: int = 64
68
+ n_shared_experts: int = 2
69
+ n_activated_experts: int = 6
70
+ n_expert_groups: int = 1
71
+ n_limited_groups: int = 1
72
+ score_func: Literal["softmax", "sigmoid"] = "softmax"
73
+ route_scale: float = 1.
74
+ # mla
75
+ q_lora_rank: int = 0
76
+ kv_lora_rank: int = 512
77
+ qk_nope_head_dim: int = 128
78
+ qk_rope_head_dim: int = 64
79
+ v_head_dim: int = 128
80
+ # yarn
81
+ original_seq_len: int = 4096
82
+ rope_theta: float = 10000.0
83
+ rope_factor: float = 40
84
+ beta_fast: int = 32
85
+ beta_slow: int = 1
86
+ mscale: float = 1.
87
+ # index
88
+ index_n_heads: int = 64
89
+ index_head_dim: int = 128
90
+ index_topk: int = 2048
91
+
92
+ class ParallelEmbedding(nn.Module):
93
+ """
94
+ Embedding layer with parallelism support across distributed processes.
95
+
96
+ Args:
97
+ vocab_size (int): Vocabulary size.
98
+ dim (int): Embedding dimension.
99
+ """
100
+ def __init__(self, vocab_size: int, dim: int):
101
+ super().__init__()
102
+ self.vocab_size = vocab_size
103
+ self.dim = dim
104
+ assert vocab_size % world_size == 0, f"Vocabulary size must be divisible by world size (world_size={world_size})"
105
+ self.part_vocab_size = (vocab_size // world_size)
106
+ self.vocab_start_idx = rank * self.part_vocab_size
107
+ self.vocab_end_idx = self.vocab_start_idx + self.part_vocab_size
108
+ self.weight = nn.Parameter(torch.empty(self.part_vocab_size, self.dim))
109
+
110
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
111
+ """
112
+ Forward pass for parallel embedding layer.
113
+
114
+ Args:
115
+ x (torch.Tensor): Input tensor containing token indices.
116
+
117
+ Returns:
118
+ torch.Tensor: Embedded representations.
119
+
120
+ Raises:
121
+ ValueError: If `world_size` is not defined.
122
+ """
123
+ if world_size > 1:
124
+ mask = (x < self.vocab_start_idx) | (x >= self.vocab_end_idx)
125
+ x = x - self.vocab_start_idx
126
+ x[mask] = 0
127
+ y = F.embedding(x, self.weight)
128
+ if world_size > 1:
129
+ y[mask] = 0
130
+ dist.all_reduce(y)
131
+ return y
132
+
133
+
134
+ def linear(x: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor] = None,
135
+ scale_fmt: Optional[str] = None) -> torch.Tensor:
136
+ """
137
+ Applies a linear transformation to the incoming data: y = xA^T + b.
138
+ This function supports specialized implementations based on quantization
139
+ and tensor formats.
140
+
141
+ Args:
142
+ x (torch.Tensor): The input tensor.
143
+ weight (torch.Tensor): The weight tensor.
144
+ bias (Optional[torch.Tensor]): The bias tensor to be added. Default is None.
145
+ scale_fmt (Optional[str]): The format of scaling factors.
146
+
147
+ Returns:
148
+ torch.Tensor: The result of the linear transformation, which may involve
149
+ quantization-aware computations depending on the input parameters.
150
+
151
+ Notes:
152
+ - If `weight` is not quantized, a normal version is used for computation.
153
+ - For other cases, the function applies quantization to `x` and uses `fp8_gemm` for computation.
154
+ """
155
+ assert bias is None
156
+
157
+ if weight.dtype != torch.float8_e4m3fn:
158
+ return F.linear(x, weight)
159
+ else:
160
+ x, scale = act_quant(x, block_size, scale_fmt)
161
+ return fp8_gemm(x, scale, weight, weight.scale)
162
+
163
+
164
+ class Linear(nn.Module):
165
+ """
166
+ Custom linear layer with support for quantized weights and optional bias.
167
+
168
+ Args:
169
+ in_features (int): Number of input features.
170
+ out_features (int): Number of output features.
171
+ bias (bool): Whether to include a bias term. Defaults to False.
172
+ dtype (optional): Data type for the layer. Defaults to `torch.bfloat16`.
173
+ """
174
+ dtype = torch.bfloat16
175
+ scale_fmt: Optional[str] = None
176
+
177
+ def __init__(self, in_features: int, out_features: int, bias: bool = False, dtype = None):
178
+ super().__init__()
179
+ self.in_features = in_features
180
+ self.out_features = out_features
181
+ self.weight = nn.Parameter(torch.empty(out_features, in_features, dtype=dtype or Linear.dtype))
182
+ if self.weight.element_size() == 1:
183
+ scale_out_features = (out_features + block_size - 1) // block_size
184
+ scale_in_features = (in_features + block_size - 1) // block_size
185
+ self.weight.scale = self.scale = nn.Parameter(torch.empty(scale_out_features, scale_in_features, dtype=torch.float32))
186
+ else:
187
+ self.register_parameter("scale", None)
188
+ if bias:
189
+ self.bias = nn.Parameter(torch.empty(out_features))
190
+ else:
191
+ self.register_parameter("bias", None)
192
+
193
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
194
+ """
195
+ Forward pass for the custom linear layer.
196
+
197
+ Args:
198
+ x (torch.Tensor): Input tensor.
199
+
200
+ Returns:
201
+ torch.Tensor: Transformed tensor after linear computation.
202
+ """
203
+ return linear(x, self.weight, self.bias, self.scale_fmt)
204
+
205
+
206
+ class ColumnParallelLinear(Linear):
207
+ """
208
+ Linear layer with column parallelism, splitting output features across distributed processes.
209
+
210
+ Args:
211
+ in_features (int): Number of input features.
212
+ out_features (int): Total number of output features.
213
+ bias (bool): Whether to include a bias term. Defaults to False.
214
+ dtype (optional): Data type for the layer. Defaults to `torch.bfloat16`.
215
+ """
216
+ def __init__(self, in_features: int, out_features: int, bias: bool = False, dtype = None):
217
+ assert out_features % world_size == 0, f"Output features must be divisible by world size (world_size={world_size})"
218
+ self.part_out_features = out_features // world_size
219
+ super().__init__(in_features, self.part_out_features, bias, dtype)
220
+
221
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
222
+ """
223
+ Forward pass for column parallel linear layer.
224
+
225
+ Args:
226
+ x (torch.Tensor): Input tensor.
227
+
228
+ Returns:
229
+ torch.Tensor: Transformed tensor with column-parallel computation.
230
+ """
231
+ y = linear(x, self.weight, self.bias, self.scale_fmt)
232
+ return y
233
+
234
+
235
+ class RowParallelLinear(Linear):
236
+ """
237
+ Linear layer with row parallelism, splitting input features across distributed processes.
238
+
239
+ Args:
240
+ in_features (int): Total number of input features.
241
+ out_features (int): Number of output features.
242
+ bias (bool): Whether to include a bias term. Defaults to False.
243
+ dtype (optional): Data type for the layer. Defaults to `torch.bfloat16`.
244
+ """
245
+ def __init__(self, in_features: int, out_features: int, bias: bool = False, reduce_output = True, dtype = None):
246
+ assert in_features % world_size == 0, f"Input features must be divisible by world size (world_size={world_size})"
247
+ self.part_in_features = in_features // world_size
248
+ self.reduce_output = reduce_output
249
+ super().__init__(self.part_in_features, out_features, bias, dtype)
250
+
251
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
252
+ """
253
+ Forward pass for row parallel linear layer.
254
+
255
+ Args:
256
+ x (torch.Tensor): Input tensor.
257
+
258
+ Returns:
259
+ torch.Tensor: Transformed tensor with row-parallel computation.
260
+ """
261
+ y = linear(x, self.weight, None, self.scale_fmt)
262
+ if self.reduce_output and world_size > 1:
263
+ y = y.float()
264
+ dist.all_reduce(y)
265
+ if self.bias is not None:
266
+ y += self.bias
267
+ return y.type_as(x)
268
+
269
+
270
+ class RMSNorm(nn.Module):
271
+ """
272
+ Root Mean Square Layer Normalization (RMSNorm).
273
+
274
+ Args:
275
+ dim (int): Dimension of the input tensor.
276
+ eps (float): Epsilon value for numerical stability. Defaults to 1e-6.
277
+ """
278
+ def __init__(self, dim: int, eps: float = 1e-6):
279
+ super().__init__()
280
+ self.dim = dim
281
+ self.eps = eps
282
+ # rmsnorm in the checkpoint is stored in bf16, while the parameter here is stored in fp32 for convenient.
283
+ self.weight = nn.Parameter(torch.ones(dim, dtype=torch.float32))
284
+
285
+ def forward(self, x: torch.Tensor, residual: Optional[torch.Tensor] = None):
286
+ """
287
+ Forward pass for RMSNorm.
288
+
289
+ Args:
290
+ x (torch.Tensor): Input tensor.
291
+
292
+ Returns:
293
+ torch.Tensor: Normalized tensor with the same shape as input.
294
+ """
295
+ dtype = x.dtype
296
+ if residual is None:
297
+ x = x.float()
298
+ var = x.pow(2).mean(-1, keepdim=True)
299
+ x = x * torch.rsqrt(var + self.eps)
300
+ return (self.weight * x).to(dtype)
301
+ else:
302
+ x = residual = x.float() + residual.float()
303
+ var = x.pow(2).mean(-1, keepdim=True)
304
+ x = x * torch.rsqrt(var + self.eps)
305
+ return (self.weight * x).to(dtype), residual.to(dtype)
306
+
307
+
308
+ class LayerNorm(nn.Module):
309
+ """
310
+ Layer Normalization.
311
+ """
312
+ def __init__(self, dim: int, eps: float = 1e-6):
313
+ super().__init__()
314
+ self.dim = dim
315
+ self.eps = eps
316
+ # layernorm in the checkpoint is stored in bf16, while the parameters here are stored in fp32 for convenient.
317
+ self.weight = nn.Parameter(torch.ones(dim, dtype=torch.float32))
318
+ self.bias = nn.Parameter(torch.zeros(dim, dtype=torch.float32))
319
+
320
+ def forward(self, x: torch.Tensor):
321
+ return F.layer_norm(x.float(), (self.dim,), self.weight, self.bias, self.eps).type_as(x)
322
+
323
+
324
+ def precompute_freqs_cis(args: ModelArgs) -> torch.Tensor:
+     """
+     Precomputes frequency-based complex exponential values for rotary positional embeddings.
+
+     Args:
+         args (ModelArgs): Model arguments containing positional embedding parameters.
+
+     Returns:
+         torch.Tensor: Precomputed complex exponential values for positional embeddings.
+     """
+     dim = args.qk_rope_head_dim
+     seqlen = args.max_seq_len
+     beta_fast = args.beta_fast
+     beta_slow = args.beta_slow
+     base = args.rope_theta
+     factor = args.rope_factor
+
+     def find_correction_dim(num_rotations, dim, base, max_seq_len):
+         """
+         Computes the correction dimension for a given number of rotations in the rotary positional embedding.
+
+         Args:
+             num_rotations (float): Number of rotations to compute the correction for.
+             dim (int): Dimensionality of the embedding space.
+             base (float): Base value for the exponential computation.
+             max_seq_len (int): Maximum sequence length.
+
+         Returns:
+             float: The correction dimension based on the input parameters.
+         """
+         return dim * math.log(max_seq_len / (num_rotations * 2 * math.pi)) / (2 * math.log(base))
+
+     def find_correction_range(low_rot, high_rot, dim, base, max_seq_len):
+         """
+         Computes the range of correction dimensions for rotary positional embeddings.
+
+         Args:
+             low_rot (float): Lower bound for the number of rotations.
+             high_rot (float): Upper bound for the number of rotations.
+             dim (int): Dimensionality of the embedding space.
+             base (float): Base value for the exponential computation.
+             max_seq_len (int): Maximum sequence length.
+
+         Returns:
+             Tuple[int, int]: The range of correction dimensions (low, high), clamped to valid indices.
+         """
+         low = math.floor(find_correction_dim(low_rot, dim, base, max_seq_len))
+         high = math.ceil(find_correction_dim(high_rot, dim, base, max_seq_len))
+         return max(low, 0), min(high, dim - 1)
+
+     def linear_ramp_factor(min, max, dim):
+         """
+         Computes a linear ramp function used to smooth values between a minimum and maximum range.
+
+         Args:
+             min (float): Minimum value for the ramp function.
+             max (float): Maximum value for the ramp function.
+             dim (int): Dimensionality of the ramp tensor.
+
+         Returns:
+             torch.Tensor: A tensor of shape (dim,) with values linearly interpolated between 0 and 1,
+                 clamped to the range [0, 1].
+         """
+         if min == max:
+             max += 0.001
+         linear_func = (torch.arange(dim, dtype=torch.float32) - min) / (max - min)
+         ramp_func = torch.clamp(linear_func, 0, 1)
+         return ramp_func
+
+     freqs = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
+     if seqlen > args.original_seq_len:
+         low, high = find_correction_range(beta_fast, beta_slow, dim, base, args.original_seq_len)
+         smooth = 1 - linear_ramp_factor(low, high, dim // 2)
+         freqs = freqs / factor * (1 - smooth) + freqs * smooth
+
+     t = torch.arange(seqlen)
+     freqs = torch.outer(t, freqs)
+     freqs_cis = torch.polar(torch.ones_like(freqs), freqs)
+     return freqs_cis
+
+
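+ # The correction branch above is YaRN-style: head dimensions that complete more than
+ # beta_fast rotations over the original context keep their base frequencies, those with
+ # fewer than beta_slow rotations are divided by rope_factor, and the linear ramp blends
+ # the two regimes. The returned freqs_cis has shape (max_seq_len, qk_rope_head_dim // 2)
+ # and dtype complex64.
+
+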
+ def apply_rotary_emb(x: torch.Tensor, freqs_cis: torch.Tensor, interleaved: bool = True) -> torch.Tensor:
+     """
+     Applies rotary positional embeddings to the input tensor.
+
+     Args:
+         x (torch.Tensor): Input tensor with positional embeddings to be applied.
+         freqs_cis (torch.Tensor): Precomputed complex exponential values for positional embeddings.
+         interleaved (bool): Whether (real, imaginary) pairs are stored interleaved along the last
+             dimension rather than as two contiguous halves. Defaults to True.
+
+     Returns:
+         torch.Tensor: Tensor with rotary embeddings applied.
+     """
+     dtype = x.dtype
+     shape = x.shape
+     if not interleaved:
+         x = x.view(*shape[:-1], 2, -1).transpose(-1, -2).contiguous()
+     x = torch.view_as_complex(x.float().view(*shape[:-1], -1, 2))
+     freqs_cis = freqs_cis.view(1, x.size(1), 1, x.size(-1))
+     y = torch.view_as_real(x * freqs_cis).flatten(3)
+     if not interleaved:
+         y = torch.cat([y[..., 0::2], y[..., 1::2]], dim=-1)
+     return y.to(dtype)
+
+
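+ # Layout note: with the default interleaved layout, (re, im) pairs sit at even/odd
+ # positions of the head dimension; with interleaved=False the head dimension holds the
+ # real half followed by the imaginary half, so the function regroups before rotating and
+ # splits the halves back out afterwards. The MLA path uses the interleaved layout, while
+ # the indexer passes interleaved=False.
+
+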
+ def rotate_activation(x: torch.Tensor) -> torch.Tensor:
+     assert x.dtype == torch.bfloat16
+     from fast_hadamard_transform import hadamard_transform
+     hidden_size = x.size(-1)
+     # hidden_size must be a power of 2, as required by the Hadamard transform
+     return hadamard_transform(x, scale=hidden_size ** -0.5)
+
+
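+ # With scale = hidden_size ** -0.5 the Hadamard transform is orthonormal, so it preserves
+ # norms while spreading large activations across channels; that makes the rotated q/k
+ # better conditioned for the FP8 quantization in Indexer.forward below.
+
+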
+ class Indexer(torch.nn.Module):
+     def __init__(self, args: ModelArgs):
+         super().__init__()
+         self.dim: int = args.dim
+         self.n_heads: int = args.index_n_heads
+         self.n_local_heads = args.index_n_heads // world_size
+         self.head_dim: int = args.index_head_dim
+         self.rope_head_dim: int = args.qk_rope_head_dim
+         self.index_topk: int = args.index_topk
+         self.q_lora_rank: int = args.q_lora_rank
+         self.wq_b = Linear(self.q_lora_rank, self.n_heads * self.head_dim)
+         self.wk = Linear(self.dim, self.head_dim)
+         self.k_norm = LayerNorm(self.head_dim)
+         # weights_proj in the checkpoint is stored in bf16, while the parameters here are stored in fp32 for convenience.
+         self.weights_proj = Linear(self.dim, self.n_heads, dtype=torch.float32)
+         self.softmax_scale = self.head_dim ** -0.5
+         self.scale_fmt = args.scale_fmt
+
+         self.register_buffer("k_cache", torch.zeros(args.max_batch_size, args.max_seq_len, self.head_dim, dtype=torch.float8_e4m3fn), persistent=False)
+         self.register_buffer("k_scale_cache", torch.zeros(args.max_batch_size, args.max_seq_len, self.head_dim // block_size, dtype=torch.float32), persistent=False)
+
+
+     def forward(self, x: torch.Tensor, qr: torch.Tensor, start_pos: int, freqs_cis: torch.Tensor, mask: Optional[torch.Tensor]):
+         bsz, seqlen, _ = x.size()
+         end_pos = start_pos + seqlen
+         q = self.wq_b(qr)
+         q = q.view(bsz, seqlen, self.n_heads, self.head_dim)
+         q_pe, q_nope = torch.split(q, [self.rope_head_dim, self.head_dim - self.rope_head_dim], dim=-1)
+         # rope in the indexer is not interleaved
+         q_pe = apply_rotary_emb(q_pe, freqs_cis, False)
+         q = torch.cat([q_pe, q_nope], dim=-1)
+         k = self.wk(x)
+         k = self.k_norm(k)
+         k_pe, k_nope = torch.split(k, [self.rope_head_dim, self.head_dim - self.rope_head_dim], dim=-1)
+         # rope in the indexer is not interleaved
+         k_pe = apply_rotary_emb(k_pe.unsqueeze(2), freqs_cis, False).squeeze(2)
+         k = torch.cat([k_pe, k_nope], dim=-1)
+         q = rotate_activation(q)
+         k = rotate_activation(k)
+         q_fp8, q_scale = act_quant(q, block_size, self.scale_fmt)
+         k_fp8, k_scale = act_quant(k, block_size, self.scale_fmt)
+         self.k_cache[:bsz, start_pos:end_pos] = k_fp8
+         self.k_scale_cache[:bsz, start_pos:end_pos] = k_scale
+         weights = self.weights_proj(x.float()) * self.n_heads ** -0.5
+         weights = weights.unsqueeze(-1) * q_scale * self.softmax_scale
+         index_score = fp8_index(q_fp8.contiguous(), weights, self.k_cache[:bsz, :end_pos].contiguous(), self.k_scale_cache[:bsz, :end_pos].contiguous())
+         if mask is not None:
+             index_score += mask
+         topk_indices = index_score.topk(min(self.index_topk, end_pos), dim=-1)[1]
+         topk_indices_ = topk_indices.clone()
+         dist.broadcast(topk_indices_, src=0)
+         assert torch.all(topk_indices == topk_indices_), f"{topk_indices=} {topk_indices_=}"
+         return topk_indices
+
+
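+ # Shape note: index_score is (bsz, seqlen, end_pos) and topk_indices is
+ # (bsz, seqlen, min(index_topk, end_pos)); it names the cached tokens each query
+ # position may attend to. The broadcast-and-assert is a sanity check that every rank
+ # selected identical indices.
+
+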
+ def weight_dequant(weight, scale):
+     shape = weight.shape
+     assert weight.dim() == 2
+     weight = weight.view(shape[0] // block_size, block_size, shape[1] // block_size, block_size).transpose(1, 2).reshape(-1, block_size * block_size)
+     weight = (weight.float() * scale.view(-1, 1).float()).to(torch.get_default_dtype()).view(shape[0] // block_size, shape[1] // block_size, block_size, block_size).transpose(1, 2).reshape(shape)
+     return weight
+
+ def update_ws(model):
+     for m in model.modules():
+         if isinstance(m, MLA):
+             if m.wkv_b.scale is not None:
+                 m.dequant_wkv_b = weight_dequant(m.wkv_b.weight, m.wkv_b.scale)
+
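+ # weight_dequant inverts block-wise FP8 quantization: every (block_size, block_size)
+ # tile of the weight is scaled by its own fp32 factor. update_ws can be run once after
+ # loading the checkpoint so the MQA decode path below does not have to dequantize
+ # wkv_b lazily on its first call.
+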
+ class MLA(nn.Module):
+     """
+     Multi-Head Latent Attention (MLA) Layer.
+
+     Attributes:
+         dim (int): Dimensionality of the input features.
+         n_heads (int): Number of attention heads.
+         n_local_heads (int): Number of local attention heads for distributed systems.
+         q_lora_rank (int): Rank for low-rank query projection.
+         kv_lora_rank (int): Rank for low-rank key/value projection.
+         qk_nope_head_dim (int): Dimensionality of non-positional query/key projections.
+         qk_rope_head_dim (int): Dimensionality of rotary-positional query/key projections.
+         qk_head_dim (int): Total dimensionality of query/key projections.
+         v_head_dim (int): Dimensionality of value projections.
+         softmax_scale (float): Scaling factor for softmax in attention computation.
+     """
+     def __init__(self, args: ModelArgs):
+         super().__init__()
+         self.dim = args.dim
+         self.n_heads = args.n_heads
+         self.n_local_heads = args.n_heads // world_size
+         self.q_lora_rank = args.q_lora_rank
+         self.kv_lora_rank = args.kv_lora_rank
+         self.qk_nope_head_dim = args.qk_nope_head_dim
+         self.qk_rope_head_dim = args.qk_rope_head_dim
+         self.qk_head_dim = args.qk_nope_head_dim + args.qk_rope_head_dim
+         self.v_head_dim = args.v_head_dim
+
+         self.wq_a = Linear(self.dim, self.q_lora_rank)
+         self.q_norm = RMSNorm(self.q_lora_rank)
+         self.wq_b = ColumnParallelLinear(self.q_lora_rank, self.n_heads * self.qk_head_dim)
+         self.wkv_a = Linear(self.dim, self.kv_lora_rank + self.qk_rope_head_dim)
+         self.kv_norm = RMSNorm(self.kv_lora_rank)
+         self.wkv_b = ColumnParallelLinear(self.kv_lora_rank, self.n_heads * (self.qk_nope_head_dim + self.v_head_dim))
+         self.wo = RowParallelLinear(self.n_heads * self.v_head_dim, self.dim)
+         self.softmax_scale = self.qk_head_dim ** -0.5
+         self.scale_fmt = args.scale_fmt
+         if args.max_seq_len > args.original_seq_len:
+             mscale = 0.1 * args.mscale * math.log(args.rope_factor) + 1.0
+             self.softmax_scale = self.softmax_scale * mscale * mscale
+
+         self.indexer = Indexer(args)
+
+         self.register_buffer("kv_cache", torch.zeros(args.max_batch_size, args.max_seq_len, self.kv_lora_rank), persistent=False)
+         self.register_buffer("pe_cache", torch.zeros(args.max_batch_size, args.max_seq_len, self.qk_rope_head_dim), persistent=False)
+         self.dequant_wkv_b = None
+
+     def forward(self, x: torch.Tensor, start_pos: int, freqs_cis: torch.Tensor, mask: Optional[torch.Tensor]):
+         """
+         Forward pass for the Multi-Head Latent Attention (MLA) Layer.
+
+         Args:
+             x (torch.Tensor): Input tensor of shape (batch_size, seq_len, dim).
+             start_pos (int): Starting position in the sequence for caching.
+             freqs_cis (torch.Tensor): Precomputed complex exponential values for rotary embeddings.
+             mask (Optional[torch.Tensor]): Mask tensor to exclude certain positions from attention.
+
+         Returns:
+             torch.Tensor: Output tensor with the same shape as the input.
+         """
+         bsz, seqlen, _ = x.size()
+         end_pos = start_pos + seqlen
+         qr = self.q_norm(self.wq_a(x))
+         q = self.wq_b(qr)
+         q = q.view(bsz, seqlen, self.n_local_heads, self.qk_head_dim)
+         q_nope, q_pe = torch.split(q, [self.qk_nope_head_dim, self.qk_rope_head_dim], dim=-1)
+         q_pe = apply_rotary_emb(q_pe, freqs_cis)
+         kv = self.wkv_a(x)
+         kv, k_pe = torch.split(kv, [self.kv_lora_rank, self.qk_rope_head_dim], dim=-1)
+         kv = self.kv_norm(kv)
+         k_pe = apply_rotary_emb(k_pe.unsqueeze(2), freqs_cis)
+         # we use an fp8 kv cache in actual deployment, so here we simulate that precision by casting kv to fp8 and back to bf16.
+         kv_fp8, kv_scale = act_quant(kv, block_size, self.scale_fmt)
+         kv = (kv_fp8.view(-1, block_size).float() * kv_scale.view(-1, 1)).to(kv.dtype).view_as(kv)
+         self.kv_cache[:bsz, start_pos:end_pos] = kv
+         self.pe_cache[:bsz, start_pos:end_pos] = k_pe.squeeze(2)
+         if mask is not None:  # MHA prefill
+             q = torch.cat([q_nope, q_pe], dim=-1)
+             kv = self.wkv_b(kv)
+             kv = kv.view(bsz, seqlen, self.n_local_heads, self.qk_nope_head_dim + self.v_head_dim)
+             k_nope, v = torch.split(kv, [self.qk_nope_head_dim, self.v_head_dim], dim=-1)
+             k = torch.cat([k_nope, k_pe.expand(-1, -1, self.n_local_heads, -1)], dim=-1)
+             del q_nope, q_pe, kv, k_nope, k_pe, kv_fp8, kv_scale
+             scores = torch.einsum("bshd,bthd->bsht", q, k).mul_(self.softmax_scale)
+             del q, k
+
+             # indexer
+             topk_indices = self.indexer(x, qr, start_pos, freqs_cis, mask)
+             index_mask = torch.full((bsz, seqlen, seqlen), float("-inf"), device=x.device).scatter_(-1, topk_indices, 0)
+             index_mask += mask
+             scores += index_mask.unsqueeze(2)
+             del topk_indices, index_mask, qr
+
+             scores = scores.softmax(dim=-1)
+             x = torch.einsum("bsht,bthd->bshd", scores, v)
+             # use the code below instead if you run out of memory
+             # scores_ = torch.empty_like(scores)
+             # for i in range(0, scores.size(1), 512):
+             #     scores_[:, i: i+512] = scores[:, i: i+512].softmax(dim=-1)
+             # del scores
+             # x = torch.einsum("bsht,bthd->bshd", scores_, v)
+         else:  # MQA decode
+             if self.wkv_b.scale is not None and self.dequant_wkv_b is None:
+                 self.dequant_wkv_b = weight_dequant(self.wkv_b.weight, self.wkv_b.scale)
+             wkv_b = self.wkv_b.weight if self.wkv_b.scale is None else self.dequant_wkv_b
+             wkv_b = wkv_b.view(self.n_local_heads, -1, self.kv_lora_rank)
+             q_nope = torch.einsum("bshd,hdc->bshc", q_nope, wkv_b[:, :self.qk_nope_head_dim])
+             scores = (torch.einsum("bshc,btc->bsht", q_nope, self.kv_cache[:bsz, :end_pos]) +
+                       torch.einsum("bshr,btr->bsht", q_pe, self.pe_cache[:bsz, :end_pos])) * self.softmax_scale
+
+             # indexer
+             topk_indices = self.indexer(x, qr, start_pos, freqs_cis, mask)
+             index_mask = torch.full((bsz, 1, end_pos), float("-inf"), device=x.device).scatter_(-1, topk_indices, 0)
+             scores += index_mask.unsqueeze(2)
+
+             scores = scores.softmax(dim=-1)
+             x = torch.einsum("bsht,btc->bshc", scores, self.kv_cache[:bsz, :end_pos])
+             x = torch.einsum("bshc,hdc->bshd", x, wkv_b[:, -self.v_head_dim:])
+         x = self.wo(x.flatten(2))
+         return x
+
+
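+ # Two attention paths share one cache: prefill (mask is not None) expands the latent kv
+ # into per-head keys/values and scores them MHA-style, while decode absorbs wkv_b into
+ # the query ("bshd,hdc->bshc") and attends directly over the compact kv_cache/pe_cache
+ # latents, MQA-style. In both paths the indexer's top-k mask sparsifies the score
+ # matrix before the softmax.
+
+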
+ class MLP(nn.Module):
+     """
+     Multi-Layer Perceptron (MLP) used as a feed-forward layer.
+
+     Attributes:
+         w1 (nn.Module): Linear layer for input-to-hidden transformation.
+         w2 (nn.Module): Linear layer for hidden-to-output transformation.
+         w3 (nn.Module): Additional linear layer for feature transformation.
+     """
+     def __init__(self, dim: int, inter_dim: int, reduce_output: bool = True):
+         """
+         Initializes the MLP layer.
+
+         Args:
+             dim (int): Input and output dimensionality.
+             inter_dim (int): Hidden layer dimensionality.
+             reduce_output (bool): Whether the output projection all-reduces its result across ranks. Defaults to True.
+         """
+         super().__init__()
+         self.w1 = ColumnParallelLinear(dim, inter_dim)
+         self.w2 = RowParallelLinear(inter_dim, dim, reduce_output=reduce_output)
+         self.w3 = ColumnParallelLinear(dim, inter_dim)
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         """
+         Forward pass for the MLP layer.
+
+         Args:
+             x (torch.Tensor): Input tensor.
+
+         Returns:
+             torch.Tensor: Output tensor after MLP computation.
+         """
+         return self.w2((F.silu(self.w1(x).float()) * self.w3(x).float()).type_as(x))
+
+
+ class Gate(nn.Module):
+     """
+     Gating mechanism for routing inputs in a mixture-of-experts (MoE) model.
+
+     Attributes:
+         dim (int): Dimensionality of input features.
+         topk (int): Number of top experts activated for each input.
+         n_groups (int): Number of groups for routing.
+         topk_groups (int): Number of groups to route inputs to.
+         score_func (str): Scoring function ('softmax' or 'sigmoid').
+         route_scale (float): Scaling factor for routing weights.
+         weight (torch.nn.Parameter): Learnable weights for the gate.
+         bias (Optional[torch.nn.Parameter]): Optional bias term for the gate.
+     """
+     def __init__(self, args: ModelArgs):
+         """
+         Initializes the Gate module.
+
+         Args:
+             args (ModelArgs): Model arguments containing gating parameters.
+         """
+         super().__init__()
+         self.dim = args.dim
+         self.topk = args.n_activated_experts
+         self.n_groups = args.n_expert_groups
+         self.topk_groups = args.n_limited_groups
+         self.score_func = args.score_func
+         self.route_scale = args.route_scale
+         self.weight = nn.Parameter(torch.empty(args.n_routed_experts, args.dim))
+         self.bias = nn.Parameter(torch.empty(args.n_routed_experts, dtype=torch.float32)) if self.dim == 7168 else None
+
+     def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
+         """
+         Forward pass for the gating mechanism.
+
+         Args:
+             x (torch.Tensor): Input tensor.
+
+         Returns:
+             Tuple[torch.Tensor, torch.Tensor]: Routing weights and selected expert indices.
+         """
+         scores = linear(x.float(), self.weight.float())
+         if self.score_func == "softmax":
+             scores = scores.softmax(dim=-1)
+         else:
+             scores = scores.sigmoid()
+         original_scores = scores
+         if self.bias is not None:
+             scores = scores + self.bias
+         if self.n_groups > 1:
+             scores = scores.view(x.size(0), self.n_groups, -1)
+             if self.bias is None:
+                 group_scores = scores.amax(dim=-1)
+             else:
+                 group_scores = scores.topk(2, dim=-1)[0].sum(dim=-1)
+             indices = group_scores.topk(self.topk_groups, dim=-1)[1]
+             mask = scores.new_ones(x.size(0), self.n_groups, dtype=bool).scatter_(1, indices, False)
+             scores = scores.masked_fill_(mask.unsqueeze(-1), float("-inf")).flatten(1)
+         indices = scores.topk(self.topk, dim=-1)[1]
+         weights = original_scores.gather(1, indices)
+         if self.score_func == "sigmoid":
+             weights /= weights.sum(dim=-1, keepdim=True)
+         weights *= self.route_scale
+         return weights, indices
+
+
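+ # Routing sketch: for T flattened tokens, `weights` and `indices` are both
+ # (T, n_activated_experts). Group-limited routing first keeps the topk_groups best
+ # expert groups per token, masks the rest to -inf, then takes the top-k experts among
+ # the survivors. Note that the bias only influences which experts are selected; the
+ # returned weights are gathered from the unbiased scores.
+
+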
+ class Expert(nn.Module):
+     """
+     Expert layer for Mixture-of-Experts (MoE) models.
+
+     Attributes:
+         w1 (nn.Module): Linear layer for input-to-hidden transformation.
+         w2 (nn.Module): Linear layer for hidden-to-output transformation.
+         w3 (nn.Module): Additional linear layer for feature transformation.
+     """
+     def __init__(self, dim: int, inter_dim: int):
+         """
+         Initializes the Expert layer.
+
+         Args:
+             dim (int): Input and output dimensionality.
+             inter_dim (int): Hidden layer dimensionality.
+         """
+         super().__init__()
+         self.w1 = Linear(dim, inter_dim)
+         self.w2 = Linear(inter_dim, dim)
+         self.w3 = Linear(dim, inter_dim)
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         """
+         Forward pass for the Expert layer.
+
+         Args:
+             x (torch.Tensor): Input tensor.
+
+         Returns:
+             torch.Tensor: Output tensor after expert computation.
+         """
+         return self.w2((F.silu(self.w1(x).float()) * self.w3(x).float()).type_as(x))
+
+
+ class MoE(nn.Module):
+     """
+     Mixture-of-Experts (MoE) module.
+
+     Attributes:
+         dim (int): Dimensionality of input features.
+         n_routed_experts (int): Total number of experts in the model.
+         n_local_experts (int): Number of experts handled locally in distributed systems.
+         n_activated_experts (int): Number of experts activated for each input.
+         gate (nn.Module): Gating mechanism to route inputs to experts.
+         experts (nn.ModuleList): List of expert modules.
+         shared_experts (nn.Module): Shared experts applied to all inputs.
+     """
+     def __init__(self, args: ModelArgs):
+         """
+         Initializes the MoE module.
+
+         Args:
+             args (ModelArgs): Model arguments containing MoE parameters.
+         """
+         super().__init__()
+         self.dim = args.dim
+         assert args.n_routed_experts % world_size == 0, f"Number of experts must be divisible by world size (world_size={world_size})"
+         self.n_routed_experts = args.n_routed_experts
+         self.n_local_experts = args.n_routed_experts // world_size
+         self.n_activated_experts = args.n_activated_experts
+         self.experts_start_idx = rank * self.n_local_experts
+         self.experts_end_idx = self.experts_start_idx + self.n_local_experts
+         self.gate = Gate(args)
+         self.experts = nn.ModuleList([Expert(args.dim, args.moe_inter_dim) if self.experts_start_idx <= i < self.experts_end_idx else None
+                                       for i in range(self.n_routed_experts)])
+         self.shared_experts = MLP(args.dim, args.n_shared_experts * args.moe_inter_dim, reduce_output=False)
+
+     def run_gate(self, x):
+         return self.gate(x)
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         """
+         Forward pass for the MoE module.
+
+         Args:
+             x (torch.Tensor): Input tensor.
+
+         Returns:
+             torch.Tensor: Output tensor after expert routing and computation.
+         """
+         shape = x.size()
+         x = x.view(-1, self.dim)
+         weights, indices = self.run_gate(x)
+         y = torch.zeros_like(x, dtype=torch.float32)
+         counts = torch.bincount(indices.flatten(), minlength=self.n_routed_experts).tolist()
+         for i in range(self.experts_start_idx, self.experts_end_idx):
+             if counts[i] == 0:
+                 continue
+             expert = self.experts[i]
+             idx, top = torch.where(indices == i)
+             y[idx] += expert(x[idx]) * weights[idx, top, None]
+         y += self.shared_experts(x)
+         if world_size > 1:
+             dist.all_reduce(y)
+         return y.type_as(x).view(shape)
+
+
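+ # Each rank materializes only its expert shard (the rest are None). The shared experts
+ # are built with reduce_output=False so their partial sums stay local in `y`, and the
+ # single all_reduce combines the routed and shared contributions in one pass.
+
+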
+ class Block(nn.Module):
+     """
+     Transformer block combining attention and feed-forward layers.
+
+     Attributes:
+         attn (nn.Module): Attention layer (MLA).
+         ffn (nn.Module): Feed-forward network (MLP or MoE).
+         attn_norm (nn.Module): Layer normalization for attention.
+         ffn_norm (nn.Module): Layer normalization for feed-forward network.
+     """
+     def __init__(self, layer_id: int, args: ModelArgs):
+         """
+         Initializes the Transformer block.
+
+         Args:
+             layer_id (int): Layer index in the transformer.
+             args (ModelArgs): Model arguments containing block parameters.
+         """
+         super().__init__()
+         self.attn = MLA(args)
+         self.ffn = MLP(args.dim, args.inter_dim) if layer_id < args.n_dense_layers else MoE(args)
+         self.attn_norm = RMSNorm(args.dim)
+         self.ffn_norm = RMSNorm(args.dim)
+
+     def forward(self, x: torch.Tensor, residual: torch.Tensor, start_pos: int, freqs_cis: torch.Tensor, mask: Optional[torch.Tensor]) -> Tuple[torch.Tensor, torch.Tensor]:
+         """
+         Forward pass for the Transformer block.
+
+         Args:
+             x (torch.Tensor): Input tensor.
+             residual (Optional[torch.Tensor]): Residual stream from the previous block, or None for the first block.
+             start_pos (int): Starting position in the sequence.
+             freqs_cis (torch.Tensor): Precomputed complex exponential values for rotary embeddings.
+             mask (Optional[torch.Tensor]): Mask tensor to exclude certain positions from attention.
+
+         Returns:
+             Tuple[torch.Tensor, torch.Tensor]: Output tensor and updated residual after block computation.
+         """
+         if residual is None:
+             x, residual = self.attn_norm(x), x
+         else:
+             x, residual = self.attn_norm(x, residual)
+         x = self.attn(x, start_pos, freqs_cis, mask)
+         x, residual = self.ffn_norm(x, residual)
+         x = self.ffn(x)
+         return x, residual
+
+
+ class Transformer(nn.Module):
+     """
+     Transformer model with positional embeddings, multiple layers, and output projection.
+
+     Attributes:
+         max_seq_len (int): Maximum sequence length for the transformer.
+         embed (nn.Module): Embedding layer for input tokens.
+         layers (torch.nn.ModuleList): List of transformer blocks.
+         norm (nn.Module): Layer normalization applied after all blocks.
+         head (nn.Module): Output projection layer mapping to vocabulary size.
+         freqs_cis (torch.Tensor): Precomputed complex exponential values for rotary embeddings.
+     """
+     def __init__(self, args: ModelArgs):
+         """
+         Initializes the Transformer model.
+
+         Args:
+             args (ModelArgs): Model arguments containing transformer parameters.
+         """
+         global world_size, rank
+         world_size = dist.get_world_size() if dist.is_initialized() else 1
+         rank = dist.get_rank() if dist.is_initialized() else 0
+         Linear.dtype = torch.float8_e4m3fn if args.dtype == "fp8" else torch.bfloat16
+         Linear.scale_fmt = args.scale_fmt
+         super().__init__()
+         self.max_seq_len = args.max_seq_len
+         self.embed = ParallelEmbedding(args.vocab_size, args.dim)
+         self.layers = torch.nn.ModuleList()
+         for layer_id in range(args.n_layers):
+             self.layers.append(Block(layer_id, args))
+         self.norm = RMSNorm(args.dim)
+         # lm_head in the checkpoint is stored in bf16, while the parameter here is stored in fp32 for easier computation of logits later.
+         self.head = ColumnParallelLinear(args.dim, args.vocab_size, dtype=torch.float32)
+         self.register_buffer("freqs_cis", precompute_freqs_cis(args), persistent=False)
+
+     @torch.inference_mode()
+     def forward(self, tokens: torch.Tensor, start_pos: int = 0):
+         """
+         Forward pass for the Transformer model.
+
+         Args:
+             tokens (torch.Tensor): Input tensor of token IDs with shape (batch_size, seq_len).
+             start_pos (int, optional): Starting position in the sequence for rotary embeddings. Defaults to 0.
+
+         Returns:
+             torch.Tensor: Logits tensor of shape (batch_size, vocab_size).
+         """
+         seqlen = tokens.size(1)
+         freqs_cis = self.freqs_cis[start_pos:start_pos+seqlen]
+         mask = torch.full((seqlen, seqlen), float("-inf"), device=tokens.device).triu_(1) if seqlen > 1 else None
+         h, residual = self.embed(tokens), None
+         for layer in self.layers:
+             h, residual = layer(h, residual, start_pos, freqs_cis, mask)
+         h, _ = self.norm(h, residual)
+         logits = self.head(h[:, -1].float())
+         if world_size > 1:
+             all_logits = [torch.empty_like(logits) for _ in range(world_size)]
+             dist.all_gather(all_logits, logits)
+             logits = torch.cat(all_logits, dim=-1)
+         return logits
+
+
+ if __name__ == "__main__":
+     torch.set_default_dtype(torch.bfloat16)
+     torch.set_default_device("cuda")
+     torch.manual_seed(0)
+     args = ModelArgs()
+     x = torch.randint(0, args.vocab_size, (2, 128))
+     model = Transformer(args)
+     print(model(x).size())
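+ # The smoke test above feeds a (2, 128) batch of random token IDs; the printed size
+ # should be torch.Size([2, vocab_size]), since only the final position is projected
+ # through the output head.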
inference/requirements.txt ADDED
@@ -0,0 +1,6 @@
+ torch
+ triton
+ transformers
+ safetensors
+ fast_hadamard_transform
+ tilelang==0.1.6.post1
inference/tilelang_kernel.py ADDED
@@ -0,0 +1,274 @@
+ import torch
+ import tilelang
+ import tilelang.language as T
+ from typing import Tuple, Optional
+
+
+ tilelang.set_log_level("WARNING")
+
+ pass_configs = {
+     tilelang.PassConfigKey.TL_DISABLE_WARP_SPECIALIZED: True,
+     tilelang.PassConfigKey.TL_DISABLE_TMA_LOWER: True,
+     tilelang.PassConfigKey.TL_DISABLE_FAST_MATH: True,
+ }
+
+ FP8 = "float8_e4m3"
+ BF16 = "bfloat16"
+ FP32 = "float32"
+
+
+ def fast_log2_ceil(x):
+     bits_x = T.reinterpret("uint32", x)
+     exp_x = (bits_x >> 23) & 0xFF
+     man_bits = bits_x & ((1 << 23) - 1)
+     return T.Cast("int32", exp_x - 127 + T.if_then_else(man_bits != 0, 1, 0))
+
+
+ def fast_pow2(x):
+     bits_x = (x + 127) << 23
+     return T.reinterpret("float32", bits_x)
+
+
+ def fast_round_scale(amax, fp8_max_inv):
+     return fast_pow2(fast_log2_ceil(amax * fp8_max_inv))
+
+
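+ # These helpers round a scale up to the next power of two by manipulating the fp32
+ # exponent bits directly. For example, amax = 300 gives 300 / 448 ~ 0.67 and
+ # ceil(log2(0.67)) = 0, so the stored scale is 2**0 = 1.0, matching the e8m0
+ # (power-of-two) scale format used when round_scale is enabled.
+
+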
+ @tilelang.jit(pass_configs=pass_configs)
+ def act_quant_kernel(
+     N, in_dtype=BF16, out_dtype=FP8, scale_dtype=FP32, round_scale=False
+ ):
+     M = T.symbolic("M")
+     fp8_min = -448.0
+     fp8_max = 448.0
+     fp8_max_inv = 1 / fp8_max
+     num_stages = 0 if round_scale else 2
+     blk_m = 32
+     group_size = 128
+
+     @T.prim_func
+     def act_quant_kernel_(
+         X: T.Tensor[(M, N), in_dtype],
+         Y: T.Tensor[(M, N), out_dtype],
+         S: T.Tensor[(M, T.ceildiv(N, group_size)), scale_dtype],
+     ):
+         with T.Kernel(T.ceildiv(M, blk_m), T.ceildiv(N, group_size), threads=128) as (
+             pid_m,
+             pid_n,
+         ):
+             x_shared = T.alloc_shared((blk_m, group_size), in_dtype)
+             x_local = T.alloc_fragment((blk_m, group_size), in_dtype)
+             amax_local = T.alloc_fragment((blk_m,), scale_dtype)
+             s_local = T.alloc_fragment((blk_m,), scale_dtype)
+             y_local = T.alloc_fragment((blk_m, group_size), out_dtype)
+             y_shared = T.alloc_shared((blk_m, group_size), out_dtype)
+
+             for _ in T.Pipelined(1, num_stages=num_stages):
+                 T.copy(X[pid_m * blk_m, pid_n * group_size], x_shared)
+                 T.copy(x_shared, x_local)
+                 T.reduce_absmax(x_local, amax_local, dim=1)
+                 for i in T.Parallel(blk_m):
+                     amax_local[i] = T.max(amax_local[i], 1e-4)
+                     if round_scale:
+                         s_local[i] = fast_round_scale(amax_local[i], fp8_max_inv)
+                     else:
+                         s_local[i] = amax_local[i] * fp8_max_inv
+                 for i, j in T.Parallel(blk_m, group_size):
+                     y_local[i, j] = T.clamp(
+                         x_local[i, j] / s_local[i], fp8_min, fp8_max
+                     )
+                 for i in T.Parallel(blk_m):
+                     S[pid_m * blk_m + i, pid_n] = s_local[i]
+                 T.copy(y_local, y_shared)
+                 T.copy(y_shared, Y[pid_m * blk_m, pid_n * group_size])
+
+     return act_quant_kernel_
+
+
+ def act_quant(
+     x: torch.Tensor, block_size: int = 128, scale_fmt: Optional[str] = None
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
+     """
+     Quantizes the input tensor `x` using block-wise quantization.
+
+     Args:
+         x (torch.Tensor): The input tensor to be quantized. Must be contiguous and its last dimension size must be divisible by `block_size`.
+         block_size (int, optional): The size of the blocks to be used for quantization. Default is 128.
+         scale_fmt (Optional[str], optional): The format of the scale. Default is None.
+
+     Returns:
+         Tuple[torch.Tensor, torch.Tensor]: A tuple containing:
+             - The quantized tensor with dtype `torch.float8_e4m3fn`.
+             - A tensor of scaling factors with dtype `torch.float32`.
+     """
+     assert x.is_contiguous(), "Input tensor must be contiguous"
+     assert x.size(-1) % block_size == 0, (
+         f"Last dimension size must be divisible by block_size (block_size={block_size})"
+     )
+     N = x.size(-1)
+     y = torch.empty_like(x, dtype=torch.float8_e4m3fn)
+     s = x.new_empty(*x.size()[:-1], N // block_size, dtype=torch.float32)
+     kernel = act_quant_kernel(N, round_scale=scale_fmt is not None)
+     kernel(x.view(-1, N), y.view(-1, N), s.view(-1, N // block_size))
+     return y, s
+
+
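+ # Usage sketch (illustrative shapes): for a contiguous bf16 x of shape (B, S, 4096)
+ # and block_size=128, act_quant returns y as (B, S, 4096) float8_e4m3fn plus s as
+ # (B, S, 32) fp32, one scale per 128-wide group; passing a scale_fmt (as the model
+ # does) switches to power-of-two rounded scales.
+
+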
+ @tilelang.jit(pass_configs=pass_configs)
+ def fp8_gemm_kernel(N, K, out_dtype=BF16, accum_dtype="float32"):
+     assert out_dtype in [BF16, "float32"]
+
+     M = T.symbolic("M")
+     group_size = 128
+     block_M = 32
+     block_N = 128
+     block_K = 128
+
+     @T.prim_func
+     def fp8_gemm_kernel_(
+         A: T.Tensor[(M, K), FP8],
+         B: T.Tensor[(N, K), FP8],
+         C: T.Tensor[(M, N), out_dtype],
+         scales_a: T.Tensor[(M, T.ceildiv(K, group_size)), FP32],
+         scales_b: T.Tensor[(T.ceildiv(N, group_size), T.ceildiv(K, group_size)), FP32],
+     ):
+         with T.Kernel(T.ceildiv(N, block_N), T.ceildiv(M, block_M), threads=128) as (
+             bx,
+             by,
+         ):
+             A_shared = T.alloc_shared((block_M, block_K), FP8)
+             B_shared = T.alloc_shared((block_N, block_K), FP8)
+             C_shared = T.alloc_shared((block_M, block_N), out_dtype)
+             Scale_C_shared = T.alloc_shared((block_M), FP32)
+             C_local = T.alloc_fragment((block_M, block_N), accum_dtype)
+             C_local_accum = T.alloc_fragment((block_M, block_N), accum_dtype)
+
+             # Improve L2 cache locality
+             T.use_swizzle(panel_size=10)
+
+             T.clear(C_local)
+             T.clear(C_local_accum)
+             K_iters = T.ceildiv(K, block_K)
+             for k in T.Pipelined(K_iters, num_stages=4):
+                 # Load A into shared memory
+                 T.copy(A[by * block_M, k * block_K], A_shared)
+                 # Load B into shared memory
+                 T.copy(B[bx * block_N, k * block_K], B_shared)
+                 # Load scales into shared memory
+                 Scale_B = scales_b[bx * block_N // group_size, k]
+                 for i in T.Parallel(block_M):
+                     Scale_C_shared[i] = scales_a[by * block_M + i, k] * Scale_B
+
+                 T.gemm(A_shared, B_shared, C_local, transpose_B=True)
+                 # Promote to enable 2xAcc
+                 for i, j in T.Parallel(block_M, block_N):
+                     C_local_accum[i, j] += C_local[i, j] * Scale_C_shared[i]
+                 T.clear(C_local)
+             # TMA store
+             T.copy(C_local_accum, C_shared)
+             T.copy(C_shared, C[by * block_M, bx * block_N])
+
+     return fp8_gemm_kernel_
+
+
+ def fp8_gemm(
+     a: torch.Tensor, a_s: torch.Tensor, b: torch.Tensor, b_s: torch.Tensor
+ ) -> torch.Tensor:
+     """
+     Performs a matrix multiplication using FP8 precision.
+
+     Args:
+         a (torch.Tensor): The first input matrix, must be contiguous.
+         a_s (torch.Tensor): The scaling factors for the first input matrix, must be contiguous.
+         b (torch.Tensor): The second input matrix, must be contiguous.
+         b_s (torch.Tensor): The scaling factors for the second input matrix, must be contiguous.
+
+     Returns:
+         torch.Tensor: The result of the matrix multiplication.
+     """
+     assert a.is_contiguous() and b.is_contiguous(), "Input tensors must be contiguous"
+     assert a_s.is_contiguous() and b_s.is_contiguous(), (
+         "Scaling factor tensors must be contiguous"
+     )
+     K = a.size(-1)
+     M = a.numel() // K
+     N = b.size(0)
+     c = a.new_empty(*a.size()[:-1], N, dtype=torch.get_default_dtype())
+     kernel = fp8_gemm_kernel(N, K)
+     kernel(a.view(M, K), b, c.view(M, N), a_s.view(M, -1), b_s)
+     return c
+
+
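+ # Scale semantics: `a_s` holds one fp32 scale per 128-wide activation group (the
+ # act_quant output), while `b_s` holds one scale per 128x128 weight tile. The kernel
+ # multiplies each K-step partial product by scales_a * scales_b before accumulating in
+ # fp32, which is what keeps the block-quantized matmul accurate.
+
+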
+ @tilelang.jit(out_idx=[4], pass_configs=pass_configs)
+ def fp8_index_kernel(h: int, d: int):
+     b = T.symbolic("b")
+     m = T.symbolic("m")
+     n = T.symbolic("n")
+
+     blk_n1 = 512
+     blk_n2 = 128
+
+     @T.prim_func
+     def fp8_index_kernel_(
+         q: T.Tensor[(b, m, h, d), FP8],
+         q_s: T.Tensor[(b, m, h), FP32],
+         k: T.Tensor[(b, n, d), FP8],
+         k_s: T.Tensor[(b, n), FP32],
+         o: T.Tensor[(b, m, n), FP32],
+     ) -> None:
+         with T.Kernel(b, m, T.ceildiv(n, blk_n1)) as (i_b, i_m, i1_n):
+             q_smem = T.alloc_shared((h, d), FP8)
+             T.copy(q[i_b, i_m, 0, 0], q_smem)
+
+             q_s_frag = T.alloc_fragment(h, FP32)
+             T.copy(q_s[i_b, i_m, 0], q_s_frag)
+
+             for i2_n in T.Pipelined(blk_n1 // blk_n2, num_stages=2):
+                 k_smem = T.alloc_shared((blk_n2, d), FP8)
+                 T.copy(k[i_b, i1_n * blk_n1 + i2_n * blk_n2, 0], k_smem)
+
+                 k_s_frag = T.alloc_fragment(blk_n2, FP32)
+                 T.copy(k_s[i_b, i1_n * blk_n1 + i2_n * blk_n2], k_s_frag)
+
+                 logits = T.alloc_fragment((blk_n2, h), FP32)
+                 T.gemm(
+                     k_smem,
+                     q_smem,
+                     logits,
+                     transpose_A=False,
+                     transpose_B=True,
+                     clear_accum=True,
+                 )
+
+                 for i_h, i3_n in T.Parallel(h, blk_n2):
+                     logits[i3_n, i_h] = T.max(logits[i3_n, i_h], 0) * q_s_frag[i_h]
+
+                 logits_sum = T.alloc_fragment(blk_n2, FP32)
+                 T.reduce_sum(logits, logits_sum, dim=1)
+
+                 for i3_n in T.Parallel(blk_n2):
+                     logits_sum[i3_n] *= k_s_frag[i3_n]
+
+                 T.copy(logits_sum, o[i_b, i_m, i1_n * blk_n1 + i2_n * blk_n2])
+
+     return fp8_index_kernel_
+
+
+ def fp8_index(
+     q: torch.Tensor,
+     q_s: torch.Tensor,
+     k: torch.Tensor,
+     k_s: torch.Tensor,
+ ) -> torch.Tensor:
+     """
+     Computes index scores using FP8 precision.
+
+     Args:
+         q (torch.Tensor): The Q tensor, must be contiguous.
+         q_s (torch.Tensor): The scaling factors for Q (float), must be contiguous.
+         k (torch.Tensor): The K tensor, must be contiguous.
+         k_s (torch.Tensor): The scaling factors for K (e8m0 here), must be contiguous.
+
+     Pipeline:
+         fp8 q @ fp8 k -> fp32 logits
+         relu(fp32 logits) * q_s (weights) -> fp32 logits
+         fp32 logits -> fp32 logits_sum
+         fp32 logits_sum * k_s (e8m0) -> fp32 index_score
+     """
+     return fp8_index_kernel(q.shape[2], q.shape[3])(q, q_s, k, k_s)
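+ # Shape sketch (matching the call in Indexer.forward): q is (b, m, h, d) fp8, q_s is
+ # the fused per-head weight of shape (b, m, h) fp32, k/k_s cover the first end_pos
+ # cached tokens, and the returned index_score is (b, m, end_pos) fp32, ready for
+ # top-k selection.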