Update README.md
README.md CHANGED
@@ -31,6 +31,12 @@ library_name: transformers
   <b>📰 <a href="https://moonshotai.github.io/Kimi-K2/">Tech Blog</a></b> | <b>📄 Paper Link (coming soon)</b>
 </p>
 
+## 0. Changelog
+
+### 2025.7.15
+- We have updated our tokenizer implementation; special tokens like `[EOS]` can now be encoded to their token IDs.
+- We fixed a bug in the chat template that was breaking multi-turn tool calls.
+
 ## 1. Model Introduction
 
 Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks while being meticulously optimized for agentic capabilities.
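To make the first changelog entry concrete, here is a minimal sketch of the behavior the tokenizer update describes, assuming the standard Hugging Face `transformers` `AutoTokenizer` API; the `moonshotai/Kimi-K2-Instruct` checkpoint name is an illustrative assumption, not something stated in this diff.

```python
from transformers import AutoTokenizer

# Illustrative checkpoint name (assumption); any Kimi K2 repo shipping the
# updated tokenizer should behave the same way.
tok = AutoTokenizer.from_pretrained(
    "moonshotai/Kimi-K2-Instruct", trust_remote_code=True
)

# After the 2025.7.15 update, a special token such as [EOS] should encode
# to its reserved token id rather than being split into subword pieces.
ids = tok.encode("[EOS]", add_special_tokens=False)
print(ids)                             # expected: a single special-token id
print(tok.convert_ids_to_tokens(ids))  # expected: ["[EOS]"]
```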
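The second entry concerns rendering multi-turn tool-call histories through the chat template. Below is a hedged sketch of the kind of conversation that exercises the fix, using the generic `apply_chat_template` API; the OpenAI-style `tool_calls`/`tool` message schema is an assumption about what the template accepts, not something this commit specifies.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "moonshotai/Kimi-K2-Instruct", trust_remote_code=True
)

# A multi-turn history containing a tool call and its result. The exact
# schema (OpenAI-style "tool_calls" entries and a "tool" role) is an
# assumption based on common chat-template conventions.
messages = [
    {"role": "user", "content": "What's the weather in Beijing?"},
    {"role": "assistant", "content": "", "tool_calls": [{
        "id": "call_0",
        "type": "function",
        "function": {"name": "get_weather",
                     "arguments": '{"city": "Beijing"}'},
    }]},
    {"role": "tool", "tool_call_id": "call_0", "content": '{"temp_c": 21}'},
    {"role": "user", "content": "And tomorrow?"},
]

# Histories like this are what the template bug reportedly broke; rendering
# should now yield a well-formed prompt string.
prompt = tok.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
print(prompt)
```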