'Make knowledge free for everyone'

The original INT4 model was dequantized with my own custom script:

DQ_int4-to-bf16_dequant (inspired by the DeepSeek-V3 dequant script)
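A minimal sketch of the kind of INT4-to-BF16 dequantization the script performs. The packing layout here is an assumption (two signed int4 values per uint8, low nibble first, one fp32 scale per block); the actual DQ_int4-to-bf16_dequant script and Kimi-K2-Thinking's quantization format may differ.

```python
import torch

def dequant_int4_to_bf16(packed: torch.Tensor,
                         scales: torch.Tensor,
                         block_size: int = 32) -> torch.Tensor:
    """Unpack int4 pairs from uint8 bytes, apply per-block scales, cast to BF16.

    Hypothetical layout: each uint8 holds two signed int4 values (low nibble
    first), and `scales` holds one fp32 scale per `block_size` weights.
    """
    low = (packed & 0x0F).to(torch.int8)
    high = (packed >> 4).to(torch.int8)
    # sign-extend 4-bit values from the range [0, 15] to [-8, 7]
    low = torch.where(low > 7, low - 16, low)
    high = torch.where(high > 7, high - 16, high)
    # interleave the two nibbles of each byte back into weight order
    ints = torch.stack((low, high), dim=-1).reshape(packed.shape[0], -1)
    n = ints.shape[-1]
    # scale each block of weights by its fp32 scale, then cast down to bf16
    blocks = ints.reshape(-1, block_size).to(torch.float32)
    deq = blocks * scales.reshape(-1, 1)
    return deq.reshape(packed.shape[0], n).to(torch.bfloat16)
```

Run over every quantized tensor in the checkpoint and re-saved with safetensors, this yields the BF16 weights published here (roughly 4x the storage of the INT4 original).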

BF16 version of: moonshotai/Kimi-K2-Thinking

Buy Me a Coffee at ko-fi.com

Downloads last month: 186
Model size (Safetensors): 1T params
Tensor type: BF16

Model tree for DevQuasar/moonshotai.Kimi-K2-Thinking-BF16: 2 finetunes of this model.