---
license: other
license_name: modified-mit
library_name: mlx
tags:
- mlx
pipeline_tag: text-generation
base_model: moonshotai/Kimi-K2-Instruct-0905
---

**CURRENTLY UPLOADING**
*This notice will be removed once the upload is complete.*

**See Kimi-K2-Instruct-0905 Dynamic MLX in action - [COMING SOON](https://youtu.be/-zfUvA2CDqE)**

*The q3.824-bit dynamic quant typically achieves 1.256 perplexity in our testing, slotting closer to q4 perplexity (1.168) than to q3 perplexity (1.900).*

| Quantization | Perplexity |
|:------------:|:----------:|
| **q2**       | 41.293     |
| **q3**       | 1.900      |
| **q3.824**   | 1.256      |
| **q3.985**   | 1.243      |
| **q4**       | 1.168      |
| **q5**       | 1.141      |
| **q6**       | 1.128      |
| **q8**       | 1.128      |
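For context, perplexity is the exponential of the average per-token negative log-likelihood, so lower is better and 1.0 is the floor. The corpus and context length behind the table above are not specified on this card; the following is only a minimal sketch of the metric itself:

```python
import math

def perplexity(token_logprobs):
    """exp of the mean negative log-likelihood per token; lower is better."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that assigns high probability to every token scores near 1.0:
print(perplexity([-0.05, -0.20, -0.10]))  # ~1.124
```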
## Usage Notes
* Runs on a single M3 Ultra with 512 GB RAM using the [Inferencer app](https://inferencer.com)
* Does not require expanding the VRAM limit
* However, expanding it will help you avoid slowdowns with larger context windows:
  * `sudo sysctl iogpu.wired_limit_mb=507000` (the setting resets on reboot)
* Expect ~20 tokens/s
* Quantized with a modified version of [MLX](https://github.com/ml-explore/mlx) 0.26 (see the loading sketch after this list)
* For more details see the [demonstration video](https://youtu.be/-zfUvA2CDqE) or visit [Kimi K2](https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905).
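The card points to the Inferencer app for running the model. As a hypothetical alternative, weights in MLX format can often be loaded with the `mlx-lm` Python package; the repo path below is a placeholder, and since this quant was produced with a modified MLX 0.26, a stock build may not support it. A sketch under those assumptions:

```python
# Sketch only: assumes mlx-lm is installed (`pip install mlx-lm`) and that your
# MLX build supports this dynamic quantization scheme (the card notes a
# modified MLX 0.26 was used, so stock builds may fail to load the weights).
from mlx_lm import load, generate

# Replace with this repo's id or a local download path (placeholder below).
model, tokenizer = load("path/to/Kimi-K2-Instruct-0905-mlx-dynamic")

print(generate(model, tokenizer, prompt="Explain dynamic quantization briefly.", max_tokens=128))
```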