Add files using upload-large-folder tool

Changed files:
- README.md +0 −4
- config.json +2 −2
README.md CHANGED

```diff
@@ -11,10 +11,6 @@ pipeline_tag: text-generation
 
 # unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6-mlx
 
-test model
-
-this is part of a series created to evaluate the effect of quanting with mixed precision
-
 This model [unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6-mlx](https://huggingface.co/unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6-mlx) was
 converted to MLX format from [unsloth/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct)
 using mlx-lm version **0.26.3**.
```
|
config.json CHANGED

```diff
@@ -27,7 +27,7 @@
     "qkv_bias": false,
     "quantization": {
         "group_size": 64,
-        "bits":
+        "bits": 6,
         "model.embed_tokens": {
             "group_size": 64,
             "bits": 6
@@ -1575,7 +1575,7 @@
     },
     "quantization_config": {
         "group_size": 64,
-        "bits":
+        "bits": 6,
         "model.embed_tokens": {
             "group_size": 64,
             "bits": 6
```

(The old value on the removed `"bits":` line is truncated in the source and is left as-is.)