Unsloth - Granite 4.0 H 350M MXFP4 Hybrid GGUF

A dense model quantized through the MXFP4_MOE pipeline with hybrid per-tensor weights, achieving interesting results: smaller file size, more TPS, and near-lossless precision.

Use The Following Models!

  • MXFP4_MOE-output_q6_k-router_gate_emb_f16 (This is the special version)

    29.7% smaller than F16 • 1652.1 TPS • 0.04959% precision loss compared to F16

Why is this version special? Because precision loss on tiny models like this affects the model to an extreme degree. Achieving ~30% smaller size at this small a precision loss is exactly the trade-off desired when the model's size needs to be minimized. This is the primary suggested variant for this model.


Stats compared against standard Q8_0 (precision loss still measured against F16):

  • MXFP4_MOE-output_q6_K-router_gate_emb_q6_K

    9.1% smaller than Q8 • 1721.67 TPS • 0.214% precision loss


This repository contains a set of hybrid MXFP4 quantized GGUF models designed to explore a surprising discovery:

A carefully targeted combination of MXFP4 + high-precision embeddings/output weights can deliver near-Q8 accuracy with Q4–Q6 level throughput and smaller file sizes than Q8.

Unlike pure MXFP4, which heavily degrades dense models, this hybrid method selectively protects the tensors that matter most for semantic stability while allowing MXFP4 to accelerate everything else.

This is experimental and should be treated as such. I am more than encouraging people to use this model and leave feedback! Though the precision loss measured as near lossless, did the hybrid models act strange in certain situations? Worse or better on some topics compared to the original model? Better or worse overall? I'd love to hear back from others!


The Magic Model

MXFP4_MOE-output_q6_k-router_gate_emb_f16

(29.7% smaller than F16 • 1652.1 TPS • 0.04959% precision loss compared to F16)

This... this is hot, if I do say so myself.

The following was the conversion script:

llama-quantize \
  --tensor-type token_embd.weight=F16 \
  --tensor-type output.weight=Q6_K \
  --tensor-type 'router.*'=F16 \
  --tensor-type 'gate.*'=F16 \
  "Path_To_F16_GGUF.gguf" \
  "Path_To_GGUF.gguf" \
  mxfp4_moe
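
For a quick smoke test, the quantized file loads directly in llama.cpp. A minimal sketch, assuming a CUDA build with llama-cli on PATH; the file name here is illustrative:

# Illustrative sanity check of the quantized output
llama-cli \
  -m "MXFP4_MOE-output_q6_k-router_gate_emb_f16.gguf" \
  -ngl 35 \
  -p "Explain MXFP4 quantization in one paragraph."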

MXFP4_MOE-output_q6_K-router_gate_emb_q6_K

(9.1% smaller than Q8 • 1721.67 TPS • 0.214% precision loss)

This version beat out everything else in the MXFP4 hybrid family in every way. Out of the whole batch, it was the only other hybrid worth using.

The following was the conversion script:

llama-quantize \
  --tensor-type token_embd.weight=Q6_K \
  --tensor-type output.weight=Q6_K \
  --tensor-type 'router.*'=Q6_K \
  --tensor-type 'gate.*'=Q6_K \
  "Path_To_F16_GGUF.gguf" \
  "Path_To_GGUF.gguf" \
  mxfp4_moe
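
To verify the per-tensor overrides actually landed, the gguf Python package (pip install gguf) ships a dump tool that lists every tensor along with its quantization type. A quick check, with the file name again illustrative:

# List tensor metadata and grep the protected tensors
gguf-dump "Path_To_GGUF.gguf" | grep -E 'token_embd|output.weight|router|gate'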

MXFP4_MOE Hybrid Naming Scheme & Synopsis

Multiple combinations of converted models were created, and the results were interesting to say the least. The following table explains the naming scheme, i.e., what was done to each model to create it.

| Suffix example | Meaning |
| --- | --- |
| MXFP4_MOE | Pure MXFP4 pipeline |
| MXFP4_MOE-Q8 | Embedding/output in Q8_0 |
| MXFP4_MOE-F16 | Embedding/output in F16 |
| output_mxfp4-embd_q8 | Output → MXFP4, Embedding → Q8 |
| output_mxfp4-router_gate_emb_q5_K | Output → MXFP4, Emb/Router/Gate → Q5_K |
| MXFP4_MOE-Q6_K | Both embedding + output in Q6_K |
| Q8_0, Q6_K, Q4_K_M | Pure model-wide quantizations |
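
For example, a variant like output_mxfp4-router_gate_emb_q5_K follows the same pattern as the scripts above: override the embedding, router, and gate tensors to Q5_K and let the mxfp4_moe pipeline handle the rest. This is a sketch, and it assumes the pipeline's default already puts output.weight in MXFP4 when no override is given:

llama-quantize \
  --tensor-type token_embd.weight=Q5_K \
  --tensor-type 'router.*'=Q5_K \
  --tensor-type 'gate.*'=Q5_K \
  "Path_To_F16_GGUF.gguf" \
  "Path_To_GGUF.gguf" \
  mxfp4_moe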

It was a brute-force game of mass-creating hybrid models to find combinations that didn't introduce too much noise and paired well with MXFP4.

This repo showcases all the converted models that were created, good or bad. I have been testing other models with different combinations as well, and the winning hybrid combinations shown in this repo DO NOT always produce the same results on other models.

Some models do better or worse with different kinds of combinations; it depends on whether the model is dense or MoE, and much more. The results often surprise me, and many models will not play nice with MXFP4 no matter the combination, at least with the methods shown here.


Benchmark Methodology

All models were tested with a unified automated harness using llama.cpp tools.

Included tests:

  • Throughput:
    llama-bench with descending GPU offload (-ngl 35 → 0) and automatic OOM retry.
    Highest successful TPS is recorded.

  • Perplexity:
    Three domains: general, code, math.
    Each uses an auto-generated corpus of ~32k tokens.
    Perplexity is computed with llama-perplexity at 2048-token context.
    Same GPU retry logic as above.

  • Precision loss:
    Each model is compared to its family F16 baseline.
    Precision-loss % is computed for all PPL domains, plus an averaged score.
    Models are ranked by this metric. (A minimal sketch of the full harness follows after this list.)
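
The following is a minimal sketch of that loop, not the exact harness: it assumes llama.cpp's llama-bench and llama-perplexity binaries are on PATH, the per-domain corpus files already exist, and the file names are illustrative. The precision-loss formula at the end is the standard relative-PPL delta and is my stated assumption about how the numbers were derived.

#!/usr/bin/env bash
MODEL="Path_To_GGUF.gguf"   # illustrative path

# Throughput: step -ngl down from 35 toward 0, keeping the first run
# that survives (a crude OOM retry)
for NGL in 35 28 21 14 7 0; do
  if llama-bench -m "$MODEL" -ngl "$NGL" > "bench_ngl${NGL}.txt" 2>&1; then
    break
  fi
done

# Perplexity: one ~32k-token corpus per domain at 2048-token context
for DOMAIN in general code math; do
  llama-perplexity -m "$MODEL" -f "corpus_${DOMAIN}.txt" -c 2048 \
    > "ppl_${DOMAIN}.txt" 2>&1
done

# Precision loss per domain vs the family F16 baseline (assumed formula):
#   loss_pct = 100 * (PPL_model - PPL_f16) / PPL_f16
# The averaged score is the mean of the three domain losses.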


Table - Overview of Results

Comparing to F16.

| model_name | size_reduction | tps_change |
| --- | --- | --- |
| MXFP4_MOE-output_q6_k-router_gate_emb_f16 | 29.69% | -11.37% |
| MXFP4_MOE-output_f16-router_gate_emb_f16 | 29.69% | -13.16% |
| MXFP4_MOE-output_q6_k-embd_f16 | 35.94% | -15.36% |
| MXFP4_MOE-F16 | 35.94% | -15.21% |
| MXFP4_MOE-output_f16-router_gate_emb_q6_k | 51.56% | -8.38% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_f16 | 51.56% | -11.52% |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 51.56% | -7.63% |
| MXFP4_MOE-Q6_K | 50% | -6.55% |
| MXFP4_MOE-Q8 | 46.88% | -6.77% |
| Q8_0 | 46.88% | -7.07% |
| Q6_K | 59.38% | -9.24% |
| MXFP4_MOE-output_mxfp4-embd_f16 | 35.94% | -11.02% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 51.56% | -7.64% |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 50% | -7.01% |
| MXFP4_MOE-output_mxfp4-embd_q8 | 46.88% | -7.39% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 46.88% | -6.63% |
| MXFP4_MOE-Q5_K | 51.56% | -6.49% |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 51.56% | -6.37% |
| Q5_K_M | 62.5% | -8.98% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 53.12% | -7.48% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 56.25% | -6.61% |
| MXFP4_MOE-Q4_K | 53.12% | -6.47% |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 53.12% | -6.23% |
| Q4_K_M | 67.19% | -8.45% |
| MXFP4_MOE-output_q8-embd_mxfp4 | 53.12% | -6.53% |
| MXFP4_MOE | 73.44% | -1.13% |
  • All percentages compared against the selected family F16 baseline.

Table - File Size + TPS + Avg Precision Loss

| model_name | file_size_gb | bench_tps | avg_prec_loss |
| --- | --- | --- | --- |
| F16 | 0.64 | 1863.96 | 0 |
| MXFP4_MOE-output_q6_k-router_gate_emb_f16 | 0.45 | 1652.1 | 0.0459 |
| MXFP4_MOE-output_f16-router_gate_emb_f16 | 0.45 | 1618.73 | 0.0934 |
| MXFP4_MOE-output_q6_k-embd_f16 | 0.41 | 1577.68 | 0.11 |
| MXFP4_MOE-F16 | 0.41 | 1580.44 | 0.12 |
| MXFP4_MOE-output_f16-router_gate_emb_q6_k | 0.31 | 1707.74 | 0.1855 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_f16 | 0.31 | 1649.31 | 0.1855 |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 0.31 | 1721.67 | 0.214 |
| MXFP4_MOE-Q6_K | 0.32 | 1741.83 | 0.2545 |
| MXFP4_MOE-Q8 | 0.34 | 1737.7 | 0.3695 |
| Q8_0 | 0.34 | 1732.23 | 0.3695 |
| Q6_K | 0.26 | 1691.78 | 0.6105 |
| MXFP4_MOE-output_mxfp4-embd_f16 | 0.41 | 1658.55 | 0.6519 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 0.31 | 1721.57 | 0.693 |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 0.32 | 1733.28 | 0.8372 |
| MXFP4_MOE-output_mxfp4-embd_q8 | 0.34 | 1726.18 | 0.8454 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 0.34 | 1740.43 | 0.8454 |
| MXFP4_MOE-Q5_K | 0.31 | 1742.99 | 2.1423 |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 0.31 | 1745.27 | 2.6333 |
| Q5_K_M | 0.24 | 1696.53 | 2.9645 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 0.3 | 1724.55 | 3.1646 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 0.28 | 1740.67 | 4.3156 |
| MXFP4_MOE-Q4_K | 0.3 | 1743.34 | 4.5808 |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 0.3 | 1747.89 | 4.7838 |
| Q4_K_M | 0.21 | 1706.54 | 12.1189 |
| MXFP4_MOE-output_q8-embd_mxfp4 | 0.3 | 1742.28 | 13.915 |
| MXFP4_MOE | 0.17 | 1842.9 | 8225.0298 |
  • Bench NGL was 35
  • Utilized CUDA

Table - PPL Columns

| model_name | gen | gen_er | code | code_er | math | math_er |
| --- | --- | --- | --- | --- | --- | --- |
| F16 | 18.1241 | 0.4654 | 1.9547 | 0.0175 | 10.2753 | 0.2312 |
| MXFP4_MOE-output_q6_k-router_gate_emb_f16 | 18.1547 | 0.4667 | 1.9543 | 0.0175 | 10.2742 | 0.2311 |
| MXFP4_MOE-output_f16-router_gate_emb_f16 | 18.1532 | 0.4667 | 1.9546 | 0.0175 | 10.2881 | 0.2316 |
| MXFP4_MOE-output_q6_k-embd_f16 | 18.1555 | 0.4664 | 1.9539 | 0.0175 | 10.2956 | 0.2317 |
| MXFP4_MOE-F16 | 18.1603 | 0.4666 | 1.9546 | 0.0175 | 10.2923 | 0.2317 |
| MXFP4_MOE-output_f16-router_gate_emb_q6_k | 18.1862 | 0.4686 | 1.9581 | 0.0175 | 10.2794 | 0.2314 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_f16 | 18.1862 | 0.4686 | 1.9581 | 0.0175 | 10.2794 | 0.2314 |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 18.2137 | 0.4694 | 1.9581 | 0.0175 | 10.2726 | 0.2311 |
| MXFP4_MOE-Q6_K | 18.2289 | 0.4697 | 1.9583 | 0.0175 | 10.2754 | 0.2311 |
| MXFP4_MOE-Q8 | 18.2363 | 0.4693 | 1.9558 | 0.0175 | 10.3198 | 0.2325 |
| Q8_0 | 18.2363 | 0.4693 | 1.9558 | 0.0175 | 10.3198 | 0.2325 |
| Q6_K | 18.3753 | 0.4719 | 1.9612 | 0.0175 | 10.2869 | 0.2294 |
| MXFP4_MOE-output_mxfp4-embd_f16 | 18.2903 | 0.4697 | 1.9572 | 0.0175 | 10.3689 | 0.2334 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 18.334 | 0.472 | 1.9603 | 0.0175 | 10.3405 | 0.2326 |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 18.3312 | 0.4717 | 1.9612 | 0.0175 | 10.3818 | 0.2338 |
| MXFP4_MOE-output_mxfp4-embd_q8 | 18.3491 | 0.4717 | 1.958 | 0.0175 | 10.391 | 0.234 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 18.3491 | 0.4717 | 1.958 | 0.0175 | 10.391 | 0.234 |
| MXFP4_MOE-Q5_K | 18.8193 | 0.4864 | 1.9665 | 0.0177 | 10.4795 | 0.2366 |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 18.9164 | 0.4885 | 1.9678 | 0.0177 | 10.569 | 0.2391 |
| Q5_K_M | 18.9868 | 0.4897 | 1.9833 | 0.0179 | 10.5497 | 0.2372 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 19.176 | 0.4956 | 1.9713 | 0.0178 | 10.5672 | 0.2381 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 19.0072 | 0.4913 | 1.9966 | 0.0182 | 10.8847 | 0.2476 |
| MXFP4_MOE-Q4_K | 19.1505 | 0.4952 | 1.992 | 0.0181 | 10.9094 | 0.25 |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 19.1528 | 0.4949 | 1.9946 | 0.0181 | 10.957 | 0.2506 |
| Q4_K_M | 21.3531 | 0.5635 | 2.0638 | 0.0194 | 11.6069 | 0.2693 |
| MXFP4_MOE-output_q8-embd_mxfp4 | 22.2013 | 0.5834 | 2.1047 | 0.0199 | 11.4647 | 0.2597 |
| MXFP4_MOE | 1172.2706 | 45.947 | 303.0942 | 7.7666 | 308.3771 | 10.9069 |
  • gen = ppl_general
  • gen_er = ppl_general_error
  • code = ppl_code
  • code_er = ppl_code_error
  • math = ppl_math
  • math_er = ppl_math_error

Table - Precision Loss Columns

| model_name | loss_general | loss_code | loss_math |
| --- | --- | --- | --- |
| F16 | 0 | 0 | 0 |
| MXFP4_MOE-output_q6_k-router_gate_emb_f16 | 0.1688 | -0.0205 | -0.0107 |
| MXFP4_MOE-output_f16-router_gate_emb_f16 | 0.1606 | -0.0051 | 0.1246 |
| MXFP4_MOE-output_q6_k-embd_f16 | 0.1732 | -0.0409 | 0.1976 |
| MXFP4_MOE-F16 | 0.1997 | -0.0051 | 0.1654 |
| MXFP4_MOE-output_f16-router_gate_emb_q6_k | 0.3426 | 0.1739 | 0.0399 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_f16 | 0.3426 | 0.1739 | 0.0399 |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 0.4944 | 0.1739 | -0.0263 |
| MXFP4_MOE-Q6_K | 0.5782 | 0.1842 | 0.001 |
| MXFP4_MOE-Q8 | 0.6191 | 0.0563 | 0.4331 |
| Q8_0 | 0.6191 | 0.0563 | 0.4331 |
| Q6_K | 1.386 | 0.3325 | 0.1129 |
| MXFP4_MOE-output_mxfp4-embd_f16 | 0.917 | 0.1279 | 0.9109 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 1.1581 | 0.2865 | 0.6345 |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 1.1427 | 0.3325 | 1.0365 |
| MXFP4_MOE-output_mxfp4-embd_q8 | 1.2414 | 0.1688 | 1.126 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 1.2414 | 0.1688 | 1.126 |
| MXFP4_MOE-Q5_K | 3.8358 | 0.6037 | 1.9873 |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 4.3715 | 0.6702 | 2.8583 |
| Q5_K_M | 4.76 | 1.4631 | 2.6705 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 5.8039 | 0.8492 | 2.8408 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 4.8725 | 2.1436 | 5.9307 |
| MXFP4_MOE-Q4_K | 5.6632 | 1.9082 | 6.1711 |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 5.6759 | 2.0412 | 6.6344 |
| Q4_K_M | 17.8161 | 5.5814 | 12.9592 |
| MXFP4_MOE-output_q8-embd_mxfp4 | 22.496 | 7.6738 | 11.5753 |
| MXFP4_MOE | 6368.021 | 15405.9191 | 2901.1494 |
  • loss_general = precision_loss_general_pct
  • loss_code = precision_loss_code_pct
  • loss_math = precision_loss_math_pct