Unsloth - Qwen3 VL 32B Thinking MXFP4 Hybrid GGUF

A dense model quantized via the MXFP4_MOE pipeline with hybrid per-tensor overrides. The result is interesting: smaller file size, more TPS, and near-lossless precision.

Use one of the three "magic" models found during testing!

Stats are compared against the standard Q8_0 (precision loss is still measured against F16):

  • MXFP4_MOE-Q6_K

    2.9% smaller than Q8 • 22.96 TPS • 0.0112% precision loss

    What makes this special is that this is effectively THE Q8 for this model. The standard Q8_0 underperforms it significantly, so this is the quant you want if you need that level of precision. That said, the pure Q6_K's precision loss is also ridiculously low. Honestly, unless you're a scientist, this is more a cool showcase of MXFP4 than something worth sacrificing the VRAM for, in my opinion.


This repository contains a set of hybrid MXFP4 quantized GGUF models designed to explore a surprising discovery:

A carefully targeted combination of MXFP4 + high-precision embeddings/output weights can deliver near-Q8 accuracy with Q4–Q6 level throughput and smaller file sizes than Q8.

Unlike pure MXFP4, which heavily degrades dense models, this hybrid method selectively protects the tensors that matter most for semantic stability while letting MXFP4 accelerate everything else.

This is experimental and should be treated as such. I strongly encourage people to use these models and leave feedback! Precision loss measured as near lossless, but did the hybrid models act strange in certain situations? Worse or better on some topics compared to the original model? Better or worse overall? I'd love to hear back from others!


The Magic Model

MXFP4_MOE-Q6_K

(2.9% smaller than Q8 • 22.96 TPS • 0.0112% precision loss)

Honestly, this one is hands down the best: best TPS and lowest precision loss. This is the one you want.

The model was created with the following conversion command:

llama-quantize \
  --tensor-type token_embd.weight=Q6_K \
  --tensor-type output.weight=Q6_K \
  "Path_To_F16_GGUF.gguf" \
  "Path_To_GGUF.gguf" \
  mxfp4_moe

MXFP4_MOE Hybrid Naming Scheme & Synopsis

Multiple combinations of converted models were created, with interesting results to say the least. The following table explains the naming scheme, i.e. what was done to the model to create each variant.

| Suffix Example | Meaning |
|---|---|
| MXFP4_MOE | Pure MXFP4 pipeline |
| MXFP4_MOE-Q8 | Embedding/output in Q8_0 |
| MXFP4_MOE-F16 | Embedding/output in F16 |
| output_mxfp4-embd_q8 | Output → MXFP4, embedding → Q8 |
| output_mxfp4-router_gate_emb_q5_K | Output → MXFP4, emb/router/gate → Q5_K |
| MXFP4_MOE-Q6_K | Both embedding + output in Q6_K |
| Q8_0, Q6_K, Q4_K_M | Pure model-wide quantizations |
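For reference, the other hybrid variants in the table were presumably produced the same way, by varying the `--tensor-type` overrides passed to llama-quantize. A hypothetical sketch for the output_mxfp4-embd_q8 variant (not the exact command used; whether a given llama.cpp build accepts mxfp4 as a per-tensor target may vary):

```shell
# Hypothetical: recreate MXFP4_MOE-output_mxfp4-embd_q8
# (output head in MXFP4, token embeddings in Q8_0,
#  everything else handled by the MXFP4_MOE pipeline)
llama-quantize \
  --tensor-type output.weight=mxfp4 \
  --tensor-type token_embd.weight=q8_0 \
  "Path_To_F16_GGUF.gguf" \
  "Path_To_GGUF.gguf" \
  mxfp4_moe
```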

Finding these was a brute-force game of mass-creating hybrid models to find combinations that didn't introduce too much noise and paired well with MXFP4.

This repo showcases the converted models, whether they turned out good or bad. I have been testing other models with different combinations as well, and the winning hybrid combinations shown in this repo DO NOT always produce the same results on different models.

Some models do better or worse with different kinds of combinations. It depends on whether the model is dense or MoE, and much more; the results often surprise me. Many models will not play nice with MXFP4 no matter the combination, at least with the methods shown here.


Table - Overview of Results

Comparing to F16.

| model_name | size_reduction | tps_change |
|---|---|---|
| MXFP4_MOE-Q6_K | 48.44% | 77.16% |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 51.53% | 81.71% |
| Q6_K | 58.97% | 97.22% |
| MXFP4_MOE-Q8 | 46.86% | 75.54% |
| MXFP4_MOE-output_q8-embd_mxfp4 | 47.5% | 72.15% |
| MXFP4_MOE-F16 | 40.8% | 50.54% |
| Q8_0 | 46.86% | 71.6% |
| MXFP4_MOE-Q5_K | 49.29% | 76.85% |
| Q5_K_M | 64.57% | 89.89% |
| MXFP4_MOE-Q4_K | 50.11% | 82.1% |
| Q4_K_M | 69.85% | 131.02% |
| MXFP4_MOE-output_mxfp4-embd_q8 | 49.68% | 78.09% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 49.68% | 65.82% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 53.06% | 83.49% |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 50.27% | 77.62% |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 50.12% | 77.01% |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 49.96% | 77.08% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 54.92% | 79.94% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 56.66% | 93.6% |
| MXFP4_MOE | 73.42% | 77.85% |
  • All percentages compared against the selected family F16 baseline.
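These overview percentages can be re-derived from the raw numbers in the file size/TPS table below. A minimal sketch, using the F16 baseline (61.03 GB, 12.96 TPS) and the MXFP4_MOE-Q6_K row (31.47 GB, 22.96 TPS):

```python
# Re-derive the overview percentages for MXFP4_MOE-Q6_K from the raw
# measurements in the file-size/TPS table (GB and tokens/sec).
f16_size_gb, f16_tps = 61.03, 12.96   # F16 baseline
hyb_size_gb, hyb_tps = 31.47, 22.96   # MXFP4_MOE-Q6_K

size_reduction = (1 - hyb_size_gb / f16_size_gb) * 100  # % smaller than F16
tps_change = (hyb_tps / f16_tps - 1) * 100              # % faster than F16

print(f"{size_reduction:.2f}% smaller, {tps_change:.2f}% faster")
# → 48.44% smaller, 77.16% faster
```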

Table - File Size + TPS + Avg Precision Loss

| model_name | file_size_gb | bench_tps | avg_prec_loss (%) |
|---|---|---|---|
| F16 | 61.03 | 12.96 | 0 |
| MXFP4_MOE-Q6_K | 31.47 | 22.96 | 0.0112 |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 29.58 | 23.55 | 0.0207 |
| Q6_K | 25.04 | 25.56 | 0.0227 |
| MXFP4_MOE-Q8 | 32.43 | 22.75 | 0.0351 |
| MXFP4_MOE-output_q8-embd_mxfp4 | 32.04 | 22.31 | 0.0383 |
| MXFP4_MOE-F16 | 36.13 | 19.51 | 0.0389 |
| Q8_0 | 32.43 | 22.24 | 0.0406 |
| MXFP4_MOE-Q5_K | 30.95 | 22.92 | 0.2384 |
| Q5_K_M | 21.62 | 24.61 | 0.2767 |
| MXFP4_MOE-Q4_K | 30.45 | 23.6 | 0.489 |
| Q4_K_M | 18.4 | 29.94 | 1.1387 |
| MXFP4_MOE-output_mxfp4-embd_q8 | 30.71 | 23.08 | 4.6019 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 30.71 | 21.49 | 4.6019 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 28.65 | 23.78 | 4.6022 |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 30.35 | 23.02 | 4.6072 |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 30.44 | 22.94 | 4.6178 |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 30.54 | 22.95 | 4.6208 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 27.51 | 23.32 | 4.6877 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 26.45 | 25.09 | 4.8599 |
| MXFP4_MOE | 16.22 | 23.05 | 9.9769 |
  • Bench NGL was 35
  • Utilized CUDA
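The headline "% smaller than Q8" figure follows from the same file sizes. A quick check of the MXFP4_MOE-Q6_K claim:

```python
# The "smaller than Q8" headline, from the file sizes above (GB).
q8_size_gb = 32.43    # Q8_0
hyb_size_gb = 31.47   # MXFP4_MOE-Q6_K

pct_smaller = (1 - hyb_size_gb / q8_size_gb) * 100
print(f"{pct_smaller:.2f}% smaller than Q8_0")
# → 2.96% smaller than Q8_0 (quoted as ~2.9% above)
```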

Table - PPL Columns

| model_name | gen | gen_er | code | code_er | math | math_er |
|---|---|---|---|---|---|---|
| F16 | 6.0666 | 0.122 | 1.3296 | 0.0073 | 5.6056 | 0.1018 |
| MXFP4_MOE-Q6_K | 6.0612 | 0.1217 | 1.3295 | 0.0073 | 5.6129 | 0.102 |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 6.0636 | 0.1218 | 1.3294 | 0.0073 | 5.6127 | 0.102 |
| Q6_K | 6.0667 | 0.1219 | 1.3302 | 0.0073 | 5.6068 | 0.1018 |
| MXFP4_MOE-Q8 | 6.0636 | 0.1219 | 1.3295 | 0.0073 | 5.6029 | 0.1017 |
| MXFP4_MOE-output_q8-embd_mxfp4 | 6.0629 | 0.1218 | 1.3295 | 0.0073 | 5.603 | 0.1017 |
| MXFP4_MOE-F16 | 6.0589 | 0.1217 | 1.3295 | 0.0073 | 5.6066 | 0.1018 |
| Q8_0 | 6.0614 | 0.1218 | 1.3295 | 0.0073 | 5.604 | 0.1018 |
| MXFP4_MOE-Q5_K | 6.0866 | 0.1224 | 1.3301 | 0.0073 | 5.6251 | 0.1023 |
| Q5_K_M | 6.0847 | 0.1223 | 1.3329 | 0.0073 | 5.6215 | 0.1022 |
| MXFP4_MOE-Q4_K | 6.1012 | 0.1224 | 1.3308 | 0.0073 | 5.6508 | 0.1028 |
| Q4_K_M | 6.1801 | 0.1249 | 1.3394 | 0.0074 | 5.6509 | 0.103 |
| MXFP4_MOE-output_mxfp4-embd_q8 | 6.3876 | 0.128 | 1.3383 | 0.0072 | 6.0462 | 0.1118 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 6.3876 | 0.128 | 1.3383 | 0.0072 | 6.0462 | 0.1118 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 6.3865 | 0.128 | 1.3382 | 0.0072 | 6.0477 | 0.1118 |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 6.3856 | 0.1279 | 1.3385 | 0.0072 | 6.0481 | 0.1118 |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 6.3879 | 0.128 | 1.3383 | 0.0072 | 6.0486 | 0.1118 |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 6.3894 | 0.128 | 1.3384 | 0.0072 | 6.0473 | 0.1118 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 6.3945 | 0.1282 | 1.339 | 0.0072 | 6.0513 | 0.1119 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 6.4104 | 0.1288 | 1.3417 | 0.0073 | 6.0542 | 0.1121 |
| MXFP4_MOE | 6.9165 | 0.1439 | 1.3823 | 0.008 | 6.2759 | 0.1183 |
  • gen = ppl_general
  • gen_er = ppl_general_error
  • code = ppl_code
  • code_er = ppl_code_error
  • math = ppl_math
  • math_er = ppl_math_error

Table - Precision Loss Columns

| model_name | loss_general | loss_code | loss_math |
|---|---|---|---|
| F16 | 0 | 0 | 0 |
| MXFP4_MOE-Q6_K | -0.089 | -0.0075 | 0.1302 |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | -0.0495 | -0.015 | 0.1267 |
| Q6_K | 0.0016 | 0.0451 | 0.0214 |
| MXFP4_MOE-Q8 | -0.0495 | -0.0075 | -0.0482 |
| MXFP4_MOE-output_q8-embd_mxfp4 | -0.061 | -0.0075 | -0.0464 |
| MXFP4_MOE-F16 | -0.1269 | -0.0075 | 0.0178 |
| Q8_0 | -0.0857 | -0.0075 | -0.0285 |
| MXFP4_MOE-Q5_K | 0.3297 | 0.0376 | 0.3479 |
| Q5_K_M | 0.2984 | 0.2482 | 0.2836 |
| MXFP4_MOE-Q4_K | 0.5703 | 0.0903 | 0.8063 |
| Q4_K_M | 1.8709 | 0.7371 | 0.8081 |
| MXFP4_MOE-output_mxfp4-embd_q8 | 5.2913 | 0.6543 | 7.86 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 5.2913 | 0.6543 | 7.86 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 5.2731 | 0.6468 | 7.8868 |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 5.2583 | 0.6694 | 7.8939 |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 5.2962 | 0.6543 | 7.9028 |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 5.3209 | 0.6619 | 7.8796 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 5.405 | 0.707 | 7.951 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 5.6671 | 0.91 | 8.0027 |
| MXFP4_MOE | 14.0095 | 3.9636 | 11.9577 |
  • loss_general = precision_loss_general_pct
  • loss_code = precision_loss_code_pct
  • loss_math = precision_loss_math_pct
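The precision-loss columns appear to be the relative PPL change versus F16 in percent, with avg_prec_loss the mean of the three signed per-domain losses. A sketch of that assumed derivation, using the MXFP4_MOE-Q6_K rows from the tables above:

```python
# Assumed derivation: precision loss = relative PPL change vs. F16 (%),
# and avg_prec_loss = mean of the three signed per-domain losses.
f16 = {"general": 6.0666, "code": 1.3296, "math": 5.6056}  # F16 PPLs
q6k = {"general": 6.0612, "code": 1.3295, "math": 5.6129}  # MXFP4_MOE-Q6_K PPLs

loss = {k: (q6k[k] / f16[k] - 1) * 100 for k in f16}
avg_loss = sum(loss.values()) / len(loss)

print({k: round(v, 4) for k, v in loss.items()})
# → {'general': -0.089, 'code': -0.0075, 'math': 0.1302}
print(round(avg_loss, 4))
# → 0.0112  (matches the avg_prec_loss column)
```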