This is an MXFP4_MOE quantization of the model ERNIE-4.5-21B-A3B-PT.

Quantized from the BF16 GGUFs at: https://huggingface.co/unsloth/ERNIE-4.5-21B-A3B-PT-GGUF

Original model: https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-PT
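A minimal usage sketch with llama.cpp's `llama-cli`, which can pull a GGUF straight from a Hugging Face repo via its `-hf` flag (exact quant-file selection may vary by llama.cpp version, so treat this as an assumption, not a tested invocation):

```shell
# Run the MXFP4_MOE quant directly from this repo.
# -hf downloads the GGUF from Hugging Face on first use;
# -p is the prompt, -n the number of tokens to generate.
llama-cli -hf noctrex/ERNIE-4.5-21B-A3B-PT-MXFP4_MOE-GGUF \
  -p "Hello" -n 64
```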

Format: GGUF (4-bit)
Model size: 22B params
Architecture: ernie4_5-moe