magiccodingman/Granite-4.0-H-1B-Unsloth-MXFP4-Hybrid-GGUF
Tags: Text Generation, GGUF, granitemoehybrid, mxfp4_hybrid, quantized, cpu, gpu, mxfp4, mxfp4_moe, qwen3, conversational
License: apache-2.0
Repository size: 1.58 GB, 1 contributor, 8 commits (branch: main)
Latest commit: 12b5887 "Update README.md" by magiccodingman (verified, 1 day ago)
| File | Size | Last commit | Updated |
|---|---|---|---|
| Benchmarks/ | – | Fixed naming & benchmarks | 4 days ago |
| .gitattributes | 1.61 kB | Add GGUF models + tokenizer with LFS | 4 days ago |
| README.md | 11.7 kB | Update README.md | 1 day ago |
| chat_template.jinja | 6.42 kB | Add GGUF models + tokenizer with LFS | 4 days ago |
| config.json | 1.85 kB | Add GGUF models + tokenizer with LFS | 4 days ago |
| generation_config.json | 171 B | Add GGUF models + tokenizer with LFS | 4 days ago |
| granite-4.0-h-1b-unsloth-MXFP4_MOE-output_mxfp4-router_gate_emb_q8.gguf | 1.55 GB | Fixed naming & benchmarks | 4 days ago |
| merges.txt | 917 kB | Add GGUF models + tokenizer with LFS | 4 days ago |
| special_tokens_map.json | 579 B | Add GGUF models + tokenizer with LFS | 4 days ago |
| tokenizer.json | 7.15 MB | Add GGUF models + tokenizer with LFS | 4 days ago |
| tokenizer_config.json | 24.2 kB | Add GGUF models + tokenizer with LFS | 4 days ago |
| vocab.json | 1.61 MB | Add GGUF models + tokenizer with LFS | 4 days ago |