magiccodingman/Granite-4.0-H-1B-Unsloth-MXFP4-Hybrid-GGUF
Tags: Text Generation, GGUF, granitemoehybrid, mxfp4_hybrid, quantized, cpu, gpu, mxfp4, mxfp4_moe, qwen3, conversational
License: apache-2.0
Granite-4.0-H-1B-Unsloth-MXFP4-Hybrid-GGUF (1.58 GB, 1 contributor, 4 commits)
Latest commit 7cc5b4b by magiccodingman, 14 days ago: "Add GGUF models + tokenizer with LFS"
Files:
Benchmarks/ (directory): Add GGUF models + tokenizer with LFS, 14 days ago
.gitattributes (1.61 kB): Add GGUF models + tokenizer with LFS, 14 days ago
README.md (10.6 kB): Update README.md, 14 days ago
chat_template.jinja (6.42 kB): Add GGUF models + tokenizer with LFS, 14 days ago
config.json (1.85 kB): Add GGUF models + tokenizer with LFS, 14 days ago
generation_config.json (171 Bytes): Add GGUF models + tokenizer with LFS, 14 days ago
granite-4.0-h-350m-unsloth-MXFP4_MOE-output_mxfp4-router_gate_emb_q8.gguf (1.55 GB): Add GGUF models + tokenizer with LFS, 14 days ago
merges.txt (917 kB): Add GGUF models + tokenizer with LFS, 14 days ago
special_tokens_map.json (579 Bytes): Add GGUF models + tokenizer with LFS, 14 days ago
tokenizer.json (7.15 MB): Add GGUF models + tokenizer with LFS, 14 days ago
tokenizer_config.json (24.2 kB): Add GGUF models + tokenizer with LFS, 14 days ago
vocab.json (1.61 MB): Add GGUF models + tokenizer with LFS, 14 days ago