Active filters: int4
tencent/DeepSeek-V3.1-Terminus-W4AFP8 • Text Generation • 349B • Updated • 154 • 10
tclf90/qwen2.5-72b-instruct-gptq-int4 • Text Generation • 12B • Updated • 19 • 2
ForeseeLab/foreseeai-qwen3-4b-iot-int4 • Text Generation • 4B • Updated • 3 • 1
ModelCloud/GLM-4.6-REAP-268B-A32B-GPTQMODEL-W4A16 • Text Generation • 269B • Updated • 39 • 2
Advantech-EIOT/intel_llama-2-chat-7b • Text Generation • Updated • 2
RedHatAI/zephyr-7b-beta-marlin • Text Generation • 1B • Updated • 31
RedHatAI/TinyLlama-1.1B-Chat-v1.0-marlin • Text Generation • 0.3B • Updated • 170 • 2
RedHatAI/OpenHermes-2.5-Mistral-7B-marlin • Text Generation • 1B • Updated • 23 • 2
RedHatAI/Nous-Hermes-2-Yi-34B-marlin • Text Generation • 5B • Updated • 4 • 5
ecastera/ecastera-eva-westlake-7b-spanish-int4-gguf • 7B • Updated • 9 • 2
softmax/Llama-2-70b-chat-hf-marlin • Text Generation • 10B • Updated
softmax/falcon-180B-chat-marlin • Text Generation • 26B • Updated
study-hjt/Meta-Llama-3-8B-Instruct-GPTQ-Int4 • Text Generation • 2B • Updated
study-hjt/Meta-Llama-3-70B-Instruct-GPTQ-Int4 • Text Generation • 11B • Updated • 1 • 6
study-hjt/Meta-Llama-3-70B-Instruct-AWQ • Text Generation • 11B • Updated
study-hjt/Qwen1.5-110B-Chat-GPTQ-Int4 • Text Generation • 17B • Updated • 1 • 2
study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4 • Text Generation • 2B • Updated
study-hjt/Qwen1.5-110B-Chat-AWQ • Text Generation • 17B • Updated • 2
modelscope/Yi-1.5-34B-Chat-AWQ • Text Generation • 5B • Updated • 1 • 1
modelscope/Yi-1.5-6B-Chat-GPTQ • Text Generation • 1B • Updated • 1
modelscope/Yi-1.5-6B-Chat-AWQ • Text Generation • 1B • Updated • 48
modelscope/Yi-1.5-9B-Chat-GPTQ • Text Generation • 2B • Updated • 1
modelscope/Yi-1.5-9B-Chat-AWQ • Text Generation • 2B • Updated • 7
modelscope/Yi-1.5-34B-Chat-GPTQ • Text Generation • 5B • Updated • 2 • 1
jojo1899/Phi-3-mini-128k-instruct-ov-int4 • Text Generation • Updated • 3
jojo1899/Llama-2-13b-chat-hf-ov-int4 • Text Generation • Updated • 1
jojo1899/Mistral-7B-Instruct-v0.2-ov-int4 • Text Generation • Updated
model-scope/glm-4-9b-chat-GPTQ-Int4 • Text Generation • 2B • Updated • 20 • 6
ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit • Text Generation • 3B • Updated • 18 • 5
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit • Text Generation • 2B • Updated • 84 • 4
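The repository names in this listing encode the quantization scheme as a name suffix (GPTQ, AWQ, Marlin, GGUF, OpenVINO `-ov-`). A minimal sketch of grouping the listed repos by that hint; the keyword-to-format mapping is an assumption based on these naming conventions, not Hub metadata:

```python
# Group model repo IDs by the quantization format hinted in their names.
# Repo IDs are copied from the listing above; the keyword->format mapping
# below is an assumed convention, not an official Hub field.
REPOS = [
    "tencent/DeepSeek-V3.1-Terminus-W4AFP8",
    "tclf90/qwen2.5-72b-instruct-gptq-int4",
    "RedHatAI/zephyr-7b-beta-marlin",
    "study-hjt/Meta-Llama-3-70B-Instruct-AWQ",
    "ecastera/ecastera-eva-westlake-7b-spanish-int4-gguf",
    "jojo1899/Phi-3-mini-128k-instruct-ov-int4",
]

FORMAT_KEYWORDS = {
    "gptq": "GPTQ",
    "awq": "AWQ",
    "marlin": "Marlin",
    "gguf": "GGUF",
    "-ov-": "OpenVINO",
}

def quant_format(repo_id: str) -> str:
    """Return the quantization format suggested by the repo name."""
    name = repo_id.lower()
    for keyword, fmt in FORMAT_KEYWORDS.items():
        if keyword in name:
            return fmt
    return "other int4"  # e.g. custom schemes like W4AFP8

for repo in REPOS:
    print(f"{quant_format(repo):10s} {repo}")
```

Note the ordering matters only when a name contains several keywords; the first match in `FORMAT_KEYWORDS` wins.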