Inference Providers

Metrics for top trending models

| Model | Provider | Tag | Input $/1M | Output $/1M | Context (tokens) | Latency (s) | Throughput (tok/s) | Tools | Structured output |
|---|---|---|---|---|---|---|---|---|---|
| Kimi-K2-Thinking | novita | cheapest | 0.6 | 2.5 | 262144 | 1.42 | 13 | Yes | No |
| Kimi-K2-Thinking | together | fastest | 1.2 | 4 | 262144 | 0.47 | 74 | Yes | No |
| DeepSeek-OCR | novita |  | 0.03 | 0.03 | 8192 | 0.99 | 190 | No | Yes |
| MiniMax-M2 | novita |  | 0.3 | 1.2 | 204800 | 1.23 | 63 | No | No |
| Llama-3.1-8B-Instruct | novita | cheapest | 0.02 | 0.05 | 16384 | 0.74 | 77 | No | No |
| Llama-3.1-8B-Instruct | nebius |  | 0.03 | 0.09 | 131072 | 0.53 | 57 | No | Yes |
| Llama-3.1-8B-Instruct | cerebras | fastest | 0.1 | 0.1 | - | 0.32 | 1146 | No | No |
| Llama-3.1-8B-Instruct | sambanova |  | 0.1 | 0.2 | 16384 | 0.43 | 674 | Yes | Yes |
| Llama-3.1-8B-Instruct | nscale |  | 0.06 | 0.06 | 131072 | 0.92 | 59 | No | Yes |
| Llama-3.1-8B-Instruct | fireworks-ai |  | 0.2 | 0.2 | 131072 | 0.40 | 191 | No | No |
| Llama-3.1-8B-Instruct | scaleway |  | - | - | - | 0.44 | 91 | Yes | Yes |
| Llama-3.1-8B-Instruct | ovhcloud |  | - | - | - | 0.40 | 42 | Yes | Yes |
| gpt-oss-20b | groq | fastest | 0.1 | 0.5 | 131072 | 0.21 | 772 | Yes | No |
| gpt-oss-20b | novita |  | 0.04 | 0.15 | 131072 | 0.89 | 119 | No | Yes |
| gpt-oss-20b | nebius |  | 0.05 | 0.2 | 131072 | 0.47 | 178 | Yes | Yes |
| gpt-oss-20b | nscale |  | 0.05 | 0.2 | 131072 | 1.05 | 23 | Yes | Yes |
| gpt-oss-20b | hyperbolic | cheapest | 0.1 | 0.1 | 131072 | 0.68 | 79 | No | No |
| gpt-oss-20b | together |  | 0.05 | 0.2 | 131072 | 0.45 | 183 | Yes | No |
| gpt-oss-20b | fireworks-ai |  | 0.05 | 0.2 | 131072 | 0.82 | 136 | Yes | No |
| gpt-oss-20b | ovhcloud |  | - | - | - | 0.32 | 107 | Yes | Yes |
| GLM-4.6 | novita | cheapest | 0.6 | 2.2 | 204800 | 0.90 | 23 | Yes | No |
| GLM-4.6 | cerebras | fastest | - | - | - | 0.34 | 288 | Yes | No |
| GLM-4.6 | zai-org |  | - | - | - | 2.58 | 69 | Yes | No |
| gpt-oss-120b | groq |  | 0.15 | 0.75 | 131072 | 0.35 | 394 | Yes | No |
| gpt-oss-120b | novita | cheapest | 0.05 | 0.25 | 131072 | 1.05 | 126 | No | Yes |
| gpt-oss-120b | nebius |  | 0.15 | 0.6 | 131072 | 0.52 | 115 | Yes | Yes |
| gpt-oss-120b | cerebras | fastest | 0.25 | 0.69 | - | 0.30 | 914 | Yes | No |
| gpt-oss-120b | sambanova |  | 0.22 | 0.59 | 131072 | 0.95 | 394 | Yes | Yes |
| gpt-oss-120b | nscale |  | 0.1 | 0.4 | 131072 | 1.05 | 34 | Yes | Yes |
| gpt-oss-120b | hyperbolic |  | 0.3 | 0.3 | 131072 | 0.75 | 268 | Yes | No |
| gpt-oss-120b | together |  | 0.15 | 0.6 | 131072 | 0.38 | 88 | Yes | Yes |
| gpt-oss-120b | fireworks-ai |  | 0.15 | 0.6 | 131072 | 0.88 | 87 | Yes | No |
| gpt-oss-120b | scaleway |  | - | - | - | 0.48 | 153 | Yes | Yes |
| gpt-oss-120b | ovhcloud |  | - | - | - | 0.38 | 132 | Yes | Yes |
| DeepSeek-V3.2-Exp | novita |  | 0.27 | 0.41 | 163840 | 0.89 | 28 | Yes | Yes |
| DeepSeek-R1 | novita |  | 0.7 | 2.5 | 64000 | 0.84 | 28 | Yes | No |
| DeepSeek-R1 | sambanova | fastest | - | - | - | 1.32 | 149 | No | No |
| DeepSeek-R1 | hyperbolic | cheapest | 2 | 2 | 163840 | 0.96 | 49 | No | No |
| DeepSeek-R1 | together |  | 3 | 7 | 163840 | 0.63 | 56 | No | Yes |
| DeepSeek-R1 | fireworks-ai |  | - | - | - | - | - | - | - |
| Qwen3-8B | nscale |  | 0.07 | 0.18 | 40960 | 0.94 | 57 | Yes | No |
| Qwen3-4B-Instruct-2507 | nscale |  | 0.01 | 0.03 | 262144 | 1.10 | 68 | Yes | No |
| Qwen3-VL-8B-Instruct | novita |  | 0.08 | 0.5 | 131072 | 1.09 | 47 | Yes | Yes |
| Mistral-7B-Instruct-v0.3 | novita |  | - | - | - | - | - | - | - |
| Mistral-7B-Instruct-v0.3 | together |  | - | - | - | - | - | - | - |
| Mistral-7B-Instruct-v0.3 | ovhcloud |  | - | - | - | 0.48 | 43 | Yes | Yes |
| Kimi-K2-Instruct-0905 | groq | fastest | - | - | 262144 | 0.26 | 175 | Yes | No |
| Kimi-K2-Instruct-0905 | novita | cheapest | 0.6 | 2.5 | 262144 | 1.56 | 35 | No | No |
| Kimi-K2-Instruct-0905 | together |  | 1 | 3 | 262144 | 0.61 | 46 | Yes | Yes |
| Qwen2.5-7B-Instruct | together |  | 0.3 | 0.3 | 32768 | 0.30 | 143 | Yes | Yes |
| Llama-3.2-3B-Instruct | novita | cheapest | 0.03 | 0.05 | 32768 | 0.58 | 132 | Yes | No |
| Llama-3.2-3B-Instruct | sambanova |  | - | - | - | - | - | - | - |
| Llama-3.2-3B-Instruct | hyperbolic |  | 0.1 | 0.1 | 131072 | 0.78 | 133 | No | No |
| Llama-3.2-3B-Instruct | together | fastest | 0.06 | 0.06 | 131072 | 0.36 | 151 | Yes | Yes |
| Qwen3-VL-30B-A3B-Instruct | novita |  | 0.2 | 0.7 | 131072 | 0.95 | 89 | Yes | Yes |
| Qwen3-Coder-30B-A3B-Instruct | nebius | cheapest | 0.1 | 0.3 | 262144 | 0.46 | 52 | Yes | Yes |
| Qwen3-Coder-30B-A3B-Instruct | fireworks-ai |  | - | - | - | - | - | - | - |
| Qwen3-Coder-30B-A3B-Instruct | scaleway |  | - | - | - | 0.50 | 57 | Yes | No |
| Qwen3-Coder-30B-A3B-Instruct | ovhcloud | fastest | - | - | - | 0.61 | 110 | Yes | Yes |
| Meta-Llama-3-8B-Instruct | groq |  | - | - | - | - | - | - | - |
| Meta-Llama-3-8B-Instruct | novita |  | 0.04 | 0.04 | 8192 | 0.69 | 75 | No | No |
| Meta-Llama-3-8B-Instruct | together |  | - | - | - | - | - | - | - |
| Qwen2.5-VL-7B-Instruct | hyperbolic |  | 0.2 | 0.2 | 32768 | 0.48 | 75 | No | No |
| Llama-3.2-1B-Instruct | novita |  | - | - | 131000 | 0.78 | 213 | No | No |
| Llama-3.2-1B-Instruct | sambanova |  | - | - | - | - | - | - | - |
| gemma-3-27b-it | nebius | cheapest, fastest | 0.2 | 0.6 | 110000 | 0.33 | 77 | No | Yes |
| gemma-3-27b-it | scaleway |  | - | - | - | 0.61 | 43 | Yes | No |
| Kimi-K2-Instruct | novita | cheapest | 0.57 | 2.3 | 131072 | 0.92 | 30 | Yes | Yes |
| Kimi-K2-Instruct | nebius |  | 0.5 | 2.4 | 131072 | 0.73 | 26 | Yes | Yes |
| Kimi-K2-Instruct | together |  | 1 | 3 | 131072 | 0.76 | 39 | Yes | Yes |
| Kimi-K2-Instruct | fireworks-ai | fastest | 0.6 | 2.5 | 131072 | 0.93 | 48 | Yes | No |
| Qwen3-Next-80B-A3B-Instruct | novita |  | 0.15 | 1.5 | 131072 | 0.63 | 143 | Yes | No |
| Qwen3-Next-80B-A3B-Instruct | hyperbolic | cheapest, fastest | 0.3 | 0.3 | 262144 | 0.49 | 148 | No | No |
| Qwen3-Next-80B-A3B-Instruct | together |  | 0.15 | 1.5 | 262144 | 0.43 | 139 | Yes | Yes |
| DeepSeek-R1-Distill-Qwen-1.5B | nscale |  | 0.1 | 0.1 | 131072 | 0.98 | 137 | No | No |
| GLM-4.5-Air | novita | cheapest | 0.13 | 0.85 | 131072 | 0.74 | 56 | Yes | No |
| GLM-4.5-Air | nebius | fastest | 0.2 | 1.2 | 131072 | 0.43 | 69 | Yes | Yes |
| GLM-4.5-Air | fireworks-ai |  | - | - | - | - | - | - | - |
| GLM-4.5-Air | zai-org |  | - | - | - | 2.72 | 68 | Yes | No |
| cogito-671b-v2.1 | together | cheapest, fastest | 1.25 | 1.25 | 163840 | 0.62 | 64 | Yes | Yes |
| cogito-671b-v2.1 | fireworks-ai |  | - | - | 163840 | 0.55 | 42 | Yes | No |
| SmolLM3-3B | hf-inference |  | - | - | - | 0.30 | 86 | Yes | Yes |
| Qwen3-VL-235B-A22B-Thinking | novita |  | 0.98 | 3.95 | 131072 | 1.14 | 38 | No | No |
| Qwen3-32B | groq |  | 0.29 | 0.59 | 131072 | 0.31 | 279 | Yes | No |
| Qwen3-32B | novita |  | 0.1 | 0.45 | 40960 | 0.67 | 52 | No | No |
| Qwen3-32B | nebius |  | 0.1 | 0.3 | 40960 | 0.46 | 46 | Yes | Yes |
| Qwen3-32B | cerebras | fastest | 0.4 | 0.8 | - | 0.36 | 610 | No | No |
| Qwen3-32B | sambanova |  | 0.4 | 0.8 | 32768 | 2.21 | 201 | Yes | Yes |
| Qwen3-32B | nscale | cheapest | 0.08 | 0.25 | 40960 | 1.10 | 25 | Yes | Yes |
| Qwen3-32B | ovhcloud |  | - | - | - | 0.54 | 38 | Yes | Yes |
| Llama-3.3-70B-Instruct | groq |  | 0.59 | 0.79 | 131072 | 0.24 | 279 | Yes | No |
| Llama-3.3-70B-Instruct | novita | cheapest | 0.13 | 0.39 | 131072 | 0.69 | 18 | Yes | No |
| Llama-3.3-70B-Instruct | nebius |  | 0.25 | 0.75 | 131072 | 0.45 | 129 | Yes | Yes |
| Llama-3.3-70B-Instruct | cerebras | fastest | 0.85 | 1.2 | - | 0.38 | 917 | Yes | No |
| Llama-3.3-70B-Instruct | sambanova |  | 0.6 | 1.2 | 131072 | 0.79 | 262 | Yes | Yes |
| Llama-3.3-70B-Instruct | nscale |  | 0.4 | 0.4 | 131072 | 0.97 | 16 | No | Yes |
| Llama-3.3-70B-Instruct | hyperbolic |  | 0.4 | 0.4 | 131072 | 0.68 | 142 | No | No |
| Llama-3.3-70B-Instruct | together |  | 0.88 | 0.88 | 131072 | 0.62 | 101 | Yes | Yes |
| Llama-3.3-70B-Instruct | fireworks-ai |  | 0.9 | 0.9 | 131072 | 0.45 | 132 | No | No |
| Llama-3.3-70B-Instruct | scaleway |  | - | - | - | 0.67 | 36 | Yes | Yes |
| Llama-3.3-70B-Instruct | ovhcloud |  | - | - | - | 0.52 | 29 | Yes | Yes |
| Qwen3-Coder-480B-A35B-Instruct | novita | cheapest | 0.29 | 1.2 | 262144 | 0.83 | 47 | Yes | Yes |
| Qwen3-Coder-480B-A35B-Instruct | nebius |  | 0.4 | 1.8 | 262144 | 1.18 | 32 | Yes | Yes |
| Qwen3-Coder-480B-A35B-Instruct | cerebras |  | - | - | - | - | - | - | - |
| Qwen3-Coder-480B-A35B-Instruct | hyperbolic | fastest | 2 | 2 | 262144 | 1.25 | 48 | Yes | No |
| Qwen3-Coder-480B-A35B-Instruct | together |  | 2 | 2 | 262144 | 0.52 | 47 | Yes | Yes |
| Qwen3-Coder-480B-A35B-Instruct | fireworks-ai |  | 0.45 | 1.8 | 262144 | 3.21 | 21 | Yes | No |
| Qwen3-235B-A22B-Instruct-2507 | novita | cheapest | 0.09 | 0.58 | 131072 | 1.04 | 27 | Yes | Yes |
| Qwen3-235B-A22B-Instruct-2507 | nebius |  | 0.2 | 0.6 | 262144 | 0.55 | 14 | Yes | Yes |
| Qwen3-235B-A22B-Instruct-2507 | cerebras | fastest | 0.6 | 1.2 | - | 0.36 | 232 | Yes | No |
| Qwen3-235B-A22B-Instruct-2507 | nscale |  | 0.2 | 0.6 | 32768 | 1.09 | 24 | Yes | Yes |
| Qwen3-235B-A22B-Instruct-2507 | hyperbolic |  | 2 | 2 | 262144 | 1.02 | 45 | No | No |
| Qwen3-235B-A22B-Instruct-2507 | together |  | 0.2 | 0.6 | 262144 | 0.58 | 29 | No | Yes |
| Qwen3-235B-A22B-Instruct-2507 | fireworks-ai |  | 0.22 | 0.88 | 262144 | 0.85 | 38 | Yes | No |
| Qwen3-235B-A22B-Instruct-2507 | scaleway |  | - | - | - | 0.41 | 82 | Yes | Yes |
| Llama-4-Maverick-17B-128E-Instruct | groq | cheapest, fastest | 0.2 | 0.6 | 131072 | 1.69 | 256 | Yes | No |
| Llama-4-Maverick-17B-128E-Instruct | sambanova |  | 0.63 | 1.8 | 131072 | 2.76 | 222 | Yes | Yes |
| Llama-4-Maverick-17B-128E-Instruct | fireworks-ai |  | 0.22 | 0.88 | 1048576 | 0.63 | 86 | Yes | No |
| gpt-oss-safeguard-20b | groq |  | - | - | 131072 | 0.22 | 794 | Yes | No |
| gemma-2-2b-it | nebius |  | 0.02 | 0.06 | 8192 | 0.33 | 145 | No | Yes |
| Llama-4-Scout-17B-16E-Instruct | groq | fastest | 0.11 | 0.34 | 131072 | 0.26 | 344 | Yes | No |
| Llama-4-Scout-17B-16E-Instruct | novita |  | 0.1 | 0.5 | 131072 | 0.49 | 37 | No | No |
| Llama-4-Scout-17B-16E-Instruct | sambanova |  | - | - | - | - | - | - | - |
| Llama-4-Scout-17B-16E-Instruct | nscale | cheapest | 0.09 | 0.29 | 890000 | 0.79 | 33 | Yes | Yes |
| Llama-4-Scout-17B-16E-Instruct | together |  | 0.18 | 0.59 | 1048576 | 0.42 | 69 | No | No |
| Llama-4-Scout-17B-16E-Instruct | fireworks-ai |  | 0.15 | 0.6 | 1048576 | 0.76 | 55 | Yes | No |
| DeepSeek-R1-0528-Qwen3-8B | novita |  | 0.06 | 0.09 | 128000 | 0.89 | 77 | No | No |
| DeepSeek-V3-0324 | novita |  | 0.27 | 1.12 | 163840 | 1.21 | 29 | Yes | Yes |
| DeepSeek-V3-0324 | nebius |  | 0.75 | 2.25 | 32768 | 1.02 | 113 | No | No |
| DeepSeek-V3-0324 | sambanova | fastest | 3 | 4.5 | 131072 | 1.24 | 147 | Yes | Yes |
| DeepSeek-V3-0324 | hyperbolic |  | 1.25 | 1.25 | 163840 | 1.51 | 21 | No | No |
| DeepSeek-V3-0324 | together |  | 1.25 | 1.25 | 131072 | 0.94 | 29 | Yes | Yes |
| DeepSeek-V3-0324 | fireworks-ai | cheapest | 0.9 | 0.9 | 163840 | 1.08 | 61 | Yes | No |
| Apertus-8B-Instruct-2509 | publicai |  | - | - | - | 1.24 | 36 | No | Yes |
| Mixtral-8x7B-Instruct-v0.1 | together |  | - | - | - | - | - | - | - |
| Mixtral-8x7B-Instruct-v0.1 | ovhcloud |  | - | - | - | 0.56 | 21 | No | Yes |
| Mistral-Nemo-Instruct-2407 | ovhcloud |  | - | - | - | 0.46 | 28 | Yes | Yes |
| Qwen3-14B | nscale |  | 0.07 | 0.2 | 40960 | 0.98 | 48 | Yes | Yes |
| DeepSeek-R1-Distill-Qwen-14B | novita | cheapest, fastest | 0.15 | 0.15 | 32768 | 0.99 | 62 | No | No |
| DeepSeek-R1-Distill-Qwen-14B | nscale |  | 0.2 | 0.2 | 131072 | 0.94 | 37 | No | No |
| DeepSeek-R1-Distill-Qwen-7B | nscale |  | 0.15 | 0.15 | 131072 | 0.93 | 67 | No | No |
| Qwen3-30B-A3B-Instruct-2507 | nebius |  | 0.1 | 0.3 | 262144 | 0.54 | 73 | Yes | Yes |
| DeepSeek-V3.1 | novita | cheapest | 0.27 | 1 | 131072 | 1.28 | 29 | Yes | No |
| DeepSeek-V3.1 | together |  | 0.6 | 1.7 | 131072 | 1.74 | 63 | Yes | No |
| DeepSeek-V3.1 | fireworks-ai | fastest | - | - | 163840 | 0.94 | 72 | Yes | No |
| Qwen3-VL-30B-A3B-Thinking | novita |  | 0.2 | 1 | 131072 | 0.87 | 88 | Yes | Yes |
| Qwen2.5-Coder-32B-Instruct | nscale | cheapest | 0.06 | 0.2 | 131072 | 1.03 | 27 | No | Yes |
| Qwen2.5-Coder-32B-Instruct | hyperbolic | fastest | 0.2 | 0.2 | 32768 | 1.23 | 64 | No | No |
| Qwen2.5-Coder-32B-Instruct | together |  | - | - | - | - | - | - | - |
| Qwen2.5-Coder-32B-Instruct | scaleway |  | - | - | - | 0.46 | 39 | Yes | No |
| GLM-4.5 | novita |  | 0.6 | 2.2 | 131072 | 0.72 | 59 | Yes | No |
| GLM-4.5 | nebius |  | 0.6 | 2.2 | 131072 | 0.45 | 37 | Yes | Yes |
| GLM-4.5 | fireworks-ai | cheapest, fastest | 0.55 | 2.19 | 131072 | 0.95 | 83 | Yes | No |
| GLM-4.5 | zai-org |  | - | - | - | 3.09 | 58 | Yes | No |
| DeepSeek-V3 | novita | fastest | 0.4 | 1.3 | 64000 | 1.14 | 30 | Yes | No |
| DeepSeek-V3 | together | cheapest | 1.25 | 1.25 | 131072 | 1.15 | 26 | Yes | Yes |
| DeepSeek-V3 | fireworks-ai |  | - | - | - | - | - | - | - |
| QwQ-32B | groq |  | - | - | - | - | - | - | - |
| QwQ-32B | nebius |  | - | - | - | - | - | - | - |
| QwQ-32B | sambanova |  | - | - | - | - | - | - | - |
| QwQ-32B | nscale | cheapest | 0.18 | 0.2 | 131072 | 1.05 | 27 | Yes | Yes |
| QwQ-32B | hyperbolic | fastest | 0.4 | 0.4 | 131072 | 1.02 | 59 | No | No |
| QwQ-32B | fireworks-ai |  | - | - | - | - | - | - | - |
| Qwen2.5-Coder-7B-Instruct | nscale |  | 0.01 | 0.03 | 131072 | 1.08 | 61 | No | Yes |
| GLM-4.5V | novita | cheapest | 0.6 | 1.8 | 65536 | 0.90 | 64 | Yes | No |
| GLM-4.5V | zai-org | fastest | - | - | - | 2.64 | 72 | No | No |
| cogito-671b-v2.1-FP8 | together |  | 1.25 | 1.25 | 163840 | 0.76 | 61 | No | Yes |
| DeepSeek-R1-Distill-Qwen-32B | novita | cheapest, fastest | 0.3 | 0.3 | 64000 | 0.56 | 73 | No | Yes |
| DeepSeek-R1-Distill-Qwen-32B | nscale |  | 0.3 | 0.3 | 131072 | 1.85 | 24 | No | Yes |
| Qwen3-235B-A22B | novita |  | 0.2 | 0.8 | 40960 | 1.14 | 35 | No | No |
| Qwen3-235B-A22B | nscale | cheapest | 0.2 | 0.6 | 32000 | 1.38 | 23 | Yes | Yes |
| Qwen3-235B-A22B | together |  | 0.2 | 0.6 | 40960 | 0.59 | 44 | Yes | Yes |
| Qwen3-235B-A22B | fireworks-ai | fastest | 0.22 | 0.88 | 131072 | 1.04 | 72 | Yes | No |
| GLM-4.6-FP8 | zai-org |  | - | - | - | 2.76 | 63 | Yes | No |
| gemma-2-9b-it | groq |  | - | - | - | - | - | - | - |
| gemma-2-9b-it | nebius |  | 0.03 | 0.09 | 8192 | 0.45 | 131 | No | Yes |
| DeepSeek-V3.1-Terminus | novita |  | 0.27 | 1 | 131072 | 1.16 | 32 | Yes | Yes |
| Qwen3-4B-Thinking-2507 | nscale |  | 0.01 | 0.03 | 262144 | 0.93 | 94 | Yes | No |
| Qwen3-30B-A3B | novita | cheapest | 0.09 | 0.45 | 40960 | 0.92 | 49 | No | No |
| Qwen3-30B-A3B | fireworks-ai | fastest | 0.15 | 0.6 | 131072 | 0.88 | 118 | Yes | No |
| L3-70B-Euryale-v2.1 | novita |  | 1.48 | 1.48 | 8192 | 0.90 | 41 | No | No |
| Qwen2.5-72B-Instruct | novita | cheapest | 0.38 | 0.4 | 32000 | 0.68 | 33 | Yes | No |
| Qwen2.5-72B-Instruct | hyperbolic |  | 0.4 | 0.4 | 131072 | 1.07 | 37 | Yes | No |
| Qwen2.5-72B-Instruct | together | fastest | 1.2 | 1.2 | 131072 | 0.49 | 71 | Yes | Yes |
| Qwen2.5-72B-Instruct | fireworks-ai |  | - | - | - | - | - | - | - |
| ERNIE-4.5-VL-424B-A47B-Base-PT | novita |  | 0.42 | 1.25 | 123000 | 1.15 | 43 | No | No |
| Llama-3.1-70B-Instruct | fireworks-ai |  | 0.9 | 0.9 | 131072 | 0.57 | 131 | No | No |
| Qwen3-VL-235B-A22B-Instruct | novita |  | 0.3 | 1.5 | 131072 | 1.01 | 46 | Yes | Yes |
| NVIDIA-Nemotron-Nano-12B-v2 | nebius |  | 0.07 | 0.2 | 131072 | 0.49 | 118 | No | No |
| GLM-4.5V-FP8 | zai-org |  | - | - | - | 2.55 | 68 | Yes | No |
| Qwen3-Next-80B-A3B-Thinking | novita |  | 0.15 | 1.5 | 131072 | 0.94 | 105 | Yes | No |
| Qwen3-Next-80B-A3B-Thinking | hyperbolic | cheapest, fastest | 0.3 | 0.3 | 262144 | 0.54 | 172 | No | No |
| Qwen3-Next-80B-A3B-Thinking | together |  | 0.15 | 1.5 | 262144 | 1.36 | 124 | Yes | Yes |
| Qwen2.5-VL-72B-Instruct | nebius |  | 0.25 | 0.75 | 32000 | 0.41 | 31 | No | Yes |
| Qwen2.5-VL-72B-Instruct | hyperbolic | cheapest | 0.6 | 0.6 | 32768 | 0.65 | 15 | No | No |
| Qwen2.5-VL-72B-Instruct | ovhcloud | fastest | - | - | - | 0.51 | 32 | No | Yes |
| DeepSeek-R1-Distill-Llama-70B | groq |  | - | - | - | - | - | - | - |
| DeepSeek-R1-Distill-Llama-70B | novita |  | 0.8 | 0.8 | 8192 | 1.51 | 31 | No | Yes |
| DeepSeek-R1-Distill-Llama-70B | sambanova | fastest | 0.7 | 1.4 | 131072 | 0.86 | 192 | No | No |
| DeepSeek-R1-Distill-Llama-70B | nscale | cheapest | 0.75 | 0.75 | 131072 | 0.97 | 16 | No | No |
| DeepSeek-R1-Distill-Llama-70B | scaleway |  | - | - | - | 0.70 | 24 | No | Yes |
| DeepSeek-R1-Distill-Llama-70B | ovhcloud |  | - | - | - | 0.38 | 30 | No | Yes |
| Llama-4-Maverick-17B-128E-Instruct-FP8 | novita | cheapest, fastest | 0.17 | 0.85 | 1048576 | 0.44 | 41 | No | No |
| Llama-4-Maverick-17B-128E-Instruct-FP8 | together |  | 0.27 | 0.85 | 1048576 | 0.47 | 35 | No | Yes |
| DeepSeek-R1-Distill-Llama-8B | novita |  | - | - | - | - | - | - | - |
| DeepSeek-R1-Distill-Llama-8B | nscale |  | 0.05 | 0.05 | 131072 | 0.91 | 57 | No | No |
| c4ai-command-a-03-2025 | cohere |  | - | - | - | 0.46 | 49 | Yes | No |
| Qwen3-235B-A22B-Thinking-2507 | novita |  | 0.3 | 3 | 131072 | 0.94 | 20 | Yes | No |
| Qwen3-235B-A22B-Thinking-2507 | cerebras |  | - | - | - | - | - | - | - |
| Qwen3-235B-A22B-Thinking-2507 | fireworks-ai | cheapest, fastest | 0.22 | 0.88 | 262144 | 0.84 | 43 | Yes | No |
| Llama-Guard-4-12B | groq |  | 0.2 | 0.2 | 131072 | 0.33 | 6 | No | No |
| Arch-Router-1.5B | hf-inference |  | - | - | - | 0.22 | 63 | No | Yes |
| Qwen3-30B-A3B-Thinking-2507 | nebius |  | 0.1 | 0.3 | 262144 | 0.44 | 101 | Yes | Yes |
| ERNIE-4.5-0.3B-PT | novita |  | - | - | 120000 | 0.89 | 104 | No | No |
| ERNIE-4.5-VL-28B-A3B-PT | novita |  | 0.14 | 0.56 | 30000 | 0.89 | 75 | No | No |
| Baichuan-M2-32B | novita |  | 0.07 | 0.07 | 131072 | 1.67 | 42 | No | Yes |
| DeepSeek-R1-0528 | novita |  | 0.7 | 2.5 | 163840 | 0.97 | 28 | Yes | No |
| DeepSeek-R1-0528 | nebius | cheapest | 0.8 | 2.4 | 163840 | 0.43 | 27 | Yes | Yes |
| DeepSeek-R1-0528 | sambanova | fastest | 5 | 7 | 131072 | 1.61 | 185 | Yes | Yes |
| DeepSeek-R1-0528 | hyperbolic |  | 3 | 3 | 163840 | 0.89 | 48 | No | No |
| DeepSeek-R1-0528 | together |  | 3 | 7 | 163840 | 0.65 | 55 | No | Yes |
| DeepSeek-R1-0528 | fireworks-ai |  | 3 | 8 | 163840 | 0.92 | 74 | Yes | No |
| GLM-4.1V-9B-Thinking | novita |  | 0.035 | 0.138 | 65536 | 0.73 | 70 | No | No |
| Meta-Llama-3-70B-Instruct | groq |  | - | - | - | - | - | - | - |
| Meta-Llama-3-70B-Instruct | novita |  | 0.51 | 0.74 | 8192 | 0.80 | 26 | No | Yes |
| Meta-Llama-3-70B-Instruct | hyperbolic | cheapest | 0.4 | 0.4 | 8192 | 0.78 | 101 | No | No |
| Meta-Llama-3-70B-Instruct | together | fastest | 0.88 | 0.88 | 8192 | 0.45 | 109 | No | Yes |
| Qwen2.5-VL-32B-Instruct | fireworks-ai |  | 0.22 | 0.88 | 128000 | 0.56 | 58 | No | No |
| Qwen3-Coder-480B-A35B-Instruct-FP8 | together |  | 2 | 2 | 262144 | 0.65 | 47 | Yes | Yes |
| GLM-4.5-Air-FP8 | together |  | 0.2 | 1.1 | 131072 | 0.79 | 104 | Yes | Yes |
| Llama-3.1-405B-Instruct | sambanova |  | - | - | - | - | - | - | - |
| Llama-3.1-405B-Instruct | fireworks-ai |  | 3 | 3 | 131072 | 0.60 | 63 | Yes | No |
| AFM-4.5B | together |  | 0.048 | 0.15 | 65536 | 0.73 | 166 | No | Yes |
| command-a-translate-08-2025 | cohere |  | - | - | - | 0.30 | 62 | Yes | No |
| c4ai-command-r7b-arabic-02-2025 | cohere |  | - | - | - | 0.35 | 67 | No | No |
| aya-expanse-32b | cohere |  | - | - | - | 0.31 | 42 | No | No |
| Hermes-2-Pro-Llama-3-8B | novita |  | 0.14 | 0.14 | 8192 | 0.60 | 82 | No | No |
| MiniMax-M1-80k | novita |  | 0.55 | 2.2 | 1000000 | 2.20 | 54 | Yes | No |
| L3-8B-Stheno-v3.2 | novita |  | 0.05 | 0.05 | 8192 | 0.87 | 66 | No | No |
| ERNIE-4.5-21B-A3B-PT | novita |  | 0.07 | 0.28 | 120000 | 1.04 | 80 | No | No |
| Apertus-70B-Instruct-2509 | publicai |  | - | - | - | 1.00 | 52 | No | Yes |
| GLM-4-32B-0414 | novita |  | 0.55 | 1.66 | 32000 | 0.94 | 37 | No | No |
| DeepSeek-Prover-V2-671B | novita |  | 0.7 | 2.5 | 160000 | 0.81 | 15 | No | No |
| cogito-v2-preview-llama-70B | together |  | 0.88 | 0.88 | 32768 | 1.23 | 51 | Yes | Yes |
| Mixtral-8x22B-Instruct-v0.1 | nscale | cheapest | 1.2 | 1.2 | 65536 | 1.03 | 25 | No | Yes |
| Mixtral-8x22B-Instruct-v0.1 | together |  | - | - | - | - | - | - | - |
| Mixtral-8x22B-Instruct-v0.1 | fireworks-ai | fastest | 1.2 | 1.2 | 65536 | 0.43 | 69 | No | No |
| L3-8B-Lunaris-v1 | novita |  | 0.05 | 0.05 | 8192 | 0.77 | 74 | No | No |
| cogito-v2-preview-llama-405B | together |  | 3.5 | 3.5 | 32768 | 0.89 | 26 | Yes | Yes |
| cogito-v2-preview-llama-109B-MoE | together |  | 0.18 | 0.59 | 32767 | 0.56 | 86 | Yes | Yes |
| command-a-reasoning-08-2025 | cohere |  | - | - | - | 1.24 | 51 | Yes | No |
| command-a-vision-07-2025 | cohere |  | - | - | - | 0.28 | 65 | No | No |
| aya-vision-32b | cohere |  | - | - | - | 0.37 | 57 | No | No |
| c4ai-command-r-08-2024 | cohere |  | - | - | - | 0.36 | 52 | Yes | No |
| c4ai-command-r7b-12-2024 | cohere |  | - | - | - | 0.28 | 115 | Yes | No |
| ERNIE-4.5-300B-A47B-Base-PT | novita |  | 0.28 | 1.1 | 123000 | 0.97 | 27 | No | Yes |
| Hermes-3-Llama-3.1-70B | hyperbolic |  | 0.4 | 0.4 | 12288 | 0.43 | 30 | No | No |
| Llama-3_1-Nemotron-Ultra-253B-v1 | nebius |  | 0.6 | 1.8 | 131072 | 0.42 | 39 | No | Yes |
| Qwen2.5-Coder-7B | nebius |  | 0.03 | 0.09 | 32768 | 0.39 | 195 | No | Yes |
| Hermes-4-405B | nebius |  | 1 | 3 | 131072 | 0.41 | 32 | No | No |
| Hermes-4-70B | nebius |  | 0.13 | 0.4 | 131072 | 0.51 | 82 | No | No |
| Dobby-Unhinged-Llama-3.3-70B | fireworks-ai |  | 0.9 | 0.9 | 131072 | 0.43 | 50 | No | No |
| Gemma-SEA-LION-v4-27B-IT | publicai |  | - | - | - | 2.21 | 47 | No | Yes |
| Qwen2.5-Coder-3B-Instruct | nscale |  | 0.01 | 0.03 | 32768 | 0.76 | 72 | No | Yes |
| Qwen3-235B-A22B-FP8 | together |  | 0.2 | 0.6 | 40960 | 0.60 | 45 | Yes | Yes |
| marin-8b-instruct | together |  | 0.18 | 0.18 | 4096 | 0.27 | 178 | No | Yes |
| WizardLM-2-8x22B | novita |  | 0.62 | 0.62 | 65535 | 0.50 | 45 | No | No |
| Llama-3.3-Swallow-70B-Instruct-v0.4 | sambanova |  | 0.6 | 1.2 | 131072 | 2.76 | 152 | No | Yes |
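
As a rough illustration of how a row above is typically consumed, the sketch below routes one request to a specific provider with `huggingface_hub`'s `InferenceClient` and converts the table's per-1M-token prices into an estimated per-request cost. The provider name and the 0.6 / 2.5 $/1M prices come from the Kimi-K2-Thinking novita row; the repo ID `moonshotai/Kimi-K2-Thinking`, the `HF_TOKEN` environment variable, and the prompt are illustrative assumptions rather than values from this page.

```python
# Minimal sketch, assuming huggingface_hub with provider routing is installed
# and an HF_TOKEN environment variable holds a Hugging Face access token.
import os

from huggingface_hub import InferenceClient

# Provider name taken from the table ("novita", tagged cheapest for this model).
client = InferenceClient(provider="novita", api_key=os.environ["HF_TOKEN"])

completion = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Thinking",  # assumed repo ID for the "Kimi-K2-Thinking" row
    messages=[{"role": "user", "content": "Summarize the trade-off between latency and throughput."}],
    max_tokens=512,
)
print(completion.choices[0].message.content)

# Cost estimate from the novita row: $0.6 per 1M input tokens, $2.5 per 1M output tokens.
INPUT_PRICE_PER_M = 0.6
OUTPUT_PRICE_PER_M = 2.5

usage = completion.usage  # prompt/completion token counts, when the provider reports them
cost = (
    usage.prompt_tokens * INPUT_PRICE_PER_M / 1_000_000
    + usage.completion_tokens * OUTPUT_PRICE_PER_M / 1_000_000
)
print(f"~${cost:.6f} for {usage.prompt_tokens} input + {usage.completion_tokens} output tokens")
```

The same call with `provider="together"` would instead hit the row tagged fastest for this model, and the cost estimate would then use that row's prices.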