This model is an RTN/W8A8-FP8 quantization of the original MistralAI/Devstral-small-2507 24B model. We reduced the number of shards from 10 to 6, making it practically usable on a single-GPU node.
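For readers unfamiliar with the scheme: RTN (round-to-nearest) FP8 quantization scales a weight tensor so its largest magnitude maps onto the FP8 range, then rounds each value to the nearest representable FP8 number. The sketch below is purely illustrative (it is not the script used to produce this checkpoint) and simulates per-tensor RTN quantization to the E4M3 format (4 exponent bits, 3 mantissa bits, max finite value 448) in plain Python:

```python
import math

E4M3_MAX = 448.0      # largest finite E4M3 value
E4M3_MIN_EXP = -6     # smallest normal-number exponent in E4M3

def fp8_e4m3_round(x: float) -> float:
    """Round a float to the nearest representable E4M3 value (saturating)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    a = min(abs(x), E4M3_MAX)                  # saturate instead of overflowing
    exp = max(math.floor(math.log2(a)), E4M3_MIN_EXP)
    step = 2.0 ** (exp - 3)                    # 3 mantissa bits -> 8 steps per octave
    return sign * min(round(a / step) * step, E4M3_MAX)

def rtn_fp8_quantize(weights: list[float]) -> tuple[list[float], float]:
    """Per-tensor RTN: scale so the max magnitude hits E4M3_MAX, then round."""
    scale = max(abs(w) for w in weights) / E4M3_MAX
    return [fp8_e4m3_round(w / scale) for w in weights], scale

# Dequantize by multiplying back: w ~ q * scale.  Values that land exactly on
# E4M3 grid points after scaling round-trip losslessly.
q, scale = rtn_fp8_quantize([0.5, -1.0, 2.0])
print([v * scale for v in q])
```

In a real W8A8-FP8 deployment both weights and activations are stored as hardware FP8 tensors (e.g. `torch.float8_e4m3fn`) rather than simulated floats, but the rounding behavior is the same.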

We tested it on the following consumer-grade hardware.

Test machine:

- CPU: AMD Ryzen 7 7700X (8 cores)
- GPU: AMD Radeon RX 9070 XT, 16 GB
- RAM: 64 GB (2× 32 GB DDR5-5200)
- Storage: 2× 1 TB M.2 NVMe PCIe 5.0

Format: Safetensors · Model size: 24B params · Tensor types: BF16, F8_E4M3
Model: Imagilux/Devstral-Small-2507-FP8