Update README.md
README.md
CHANGED
@@ -36,7 +36,7 @@ It was evaluated on several tasks to assess its quality in comparison to t
### Model Optimizations
This model was obtained by quantizing the weights and activations of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) to the FP4 data type, ready for inference with vLLM>=0.9.1.
- This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately
+ This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%.
Only the weights and activations of the linear operators within transformer blocks are quantized using [LLM Compressor](https://github.com/vllm-project/llm-compressor).
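The diff does not include the quantization recipe itself, so the following is only a minimal sketch of how such a checkpoint is typically produced with LLM Compressor. The scheme name (`NVFP4`), the `ignore` list, and the calibration settings are assumptions, not details taken from this card.

```python
# Hypothetical reproduction sketch; the exact recipe is not shown in this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3-8B"
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Quantize weights and activations of the Linear operators only; the FP4
# scheme name ("NVFP4") and the ignore list are assumptions, not from this card.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

# One-shot calibration pass; the dataset and sample counts are illustrative.
oneshot(
    model=model,
    recipe=recipe,
    dataset="open_platypus",
    max_seq_length=2048,
    num_calibration_samples=512,
)

model.save_pretrained("Qwen3-8B-FP4", save_compressed=True)
tokenizer.save_pretrained("Qwen3-8B-FP4")
```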
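The ~75% figure added above follows from the bit-width ratio alone: 4 bits is 25% of 16 bits. A back-of-the-envelope check, ignoring quantization scales and any layers left unquantized:

```python
# Rough size estimate for an ~8B-parameter model; ignores quantization
# scale/metadata overhead and any layers that are not quantized.
params = 8e9                       # approximate parameter count of Qwen3-8B
bf16_gb = params * 16 / 8 / 1e9    # 16 bits per parameter -> ~16 GB
fp4_gb = params * 4 / 8 / 1e9      # 4 bits per parameter  -> ~4 GB
print(f"BF16 ~{bf16_gb:.0f} GB, FP4 ~{fp4_gb:.0f} GB, "
      f"saving {1 - fp4_gb / bf16_gb:.0%}")  # saving 75%
```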
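Since the card states the checkpoint is ready for inference with vLLM>=0.9.1, a minimal usage sketch follows. `Qwen3-8B-FP4` is a placeholder for the quantized repository's actual id, which this diff does not show.

```python
# Minimal vLLM inference sketch (requires vLLM >= 0.9.1 per the card).
# "Qwen3-8B-FP4" is a placeholder; substitute the actual quantized repo id.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen3-8B-FP4")
sampling = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Give a one-sentence summary of FP4 quantization."], sampling)
print(outputs[0].outputs[0].text)
```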