This is an INT4-quantized version of the meta-llama/Llama-3.1-8B-Instruct model. The following Python packages were used to create it:

openvino==2025.0.0
optimum==1.24.0
optimum-intel==1.22.0
nncf==2.15.0
torch==2.6.0
transformers==4.48.3
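
These pinned versions can be installed with pip:

pip install openvino==2025.0.0 optimum==1.24.0 optimum-intel==1.22.0 nncf==2.15.0 torch==2.6.0 transformers==4.48.3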

The quantized model was created with the following command:

optimum-cli export openvino --model "meta-llama/Llama-3.1-8B-Instruct" --weight-format int4 --group-size 128 --sym --ratio 1 --all-layers ./llama-3_1-8b-instruct-ov-int4

For more details, run the following command from your Python environment: optimum-cli export openvino --help
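
The same weight-only quantization can also be performed from Python via the optimum-intel API. The following is a minimal sketch that mirrors the CLI flags above (INT4, symmetric, group size 128, ratio 1, all layers); it is not the exact script used to produce this model:

from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"

# Mirror the CLI flags: --weight-format int4 --group-size 128 --sym --ratio 1 --all-layers
quantization_config = OVWeightQuantizationConfig(
    bits=4,
    sym=True,
    group_size=128,
    ratio=1.0,
    all_layers=True,
)

# export=True converts the PyTorch checkpoint to OpenVINO IR,
# quantizing the weights with the config above
model = OVModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    quantization_config=quantization_config,
)
model.save_pretrained("./llama-3_1-8b-instruct-ov-int4")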

During export, NNCF reported the following bitwidth distribution:

INFO:nncf:Statistics of the bitwidth distribution:

Num bits (N) | % all parameters (layers) | % ratio-defining parameters (layers)
4            | 100% (226 / 226)          | 100% (226 / 226)
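
To run the quantized model, load the exported directory with OVModelForCausalLM. A minimal inference sketch follows (the prompt is arbitrary; optimum-cli saves the tokenizer files into the export directory by default, otherwise load them from the original model id):

from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_dir = "./llama-3_1-8b-instruct-ov-int4"

# Load the OpenVINO IR model produced by the export command above
model = OVModelForCausalLM.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

# Build a prompt with the Llama 3.1 chat template
messages = [{"role": "user", "content": "What is INT4 weight quantization?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))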