---
license: apache-2.0
base_model:
- Qwen/Qwen3-VL-2B-Instruct
tags:
- autoround
- nvfp4
---

This is [Qwen/Qwen3-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-2B-Instruct) quantized with [AutoRound](https://github.com/intel/auto-round/tree/main/auto_round) to W4A16 (GPTQ format). The model was created, tested, and evaluated by The Kaitchup.

The model is *NOT* compatible with vLLM (as of v0.11).

- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **License:** Apache 2.0

## How to Support My Work

Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). This helps me continue quantizing and evaluating models for free.

Or, if you prefer to donate some GPU hours, "[buy me a coffee](https://ko-fi.com/bnjmn_marie)".