---
base_model: Qwen/Qwen-Image-Edit-2509
base_model_relation: quantized
datasets:
- mit-han-lab/svdquant-datasets
language:
- en
library_name: diffusers
license: apache-2.0
pipeline_tag: image-to-image
tags:
- image-editing
- SVDQuant
- Qwen-Image-Edit-2509
- Diffusion
- Quantization
- ICLR2025
---

<p align="center" style="border-radius: 10px">
  <img src="https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/nunchaku_v2.png" width="30%" alt="Nunchaku Logo"/>
</p>

<div align="center">
  <a href="https://discord.gg/Wk6PnwX9Sm" target="_blank"><img src="https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fdiscord.com%2Fapi%2Finvites%2FWk6PnwX9Sm%3Fwith_counts%3Dtrue&query=%24.approximate_member_count&logo=discord&logoColor=white&label=Discord&color=green&suffix=%20total" height="22px"></a>
  <a href="https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/wechat.jpg" target="_blank"><img src="https://img.shields.io/badge/WeChat-07C160?logo=wechat&logoColor=white" height="22px"></a>
</div>

# Model Card for nunchaku-qwen-image-edit-2509

This repository contains Nunchaku-quantized versions of [Qwen-Image-Edit-2509](https://huggingface.co/Qwen/Qwen-Image-Edit-2509), an image-editing model built on [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image) and its advances in complex text rendering. The quantized models are optimized for efficient inference with minimal loss in output quality.

## News

- [2025-09-25] 🔥 Release **4-bit [4/8-step Lightning Qwen-Image-Edit-2509](https://huggingface.co/lightx2v/Qwen-Image-Lightning)** models!
- [2025-09-24] 🚀 Release 4-bit SVDQuant-quantized Qwen-Image-Edit-2509 models with ranks 32 and 128!

## Model Details

### Model Description

- **Developed by:** Nunchaku Team
- **Model type:** image-to-image
- **License:** apache-2.0
- **Quantized from model:** [Qwen-Image-Edit-2509](https://huggingface.co/Qwen/Qwen-Image-Edit-2509)

### Model Files

- [`svdq-int4_r32-qwen-image-edit-2509.safetensors`](./svdq-int4_r32-qwen-image-edit-2509.safetensors): SVDQuant INT4 (rank 32) Qwen-Image-Edit-2509 model. For users with non-Blackwell GPUs (pre-50-series).
- [`svdq-int4_r128-qwen-image-edit-2509.safetensors`](./svdq-int4_r128-qwen-image-edit-2509.safetensors): SVDQuant INT4 (rank 128) Qwen-Image-Edit-2509 model. For users with non-Blackwell GPUs (pre-50-series). It offers better quality than the rank-32 model, but is slower.
- [`svdq-int4_r32-qwen-image-edit-2509-lightningv2.0-4steps.safetensors`](./svdq-int4_r32-qwen-image-edit-2509-lightningv2.0-4steps.safetensors): SVDQuant INT4 (rank 32) 4-step Qwen-Image-Edit-2509 model, created by fusing [Qwen-Image-Lightning-4steps-V2.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Lightning-4steps-V2.0-bf16.safetensors) at LoRA strength 1.0. For users with non-Blackwell GPUs (pre-50-series).
- [`svdq-int4_r128-qwen-image-edit-2509-lightningv2.0-4steps.safetensors`](./svdq-int4_r128-qwen-image-edit-2509-lightningv2.0-4steps.safetensors): SVDQuant INT4 (rank 128) 4-step Qwen-Image-Edit-2509 model, created by fusing [Qwen-Image-Lightning-4steps-V2.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Lightning-4steps-V2.0-bf16.safetensors) at LoRA strength 1.0. For users with non-Blackwell GPUs (pre-50-series).
- [`svdq-int4_r32-qwen-image-edit-2509-lightningv2.0-8steps.safetensors`](./svdq-int4_r32-qwen-image-edit-2509-lightningv2.0-8steps.safetensors): SVDQuant INT4 (rank 32) 8-step Qwen-Image-Edit-2509 model, created by fusing [Qwen-Image-Lightning-8steps-V2.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Lightning-8steps-V2.0-bf16.safetensors) at LoRA strength 1.0. For users with non-Blackwell GPUs (pre-50-series).
- [`svdq-int4_r128-qwen-image-edit-2509-lightningv2.0-8steps.safetensors`](./svdq-int4_r128-qwen-image-edit-2509-lightningv2.0-8steps.safetensors): SVDQuant INT4 (rank 128) 8-step Qwen-Image-Edit-2509 model, created by fusing [Qwen-Image-Lightning-8steps-V2.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Lightning-8steps-V2.0-bf16.safetensors) at LoRA strength 1.0. For users with non-Blackwell GPUs (pre-50-series).
- [`svdq-fp4_r32-qwen-image-edit-2509.safetensors`](./svdq-fp4_r32-qwen-image-edit-2509.safetensors): SVDQuant NVFP4 (rank 32) Qwen-Image-Edit-2509 model. For users with Blackwell GPUs (50-series).
- [`svdq-fp4_r128-qwen-image-edit-2509.safetensors`](./svdq-fp4_r128-qwen-image-edit-2509.safetensors): SVDQuant NVFP4 (rank 128) Qwen-Image-Edit-2509 model. For users with Blackwell GPUs (50-series). It offers better quality than the rank-32 model, but is slower.
- [`svdq-fp4_r32-qwen-image-edit-2509-lightningv2.0-4steps.safetensors`](./svdq-fp4_r32-qwen-image-edit-2509-lightningv2.0-4steps.safetensors): SVDQuant NVFP4 (rank 32) 4-step Qwen-Image-Edit-2509 model, created by fusing [Qwen-Image-Lightning-4steps-V2.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Lightning-4steps-V2.0-bf16.safetensors) at LoRA strength 1.0. For users with Blackwell GPUs (50-series).
- [`svdq-fp4_r128-qwen-image-edit-2509-lightningv2.0-4steps.safetensors`](./svdq-fp4_r128-qwen-image-edit-2509-lightningv2.0-4steps.safetensors): SVDQuant NVFP4 (rank 128) 4-step Qwen-Image-Edit-2509 model, created by fusing [Qwen-Image-Lightning-4steps-V2.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Lightning-4steps-V2.0-bf16.safetensors) at LoRA strength 1.0. For users with Blackwell GPUs (50-series).
- [`svdq-fp4_r32-qwen-image-edit-2509-lightningv2.0-8steps.safetensors`](./svdq-fp4_r32-qwen-image-edit-2509-lightningv2.0-8steps.safetensors): SVDQuant NVFP4 (rank 32) 8-step Qwen-Image-Edit-2509 model, created by fusing [Qwen-Image-Lightning-8steps-V2.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Lightning-8steps-V2.0-bf16.safetensors) at LoRA strength 1.0. For users with Blackwell GPUs (50-series).
- [`svdq-fp4_r128-qwen-image-edit-2509-lightningv2.0-8steps.safetensors`](./svdq-fp4_r128-qwen-image-edit-2509-lightningv2.0-8steps.safetensors): SVDQuant NVFP4 (rank 128) 8-step Qwen-Image-Edit-2509 model, created by fusing [Qwen-Image-Lightning-8steps-V2.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Lightning-8steps-V2.0-bf16.safetensors) at LoRA strength 1.0. For users with Blackwell GPUs (50-series).
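
If you are unsure which family of files applies to your hardware, the deciding factor is GPU architecture. The sketch below is a rough heuristic, not nunchaku's official API: `pick_precision` is a hypothetical helper introduced here for illustration, and it assumes PyTorch with a working CUDA device.

```python
# Rough heuristic for choosing between the INT4 and NVFP4 files.
# pick_precision() is a hypothetical helper for illustration only.
# Blackwell GPUs (compute capability 10.x for B100/B200, 12.x for the
# RTX 50-series) support NVFP4; older architectures should use INT4.
import torch

def pick_precision() -> str:
    major, _minor = torch.cuda.get_device_capability()
    return "fp4" if major >= 10 else "int4"

precision = pick_precision()
print(f"svdq-{precision}_r32-qwen-image-edit-2509.safetensors")
```

Choosing between rank 32 and rank 128 is then a quality/speed trade-off: the higher-rank low-rank branch absorbs more outliers and tracks the 16-bit model more closely, at the cost of extra compute per step.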
### Model Sources

- **Inference Engine:** [nunchaku](https://github.com/nunchaku-tech/nunchaku)
- **Quantization Library:** [deepcompressor](https://github.com/nunchaku-tech/deepcompressor)
- **Paper:** [SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models](http://arxiv.org/abs/2411.05007)
- **Demo:** [svdquant.mit.edu](https://svdquant.mit.edu)

## Usage

- Diffusers Usage: See [qwen-image-edit-2509.py](https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit-2509.py), or the sketch after this list. Check this [tutorial](https://nunchaku.tech/docs/nunchaku/usage/qwen-image-edit.html) for more advanced usage.
- ComfyUI Usage: See [nunchaku-qwen-image-edit-2509.json](https://nunchaku.tech/docs/ComfyUI-nunchaku/workflows/qwenimage.html#nunchaku-qwen-image-edit-2509-json).
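
For quick reference, here is a minimal sketch of the Diffusers path, condensed from the linked example. The class and helper names (`NunchakuQwenImageTransformer2DModel`, `get_precision`) are assumed to match the example script and your installed nunchaku version; if they differ, defer to [qwen-image-edit-2509.py](https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit-2509.py).

```python
# Minimal sketch, condensed from the linked example script. Class/helper
# names are assumed to match your installed nunchaku version.
import torch
from diffusers import QwenImageEditPlusPipeline
from diffusers.utils import load_image
from nunchaku import NunchakuQwenImageTransformer2DModel
from nunchaku.utils import get_precision

# get_precision() returns "int4" on pre-Blackwell GPUs, "fp4" on Blackwell.
precision = get_precision()
transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
    f"nunchaku-tech/nunchaku-qwen-image-edit-2509/svdq-{precision}_r32-qwen-image-edit-2509.safetensors"
)
pipeline = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("input.png")  # the image you want to edit
result = pipeline(
    image=[image],  # the 2509 pipeline accepts a list of input images
    prompt="Replace the background with a snowy mountain landscape",
    num_inference_steps=40,
    true_cfg_scale=4.0,
).images[0]
result.save("output.png")
```

If you load one of the Lightning-fused checkpoints instead, match the step count to the file (`num_inference_steps=4` or `8`) and set `true_cfg_scale=1.0`, since the distilled models are typically run without classifier-free guidance.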

## Performance

## Citation

```bibtex
@inproceedings{
  li2024svdquant,
  title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
  author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
```