---
frameworks:
- Pytorch
license: Apache License 2.0
tasks:
- text-to-image-synthesis
base_model:
- Qwen/Qwen-Image
base_model_relation: adapter
---
# Qwen-Image Precise Regional Control Model

## Model Introduction
This model is the V2 release of a precise regional control model trained on [Qwen-Image](https://www.modelscope.cn/models/Qwen/Qwen-Image). Structurally it is a LoRA: by supplying a text prompt and a region condition (a mask image) for each entity, it controls each entity's position and shape. Training was built on [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio), using the [Qwen-Image-Self-Generated-Dataset](https://www.modelscope.cn/datasets/DiffSynth-Studio/Qwen-Image-Self-Generated-Dataset).
Compared with the [V1](https://www.modelscope.cn/models/DiffSynth-Studio/Qwen-Image-EliGen) release, this version is trained on a dataset self-generated by Qwen-Image, so the style of its outputs matches the base model more closely.
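As a minimal sketch of what a region condition looks like (an assumption based on the example masks used below: binary RGB images whose white region marks the entity), such a mask can be drawn with PIL. `make_entity_mask` is a hypothetical helper for illustration, not part of DiffSynth-Studio:
```python
from PIL import Image, ImageDraw

def make_entity_mask(box, size=(1328, 1328)):
    # Hypothetical helper: a white rectangle on a black canvas marks the
    # entity's region; `box` is (left, top, right, bottom) in pixels.
    mask = Image.new("RGB", size, (0, 0, 0))
    ImageDraw.Draw(mask).rectangle(box, fill=(255, 255, 255))
    return mask

# e.g. confine an entity to the left half of a 1328x1328 canvas
left_mask = make_entity_mask((0, 0, 664, 1328))
```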
## Showcase
|Entity control condition|Generated image 1|Generated image 2|Generated image 3|
|-|-|-|-|
|||||
|||||
## Inference Code
Install DiffSynth-Studio from source:
```shell
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```
```python
import torch
from PIL import Image
from modelscope import dataset_snapshot_download, snapshot_download
from diffsynth.pipelines.qwen_image import QwenImagePipeline, ModelConfig

# Load the Qwen-Image base model (DiT, text encoder, VAE) and its tokenizer.
pipe = QwenImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="transformer/diffusion_pytorch_model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="text_encoder/model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="vae/diffusion_pytorch_model.safetensors"),
    ],
    tokenizer_config=ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="tokenizer/"),
)

# Download the EliGen V2 LoRA weights and attach them to the DiT.
snapshot_download("DiffSynth-Studio/Qwen-Image-EliGen-V2", local_dir="models/DiffSynth-Studio/Qwen-Image-EliGen-V2", allow_file_pattern="model.safetensors")
pipe.load_lora(pipe.dit, "models/DiffSynth-Studio/Qwen-Image-EliGen-V2/model.safetensors")

# Global prompt (Chinese): a promotional poster for the "Qwen-Image-EliGen Magic Café",
# featuring two magic coffees, one with flames and one with ice spikes, against light-blue
# mist, with the poster text "Qwen-Image-EliGen魔法咖啡厅" and "新品上市" ("new arrival").
global_prompt = "Qwen-Image-EliGen魔法咖啡厅的宣传海报,主体是两杯魔法咖啡,一杯冒着火焰,一杯冒着冰锥,背景是浅蓝色水雾,海报写着“Qwen-Image-EliGen魔法咖啡厅”、“新品上市”"
# One prompt per entity: the flaming cup, the icy cup, and the two pieces of poster text.
entity_prompts = ["一杯红色魔法咖啡,杯中火焰燃烧", "一杯红色魔法咖啡,杯中冰锥环绕", "字:“新品上市”", "字:“Qwen-Image-EliGen魔法咖啡厅”"]

# Download the example masks, one per entity, and resize them to the output resolution.
dataset_snapshot_download(dataset_id="DiffSynth-Studio/examples_in_diffsynth", local_dir="./", allow_file_pattern="data/examples/eligen/qwen-image/example_6/*.png")
masks = [Image.open(f"./data/examples/eligen/qwen-image/example_6/{i}.png").convert('RGB').resize((1328, 1328)) for i in range(len(entity_prompts))]

# Each entity is generated at the position and shape given by its mask.
image = pipe(
    prompt=global_prompt,
    seed=0,
    eligen_entity_prompts=entity_prompts,
    eligen_entity_masks=masks,
)
image.save("image.jpg")
```
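The downloaded example masks can also be replaced with masks drawn programmatically, e.g. via the `make_entity_mask` sketch above. This is an illustrative variation, reusing `pipe` and `global_prompt` from the block above; the layout coordinates are arbitrary:
```python
# Two entities placed side by side; masks should match the output resolution.
entity_prompts = ["一杯红色魔法咖啡,杯中火焰燃烧", "一杯红色魔法咖啡,杯中冰锥环绕"]
masks = [
    make_entity_mask((80, 200, 620, 1100)),    # left cup: flames
    make_entity_mask((708, 200, 1248, 1100)),  # right cup: ice spikes
]
image = pipe(
    prompt=global_prompt,
    seed=1,
    eligen_entity_prompts=entity_prompts,
    eligen_entity_masks=masks,
)
image.save("image_custom_masks.jpg")
```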
## Citation
If you find our work helpful, please consider citing it.
```
@article{zhang2025eligen,
  title={{EliGen}: Entity-Level Controlled Image Generation with Regional Attention},
  author={Zhang, Hong and Duan, Zhongjie and Wang, Xingjun and Chen, Yingda and Zhang, Yu},
  journal={arXiv preprint arXiv:2501.01097},
  year={2025}
}
```