---
library_name: sana
tags:
- text-to-image
- Sana
- 1024px_based_image_size
- Multi-language
language:
- en
- zh
base_model:
- Efficient-Large-Model/Sana_1600M_1024px_MultiLing
pipeline_tag: text-to-image
---

<p align="center" style="border-radius: 10px">
<img src="https://raw.githubusercontent.com/NVlabs/Sana/refs/heads/main/asset/logo.png" width="35%" alt="logo"/>
</p>

<div style="display:flex;justify-content: center">
<a href="https://huggingface.co/collections/Efficient-Large-Model/sana-673efba2a57ed99843f11f9e"><img src="https://img.shields.io/static/v1?label=Demo&message=Huggingface&color=yellow"></a>
<a href="https://github.com/NVlabs/Sana"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github"></a>
<a href="https://nvlabs.github.io/Sana/"><img src="https://img.shields.io/static/v1?label=Project&message=Github&color=blue&logo=github-pages"></a>
<a href="https://hanlab.mit.edu/projects/sana/"><img src="https://img.shields.io/static/v1?label=Page&message=MIT&color=darkred&logo=github-pages"></a>
<a href="https://arxiv.org/abs/2410.10629"><img src="https://img.shields.io/static/v1?label=Arxiv&message=Sana&color=red&logo=arxiv"></a>
<a href="https://nv-sana.mit.edu/"><img src="https://img.shields.io/static/v1?label=Demo&message=MIT&color=yellow"></a>
<a href="https://discord.gg/rde6eaE5Ta"><img src="https://img.shields.io/static/v1?label=Discuss&message=Discord&color=purple&logo=discord"></a>
</div>

# Model card

We introduce **Sana**, a text-to-image framework that can efficiently generate images up to 4096 × 4096 resolution.
Sana synthesizes high-resolution, high-quality images with strong text-image alignment at remarkably high speed, and it can be deployed on a laptop GPU.

Source code is available at https://github.com/NVlabs/Sana.

## Compare with base model

| Model                                                                                | Language                |
|--------------------------------------------------------------------------------------|-------------------------|
| [Sana_1600M_1024px](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px)   | English                 |
| Sana_1600M_1024px_MultiLing                                                            | English, Chinese, Emoji |

| Model | Sample-1 | Sample-2 | Sample-3 | Sample-4 |
|---|---|---|---|---|
| [Sana_1600M_1024px](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px) | <img src="assets/π― η©Ώη π εΉ π·0.jpg" width=256> | <img src="assets/η« Wearing πΆ flying on the ε½©θΉ with πΉ in the βοΈ0.jpg" width=256> | <img src="assets/π¦ teaching π― to catch π¦0.jpg" width=256> | <img src="assets/ιθ² π δΈηιΏε, traditional Chinese style0.jpg" width=256> |
| Sana_1600M_1024px_MultiLing | <img src="assets/π― η©Ώη π εΉ π·1.jpg" width=256> | <img src="assets/η« Wearing πΆ flying on the ε½©θΉ with πΉ in the βοΈ1.jpg" width=256> | <img src="assets/π¦ teaching π― to catch π¦1.jpg" width=256> | <img src="assets/ιθ² π δΈηιΏε, traditional Chinese style1.jpg" width=256> |
| Prompt | π― η©Ώη π εΉ π· | η« Wearing πΆ flying on the ε½©θΉ with πΉ in the βοΈ | π¦ teaching π― to catch π¦ | ιθ² π δΈηιΏε, traditional Chinese style |

### Model Description

- **Developed by:** NVIDIA, Sana
- **Model type:** Linear-Diffusion-Transformer-based text-to-image generative model
- **Model size:** 1648M parameters
- **Model resolution:** This model is developed to generate 1024px-based images with multi-scale height and width.
- **License:** [CC BY-NC-SA 4.0 License](./LICENSE.txt)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts.
  It is a Linear Diffusion Transformer that uses one fixed, pretrained text encoder ([Gemma2-2B-IT](https://huggingface.co/google/gemma-2-2b-it))
  and one 32x spatially compressed latent feature encoder ([DC-AE](https://hanlab.mit.edu/projects/dc-ae)); see the quick latent-grid check below.
- **Special:** This model is fine-tuned from the base model [Efficient-Large-Model/Sana_1600M_1024px](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px) and supports Emoji, Chinese, English, and mixed prompts.
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/NVlabs/Sana) and the [Sana report on arXiv](https://arxiv.org/abs/2410.10629).

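To make the 32x compression concrete, the snippet below is a quick back-of-envelope check of the latent grid the transformer operates on; the token count is simple arithmetic from the figures above, not an official specification.

```python
# Rough sanity check (derived from the stated 1024px base resolution and the
# 32x spatial compression of DC-AE, not an official spec).
image_size = 1024            # px, base resolution of this checkpoint
compression = 32             # DC-AE spatial compression factor
latent_hw = image_size // compression
print(f"latent grid: {latent_hw} x {latent_hw} = {latent_hw * latent_hw} tokens")
# -> latent grid: 32 x 32 = 1024 tokens
```
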
### Model Sources

For research purposes, we recommend our `Sana` GitHub repository (https://github.com/NVlabs/Sana),
which is suitable for both training and inference and integrates advanced diffusion samplers such as Flow-DPM-Solver.
[MIT Han-Lab](https://nv-sana.mit.edu/) provides free Sana inference.

- **Repository:** https://github.com/NVlabs/Sana
- **Demo:** https://nv-sana.mit.edu/

### 🧨 Diffusers

Support is being developed in the following pull requests: [Sana](https://github.com/huggingface/diffusers/pull/9982) and [DC-AE](https://github.com/huggingface/diffusers/pull/9708). A tentative usage sketch follows.

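The snippet below is a minimal sketch of what inference could look like once the pull requests above are merged. The `SanaPipeline` class, the exact weight repository to load, and the argument defaults are assumptions based on the in-progress PRs and may change; check the diffusers documentation once support lands.

```python
# Minimal sketch, assuming the in-progress diffusers PR exposes a `SanaPipeline`.
# Class name, weight location, and defaults may differ after the PR is merged.
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_MultiLing",  # may require a diffusers-format variant of this repo
    torch_dtype=torch.float16,
).to("cuda")

# The MultiLing checkpoint accepts mixed English, Chinese, and emoji prompts,
# e.g. "A cat wearing sunglasses surfing on a rainbow".
prompt = "一只戴着太阳镜的猫 🐱 surfing on a rainbow 🌈"
image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    guidance_scale=4.5,
    num_inference_steps=20,
).images[0]
image.save("sana_multilingual_sample.png")
```
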
## Uses

### Direct Use

The model is intended for research purposes only. Possible research areas and tasks include:

- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.

Excluded uses are described below.

### Out-of-Scope Use

The model was not trained to produce factual or truthful representations of people or events, so using it to generate such content is out of scope for this model's abilities.

## Limitations and Bias

### Limitations

- The model does not achieve perfect photorealism.
- The model cannot render complex legible text.
- Fingers and other fine details may, in general, not be generated properly.
- The autoencoding part of the model is lossy.

### Bias

While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.