nalexand committed · Commit 0c831fb · verified · Parent(s): 19fc1b1

Update README.md

Files changed (1): README.md (+12, -8)
README.md CHANGED
@@ -72,15 +72,19 @@ https://github.com/user-attachments/assets/154df173-88d3-4ad1-b543-f7410380b13a
  # 13 frames (max): 63.88 s/it (*FP8), vae decode 284 sec

  # Compared to ComfyUI
+                                  ComfyUI (fp8)   This (fp16) + optimized vae
+ 1120*630, 33 frames * 16 steps:  1470 sec        85.10 s/it * 16 = 1362 sec
+ vae decode:                      +117 sec        +58 sec
+ total:                           1587 sec        1420 sec (1.12x faster)

+ This (*fp8) + optimized vae
+ 76.49 s/it * 16 = 1224 sec
+ +58 sec
+ 1282 sec (1.24x faster!!)
+
  *fp8 - the 3070 Ti doesn't support calculations in fp8; weights are loaded in fp8 and converted to fp16 "on the fly" for computation

  Visually, it is hard to notice a difference in quality between fp8 and fp16.
+
+ To try FP16: download the original model, convert it with convert_files / optimize_files, put the resulting safetensors in ./Wan2.2-I2V-A14B/(low_noise_model/high_noise_model), and switch "self.load_as_fp8 = True" to False in image2videolocal.py
+ more details: [https://github.com/nalexand/Wan2.2](https://github.com/nalexand/Wan2.2)
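
As a cross-check of the comparison table, the speedup figures follow directly from the per-iteration times; a quick sketch of that arithmetic (values copied from the table above, rounding is mine):

```python
# Speedup arithmetic behind the comparison table.
comfy_fp8 = 1470 + 117          # ComfyUI fp8: sampling + vae decode = 1587 sec
this_fp16 = 85.10 * 16 + 58     # 1361.6 + 58 ~= 1420 sec
this_fp8  = 76.49 * 16 + 58     # 1223.8 + 58 ~= 1282 sec

print(f"fp16 path: {comfy_fp8 / this_fp16:.2f}x faster")  # ~1.12x
print(f"fp8 path:  {comfy_fp8 / this_fp8:.2f}x faster")   # ~1.24x
```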
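The *fp8 note describes weight-only fp8 storage with fp16 compute. A minimal sketch of that scheme, assuming PyTorch >= 2.1 (for torch.float8_e4m3fn); linear_fp8 is a hypothetical helper, not code from this repo:

```python
import torch

# Weights are stored in fp8 to roughly halve VRAM vs. fp16; since the 3070 Ti
# has no fp8 math units, each op up-casts the weight to fp16 just before use.
w_fp8 = torch.randn(4096, 4096).to(torch.float8_e4m3fn)  # stored copy
x = torch.randn(1, 4096, dtype=torch.float16)

def linear_fp8(x, w_fp8):
    w = w_fp8.to(torch.float16)  # "on the fly" up-cast, freed after the op
    return x @ w.t()

y = linear_fp8(x, w_fp8)  # math runs in fp16, storage stays fp8
```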
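And the flag flip from the last added line, as I read it; the surrounding class in image2videolocal.py is not shown on this page, so its shape here is an assumption:

```python
# Hypothetical sketch of the toggle in image2videolocal.py:
class Image2VideoLocal:
    def __init__(self):
        # True  -> load fp8 weights (repo default, per the README)
        # False -> load the converted fp16 safetensors from
        #          ./Wan2.2-I2V-A14B/(low_noise_model|high_noise_model)
        self.load_as_fp8 = False
```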