Thank you!
Thank you for your hard work and for sharing this and the other models in such a simple way; it feels like 'retro' SD/SDXL :)
I will be happy to help with anything, testing stuff or whatever, and I can give results in any format, or do whatever else helps your project...
... basic editing skills in most languages
... expert video/photo/lighting editing, recording, hardware...
... I believe I started with Comfy around September 2022 and never quit, so I know this and that...
NVIDIA GeForce RTX 4060 Ti, 16 GB VRAM / 32 GB RAM / Win 11
Sorry if this is not the right place to offer help; as I said, I am more of a graphics person.
I do know someone was asking about which keywords can trigger certain behaviors.
- Research all the LoRAs added (from the metadata visible on the file on Hugging Face) and create a document that includes all the keywords and phrases, along with the origin URL for each. I haven't been able to find all of them, but I think they are all on Civitai. (Civitai drives me nuts because it hides the model filename unless you click download.)
- Camera movement guide. This might be limited due to the following issue:
- Find a way to combat slow-motion videos. For instance, if you try to make someone surf or snow-ski, it's all in slow motion. I think it's because of a lack of high-noise models and LoRAs, but I haven't had time to recreate Rapid and experiment with adding high-noise, or to create a Rapid High version that would run for the first 2 steps and then switch to regular Rapid (Low) for the remaining steps (3-4).
- Find a good way to add voice + lip sync into Rapid. All I have is using a Rapid start frame and then putting it into an Ovi workflow, basically just doing I2VA with Ovi. Ovi's video model is not straightforward. I was hoping to do some sort of model merge with Rapid, or at least add WAN 2.x LoRAs, but the blocks don't align.
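Two of the tasks above can actually be started from the safetensors header alone, without loading any weights: pulling the trainer metadata (where LoRA trigger words and base-model info usually live) and listing tensor/block names to see exactly which keys don't line up between two models. A minimal sketch, assuming standard .safetensors files; the helper names are mine, not from any existing tool:

```python
import json
import struct

def read_safetensors_header(path):
    """Parse the JSON header of a .safetensors file.

    The format starts with an 8-byte little-endian unsigned int giving
    the header length, followed by that many bytes of JSON. Returns
    (metadata, tensor_names): metadata is the optional "__metadata__"
    dict (where LoRA trainers typically store trigger words, base model,
    etc.); tensor_names are the tensor/block keys in the checkpoint.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    metadata = header.pop("__metadata__", {})
    return metadata, sorted(header)

def diff_blocks(keys_a, keys_b):
    """Return the tensor keys present in only one of two models --
    a quick way to see why LoRA blocks 'don't align'."""
    a, b = set(keys_a), set(keys_b)
    return sorted(a - b), sorted(b - a)
```

Running `read_safetensors_header` over each LoRA file would give the raw material for the keyword document, and `diff_blocks` on two checkpoints shows the exact mismatching block names.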
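The proposed "Rapid High for the first 2 steps, then regular Rapid (Low)" idea boils down to swapping which model runs each sampler step. A conceptual sketch only, not a working ComfyUI node; `high_noise_step` and `low_noise_step` are hypothetical stand-ins for one denoising step with each model:

```python
def two_stage_denoise(latent, high_noise_step, low_noise_step,
                      total_steps=4, switch_at=2):
    """Run a high-noise model for the first `switch_at` steps (where
    large motion is decided), then hand off to the low-noise model
    for the remaining steps."""
    for i in range(total_steps):
        step = high_noise_step if i < switch_at else low_noise_step
        latent = step(latent, i)
    return latent
```

The hypothesis from the bullet above is that giving the high-noise model those early steps would fix the slow-motion look, since that phase sets the overall motion.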
Doing it; I will come back soon.
I am going to use "wan2.2-rapid-mega-aio-v7.safetensors" as the base... if you recommend any other model as a base, correct me.