Instructions for using Video-Reason/VBVR-Wan2.2-diffsynth with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Diffusers
How to use Video-Reason/VBVR-Wan2.2-diffsynth with Diffusers:
```
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Video-Reason/VBVR-Wan2.2-diffsynth",
    dtype=torch.bfloat16,
    device_map="cuda",  # device_map already places the model, so no extra .to() call is needed
)

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
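The snippet above uses the pipeline defaults. Continuing from it, here is a hedged example of the usual generation knobs; the argument names follow the Wan image-to-video pipeline in current diffusers and may differ across versions:

```python
# Optional: trade off clip length, speed, and quality. Continues the snippet
# above; verify these kwargs against your installed diffusers version.
output = pipe(
    image=image,
    prompt=prompt,
    num_frames=81,           # clip length in frames
    num_inference_steps=40,  # fewer steps run faster at some quality cost
    guidance_scale=5.0,      # higher values follow the prompt more strictly
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```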
- Notebooks
  - Google Colab
  - Kaggle
Hi, thank you for your work. Regarding the wan2.2 high & low LoRAs, can they be used directly in ComfyUI?
#1 opened by RedHn
Hi, thanks for your interest! The wan2.2 high & low LoRAs can be used in ComfyUI, but not always directly in their original format. Thanks to the community's great work, they have already been converted into ComfyUI-compatible versions; please refer to:
https://huggingface.co/LiconStudio/VBVR-wan2.2-comfy-bf16/tree/main
https://huggingface.co/Kijai/WanVideo_comfy/tree/main/LoRAs/VBVR
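For completeness, below is a minimal sketch of applying a high/low-noise LoRA pair with diffusers instead of ComfyUI (in ComfyUI itself the converted files typically go into `models/loras` and are attached with one LoRA loader node per model). The repo path and file names are placeholders, and the `load_into_transformer_2` flag is an assumption based on recent diffusers releases; verify both against your setup:

```python
# Hedged sketch, not an official recipe: attaching Wan2.2 high/low-noise
# LoRAs in diffusers. Paths and file names below are placeholders.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Video-Reason/VBVR-Wan2.2-diffsynth", dtype=torch.bfloat16, device_map="cuda"
)

# Wan2.2 samples with two experts: `transformer` handles the high-noise
# (early) denoising steps and `transformer_2` the low-noise (late) steps,
# which is why the LoRA ships as a high/low pair.
pipe.load_lora_weights(
    "path/to/lora-repo",                        # placeholder repo or local dir
    weight_name="high_noise_lora.safetensors",  # placeholder file name
    adapter_name="high_noise",
)
pipe.load_lora_weights(
    "path/to/lora-repo",
    weight_name="low_noise_lora.safetensors",   # placeholder file name
    adapter_name="low_noise",
    load_into_transformer_2=True,  # assumption: flag in recent diffusers for Wan2.2
)
```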
