This is an implementation of DragNUWA for ComfyUI.

DragNUWA lets users drag backgrounds or objects within an image directly; the model translates these drag actions into camera movements or object motions and generates the corresponding video.
## Install

- Clone this repo into the `custom_nodes` directory of your ComfyUI installation
- Run `pip install -r requirements.txt`
- Download the DragNUWA weights `drag_nuwa_svd.pth` and put the file at `ComfyUI/models/checkpoints/drag_nuwa_svd.pth`
For Chinese users: drag_nuwa_svd.pth
A smaller and faster fp16 model is also available: `dragnuwa-svd-pruned.fp16.safetensors` from https://github.com/painebenjamin/app.enfugue.ai
For Chinese users: `wget https://hf-mirror.com/benjamin-paine/dragnuwa-pruned-safetensors/resolve/main/dragnuwa-svd-pruned.fp16.safetensors` (the file cannot be downloaded directly in a browser; see the official usage instructions at https://hf-mirror.com/)
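The install steps above can be condensed into a short script. This is a sketch, not part of the repo: the `COMFYUI` variable and the `RUN_INSTALL` guard are assumptions of this example; point `COMFYUI` at your own ComfyUI root and set `RUN_INSTALL=1` to actually run the commands.

```shell
# Condensed install sketch (assumes a standard ComfyUI directory layout).
# COMFYUI and RUN_INSTALL are conventions of this sketch, not of the repo.
COMFYUI="${COMFYUI:-$HOME/ComfyUI}"
NODE_DIR="$COMFYUI/custom_nodes/ComfyUI-DragNUWA"
CKPT="$COMFYUI/models/checkpoints/drag_nuwa_svd.pth"

if [ "${RUN_INSTALL:-0}" = "1" ]; then
  # Step 1: clone the node into custom_nodes
  git clone https://github.com/chaojie/ComfyUI-DragNUWA "$NODE_DIR"
  # Step 2: install Python dependencies
  pip install -r "$NODE_DIR/requirements.txt"
  # Step 3: place drag_nuwa_svd.pth (or the pruned fp16 .safetensors) at "$CKPT"
fi

echo "node dir:     $NODE_DIR"
echo "weights path: $CKPT"
```

With the guard unset the script only prints the target paths, which is a cheap way to check them before downloading the multi-gigabyte weights.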
## Tools

**Motion Traj Tool**: generate motion trajectories.
<img src="assets/multiline.png" raw=true> <img src="assets/multiline.gif" raw=true>

## Examples
- Basic workflow
https://github.com/chaojie/ComfyUI-DragNUWA/blob/main/workflow.json
- InstantCameraMotionBrush & InstantObjectMotionBrush

**InstantCameraMotionBrush** node: generates a zoom in/zoom out/left/right/up/down camera motion brush.

**InstantObjectMotionBrush** node: generates a zoom in/zoom out/left/right/up/down object motion brush (by drawing a mask on the object).
<img src="assets/instantmotionbrush.png" raw=true>

https://github.com/chaojie/ComfyUI-DragNUWA/blob/main/workflow_InstantMotionBrush.json
- Optical flow workflow

Thanks to Fannovol16's Unimatch_OptFlowPreprocessor and to toyxyz's "load optical flow from directory" node.

<img src="assets/optical_flow.png" raw=true>

https://github.com/chaojie/ComfyUI-DragNUWA/blob/main/workflow_optical_flow.json
- Motion brush

Requires the nodes from https://github.com/chaojie/ComfyUI-RAFT

<img src="assets/motionbrush.png" raw=true>

https://github.com/chaojie/ComfyUI-DragNUWA/blob/main/workflow_motionbrush.json