ComfyOnline

LivePortrait

  • Fully operational workflows
  • No missing nodes or models
  • No manual setups required
  • Features stunning visuals

Introduction

Convert your still photos into live videos with this workflow. It maps facial movements from a driving source onto your image, creating mesmerizing animated portraits that retain the original photo's essence while adding dynamic motion.

Description

Alright, buckle up buttercups, because we're about to dive headfirst into the whimsical world of LivePortrait!

What in the AI-animated-world is LivePortrait?

Imagine you have a single photo of your Aunt Mildred, but you secretly wish she could deliver a sassy monologue. LivePortrait, conjured up by the wizards at Kuaishou Technology, is your wish-granting genie! It's a slick piece of tech that takes a humble still image and turns it into a living, breathing (well, animated) video. Think of it as giving your photos the gift of gab, using a driving force like a video, some audio, a text script, or even some AI magic to dictate the facial expressions and head movements.

Forget those clunky, processing-power-hungry diffusion methods! LivePortrait is like a ninja – swift and efficient. It's built on an "implicit-keypoint-based framework" (try saying that five times fast!), which is just a fancy way of saying it's cleverly designed to be both speedy and give you control. This tech is all about being practical: good-looking results, easy to tweak, and doesn't require a supercomputer to run. In fact, it can crank out frames faster than you can say "cheese" – a blistering 12.8 milliseconds per frame on an RTX 4090 with PyTorch!

For a deeper dive into the sorcery, head on over to the LivePortrait headquarters.

How does this magic trick work?

Think of it as a puppet show, but with AI. LivePortrait grabs the "look" from your source image and the "moves" from your driving source. It then mashes them together to create an animated portrait that looks like Aunt Mildred is finally ready for her close-up.

Here's the breakdown:

  • Appearance Feature Extractor: This is like taking a snapshot of Aunt Mildred's soul (or, you know, her facial features).
  • Motion Extractor: This guy watches the driving video and figures out what kind of faces are being made – smiles, winks, the works!
  • Warping Module: This is where the magic happens. It takes Aunt Mildred's "soul snapshot" and bends it to match the expressions from the driving video.
  • Image Generator: This fella takes the warped features and builds the final, photorealistic animated frame. Voila!
  • Stitching & Retargeting Module: (Optional) This neat trick lets you glue the animated portrait back onto the original image AND lets you fine-tune specific facial features, like making sure Aunt Mildred's eyes really POP!
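The puppet-show pipeline above can be sketched in a few lines of toy Python. The function names and "feature" values here are purely illustrative stand-ins for the real modules, which operate on image tensors:

```python
# Toy sketch of the LivePortrait pipeline stages (illustrative names,
# not the project's real APIs). "Pixels" and "deltas" are stand-in
# numbers; the real modules work on image tensors.

def extract_appearance(source_image):
    # Appearance Feature Extractor: capture the identity of the source.
    return {"identity": source_image["pixels"]}

def extract_motion(driving_frame):
    # Motion Extractor: read expression/head-pose deltas from the frame.
    return {"delta": driving_frame["expression"]}

def warp(appearance, motion):
    # Warping Module: bend the appearance features toward the motion.
    return [p + motion["delta"] for p in appearance["identity"]]

def generate(warped_features):
    # Image Generator: decode warped features into an output frame.
    return {"pixels": warped_features}

def animate(source_image, driving_frames):
    appearance = extract_appearance(source_image)  # extracted once
    return [generate(warp(appearance, extract_motion(f)))
            for f in driving_frames]

frames = animate({"pixels": [1, 2, 3]},
                 [{"expression": 0}, {"expression": 5}])
print(frames[1]["pixels"])  # → [6, 7, 8]
```

Note that the appearance is extracted once and reused for every driving frame, which is part of why the implicit-keypoint approach is so much faster than re-running a diffusion model per frame.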

ComfyUI LivePortrait: Your Ticket to Portrait Animation Paradise!

Thanks to the wizardry of kijai's ComfyUI-LivePortraitKJ node and workflow, you can now make your photos dance in ComfyUI without needing a PhD in computer science! Here's your cheat sheet:

The Super-Simple Steps to ComfyUI LivePortrait Img2Vid Glory:

  1. Summon the Live Portrait Models:

    • Conjure the "DownloadAndLoadLivePortraitModels" node.
    • Set the precision to "auto" or "fp16" for maximum performance.
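If you're curious what "auto" precision means conceptually, here's a hedged sketch (the node's real logic lives inside ComfyUI-LivePortraitKJ and may differ; `gpu_supports_fp16` is a made-up flag for illustration):

```python
# Illustrative sketch of an "auto" precision choice — NOT the node's
# actual code. The idea: use half precision when the hardware can,
# since it roughly halves memory use and speeds up inference.

def resolve_precision(requested, gpu_supports_fp16=True):
    if requested == "auto":
        return "fp16" if gpu_supports_fp16 else "fp32"
    return requested  # explicit "fp16"/"fp32" is passed through

print(resolve_precision("auto"))         # → fp16 on a capable GPU
print(resolve_precision("auto", False))  # → fp32 fallback
```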
  2. Choose Your Face-Finding Familiar:

    • Pick between the "LivePortraitLoadCropper" (InsightFace) and "LivePortraitLoadMediaPipeCropper" nodes.
    • InsightFace is like a bloodhound, incredibly accurate, but its models are licensed for non-commercial use only. MediaPipe is faster on a CPU but a bit less sharp.
    • Both spit out a "cropper" – your tool for finding and framing faces.
  3. Prepare Your Source Image Sacrifice:

    • Load your portrait using the "Load Image" node.
    • Resize it to a neat 512x512 using the "ImageResize" node.
    • Plug the resized image into the "LivePortraitCropper" node.
    • Don't forget to connect your chosen face detector's "cropper" output to this node!

    Important "LivePortraitCropper" Knobs to Twiddle:

    • "dsize": This controls the output resolution of the cropped face image.

      • Crank it up for more detail (but slower processing), dial it down for speed (but potentially less detail).
    • "scale": This is your zoom control.

      • Too high and you're staring at pores; too low and you've got a whole headshot with distracting backgrounds. Aim for the sweet spot: the entire face with just a hint of background.
    • "face_index": If you've got a crowd of faces, this lets you pick which one to animate.

    • "vx_ratio" and "vy_ratio" (Optional): These are your nudge controls, letting you tweak the crop's position.

    • "face_index_order": Sets how the faces are numbered from "largest to smallest," "left to right," or "top to bottom."
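    To make these knobs concrete, here's a toy sketch of how a cropper *might* turn them into a crop box. The formulas are assumptions for illustration only; the real node's math may differ:

```python
# Illustrative crop-box math for the cropper settings above
# (assumed formulas, not the node's actual implementation).

def sort_faces(boxes, order="largest to smallest"):
    # Each box is (x, y, w, h); "face_index" then picks from this list.
    if order == "largest to smallest":
        return sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)
    if order == "left to right":
        return sorted(boxes, key=lambda b: b[0])
    if order == "top to bottom":
        return sorted(boxes, key=lambda b: b[1])
    raise ValueError(order)

def crop_box(face, scale=2.3, vx_ratio=0.0, vy_ratio=0.0):
    # Grow the detected face box by `scale` (the zoom control), then
    # nudge its center by the vx/vy ratios (fractions of the box size).
    x, y, w, h = face
    cx, cy = x + w / 2, y + h / 2
    size = max(w, h) * scale
    cx += vx_ratio * size
    cy += vy_ratio * size
    return (cx - size / 2, cy - size / 2, size, size)

faces = [(100, 50, 80, 80), (300, 60, 40, 40)]
biggest = sort_faces(faces)[0]        # face_index = 0, default order
print(crop_box(biggest, scale=2.0))   # → (60.0, 10.0, 160.0, 160.0)
```

    The cropped region would then be resampled to "dsize" pixels square, which is why a larger "dsize" keeps more detail but costs more processing.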

  4. Load and Prep the Driving Video:

    • Use the "VHS_LoadVideo" node to bring in your driving video.
    • Use the "frame_load_cap" primitive to cap how many frames ("num_frames") get loaded.
    • Resize those video frames to 480x480 (the "GetImageSizeAndCount" node can confirm their size and frame count).
    • Consider cropping those driving video frames to match the source with a "LivePortraitCropper" node.
  5. Apply Motion Transfer Magic:

    • Drop in the "LivePortraitProcess" node.
    • Connect all the things: your pipeline, source image crop info, cropped source image, and driving frames.

    "LivePortraitProcess" Fine-Tuning Frenzy:

    • "lip_zero": Zeroes out lip motion when it falls below a threshold. Less twitchy lips!
    • "lip_zero_threshold": How much lip action counts as "too little"? Adjust this if the lips are too quiet or too noisy.
    • "stitching": Blends the animated face back into the original image for a seamless look.
    • "delta_multiplier": Crank this up to exaggerate the expressions, dial it down for subtle movements.
    • "mismatch_method": What happens when the source image and driving video are different lengths? Choose from "constant", "cycle", "mirror", and "cut".
    • "relative_motion_mode": The key setting for controlling the type of motion transfer. Choose from "relative," "source_video_smoothed," "relative_rotation_only," "single_frame," or "off."
    • "driving_smooth_observation_variance": Controls smoothness of the driving video when using the "source_video_smoothed" option.
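    Three of these settings lend themselves to a quick sketch. The behavior below is inferred from the option names alone, so treat it as a plausible illustration rather than the node's actual code:

```python
# Assumed behavior of "mismatch_method", "lip_zero", and
# "delta_multiplier", sketched from the option names only.

def map_frame(i, num_driving, method="constant"):
    # mismatch_method: which driving frame to use when the driving
    # clip is shorter than the output you asked for.
    if i < num_driving or method == "cut":
        return i if i < num_driving else None  # "cut": just stop early
    if method == "constant":
        return num_driving - 1                 # hold the last frame
    if method == "cycle":
        return i % num_driving                 # loop from the start
    if method == "mirror":                     # ping-pong back and forth
        period = 2 * (num_driving - 1)
        j = i % period
        return j if j < num_driving else period - j
    raise ValueError(method)

def shape_lip_delta(delta, delta_multiplier=1.0, lip_zero=True,
                    lip_zero_threshold=0.03):
    # lip_zero silences lip motion below the threshold, then
    # delta_multiplier exaggerates or softens what's left.
    if lip_zero and abs(delta) < lip_zero_threshold:
        delta = 0.0
    return delta * delta_multiplier

print([map_frame(i, 4, "mirror") for i in range(8)])
# ping-pong over 4 driving frames → [0, 1, 2, 3, 2, 1, 0, 1]
print(shape_lip_delta(0.01))  # below threshold → 0.0
```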
  6. Composite the Masterpiece (Optional):

    • Use the "LivePortraitComposite" node to put the animated face back onto the original body.
    • Connect the original image, animated face, LivePortrait output, and a mask (if you're feeling fancy).
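    The compositing idea itself is simple alpha blending: wherever the mask says "face," show the animated pixels; everywhere else, keep the original. A pure-Python, per-pixel sketch (the node does this on full image tensors):

```python
# Minimal alpha-composite sketch for the optional composite step.
# mask is 1.0 where the animated face should show, 0.0 elsewhere.

def composite(original, animated_face, mask):
    return [m * a + (1.0 - m) * o
            for o, a, m in zip(original, animated_face, mask)]

original = [10.0, 10.0, 10.0, 10.0]
animated = [90.0, 90.0, 90.0, 90.0]
mask     = [0.0, 1.0, 1.0, 0.0]   # face region in the middle
print(composite(original, animated, mask))  # → [10.0, 90.0, 90.0, 10.0]
```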
  7. Retarget Eyes and Lips (Optional):

    • For ultimate control, use the "LivePortraitRetargeting" node.
    • Tweak the eye and lip multipliers to your heart's content.
    • Connect the retargeting info to "LivePortraitProcess."
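    Conceptually, retargeting just scales the eye and lip components of the motion before they drive the face. A hedged sketch with made-up parameter names (the node exposes similar multipliers, but this is not its actual code):

```python
# Illustrative retargeting: scale eye and lip motion independently.
# Parameter names are assumptions, not the node's real inputs.

def retarget(motion, eyes_multiplier=1.0, lip_multiplier=1.0):
    out = dict(motion)
    out["eyes"] = motion["eyes"] * eyes_multiplier
    out["lips"] = motion["lips"] * lip_multiplier
    return out  # head pose and everything else pass through untouched

motion = {"eyes": 0.2, "lips": 0.5, "head": 1.0}
print(retarget(motion, eyes_multiplier=2.0))  # eyes doubled to 0.4
```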

Important Disclaimer: Remember that the InsightFace models are licensed for non-commercial use only!

Vid2Vid Dreams? If you're itching to animate portraits video-to-video, dive into the LivePortrait | Animate Portraits | Vid2Vid workflow!

Now go forth and make those portraits dance! Just don't blame me if Aunt Mildred starts demanding royalties.

Metadata