LivePortrait 2.0: Full-Body Avatar Motion Capture

Tried making a digital avatar for livestreams. My first attempt with LivePortrait 1.0 was head-only and looked weird. Version 2.0 added full-body tracking, which was a game changer. Here's my setup for real-time motion capture that doesn't require a mocap suit.

Problem

Real-time avatar mode had a 2-3 second delay between my movement and the avatar's response - unusable for livestreaming or any interactive application. GPU utilization sat at only 40%, so raw GPU compute wasn't the bottleneck.

Frame time: 2850ms (target: <100ms)
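Before changing anything, it helps to actually measure where the time goes. A minimal sketch of how I tracked frame time (the `render_frame` stub below is a stand-in for the real capture → preprocess → inference pipeline, not a LivePortrait API):

```python
import time

def render_frame():
    # Stand-in for the real capture -> preprocess -> inference work.
    time.sleep(0.005)  # simulate ~5 ms per frame

def measure_frame_time(n_frames=20):
    # Average wall-clock time per frame, in milliseconds.
    start = time.perf_counter()
    for _ in range(n_frames):
        render_frame()
    elapsed = time.perf_counter() - start
    return elapsed / n_frames * 1000.0

avg_ms = measure_frame_time()
print(f"avg frame time: {avg_ms:.1f} ms ({1000.0 / avg_ms:.0f} FPS)")
```

Averaging over a batch of frames smooths out scheduler jitter; a single-frame timing can be wildly misleading.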

What I Tried

Attempt 1: Reduced video resolution - slight improvement but still laggy.
Attempt 2: Disabled temporal consistency - smoother but artifacts appeared.
Attempt 3: Used multiple threads - didn't help, model is sequential.
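Threads can't speed up a sequential model, but overlapping *stages across frames* can still help: while inference runs on frame N, a worker thread preprocesses frame N+1. A minimal producer-consumer sketch of that idea (the `preprocess` and `infer` functions are stand-ins, not LivePortrait APIs):

```python
import queue
import threading
import time

def preprocess(frame):
    time.sleep(0.01)  # stand-in for resize / detection work
    return frame

def infer(frame):
    time.sleep(0.01)  # stand-in for the sequential model
    return f"avatar-{frame}"

frames = list(range(8))
q = queue.Queue(maxsize=2)  # small buffer keeps latency bounded

def producer():
    for f in frames:
        q.put(preprocess(f))
    q.put(None)  # sentinel: no more frames

t = threading.Thread(target=producer)
t.start()

results = []
while (item := q.get()) is not None:
    results.append(infer(item))  # overlaps with the next preprocess
t.join()
print(results)
```

The small `maxsize` matters for real-time use: an unbounded queue would let preprocessing race ahead and add latency rather than hide it.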

Actual Fix

The issue was that LivePortrait was silently running on CPU instead of GPU. The fix was twofold: explicitly set the device AND load the optimized model variant. The camera preprocessing pipeline also needed to move to the GPU, so frames weren't stuck in a CPU-side stage before inference.

# Real-time optimized setup
import torch
from liveportrait import LivePortraitModel

# Use optimized variant (not default)
model = LivePortraitModel.from_pretrained(
    "kwai-vgi/liveportrait-v2.0-optimized",  # Key: use optimized
    device="cuda",  # Explicit GPU placement
    torch_dtype=torch.float16
)

# Enable real-time mode
model.enable_realtime_mode(
    target_fps=30,
    latency_mode="low",  # Options: low, medium, high
)

# GPU preprocessing
from liveportrait.processing import GPUPreprocessor

preprocessor = GPUPreprocessor(
    device="cuda",
    resize_target=(512, 512),  # Lower for speed
    detect_threshold=0.5
)

# Real-time inference loop
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # GPU preprocessing
    processed = preprocessor.process(frame)

    # Fast inference
    with torch.inference_mode():
        result = model.generate(
            processed,
            sync_audio=True,
            temporal_smoothing=0.3
        )

    # Display
    cv2.imshow("Avatar", result)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the camera and close windows on exit
cap.release()
cv2.destroyAllWindows()
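For context on the `temporal_smoothing=0.3` setting: smoothing of this kind is typically an exponential moving average over tracked keypoints, trading a little lag for stability. A pure-Python sketch of that trade-off (my own illustration, not LivePortrait's actual implementation; conventions vary on whether the factor weights the old or new value - here it weights the previous smoothed value):

```python
def smooth(points, alpha=0.3):
    # Exponential moving average over a stream of values.
    # alpha weights the previous (smoothed) value, (1 - alpha) the new
    # observation: alpha=0 disables smoothing, higher alpha means more
    # lag but less jitter.
    smoothed = []
    prev = None
    for p in points:
        prev = p if prev is None else alpha * prev + (1 - alpha) * p
        smoothed.append(prev)
    return smoothed

noisy = [0.0, 1.0, 0.0, 1.0, 0.0]  # a jittery keypoint coordinate
print(smooth(noisy))               # oscillation is damped
```

This is why cranking the smoothing value up makes an avatar feel "floaty": every output frame carries more of the past.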

What I Learned

- Low GPU utilization combined with high latency usually means the work is happening on the CPU; check device placement before tweaking anything else.
- Explicit device="cuda" plus the optimized model variant mattered more than any resolution or threading change.
- Preprocessing has to live on the GPU too, or a CPU-side stage stalls the whole pipeline.

Production Setup

# Install LivePortrait 2.0
git clone https://github.com/KwaiVGI/LivePortrait.git
cd LivePortrait

# Install dependencies
conda create -n liveportrait python=3.10
conda activate liveportrait

pip install -r requirements.txt

# Download full-body model
python scripts/download_models.py --variant fullbody

# Test camera setup
python scripts/test_camera.py

# Run real-time demo
python apps/realtime.py --model fullbody --fps 30

Related Resources