Wan2.2
1. Model Introduction
Wan2.2 is a series of open, advanced large-scale video generative models.
This generation delivers comprehensive upgrades across the board:
- Effective MoE Architecture: Introduces a Mixture-of-Experts (MoE) architecture into video diffusion models. By dividing the denoising process across timesteps among specialized, powerful expert models, it enlarges the overall model capacity while keeping the computational cost unchanged.
- Cinematic-level Aesthetics: Incorporates meticulously curated aesthetic data, complete with detailed labels for lighting, composition, contrast, color tone, and more. This allows for more precise and controllable cinematic style generation, facilitating the creation of videos with customizable aesthetic preferences.
- Complex Motion Generation: Trained on significantly more data (+65.6% more images and +83.2% more videos than the previous generation). This expansion notably enhances the model's generalization across dimensions such as motion, semantics, and aesthetics, achieving top performance among open- and closed-source models.
- Efficient High-Definition Hybrid TI2V: Open-sources a 5B model built on the advanced Wan2.2-VAE, which achieves a 16×16×4 compression ratio. The model supports both text-to-video and image-to-video generation at 720P resolution and 24fps, and runs on consumer-grade GPUs such as the RTX 4090, making it one of the fastest 720P@24fps models currently available and suitable for both industrial and academic use.
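To make the stated 16×16×4 compression ratio concrete, the arithmetic below works out the resulting latent grid for a 720P clip. The 96-frame clip length and the exact padding/rounding conventions are assumptions for illustration only.

# Illustrative latent-grid arithmetic for the 16x16x4 compression ratio
# (16x along height, 16x along width, 4x along time). The 96-frame
# clip length is an arbitrary assumption for this example.
H=720; W=1280; T=96
echo "latent grid: $((T / 4)) x $((H / 16)) x $((W / 16))"   # 24 x 45 x 80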
For more details, please refer to the official Wan2.2 GitHub Repository.
2. SGLang-diffusion Installation
SGLang-diffusion offers multiple installation methods; choose the one best suited to your hardware platform and requirements.
Please refer to the official SGLang-diffusion installation guide for installation instructions.
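As a minimal sketch, a PyPI-based install typically looks like the line below; this assumes a recent release supports your platform, and the official guide remains authoritative for source builds and containers.

# Assumption: a recent PyPI release of sglang suits your platform.
pip install --upgrade sglang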
3. Model Deployment
This section provides deployment configurations optimized for different hardware platforms and use cases.
3.1 Basic Configuration
The Wan2.2 series offers models in various sizes, architectures and input types, optimized for different hardware platforms. The recommended launch configurations vary by hardware and model size.
A typical launch command for the A14B text-to-video model is shown below; adjust it for your hardware platform and model size.
sglang serve \
  --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers \
  --dit-layerwise-offload true
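The --dit-layerwise-offload flag appears to stream DiT layers between host and GPU memory with prefetching (see the configuration tips below), trading some latency for a much smaller peak GPU memory footprint on the A14B model.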
3.2 Configuration Tips
All currently supported optimization flags are listed below.
- --vae-path: Path to a custom VAE model or HuggingFace model ID (e.g., fal/FLUX.2-Tiny-AutoEncoder). If not specified, the VAE is loaded from the main model path.
- --num-gpus {NUM_GPUS}: Number of GPUs to use.
- --tp-size {TP_SIZE}: Tensor parallelism size (encoder only; should not be larger than 1 if text encoder offload is enabled, as layer-wise offload plus prefetch is faster).
- --sp-degree {SP_SIZE}: Sequence parallelism size (typically should match the number of GPUs).
- --ulysses-degree {ULYSSES_DEGREE}: Degree of DeepSpeed-Ulysses-style sequence parallelism in USP.
- --ring-degree {RING_DEGREE}: Degree of ring-attention-style sequence parallelism in USP.
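Putting these flags together, a 4-GPU launch where USP factors as ulysses-degree × ring-degree = 2 × 2 might look like the sketch below; that this is a valid flag combination is an assumption based on the descriptions above.

# Sketch (assumed flag combination): 4 GPUs, sequence parallelism
# matching the GPU count, USP split as 2 (Ulysses) x 2 (ring).
sglang serve \
  --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers \
  --num-gpus 4 \
  --sp-degree 4 \
  --ulysses-degree 2 \
  --ring-degree 2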
4. Model Invocation
4.1 Basic Usage
For more API usage and request examples, please refer to: SGLang Diffusion OpenAI API
4.1.1 Launch a server and then send requests
sglang serve --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers --port 3000
curl http://127.0.0.1:3000/v1/images/generations \
-o >(jq -r '.data[0].b64_json' | base64 --decode > example.mp4) \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "black-forest-labs/FLUX.1-dev",
"prompt": "A cute baby sea otter",
"n": 1,
"size": "1024x1024",
"response_format": "b64_json"
}'
4.1.2 Generate a video without launching a server
SERVER_ARGS=(
--model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers
--text-encoder-cpu-offload
--pin-cpu-memory
--num-gpus 4
--ulysses-degree=2
--enable-cfg-parallel
)
SAMPLING_ARGS=(
--prompt "A curious raccoon"
--save-output
--output-path outputs
--output-file-name "A curious raccoon.mp4"
)
sglang generate "${SERVER_ARGS[@]}" "${SAMPLING_ARGS[@]}"
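Here --enable-cfg-parallel runs the two classifier-free-guidance branches in parallel across device groups, so the 4 GPUs plausibly factor as 2 (CFG) × 2 (Ulysses sequence parallelism); this factorization is an inference from the flags above, not documented behavior.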
4.2 Advanced Usage
4.2.1 Cache-DiT Acceleration
SGLang integrates Cache-DiT, a caching acceleration engine for Diffusion Transformers (DiT), which delivers up to 7.4x inference speedup with minimal quality loss. Set SGLANG_CACHE_DIT_ENABLED=true to enable it. For more details, please refer to the SGLang Cache-DiT documentation.
Basic Usage
SGLANG_CACHE_DIT_ENABLED=true sglang serve --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers
Advanced Usage
- DBCache Parameters: DBCache controls block-level caching behavior:

| Parameter | Env Variable | Default | Description |
|---|---|---|---|
| Fn | SGLANG_CACHE_DIT_FN | 1 | Number of first blocks to always compute |
| Bn | SGLANG_CACHE_DIT_BN | 0 | Number of last blocks to always compute |
| W | SGLANG_CACHE_DIT_WARMUP | 4 | Warmup steps before caching starts |
| R | SGLANG_CACHE_DIT_RDT | 0.24 | Residual difference threshold |
| MC | SGLANG_CACHE_DIT_MC | 3 | Maximum continuous cached steps |

- TaylorSeer Configuration: TaylorSeer improves caching accuracy using Taylor expansion:

| Parameter | Env Variable | Default | Description |
|---|---|---|---|
| Enable | SGLANG_CACHE_DIT_TAYLORSEER | false | Enable TaylorSeer calibrator |
| Order | SGLANG_CACHE_DIT_TS_ORDER | 1 | Taylor expansion order (1 or 2) |

Combined Configuration Example:
SGLANG_CACHE_DIT_ENABLED=true \
SGLANG_CACHE_DIT_FN=2 \
SGLANG_CACHE_DIT_BN=1 \
SGLANG_CACHE_DIT_WARMUP=4 \
SGLANG_CACHE_DIT_RDT=0.4 \
SGLANG_CACHE_DIT_MC=4 \
SGLANG_CACHE_DIT_TAYLORSEER=true \
SGLANG_CACHE_DIT_TS_ORDER=2 \
sglang serve --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers
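Relative to the defaults in the tables above, this example caches more aggressively (R raised from 0.24 to 0.4, MC from 3 to 4). As a rule of thumb, and as a reading of the parameters rather than an official tuning guide, larger R and MC buy speed at some cost in fidelity, while larger Fn, Bn, and warmup values are the conservative direction.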
4.2.2 GPU Optimization
- --dit-cpu-offload: Use CPU offload for DiT inference. Enable if you run out of memory with FSDP.
- --text-encoder-cpu-offload: Use CPU offload for text encoder inference. Enable if you run out of memory with FSDP.
- --image-encoder-cpu-offload: Use CPU offload for image encoder inference. Enable if you run out of memory with FSDP.
- --vae-cpu-offload: Use CPU offload for the VAE. Enable if you run out of memory.
- --pin-cpu-memory: Pin host memory for CPU offload. Only add this as a temporary workaround if you hit "CUDA error: invalid argument".
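As a sketch of how these flags combine (assuming they compose freely, which the list above does not state explicitly), a memory-constrained single-GPU launch might look like:

# Assumption: offload flags can be combined in one launch.
sglang serve \
  --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers \
  --dit-cpu-offload \
  --text-encoder-cpu-offload \
  --pin-cpu-memory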
4.2.3 Supported LoRA Registry
| origin model | supported LoRA |
|---|---|
| Wan-AI/Wan2.2-I2V-A14B-Diffusers | lightx2v/Wan2.2-Distill-Loras |
| Wan-AI/Wan2.2-T2V-A14B-Diffusers | Cseti/wan2.2-14B-Arcane_Jinx-lora-v1 |
Example:
sglang serve --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers --port 3000 \
--lora-path Cseti/wan2.2-14B-Arcane_Jinx-lora-v1
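The same pattern applies to the image-to-video model with its distill LoRA from the registry above:

sglang serve --model-path Wan-AI/Wan2.2-I2V-A14B-Diffusers --port 3000 \
  --lora-path lightx2v/Wan2.2-Distill-Loras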
5. Benchmark
Test Environment:
- Hardware: NVIDIA B200 GPU (1x)
- Model: Wan-AI/Wan2.2-T2V-A14B-Diffusers
- SGLang-diffusion version: 0.5.6.post2
5.1 Speedup Benchmark
5.1.1 Generate a video
Server Command:
sglang serve --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers
Benchmark Command:
python3 -m sglang.multimodal_gen.benchmarks.bench_serving \
--backend sglang-video --dataset vbench --task t2v --num-prompts 1 --max-concurrency 1
Result:
================= Serving Benchmark Result =================
Backend: sglang-video
Model: Wan-AI/Wan2.2-T2V-A14B-Diffusers
Dataset: vbench
Task: t2v
--------------------------------------------------
Benchmark duration (s): 630.43
Request rate: inf
Max request concurrency: 1
Successful requests: 1/1
--------------------------------------------------
Request throughput (req/s): 0.00
Latency Mean (s): 630.4277
Latency Median (s): 630.4277
Latency P99 (s): 630.4277
--------------------------------------------------
Peak Memory Max (MB): 62627.41
Peak Memory Mean (MB): 62627.41
Peak Memory Median (MB): 62627.41
============================================================
5.1.2 Generate videos with high concurrency
Server Command:
SGLANG_CACHE_DIT_ENABLED=true \
SGLANG_CACHE_DIT_FN=2 \
SGLANG_CACHE_DIT_BN=1 \
SGLANG_CACHE_DIT_WARMUP=4 \
SGLANG_CACHE_DIT_RDT=0.4 \
SGLANG_CACHE_DIT_MC=4 \
SGLANG_CACHE_DIT_TAYLORSEER=true \
SGLANG_CACHE_DIT_TS_ORDER=2 \
sglang serve --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers
Benchmark Command:
python3 -m sglang.multimodal_gen.benchmarks.bench_serving \
--backend sglang-video --dataset vbench --task t2v --num-prompts 20 --max-concurrency 20
Result:
================= Serving Benchmark Result =================
Backend: sglang-video
Model: Wan-AI/Wan2.2-T2V-A14B-Diffusers
Dataset: vbench
Task: t2v
--------------------------------------------------
Benchmark duration (s): 5163.21
Request rate: inf
Max request concurrency: 20
Successful requests: 20/20
--------------------------------------------------
Request throughput (req/s): 0.00
Latency Mean (s): 2739.7695
Latency Median (s): 2742.0673
Latency P99 (s): 5121.6331
--------------------------------------------------
Peak Memory Max (MB): 72523.56
Peak Memory Mean (MB): 70253.34
Peak Memory Median (MB): 70824.46
============================================================