FLUX

1. Model Introduction

FLUX is a family of rectified flow transformer models developed by Black Forest Labs for high-quality image generation from text descriptions.

FLUX.1-dev is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions.

Key Features:

  • Cutting-edge Output Quality: Second only to the state-of-the-art FLUX.1 [pro] model
  • Competitive Prompt Following: Matches the performance of closed-source alternatives
  • Guidance Distillation: Trained using guidance distillation for improved efficiency
  • Open Weights: Available for personal, scientific, and commercial purposes under the FLUX [dev] Non-Commercial License

FLUX.2-dev is a 32 billion parameter rectified flow transformer capable of generating, editing, and combining images based on text instructions.

Key Features:

  • State-of-the-art Performance: Leading open model in text-to-image generation, single-reference editing, and multi-reference editing
  • No Finetuning Required: Character, object, and style reference without additional training in one model
  • Guidance Distillation: Trained using guidance distillation for improved efficiency
  • Open Weights: Available for personal, scientific, and commercial purposes under the FLUX [dev] Non-Commercial License

For more details, please refer to the FLUX.1-dev HuggingFace page, FLUX.2-dev HuggingFace page, and the official blog post.

2. SGLang-diffusion Installation

SGLang-diffusion offers multiple installation methods. You can choose the most suitable installation method based on your hardware platform and requirements.

Please refer to the official SGLang-diffusion installation guide for installation instructions.

3. Model Deployment

This section provides deployment configurations optimized for different hardware platforms and use cases.

3.1 Basic Configuration

FLUX models are optimized for high-quality image generation. The recommended launch configurations vary by hardware and model version.

Example Command: The following command launches FLUX.1-dev on a single GPU with the default USP settings. Adjust the model path and parallelism flags to match your hardware platform and model version.

sglang serve \
  --model-path black-forest-labs/FLUX.1-dev \
  --ulysses-degree=1 \
  --ring-degree=1
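
The same pattern applies to FLUX.2-dev with the model path swapped; this is a sketch that assumes the launch flags are identical to the FLUX.1-dev case:

```shell
# Single-GPU launch for FLUX.2-dev (assumed to mirror the FLUX.1-dev flags)
sglang serve \
  --model-path black-forest-labs/FLUX.2-dev \
  --ulysses-degree=1 \
  --ring-degree=1
```

Note that FLUX.2-dev is a 32B-parameter model, so it may require CPU offload (see Section 4.2.2) on GPUs with limited memory.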

3.2 Configuration Tips

All currently supported optimization options are listed here.

  • --vae-path: Path to a custom VAE model or HuggingFace model ID (e.g., fal/FLUX.2-Tiny-AutoEncoder). If not specified, the VAE will be loaded from the main model path.
  • --num-gpus: Number of GPUs to use
  • --tp-size: Tensor parallelism size (only for the encoder; should not be larger than 1 if text encoder offload is enabled, as layer-wise offload plus prefetch is faster)
  • --sp-degree: Sequence parallelism size (typically should match the number of GPUs)
  • --ulysses-degree: The degree of DeepSpeed-Ulysses-style SP in USP
  • --ring-degree: The degree of ring attention-style SP in USP
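
As a sketch of how these flags compose on a 4-GPU node (assuming the Ulysses and ring degrees multiply to the sequence-parallel size; the exact split depends on your interconnect topology):

```shell
# 4-way sequence parallelism: 2-way Ulysses x 2-way ring attention
sglang serve \
  --model-path black-forest-labs/FLUX.1-dev \
  --num-gpus 4 \
  --sp-degree 4 \
  --ulysses-degree=2 \
  --ring-degree=2
```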

4. API Usage

For complete API documentation, please refer to the official API usage guide.

4.1 Generate an Image

import base64
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://localhost:30000/v1")

response = client.images.generate(
    model="black-forest-labs/FLUX.1-dev",
    prompt="A cat holding a sign that says hello world",
    size="1024x1024",
    n=1,
    response_format="b64_json",
)

# Save the generated image
image_bytes = base64.b64decode(response.data[0].b64_json)
with open("output.png", "wb") as f:
    f.write(image_bytes)
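
When requesting multiple images (n > 1), each entry in response.data carries its own b64_json payload. A minimal standard-library helper to decode and save all of them (the helper name and prefix are hypothetical, not part of the SGLang API):

```python
import base64


def save_b64_images(items, prefix="output"):
    """Decode b64_json payloads and write each as a PNG file.

    `items` is any iterable of objects with a `b64_json` attribute,
    matching the shape of `response.data` returned by the client.
    Returns the list of file paths written.
    """
    paths = []
    for i, item in enumerate(items):
        path = f"{prefix}_{i}.png"
        with open(path, "wb") as f:
            f.write(base64.b64decode(item.b64_json))
        paths.append(path)
    return paths
```

Usage: `save_b64_images(response.data)` writes `output_0.png`, `output_1.png`, and so on.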

4.2 Advanced Usage

4.2.1 Cache-DiT Acceleration

SGLang integrates Cache-DiT, a caching acceleration engine for Diffusion Transformers (DiT), to achieve up to a 7.4x inference speedup with minimal quality loss. Set SGLANG_CACHE_DIT_ENABLED=true to enable it. For more details, please refer to the SGLang Cache-DiT documentation.

Basic Usage

SGLANG_CACHE_DIT_ENABLED=true sglang serve --model-path black-forest-labs/FLUX.1-dev

Advanced Usage

  • DBCache Parameters: DBCache controls block-level caching behavior:

    Parameter  Env Variable             Default  Description
    Fn         SGLANG_CACHE_DIT_FN      1        Number of first blocks to always compute
    Bn         SGLANG_CACHE_DIT_BN      0        Number of last blocks to always compute
    W          SGLANG_CACHE_DIT_WARMUP  4        Warmup steps before caching starts
    R          SGLANG_CACHE_DIT_RDT     0.24     Residual difference threshold
    MC         SGLANG_CACHE_DIT_MC      3        Maximum continuous cached steps
  • TaylorSeer Configuration: TaylorSeer improves caching accuracy using Taylor expansion:

    Parameter  Env Variable                 Default  Description
    Enable     SGLANG_CACHE_DIT_TAYLORSEER  false    Enable TaylorSeer calibrator
    Order      SGLANG_CACHE_DIT_TS_ORDER    1        Taylor expansion order (1 or 2)

    Combined Configuration Example:

SGLANG_CACHE_DIT_ENABLED=true \
SGLANG_CACHE_DIT_FN=2 \
SGLANG_CACHE_DIT_BN=1 \
SGLANG_CACHE_DIT_WARMUP=4 \
SGLANG_CACHE_DIT_RDT=0.4 \
SGLANG_CACHE_DIT_MC=4 \
SGLANG_CACHE_DIT_TAYLORSEER=true \
SGLANG_CACHE_DIT_TS_ORDER=2 \
sglang serve --model-path black-forest-labs/FLUX.1-dev

4.2.2 CPU Offload

  • --dit-cpu-offload: Use CPU offload for DiT inference. Enable this if you run out of GPU memory.
  • --text-encoder-cpu-offload: Use CPU offload for text encoder inference.
  • --vae-cpu-offload: Use CPU offload for VAE.
  • --pin-cpu-memory: Pin host memory for CPU offload. Use only as a temporary workaround if offloading fails with "CUDA error: invalid argument".
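
The offload flags can be combined. The following is an illustrative sketch (not a benchmarked recommendation) for running the 32B FLUX.2-dev on a memory-constrained GPU, offloading the DiT, text encoder, and VAE:

```shell
# Offload all three components to host memory to reduce GPU footprint
sglang serve \
  --model-path black-forest-labs/FLUX.2-dev \
  --dit-cpu-offload \
  --text-encoder-cpu-offload \
  --vae-cpu-offload
```

Offloading trades latency for memory, so enable only the flags you need.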

5. Benchmark

5.1 Speedup Benchmark

5.1.1 Generate an image

Test Environment:

  • Hardware: NVIDIA B200 GPU (1x)
  • Model: black-forest-labs/FLUX.1-dev
  • SGLang-diffusion version: 0.5.6.post2

Server Command:

sglang serve --model-path black-forest-labs/FLUX.1-dev --port 30000

Benchmark Command:

python3 -m sglang.multimodal_gen.benchmarks.bench_serving \
--backend sglang-image --dataset vbench --task t2v --num-prompts 1 --max-concurrency 1

Result:

================= Serving Benchmark Result =================
Backend: sglang-image
Model: black-forest-labs/FLUX.1-dev
Dataset: vbench
Task: t2v
--------------------------------------------------
Benchmark duration (s): 50.97
Request rate: inf
Max request concurrency: 1
Successful requests: 1/1
--------------------------------------------------
Request throughput (req/s): 0.02
Latency Mean (s): 50.9681
Latency Median (s): 50.9681
Latency P99 (s): 50.9681
--------------------------------------------------
Peak Memory Max (MB): 27905.19
Peak Memory Mean (MB): 27905.19
Peak Memory Median (MB): 27905.19
============================================================

5.1.2 Generate images with high concurrency

Server Command:

sglang serve --model-path black-forest-labs/FLUX.1-dev --port 30000

Benchmark Command:

python3 -m sglang.multimodal_gen.benchmarks.bench_serving \
--backend sglang-image --dataset vbench --task t2v --num-prompts 20 --max-concurrency 20

Result:

================= Serving Benchmark Result =================
Backend: sglang-image
Model: black-forest-labs/FLUX.1-dev
Dataset: vbench
Task: t2v
--------------------------------------------------
Benchmark duration (s): 111.79
Request rate: inf
Max request concurrency: 20
Successful requests: 20/20
--------------------------------------------------
Request throughput (req/s): 0.18
Latency Mean (s): 67.0646
Latency Median (s): 66.9691
Latency P99 (s): 110.8949
--------------------------------------------------
Peak Memory Max (MB): 27917.19
Peak Memory Mean (MB): 27916.59
Peak Memory Median (MB): 27917.19
============================================================
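
As a sanity check on the figures above, the reported request throughput is simply successful requests divided by benchmark duration:

```python
# Reported throughput = successful requests / benchmark duration
requests_done = 20
duration_s = 111.79

throughput = requests_done / duration_s
print(round(throughput, 2))  # → 0.18 req/s, matching the report
```

Per-request latency (about 67 s mean) exceeds single-request latency (about 51 s) because 20 concurrent requests contend for the same GPU, but aggregate throughput still improves roughly ninefold over the sequential case.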