Qwen-Image-2512 was released on the last day of 2025. It has been tested in over 10,000 blind rounds on AI Arena, where at the time of writing it leads open-source image models and holds up well against paid ones.
Qwen-Image-2512 needs about 42 GB of VRAM in BF16.
The FP8 variant cuts that roughly in half, to around 22 GB.
BF16 gives higher accuracy but takes more VRAM; FP8 is lighter on memory at the cost of a slight drop in accuracy.
GGUF versions are available from day 1 (see our useful links section below).
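The dtype-to-VRAM numbers above follow from a simple rule of thumb: parameter count times bytes per weight, plus some fixed overhead for the text encoder, VAE, and activations. Here is a rough back-of-envelope sketch; the ~20B-parameter figure for the diffusion transformer and the 2 GB overhead are assumptions for illustration, not values stated in this post:

```python
def est_vram_gb(params_billions: float, bytes_per_param: float,
                overhead_gb: float = 2.0) -> float:
    """Rough VRAM estimate: weight storage plus a flat overhead
    (text encoder, VAE, activations). Assumed figures, not measured."""
    return params_billions * bytes_per_param + overhead_gb

# Assuming a ~20B-parameter diffusion transformer (hypothetical figure):
print(est_vram_gb(20, 2))  # BF16: 2 bytes/param -> 42.0 GB
print(est_vram_gb(20, 1))  # FP8:  1 byte/param  -> 22.0 GB
```

With those assumed inputs the estimate lines up with the ~42 GB (BF16) and ~22 GB (FP8) figures quoted above, which is why halving the weight precision roughly halves the memory footprint.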

If you'd like to try this model, here are your options:
LoRA
Qwen-Image-2512-Turbo-LoRA is a 4- or 8-step turbo LoRA for Qwen-Image-2512, trained by the Wuli Team. It matches the original model's output quality while being over 20x faster⚡️: 2x comes from CFG distillation, and the rest from the reduced number of inference steps.
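The 20x figure decomposes as described: CFG distillation halves the forward passes per step (no separate unconditional pass), and the turbo LoRA cuts the step count itself. A quick sanity check, assuming a hypothetical 50-step baseline with classifier-free guidance (that baseline step count is an assumption, not stated above):

```python
# Hypothetical baseline: 50 denoising steps with classifier-free guidance,
# where CFG costs two model evaluations per step (conditional + unconditional).
baseline_evals = 50 * 2   # 100 forward passes

# CFG-distilled turbo LoRA at 4 steps: one forward pass per step.
turbo_evals = 4 * 1       # 4 forward passes

speedup = baseline_evals / turbo_evals
print(speedup)  # 25.0 -> consistent with the "over 20x" claim
```

The 8-step setting gives a smaller but still substantial speedup by the same arithmetic (100 / 8 = 12.5x in raw forward passes).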
Workflow
Use the Qwen Image workflow in the Template Library, or this Qwen Image 2512 workflow.