Patch release 0.18.1: Stable Diffusion XL 0.9 Research Release
Stable Diffusion XL 0.9 is now fully supported under the **SDXL 0.9 Research License**, available [here](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9).
Once you have been granted access to [`stabilityai/stable-diffusion-xl-base-0.9`](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9), you can easily use it with `diffusers`:
### Text-to-Image
```py
from diffusers import StableDiffusionXLPipeline
import torch
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipe.to("cuda")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt=prompt).images[0]
```
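By default, the base pipeline generates 1024×1024 images; you can pass `height` and `width` to the call above to override this.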
### Refining the image output
```py
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import torch
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipe.to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
)
refiner.to("cuda")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# return the base model's output as latents so the refiner can consume them directly
image = pipe(prompt=prompt, output_type="latent").images[0]
# the refiner takes the latents (with the batch dimension restored) and produces the final image
image = refiner(prompt=prompt, image=image[None, :]).images[0]
```
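Handing the latents straight to the refiner avoids an unnecessary decode/encode round-trip through the VAE between the two stages.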
### Loading single file checkpoints / original file format
```py
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import torch

# load the original single-file checkpoints; the paths assume local downloads
# of the official .safetensors files
pipe = StableDiffusionXLPipeline.from_single_file(
    "./sd_xl_base_0.9.safetensors", torch_dtype=torch.float16
)
pipe.to("cuda")

refiner = StableDiffusionXLImg2ImgPipeline.from_single_file(
    "./sd_xl_refiner_0.9.safetensors", torch_dtype=torch.float16
)
refiner.to("cuda")
```
### Memory optimization via model offloading
```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```
and
```diff
- refiner.to("cuda")
+ refiner.enable_model_cpu_offload()
```
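Applied to the base pipeline from above, a minimal sketch of the offloaded setup looks like this (requires `accelerate` to be installed):

```py
from diffusers import StableDiffusionXLPipeline
import torch

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
# instead of pipe.to("cuda"): submodules are moved to the GPU only while in use,
# which lowers peak VRAM usage at a small speed cost
pipe.enable_model_cpu_offload()

image = pipe(prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0]
```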
### Speed-up inference with `torch.compile`
```diff
+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
+ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True)
```
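`torch.compile` requires PyTorch 2.0 or newer, and the first pipeline call will be noticeably slower while the UNet is compiled; subsequent calls run at the improved speed.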
**Note**: If you're running the model with torch < 2.0, please make sure to run:
```diff
+ pipe.enable_xformers_memory_efficient_attention()
+ refiner.enable_xformers_memory_efficient_attention()
```
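This relies on the `xformers` package (`pip install xformers`), which provides memory-efficient attention kernels for pre-2.0 PyTorch.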
For more details, have a look at the [official docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl).
## All commits
* typo in safetensors (safetenstors) by @YoraiLevi in #3976
* Fix code snippet for Audio Diffusion by @osanseviero in #3987
* feat: add `Dropout` to Flax UNet by @SauravMaheshkar in #3894
* Add 'rank' parameter to Dreambooth LoRA training script by @isidentical in #3945
* Don't use bare prints in a library by @cmd410 in #3991
* [Tests] Fix some slow tests by @patrickvonplaten in #3989
* Add sdxl prompt embeddings by @patrickvonplaten in #3995