Change Image Background With Stable Diffusion In Python

In this hands-on tutorial, you will learn how to change the background of an image using Stable Diffusion inpainting. I will be using the Hugging Face Diffusers Python library to work with the diffusion model and a few other AI models.

You can use a real image or an AI-generated image for this task. I am going to use an image of a cute dog and will try to change its background using a Stable Diffusion model. In other words, I will generate a new background for this image.

[Image: AI-generated image of a dog]

Overview of the background-changing approach

The idea behind replacing the background of the above image is quite simple.

First, I will extract a mask of the given image with the help of the rembg Python library. It produces a binary image of black and white pixels, where the background is black and the foreground object (the dog) is white.

In Stable Diffusion inpainting, both the original image and the mask are passed as inputs to the diffusion model. The model only modifies those regions of the input image that correspond to white pixels in the mask.

Since the background is represented by black pixels in the mask, that region would remain unchanged in the original image, while the foreground object (white pixels) would be altered by the model.

But the objective is to change the background of the image and keep the foreground object, the dog, as it is, right? So here is the trick: invert the colors of the binary mask, so that the black pixels become white and vice versa.

Now instead of the previous mask, this inverted mask will be passed to the Stable Diffusion inpainting model along with the original image.
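To make this concrete, here is a minimal sketch (using a tiny toy array rather than the real dog mask) of what inverting an 8-bit mask does to the pixel values:

import numpy as np

# Toy 2x3 mask, just for illustration: 0 = background, 255 = foreground (the dog)
mask = np.array([[0,   0, 255],
                 [0, 255, 255]], dtype=np.uint8)

# Inverting swaps the roles: the background becomes the editable (white) region
inverted = 255 - mask
print(inverted)
# [[255 255   0]
#  [255   0   0]]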

Install required libraries

!pip install rembg
!pip install accelerate
!pip install diffusers==0.24

The rembg library will be used to extract the binary mask. The other libraries, Accelerate and Diffusers, will be used for working with the Stable Diffusion model.

import rembg
import torch
import numpy as np
from PIL import Image, ImageOps
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

Mask extraction from input image

Let’s load the input image and extract its binary mask.

# Load the input image
init_image = load_image("ai_image2.png").resize((512,768))

# Convert the input image to a numpy array
input_array = np.array(init_image)

# Extract mask using rembg
mask_array = rembg.remove(input_array, only_mask=True)

# Create a PIL Image from the output array
mask_image = Image.fromarray(mask_array)

Now invert the mask colors.

mask_image_inverted = ImageOps.invert(mask_image)

# Display inverted mask
mask_image_inverted
[Image: inverted binary mask used for Stable Diffusion inpainting]

Load Stable Diffusion Inpainting model

There are many Stable Diffusion inpainting models available for free, and new models are being released all the time. I have decided to use the ReV_Animated_Inpainting model for background generation, but feel free to use any other model of your choice.

# Create inpainting pipeline
pipeline = AutoPipelineForInpainting.from_pretrained(
    "redstonehero/ReV_Animated_Inpainting", 
    torch_dtype=torch.float16
)

pipeline.enable_model_cpu_offload()
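The call to enable_model_cpu_offload() keeps GPU memory usage low by moving the sub-models to the GPU only when they are needed. If your GPU has enough memory, an alternative (not required for this tutorial) is to keep the whole pipeline on the GPU:

# Alternative to CPU offloading: keep the entire pipeline on the GPU
# (assumes a CUDA GPU with enough free VRAM)
pipeline = pipeline.to("cuda")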

Change background of the input image

Now all I have to do is provide a text prompt describing the background for the given image and set the parameters of the inpainting diffusion pipeline.

prompt = """labrador in the forest, raw photo, surreal photo, 
unreal engine 4, 8k, super sharp, detailed colors
"""

negative_prompt = "tail, deformed, mutated, ugly, disfigured"

image = pipeline(prompt=prompt,
                 negative_prompt=negative_prompt,
                 width=512,
                 height=768,
                 num_inference_steps=20,
                 image=init_image,
                 mask_image=mask_image_inverted,
                 guidance_scale=1,
                 strength=0.7,
                 generator=torch.manual_seed(189018)).images[0]

You can play around with the parameters guidance_scale and strength to control the changes in the input image’s background and foreground.

guidance_scale typically ranges from 1 to about 15 (higher values follow the prompt more closely), while strength varies from 0 to 1 (higher values alter the masked region more).

The generator parameter is seeded with a fixed number so that the same output image can be reproduced. Feel free to change the seed; each seed value will produce a slightly different image.
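If you want to compare a few settings side by side, a simple loop like the sketch below works well (the seed and strength values here are arbitrary, chosen just for illustration):

# Sketch: generate a small grid of outputs for different seeds and strengths
outputs = []
for seed in [189018, 42, 7]:          # arbitrary seed values
    for strength in [0.6, 0.8]:       # illustrative strength values
        out = pipeline(prompt=prompt,
                       negative_prompt=negative_prompt,
                       width=512,
                       height=768,
                       num_inference_steps=20,
                       image=init_image,
                       mask_image=mask_image_inverted,
                       guidance_scale=1,
                       strength=strength,
                       generator=torch.manual_seed(seed)).images[0]
        outputs.append(out)

# 3 seeds x 2 strengths = 6 images
make_image_grid(outputs, rows=3, cols=2)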

Once the model has generated the output image, we can display and compare it with the input image.

make_image_grid([init_image, image], rows=1, cols=2)
[Image: side-by-side comparison of the input image and the output image with the replaced background]

As you can see, the background in the second image (output) is quite different from the first image (input), while the foreground object is pretty much unchanged.

We can further improve the results by using better prompts, trying different inpainting models, and experimenting with different values for the pipeline parameters, as sketched below.
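For example, swapping in a different inpainting checkpoint only requires changing the model name passed to from_pretrained(). The sketch below uses the stabilityai/stable-diffusion-2-inpainting checkpoint, assuming it is available on the Hugging Face Hub; any other inpainting model can be dropped in the same way:

# Sketch: try a different inpainting checkpoint with slightly different settings
alt_pipeline = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16
)
alt_pipeline.enable_model_cpu_offload()

alt_image = alt_pipeline(prompt=prompt,
                         negative_prompt=negative_prompt,
                         width=512,
                         height=768,
                         num_inference_steps=30,   # more steps for finer detail
                         image=init_image,
                         mask_image=mask_image_inverted,
                         guidance_scale=7.5,       # stronger prompt adherence
                         strength=0.7,
                         generator=torch.manual_seed(189018)).images[0]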
