Debian/Ubuntu Guide: Self-hosting advanced AI text-to-art with Stable Diffusion

Self-hosting advanced AI text-to-art

As of writing, Stable Diffusion combined with AUTOMATIC1111’s stable-diffusion-webui provides the most advanced self-hostable AI text-to-art experience, and many of its features don’t exist on any other free or proprietary platform.

Note: Make sure to install ControlNet below to double the value of stable-diffusion.

Why does this guide use conda?

conda allows you to run python apps with separate python and library versions without conflicts with the OS.
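To illustrate the isolation (paths assume the default ~/miniconda3 install location used later in this guide): activating a conda environment mainly prepends the env’s bin directory to PATH, so “python” resolves to the env’s interpreter instead of the OS one.

```shell
# Sketch only: what activation effectively does to PATH.
# (env path is an assumption based on the sd-webui env created below)
env_bin="$HOME/miniconda3/envs/sd-webui/bin"
PATH="$env_bin:$PATH"
# The env's bin dir now comes first, so its python shadows the system python
echo "$PATH" | grep -q "envs/sd-webui/bin" && echo "env python would shadow the system python"
```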


This section is a summary limited to my research and experience and shouldn’t be considered the final word.

Recommended: Nvidia GPU with at least 6GB of VRAM; 8GB or more is preferred.

AMD and Intel

While you can run sdw (stable-diffusion-webui) on AMD GPUs, Intel GPUs and Intel CPUs, I’ve never done so. AMD GPUs may also need double the VRAM to perform the same actions (citation needed). See the sdw git docs for details.


Ampere series (latest) GPUs are the most recommended for both cost and features, but Turing, Volta, Pascal and Maxwell series GPUs should all work. See: cuda support

I recommend the following cards and have used both with stable-diffusion-webui:

Cheapest card: Nvidia Quadro P4000 8GB
Best AI all-rounder and value for money (by far): Nvidia RTX 3060 12GB


Installing drivers

Blacklisting nouveau

# Check if you have nouveau drivers installed
lsmod | grep nouveau

If this command prints anything, follow this guide to blacklist nouveau.
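For reference, the blacklist typically amounts to a small modprobe config plus an initramfs rebuild. The file path and options shown here are the common convention, not taken from the linked guide:

```shell
# Common convention (assumption, verify against the linked guide):
# tell modprobe never to load nouveau, then rebuild the initramfs
sudo tee /etc/modprobe.d/blacklist-nouveau.conf >/dev/null <<'EOF'
blacklist nouveau
options nouveau modeset=0
EOF
sudo update-initramfs -u   # apply the blacklist at boot
# Reboot afterwards so nouveau is no longer loaded
```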

Installing drivers

# Check if your card is recognized
lspci | grep VGA

If Linux isn’t picking up the card, make sure the hardware is physically installed correctly, has enough power, and is operational.

# Check if your card is using nvidia drivers
lsmod | grep nvidia

If there’s no output, follow this install guide for Ubuntu, or for Debian run: sudo apt install nvidia-driver

Install cuda and test

sudo apt install nvidia-cuda-toolkit

# Confirm your card is visible
nvidia-smi

# Confirm cuda drivers are installed
nvcc --version

# Reboot

Install stable-diffusion-webui in a conda environment

Install dependencies

sudo apt update

# Packages used in this guide
sudo apt install wget curl

# Packages used by stable-diffusion-webui
sudo apt install git libgl1 libglib2.0-0 -y

# Optional packages
# (Resolves error: Cannot locate TCMalloc (improves CPU memory usage))
sudo apt install libgoogle-perftools4 libtcmalloc-minimal4 -y
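A quick optional check (not in the original guide) to confirm the dynamic loader can actually see TCMalloc after installing the packages above:

```shell
# List loader-visible libraries and look for tcmalloc; if it's absent,
# the webui simply falls back to the default allocator
ldconfig -p | grep -i tcmalloc || echo "TCMalloc not found; webui will fall back to the default allocator"
```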

Install miniconda

curl -sL "" > ""
bash ./
	# Press Enter to continue
	# Press q to quit license
	# Type yes to accept license
	# Press Enter to confirm install location
	# Type no to not have conda auto-initialize on opening a shell

Create a conda environment for stable-diffusion-webui running python 3.10 and activate it

source ~/miniconda3/etc/profile.d/
conda create -n sd-webui python=3.10 -y
conda activate sd-webui

Install stable-diffusion-webui

wget -q
bash ./

Wait for the downloads to finish; this will take a long time and requires a stable internet connection.

Resolving errors (skip if there are none)

# Cloning Taming Transformers
	# For error: "error: RPC failed; curl 56 GnuTLS recv error (-54): Error in the pull function."
		cd ~/stable-diffusion-webui/
		git config --global http.postBuffer 1048576000
		cd ~

# If internet is unstable, loop installation till it completes
	while ! bash ~/; do sleep 10; done


In another terminal, test if the site is up. You should see pages of HTML.
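One way to script that check — the helper name is mine, and it assumes the default port 7860 used by the launch commands later in this guide:

```shell
# Hypothetical helper: returns 0 once the webui answers on the given port,
# non-zero after N one-second attempts.
webui_up() {
  local port=${1:-7860} tries=${2:-1} i
  for i in $(seq "$tries"); do
    # -s silent, -f fail on HTTP errors; any response means the server is up
    curl -sf "http://127.0.0.1:${port}/" >/dev/null && return 0
    sleep 1
  done
  return 1
}
```

Usage: `webui_up 7860 30 && echo "webui is up"`.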


Exit the active stable-diffusion-webui instance using Ctrl+C

How to launch stable-diffusion-webui in future

The conda environment needs to be activated, then stable-diffusion-webui can be run.

OPTION 1: Run these commands each time

source "$HOME/miniconda3/etc/profile.d/"
conda activate sd-webui
bash "$HOME/" -f --port 7860 --xformers

OPTION 2: Make a launch function in your .bashrc

# Paste this block into terminal and press enter
cat <<-'SDLauncher' >> ~/.bashrc
	sd-webui() {
		source "$HOME/miniconda3/etc/profile.d/"
		conda activate sd-webui
		bash "$HOME/" -f --port 7860 --xformers
	}
SDLauncher

Close and re-open your terminal; now all you need to run is:

sd-webui


Optional installation of models and enhancers (highly recommended)

# Models
cd $HOME/stable-diffusion-webui/models/Stable-diffusion
wget -c
wget -c

cd $HOME/stable-diffusion-webui/models/VAE
wget -c

# Pre-download upscaler models
cd $HOME/stable-diffusion-webui/models/ESRGAN/
wget -c
cd $HOME/stable-diffusion-webui/models/RealESRGAN/
wget -c

# Embeds for prompt enhancement
cd $HOME/stable-diffusion-webui/embeddings
wget -c
wget -c
wget -c
wget -c

Optional: GUI Configuration

Open a browser and go to the web UI address (the launch commands above use port 7860):

Enable a VAE (if you installed it above):

  1. On the top bar, click “Settings”
  2. On the side bar, click “Stable Diffusion”
  3. Click the “SD VAE” dropdown and choose: vae-ft-mse-840000-ema-pruned.ckpt
  4. Click “Apply settings”

Install ControlNet (and other extensions):

In the txt2img and img2img sections you’ll now have a ControlNet panel.

Privacy/Security note

The web interface loads CSS and a script from external sites. These online dependencies can be removed by downloading the files and localizing the references. I’ll write a script to do this automatically if there’s demand.

Using Stable Diffusion

Renders are stored in: ~/stable-diffusion-webui/outputs/
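A small convenience helper (mine, not from the guide) to print the newest render from a shell, assuming the default outputs layout:

```shell
# Hypothetical helper: print the most recently modified .png under a directory
# (defaults to the sd-webui outputs directory)
latest_render() {
  find "${1:-$HOME/stable-diffusion-webui/outputs}" -name '*.png' -printf '%T@ %p\n' 2>/dev/null \
    | sort -n | tail -n 1 | cut -d' ' -f2-
}
```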

Example Prompt

AI model: revAnimated_v11.safetensors
Sampling: Euler a
CFG Scale: 6
Resolution: 1080x720

Positive Prompt

((best quality)), ((masterpiece)), ((realistic)), (detailed), (cyberpunk), (Ferrari), race cars, blender, intricate background, intricate buildings, ((masterpiece)), HDR

Negative prompt

(semi-realistic:1.4), (worst quality, low quality:1.3), (depth of field, blurry:1.2), (greyscale, monochrome:1.1), nose, cropped, lowres, text, jpeg artifacts, signature, watermark, username, blurry, artist name, trademark, watermark, title, multiple view, Reference sheet

Example renders:


Any idea if this would work in an Ubuntu image run through distrobox with Fedora as the host?

Thanks for the tutorial. Good work!


If you run nvidia-smi in the environment you plan to use and it shows your Nvidia card, the rest of the guide should work.

I listed it as an Ubuntu guide (and I’ll be adding Debian) because they have first-class Nvidia support, and having burned a day getting things half-working on Fedora, I’d need to deep-dive it more.


Pop!_OS also has Nvidia support out of the box when you use the correct ISO.

Here’s my stable diffusion running on my machine, and after finding the right model I can have it draw a field :smiley: it’s pretty dope stuff, esp for inspiration


Nevermind, just saw that Pop!_OS wasn’t one of the compatible distros.

Installing Comfy UI is super simple. It’s like three steps in Linux.

Now there is also this option for installing quite a few different types of AIs.


Major refresh of the entire guide. It now uses conda instead of venv and includes instructions on installing ControlNet (see the picture instruction portion), which practically doubles the capability of stable-diffusion.
