Guide: Self-hosting Stable Diffusion on Ubuntu using an Nvidia GPU

This guide is for self-hosting Stable Diffusion on an Nvidia GPU with >=8GB of VRAM.

Hardware recommendations

The following is strictly my opinion at time of writing…

Ampere-series (latest) GPUs are the most recommended for both cost and features, but Turing, Volta, Pascal, and Maxwell series GPUs should all work. See the CUDA link under Resources for which architectures are supported.

Cheapest recommendation: Nvidia Quadro P4000 8GB (I’ve confirmed it works with this guide)
Best all-rounder and value for money: Nvidia RTX 3060 12GB (I’ve confirmed it works with this guide)
Best consumer card: Nvidia RTX 3060 12GB (or RTX 4090 24GB when available)

While it’s possible to run Stable Diffusion on AMD GPUs or on GPUs with only 4GB of VRAM, it’s complicated and not recommended.

Resources:
https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units
https://en.wikipedia.org/wiki/CUDA

Install and confirm the presence of an Nvidia card


Follow this guide to install nvidia drivers: https://linuxhint.com/install-nvidia-drivers-on-ubuntu/

# Install the Nvidia CLI tool (nvidia-smi); it may already be present after the driver install
sudo apt install nvidia-smi

# Confirm your card(s) are recognized
nvidia-smi
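
# Optional sketch: query the GPU name, total VRAM, and compute capability to
# cross-check against the recommendations above (the compute_cap field needs a recent driver)
nvidia-smi --query-gpu=name,memory.total,compute_cap --format=csv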

# Reboot
reboot

# Note: If an OS update includes new Nvidia drivers, the system must be rebooted before using Stable Diffusion again

Install stable-diffusion-webui


sudo apt install git python3-pip wget python3 python3-venv -y

# Clone the stable-diffusion-webui repository
cd $HOME
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui

# Optional: localize the python env directory for portability
python3 -m venv $HOME/stable-diffusion-webui/venv/
source $HOME/stable-diffusion-webui/venv/bin/activate # <== Note: needs to be run every new shell session
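
# Optional sketch: confirm the venv is active; the path printed should point
# inside $HOME/stable-diffusion-webui/venv/
which python3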

# Optional: Download models (highly recommended)
cd $HOME/stable-diffusion-webui/models/Stable-diffusion
wget -c https://huggingface.co/ckpt/rev-animated/resolve/main/revAnimated_v11.safetensors
wget -c https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors

# Optional: Download a VAE (highly recommended)
cd $HOME/stable-diffusion-webui/models/VAE
wget -c https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt

# Pre-download upscaler models
cd $HOME/stable-diffusion-webui/models/ESRGAN/
wget -c https://github.com/cszn/KAIR/releases/download/v1.0/ESRGAN.pth
cd $HOME/stable-diffusion-webui/models/RealESRGAN/
wget -c https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
wget -c https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth

# Optional: Download embeds for prompt enhancement (highly recommended)
cd $HOME/stable-diffusion-webui/embeddings
wget -c https://huggingface.co/yesyeahvh/bad-hands-5/resolve/main/bad-hands-5.pt
wget -c https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/main/bad_prompt_version2.pt
wget -c https://huggingface.co/Xynon/models/resolve/main/experimentals/TI/bad-image-v2-39000.pt
wget -c https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/EasyNegative.pt

# Start the webui which will pull in remaining dependencies
# Note: This will take a long time and if things break it'll be here, see: Resolving webui.sh errors
# If successful, it'll stay open with "Running on local URL:  http://127.0.0.1:7860"
bash $HOME/stable-diffusion-webui/webui.sh
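
If you’re on a card near the 8GB minimum, or want to reach the UI from another machine, webui.sh passes extra arguments through to the launcher. The flags below are ones I believe AUTOMATIC1111’s webui accepts at time of writing; treat this as a sketch and check the project’s wiki for the current list.

# Sketch: optional launch flags (verify against the repo's documentation)
# --medvram  lowers VRAM usage on ~8GB cards at some speed cost
# --listen   binds to 0.0.0.0 so other LAN devices can reach the UI
# --port     overrides the default port (7860)
bash $HOME/stable-diffusion-webui/webui.sh --medvram --listen --port 7860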

Resolving webui.sh errors (if any)


# If cloning Taming Transformers fails with:
#   "error: RPC failed; curl 56 GnuTLS recv error (-54): Error in the pull function."
# increase git's HTTP post buffer, then re-run webui.sh
	cd $HOME/stable-diffusion-webui/
	git config --global http.postBuffer 1048576000

# If internet is unstable, loop installation till it completes
	while ! bash $HOME/stable-diffusion-webui/webui.sh; do sleep 1; done

# In another terminal, test if the site is up
curl http://127.0.0.1:7860

GUI Configuration


Open a browser and go to: http://127.0.0.1:7860

Enable the VAE:

  1. On the top bar, click “Settings”
  2. On the side bar, click “Stable Diffusion”
  3. Click the “SD VAE” dropdown and choose: vae-ft-mse-840000-ema-pruned.ckpt
  4. Click “Apply settings”
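
If you want to double-check that the setting stuck, the webui saves its settings to a config file in the repo directory. A quick sketch, assuming the default config.json location and the sd_vae key used at time of writing:

# Sketch: confirm the VAE selection was saved
grep sd_vae $HOME/stable-diffusion-webui/config.json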

Privacy/Security note


The web interface uses CSS from fonts.googleapis.com and a script from cdnjs.cloudflare.com. These online dependencies can be removed by downloading and localizing the references. I’ll write a script to do this automatically if there’s demand.
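
If you want to see exactly where those references live before localizing them, a quick grep over the repo works (a sketch; file locations may shift between versions):

# Sketch: locate the external font/script references in the webui source
grep -rnE "fonts\.googleapis\.com|cdnjs\.cloudflare\.com" $HOME/stable-diffusion-webui/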

Using Stable Diffusion


Renders are stored in: $HOME/stable-diffusion-webui/outputs/
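
If you’re hunting for a specific image, here’s a quick sketch for listing recent renders (the exact subfolder layout may vary by version):

# Sketch: find renders created in the last hour (PNG is the default output format)
find $HOME/stable-diffusion-webui/outputs/ -name '*.png' -mmin -60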

Example Prompt

AI model: revAnimated_v11.safetensors
Sampling: Euler a
CFG Scale: 6
Resolution: 1080x720

Positive Prompt

((best quality)), ((masterpiece)), ((realistic)), (detailed), (cyberpunk), (Ferrari), race cars, blender, intricate background, intricate buildings, ((masterpiece)), HDR

Negative prompt

(semi-realistic:1.4), (worst quality, low quality:1.3), (depth of field, blurry:1.2), (greyscale, monochrome:1.1), nose, cropped, lowres, text, jpeg artifacts, signature, watermark, username, blurry, artist name, trademark, watermark, title, multiple view, Reference sheet

Example renders:


Any idea if this would work in an Ubuntu image run through distrobox with Fedora as the host?

Thanks for the tutorial. Good work!


If you run nvidia-smi in the environment you plan to use and it shows your Nvidia card, the rest of the guide should work.
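
Untested on my end, but something like this should let you check from a Fedora host (the --nvidia flag is what shares the host driver into the container; verify against distrobox’s docs):

# Sketch: create an Ubuntu container with the host's Nvidia driver shared in
distrobox create --name sd-ubuntu --image ubuntu:22.04 --nvidia
distrobox enter sd-ubuntu -- nvidia-smi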

I listed it as an Ubuntu guide (and I’ll be adding Debian) because they have first-class Nvidia support; having burned a day getting things only half working on Fedora, I’d need to deep-dive it more.


Pop!_OS also has Nvidia support out of the box when you use the correct ISO.

Here’s my Stable Diffusion running on my machine, and with the right model I can have it draw a field :smiley: It’s pretty dope stuff, especially for inspiration.


Never mind, just saw that Pop!_OS wasn’t one of the compatible distros.

Installing ComfyUI is super simple. It’s like three steps on Linux.

Now there is also this option for installing quite a few different types of AIs.
