
# Stable Diffusion on Kubernetes with Helm

Run Stable Diffusion with companion models on a GPU-enabled Kubernetes cluster, complete with a WebUI and automatic model fetching, for a two-step install that takes less than two minutes (excluding download times).

Uses the `nvidia/cuda` image as a base.

*Screenshot of the Stable Diffusion UI*

## Features

- Automatic Model Fetching
- Works with the `gpu-operator`, bundling CUDA libraries
- Interactive UI with many features, and more on the way!
- GFPGAN for face restoration, Real-ESRGAN for upscaling
- Textual Inversion
- Many more!

## Prerequisites

- Kubernetes cluster with GPUs attached to at least one node, and NVIDIA's `gpu-operator` set up successfully
- `helm` installed locally

## Setup

- Add the Helm repo with `helm repo add amithkk-sd https://amithkk.github.io/stable-diffusion-k8s`
- Fetch the latest charts with `helm repo update`
- (Optional) Create your own `values.yaml` with customized settings (see the sketch after this list)
  - Things you might want to change include `nodeAffinity`, `cliArgs` (see the Config section below), and the ingress settings (which let you reach the UI externally without needing `kubectl port-forward`)
- Install with `helm install --generate-name amithkk-sd/stable-diffusion -f <your-values.yaml>`
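
As a starting point, a custom `values.yaml` might look like the sketch below. Only the `cliArgs` key is documented in this README; the exact shape of the `nodeAffinity` and `ingress` blocks shown here is an assumption based on common Helm chart conventions, so verify against the chart's defaults with `helm show values amithkk-sd/stable-diffusion` before relying on them.

```yaml
# values.yaml -- illustrative sketch only. The nodeAffinity and ingress
# structures below follow common chart conventions and may not match this
# chart exactly; check `helm show values amithkk-sd/stable-diffusion`.

# Hypothetical example: schedule the pod onto nodes that actually have a GPU.
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions:
          - key: nvidia.com/gpu.present
            operator: In
            values: ["true"]

# Hypothetical example: expose the WebUI through an Ingress instead of
# relying on `kubectl port-forward`.
ingress:
  enabled: true
  hosts:
    - host: stable-diffusion.example.com
      paths:
        - path: /
          pathType: Prefix

# Arguments passed to the WebUI -- see the Config section below.
cliArgs: "--extra-models-cpu --optimized-turbo"
```

Pass this file to `helm install` with `-f`, as in the last step above.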

Wait for the containers to come up, then follow the instructions returned by Helm to connect. This may take a while, since a ~5 GiB Docker image and ~5 GiB of models have to be downloaded first.

## Config

By extending your `values.yaml` you can change the `cliArgs` key, which contains the arguments passed to the WebUI. By default, `--extra-models-cpu --optimized-turbo` are given, which allow the model to run on a 6 GB GPU; however, some features might not be available in this mode.

You can find the full list of arguments here.
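
For example, on a GPU with more than 6 GB of memory you might drop the `--optimized-turbo` flag. This is a sketch that assumes `cliArgs` is a single string of flags (as the default above suggests); if the chart expects a list, adapt accordingly.

```yaml
# values.yaml override -- assumes cliArgs is one string of WebUI flags.
# Default: "--extra-models-cpu --optimized-turbo" (suitable for 6 GB GPUs).
# On a larger GPU you could drop --optimized-turbo to re-enable features
# that are unavailable in the memory-optimized mode.
cliArgs: "--extra-models-cpu"
```

Apply the change to an existing release with `helm upgrade <release-name> amithkk-sd/stable-diffusion -f values.yaml`.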

## FAQ

## Disclaimer

The author(s) of this project are not responsible for any content generated using this interface.

## Thanks

Special thanks to everyone behind these awesome projects; without them, none of this would have been possible: