A Technical Analysis of a ComfyUI Dockerfile



# Use NVIDIA CUDA base image with Alpine Linux
FROM nvidia/cuda:12.2.0-devel-alpine3.18

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100

# Install system dependencies
RUN apk add --no-cache \
    python3 \
    py3-pip \
    git \
    build-base \
    python3-dev \
    openblas-dev \
    ffmpeg \
    libsndfile

# Upgrade pip and install PyTorch with CUDA support
RUN pip3 install --upgrade pip && \
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124

# Clone ComfyUI repository
WORKDIR /app
RUN git clone https://github.com/comfyanonymous/ComfyUI .

# Install Python dependencies (version specifiers are quoted so the shell
# does not interpret ">=" as an output redirection)
RUN pip3 install \
    torchsde \
    einops \
    "transformers>=4.28.1" \
    "tokenizers>=0.13.3" \
    sentencepiece \
    "safetensors>=0.4.2" \
    aiohttp \
    pyyaml \
    Pillow \
    scipy \
    tqdm \
    psutil \
    "kornia>=0.7.1" \
    spandrel \
    soundfile

# Create directories for models
RUN mkdir -p models/checkpoints models/vae

# Set the working directory
WORKDIR /app

# Expose the default port
EXPOSE 8188

# Run ComfyUI
CMD ["python3", "main.py", "--listen"]

This article examines the structure and components of a Dockerfile designed for ComfyUI, a node-based graphical interface for Stable Diffusion. Let’s analyze each section alongside its corresponding code.

Base Image and Environment Setup


FROM nvidia/cuda:12.2.0-devel-alpine3.18

ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100

The Dockerfile builds on NVIDIA’s CUDA development image. One caveat up front: NVIDIA publishes official CUDA images only for glibc-based distributions (Ubuntu, UBI, Rocky Linux), so an Alpine-tagged image like the one above is not in the official catalog and may need to be swapped for an Ubuntu-based tag in practice. The environment variables configure Python and pip behavior inside the container: PYTHONUNBUFFERED forces unbuffered output so logs appear immediately in docker logs, and PYTHONDONTWRITEBYTECODE suppresses .pyc files. PIP_NO_CACHE_DIR=off is parsed as boolean false by recent pip versions, which leaves the pip cache enabled; in container builds it is more common to set it to 1 so caching is disabled and layers stay smaller.
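If the goal is a minimal image, a common variant of this block disables pip’s cache outright. This is a sketch, assuming pip >= 19 (where the environment variable is treated as a proper boolean):

```dockerfile
# Sketch: conventional values for container builds; PIP_NO_CACHE_DIR=1
# disables pip's wheel/HTTP cache so it never lands in an image layer
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100
```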

System Dependencies


RUN apk add --no-cache \
    python3 \
    py3-pip \
    git \
    build-base \
    python3-dev \
    openblas-dev \
    ffmpeg \
    libsndfile

This section installs the necessary system packages through Alpine’s apk package manager: Python and pip, git for cloning the repository, build-base and python3-dev so that pip can compile C extensions, openblas-dev for numeric libraries, and ffmpeg and libsndfile for audio and video support. The --no-cache flag skips storing the local package index, keeping the image smaller.
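A related Alpine idiom, sketched below, groups build-only packages under a virtual name so they can be uninstalled in the same layer once compilation is done (the package name some-heavy-package is a placeholder, not part of this Dockerfile):

```dockerfile
# Sketch: install compilers only for the duration of the pip build,
# then remove them in the same RUN so the layer stays small
RUN apk add --no-cache --virtual .build-deps build-base python3-dev \
    && pip3 install some-heavy-package \
    && apk del .build-deps
```

The trade-off is that any later pip install needing a compiler must reinstall the toolchain, which is why this Dockerfile keeps build-base permanently.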

PyTorch Installation


RUN pip3 install --upgrade pip && \
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124

PyTorch, torchvision, and torchaudio are installed from PyTorch’s cu124 wheel index (hosted by the PyTorch project, not NVIDIA). Two caveats apply: the cu124 wheels bundle the CUDA 12.4 runtime, which does not match the 12.2 toolkit in the base image tag, so the host driver must support CUDA 12.4 and aligning the two versions avoids confusion; and the prebuilt wheels are manylinux builds linked against glibc, so they will not install on a musl-based Alpine system.
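Because no GPU is visible during docker build, CUDA availability can only be confirmed at run time; a build-time import check, sketched here, at least catches broken wheels or ABI mismatches before the image ships:

```dockerfile
# Sketch: fail the build early if the installed wheels cannot be imported
# (torch.cuda.is_available() must be checked at run time, with --gpus all)
RUN python3 -c "import torch, torchvision, torchaudio; print(torch.__version__)"
```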

Application Setup and Dependencies


WORKDIR /app

RUN git clone https://github.com/comfyanonymous/ComfyUI .

RUN pip3 install \
    torchsde \
    einops \
    "transformers>=4.28.1" \
    "tokenizers>=0.13.3" \
    sentencepiece \
    "safetensors>=0.4.2" \
    aiohttp \
    pyyaml \
    Pillow \
    scipy \
    tqdm \
    psutil \
    "kornia>=0.7.1" \
    spandrel \
    soundfile

The code clones ComfyUI into the /app directory and installs the required Python packages. Version constraints are specified for certain packages to ensure compatibility; note that a specifier such as transformers>=4.28.1 must be quoted inside a RUN instruction, because the shell would otherwise interpret >= as an output redirection rather than part of the package name.
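Since the repository is already cloned at this point, an alternative is to install from the requirements.txt file that ComfyUI itself ships, so the dependency list tracks upstream instead of being duplicated in the Dockerfile. A sketch:

```dockerfile
# Alternative sketch: use the dependency list shipped with the repository
RUN pip3 install -r requirements.txt
```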

Directory Structure and Port Configuration


RUN mkdir -p models/checkpoints models/vae

WORKDIR /app

EXPOSE 8188

CMD ["python3", "main.py", "--listen"]

This section creates the directories ComfyUI expects for model storage, sets the working directory (redundantly, since /app is already active from the earlier WORKDIR), exposes port 8188 for web interface access, and defines the container’s default command. The --listen flag makes the server bind to 0.0.0.0 instead of localhost, so the UI is reachable through the published port.
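One optional addition at this point is a container health check against the web port. The sketch below assumes curl is added to the apk install list earlier in the Dockerfile (it is not there in the original):

```dockerfile
# Sketch: report the container unhealthy if the UI stops responding on 8188
HEALTHCHECK --interval=30s --timeout=5s --start-period=60s \
    CMD curl -sf http://localhost:8188/ || exit 1
```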

Deployment Instructions

The accompanying README.md provides deployment guidance:


To use this Dockerfile:

1. Save the Dockerfile in your project directory.

2. Build the Docker image:

docker build -t comfyui .

3. Run the container:

docker run -it --gpus all -p 8188:8188 -v /path/to/your/models:/app/models comfyui

Implementation Notes:

  • The container requires NVIDIA GPU support on the host system

  • Model persistence is achieved through volume mounting

  • The web interface is accessible on port 8188

  • An Alpine Linux base generally provides a smaller footprint than Ubuntu-based alternatives, though official CUDA base images and prebuilt PyTorch wheels target glibc distributions rather than Alpine’s musl
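The docker run invocation above can also be captured in a Compose file, which keeps the port mapping, volume mount, and GPU reservation in version control. This is a sketch; the service name and model path are illustrative:

```yaml
# Sketch: docker-compose.yml equivalent of the run command above
services:
  comfyui:
    build: .
    ports:
      - "8188:8188"
    volumes:
      - /path/to/your/models:/app/models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

With this file in place, `docker compose up --build` replaces the separate build and run steps.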

The Dockerfile creates a contained environment for running ComfyUI with GPU support, balancing functionality with resource efficiency. When deploying in production, users should consider the security implications of exposing port 8188 (the server ships without authentication) and mount volumes for model storage so that downloaded models survive container restarts.