Web UI for GPU-accelerated ONNX pipelines like Stable Diffusion, even on Windows and AMD
This release has a relatively short changelog, so the features and fixes are combined into one list.
Features and fixes:
- the --networks flag was missing (now fixed)
- dist.onnx-files.com mirror

SHA256: e214343714a5562062414a41a318b0d8df756759e3261a8db5b85cf7572cf3ac
podman pull docker.io/ssube/onnx-web-api:v0.12.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.12.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.12.0-rocm-ubuntu
podman pull docker.io/ssube/onnx-web-gui:v0.12.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.12.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.12.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.12.0-node-bullseye
yarn add @apextoaster/[email protected]
pip install onnx-web==0.12.0
Release checklist: https://github.com/ssube/onnx-web/issues/458
Release milestone: https://github.com/ssube/onnx-web/milestone/12
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/55344
using Harrlogos XL with SereneXL and PixelSharpen
Published by ssube 10 months ago
This release adds support for SDXL and SDXL Turbo, allowing you to generate higher-quality images than ever before, or generate tons of images very quickly.
Using SDXL Turbo, images come back almost as fast as you can click Generate.
This release comes with a new documentation and help site: https://www.onnx-web.ai/docs
This is hosted on GitHub Pages alongside the web UI and covers the latest release, although I hope to add a version selector at some point.
If you have any questions that are not answered on the help site, please join the Discord server and ask: https://discord.gg/7CdQmutGuw
Grid mode allows you to generate more than one image in a single run, with parameters that change for each column or row. You can change the CFG, steps, or replace part of the prompt for each image.
See the user guide for more details.
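Conceptually, grid mode is a cross product of the row and column values, with one image generated per combination. A minimal sketch of that expansion (the parameter names and values here are illustrative, not the actual onnx-web implementation):

# Conceptual sketch of grid mode's parameter expansion: one image is
# generated per (row, column) combination. Illustrative only.
from itertools import product

row_values = [5.0, 7.5, 10.0]   # e.g. CFG varying per row
col_values = [20, 30, 40]       # e.g. steps varying per column

for cfg, steps in product(row_values, col_values):
    print(f"generate image with cfg={cfg}, steps={steps}")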
Region prompts allow you to change the prompt for part of a panorama, seamlessly blending multiple concepts across the image with one button click.
See the user guide for more details.
Region prompts and grid mode work great together.

SHA256: df23170d89503b5f5de620707ea3540a34749ef6a69285e4a51a83c488040875
podman pull docker.io/ssube/onnx-web-api:v0.11.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.11.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.11.0-rocm-ubuntu
podman pull docker.io/ssube/onnx-web-gui:v0.11.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.11.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.11.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.11.0-node-bullseye
yarn add @apextoaster/[email protected]
pip install onnx-web==0.11.0
Release checklist: https://github.com/ssube/onnx-web/issues/418
Release milestone: https://github.com/ssube/onnx-web/milestone/11
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/55107
<lora:sdxl-harrlogos:0.9> onnx text logo, onyx gemstone logo, stone background, scattered rocks, gems, mountainside
using DynaVision and Harrlogos XL
Published by ssube about 1 year ago
This release has been a long time coming and accumulated a lot of features along the way, but it needs to be released to make way for SDXL.
Highlights include an ONNX ControlNet pipeline, highres mode, LyCORIS network support, and wildcard prompts using __file/name__ tokens.

An ONNX version of the ControlNet pipeline for SD v1 and v2 has been added. You can use it by selecting the ControlNet pipeline and using the img2img tab normally.
There are additional parameters for the ControlNet model and source image filter, which can be used to pre-process images into a pose or depth map or run edge detection.
The highres mode allows you to create much larger and more detailed images without increasing GPU memory usage, by running an initial txt2img stage followed by upscaling and img2img. The highres stages can be repeated more than once with a low strength, gradually increasing the amount of detail in the image while also upscaling it. This helps correct for any loss of detail that upscaling may introduce and can easily produce 6-8k backgrounds.
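As a rough sketch of that flow using the ONNX pipelines from diffusers (the model path, sizes, and strength here are illustrative, and the plain resize stands in for a real upscaler):

# Minimal highres sketch: one txt2img pass, then repeated upscale + img2img
# passes at low strength to add detail. Illustrative only; onnx-web's actual
# pipeline handles tiling, schedulers, and real upscaling models.
from diffusers import (
    OnnxStableDiffusionImg2ImgPipeline,
    OnnxStableDiffusionPipeline,
)

model = "./models/stable-diffusion-onnx"  # hypothetical converted model path
txt2img = OnnxStableDiffusionPipeline.from_pretrained(model)
img2img = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(model)

prompt = "a detailed mountain landscape, sunrise"
image = txt2img(prompt, width=512, height=512).images[0]

for _ in range(2):
    # plain resize as a stand-in for a real upscaler such as Real ESRGAN
    image = image.resize((image.width * 2, image.height * 2))
    # low strength keeps the composition and only refines detail
    image = img2img(prompt, image=image, strength=0.25).images[0]

image.save("highres.png")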
The <lora:name:weight> tokens now support most LyCORIS networks as well, especially LoCON and LoHA. You should download the networks into the same models/lora folder as other LoRA networks.
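For example, a LoCON network saved as models/lora/locon-detail.safetensors (a hypothetical filename) would be used in the prompt like any other LoRA:

<lora:locon-detail:0.8> a portrait photo, studio lighting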
You can now use wildcard files to add some variety to your prompts. This supports most .txt files and some .yaml wildcards. After extracting any archives, place the wildcard files into the models/wildcard folder and use them by surrounding the filename with two underscores, like __test-wildcards__ (omit the file extension).
Wildcards can be placed into sub-folders and can refer to each other and themselves. Each item will only be used once per prompt, so infinite recursion is not possible. The wildcards are selected based on the seed, so using the same seed will produce the same prompt.
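The expansion amounts to a seeded, recursive search-and-replace. A simplified sketch, not the actual onnx-web implementation:

# Simplified sketch of seeded wildcard expansion: each __name__ token is
# replaced with a line from models/wildcard/name.txt, chosen by an RNG
# seeded from the image seed so the same seed yields the same prompt.
import random
import re
from pathlib import Path

WILDCARD_DIR = Path("models/wildcard")
TOKEN = re.compile(r"__([\w./-]+)__")  # allows sub-folders in the name

def expand_wildcards(prompt: str, seed: int) -> str:
    rng = random.Random(seed)
    pools: dict[str, list[str]] = {}

    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in pools:
            lines = (WILDCARD_DIR / f"{name}.txt").read_text().splitlines()
            pools[name] = [line.strip() for line in lines if line.strip()]
        # pop each chosen item so self-references cannot recurse forever
        return pools[name].pop(rng.randrange(len(pools[name])))

    # replace one token at a time, since results may contain more wildcards
    while TOKEN.search(prompt):
        prompt = TOKEN.sub(replace, prompt, count=1)
    return prompt

print(expand_wildcards("a castle, __test-wildcards__", seed=42))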
There are several wildcard collections available online.
podman pull docker.io/ssube/onnx-web-api:v0.10.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.10.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.10.0-rocm-ubuntu
podman pull docker.io/ssube/onnx-web-gui:v0.10.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.10.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.10.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.10.0-node-bullseye
yarn add @apextoaster/[email protected]
pip install onnx-web==0.10.0
Release checklist: https://github.com/ssube/onnx-web/issues/368
Release milestone: https://github.com/ssube/onnx-web/milestone/10
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/53887
Work has already started on v0.11, featuring support for SDXL and a way to provide additional prompts for XL and highres. Combining highres with XL provides a whole new level of detail.
Published by ssube over 1 year ago
You can now blend additional networks with the diffusion model at runtime, rather than including them during conversion, using <type:name:weight> tokens. I've tried to keep these compatible with the Auto1111 prompt syntax and other Stable Diffusion UIs, but some tokens depend on the filename, all of which is explained in the user guide.

You can still permanently blend the additional models by including them in your extras.json file.
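For example, with a hypothetical models/lora/arcane-style.safetensors, the type, name, and weight all appear in the prompt itself:

<lora:arcane-style:0.6> a neon city street at night, rain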
Using ONNX for inference requires a little more memory than some other runtimes, but ONNX offers some optimizations to help counter that. This release adds broad support for FP16 models, using both the ONNX runtime's optimization tools and PyTorch's native support. This should expand support to 8GB cards and may work on 6GB cards, although 4GB is not quite there yet.
The ONNX optimizations are supported on both AMD and Nvidia, while the PyTorch fp16 mode only works with CUDA on Nvidia.
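For anyone experimenting with fp16 conversion outside of onnx-web, onnxconverter-common exposes one common approach. This is a general sketch, not necessarily the exact path onnx-web takes:

# General fp16 conversion sketch using onnx and onnxconverter-common; not
# necessarily the exact mechanism onnx-web uses internally.
import onnx
from onnxconverter_common import float16

model = onnx.load("unet/model.onnx")  # hypothetical path to a converted model
# keep_io_types leaves the inputs and outputs in fp32, so callers do not
# need to change their own tensors
model_fp16 = float16.convert_float_to_float16(model, keep_io_types=True)
onnx.save(model_fp16, "unet/model.fp16.onnx")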
podman pull docker.io/ssube/onnx-web-api:v0.9.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.9.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.9.0-rocm-ubuntu
podman pull docker.io/ssube/onnx-web-gui:v0.9.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.9.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.9.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.9.0-node-bullseye
yarn add @apextoaster/[email protected]
pip install onnx-web==0.9.0
Release checklist: https://github.com/ssube/onnx-web/issues/261
Release milestone: https://github.com/ssube/onnx-web/milestone/8?closed=1
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/50223
Published by ssube over 1 year ago
This should restore normal functionality to the model cache. The default cache limit is still fairly low, 2 models, and can be raised by setting the ONNX_WEB_CACHE_MODELS environment variable:
# on Linux:
export ONNX_WEB_CACHE_MODELS=5

# on Windows:
set ONNX_WEB_CACHE_MODELS=5
podman pull docker.io/ssube/onnx-web-api:v0.8.1-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.8.1-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.8.1-rocm-ubuntu
podman pull docker.io/ssube/onnx-web-gui:v0.8.1-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.8.1-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.8.1-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.8.1-node-bullseye
yarn add @apextoaster/[email protected]
pip install onnx-web==0.8.1
Release checklist: https://github.com/ssube/onnx-web/issues/240
Release milestone: https://github.com/ssube/onnx-web/milestone/9?closed=1
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/49388
Published by ssube over 1 year ago
This is the largest release yet, on both the client and server:
The device worker pool, which manages the background workers used to generate images, has been completely rewritten to help manage some fairly severe memory leaks in the ONNX runtime. Each worker should keep its own cache of models that have been uploaded to VRAM, and workers will be recycled after 10 jobs or when they encounter a memory allocation error.
This makes the model cache less effective, which I hope to fix in a future patch, but the previous method was consistently running out of memory after 95-100 images. This one has been tested past 1000 images.
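The recycle-after-N-jobs pattern itself can be sketched with the standard library's process pool, where maxtasksperchild replaces each worker process after a fixed number of jobs. This is only an illustration; onnx-web uses its own device worker pool:

# Illustration of recycling workers after a fixed number of jobs using the
# stdlib pool; exiting the process releases any memory it leaked.
from multiprocessing import Pool

def generate(job: int) -> str:
    # stand-in for an image job that loads models and allocates VRAM
    return f"job {job} done"

if __name__ == "__main__":
    # each worker process is replaced after 10 jobs
    with Pool(processes=2, maxtasksperchild=10) as pool:
        for result in pool.imap(generate, range(100)):
            print(result)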
The client now supports localization, using the excellent i18next project, and should detect your browser's locale. There are initial machine translations into French, German, and Spanish. You can set the translation for custom models and Inversions in your extras file.
This release also completes ONNX acceleration for the Real ESRGAN family of models and adds some missing parameters to the diffusion pipelines, including image batch size and DDIM eta. Since memory consumption is somewhat higher with ONNX, 3-4 images seems to be the maximum batch size for most commonly available cards.
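Both parameters map onto standard diffusers arguments. Calling an ONNX pipeline directly, it might look like this (the path and values are illustrative):

# Image batch size and DDIM eta as plain diffusers arguments; illustrative.
from diffusers import OnnxStableDiffusionPipeline

pipe = OnnxStableDiffusionPipeline.from_pretrained("./models/stable-diffusion-onnx")
images = pipe(
    "a lighthouse at dawn",
    num_images_per_prompt=3,  # image batch size; 3-4 is a practical ceiling
    eta=0.0,                  # DDIM eta, only used by the DDIM scheduler
).images
for i, image in enumerate(images):
    image.save(f"batch-{i}.png")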
podman pull docker.io/ssube/onnx-web-api:v0.8.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.8.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.8.0-rocm-ubuntu
podman pull docker.io/ssube/onnx-web-gui:v0.8.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.8.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.8.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.8.0-node-bullseye
yarn add @apextoaster/[email protected]
pip install onnx-web==0.8.0
Release checklist: https://github.com/ssube/onnx-web/issues/217
Release milestone: https://github.com/ssube/onnx-web/milestone/7?closed=1
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/49361
Published by ssube over 1 year ago
Launching this version will download some new files into your models directory and create a .cache directory within it for downloads and temporary files. Please make sure you have enough disk space.
You should delete the old cache files from the models directory first: any .pth files and intermediate Torch directories. This should prevent temporary files from appearing in the client menus and help ensure all of the models are downloaded and converted before the server starts. Models you have already downloaded from the Huggingface hub will be loaded from their cache, which is shared with the diffusers library and other tools.
There are now two sets of launch scripts: launch.bat and launch.sh will only convert the base models, for users with limited disk space, while launch-extras.bat and launch-extras.sh will convert both the base models and the extras. SD v2.1 may be moved into the extras file in the future, since it is one of the larger models.
podman pull docker.io/ssube/onnx-web-api:v0.7.1-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.7.1-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.7.1-rocm-ubuntu
podman pull docker.io/ssube/onnx-web-gui:v0.7.1-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.7.1-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.7.1-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.7.1-node-bullseye
yarn add @apextoaster/[email protected]
pip install onnx-web==0.7.1
Release checklist: https://github.com/ssube/onnx-web/issues/143
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/48482
Published by ssube over 1 year ago
Features:
This release removes the deprecated vendor platforms (AMD and Nvidia) in favor of the more accurate provider names (CUDA, DirectML, and ROCm). Hardware acceleration is still available for those platforms. The client should only show platforms that are available on the current server.
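You can check which providers your local onnxruntime build offers. The strings below are the actual execution provider identifiers behind the CUDA, DirectML, and ROCm labels:

# List the execution providers available in the installed onnxruntime build;
# the server can only offer platforms that appear here.
import onnxruntime as ort

print(ort.get_available_providers())
# e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] with onnxruntime-gpu,
# or ['DmlExecutionProvider', 'CPUExecutionProvider'] with onnxruntime-directml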
Artifacts:
podman pull docker.io/ssube/onnx-web-api:v0.6.1-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.6.1-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.6.1-rocm-ubuntu
podman pull docker.io/ssube/onnx-web-gui:v0.6.1-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.6.1-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.6.1-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.6.1-node-bullseye
yarn add @apextoaster/[email protected]
pip install onnx-web==0.6.1
Release checklist: https://github.com/ssube/onnx-web/issues/105
Published by ssube over 1 year ago
Features:
Artifacts:
podman pull docker.io/ssube/onnx-web-api:v0.5.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.5.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.5.0-rocm-ubuntu
podman pull docker.io/ssube/onnx-web-gui:v0.5.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.5.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.5.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.5.0-node-bullseye
yarn add @apextoaster/[email protected]
pip install onnx-web==0.5.0
Published by ssube almost 2 years ago
Features:
Artifacts:
podman pull docker.io/ssube/onnx-web-api:v0.4.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.4.0-cuda-buster
podman pull docker.io/ssube/onnx-web-gui:v0.4.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.4.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.4.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.4.0-node-bullseye
yarn add @apextoaster/[email protected]
pip install onnx-web==0.4.0
Published by ssube almost 2 years ago
Second release with img2img, Nvidia support, and negative prompts.
podman pull docker.io/ssube/onnx-web-api:v0.2.1-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.2.1-cuda-buster
docker pull ssube/onnx-web-api:v0.2.1-cpu-buster
docker pull ssube/onnx-web-api:v0.2.1-cuda-buster
podman pull docker.io/ssube/onnx-web-gui:v0.2.1-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.2.1-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.2.1-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.2.1-node-bullseye
docker pull ssube/onnx-web-gui:v0.2.1-nginx-alpine
docker pull ssube/onnx-web-gui:v0.2.1-nginx-bullseye
docker pull ssube/onnx-web-gui:v0.2.1-node-alpine
docker pull ssube/onnx-web-gui:v0.2.1-node-bullseye
yarn add @apextoaster/[email protected]
npm install @apextoaster/[email protected]
Published by ssube almost 2 years ago
First release with basic txt2img functionality and OCI containers.
podman pull docker.io/ssube/onnx-web-api:v0.1.0-buster
docker pull ssube/onnx-web-api:v0.1.0-buster
podman pull docker.io/ssube/onnx-web-gui:v0.1.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.1.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.1.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.1.0-node-bullseye
docker pull ssube/onnx-web-gui:v0.1.0-nginx-alpine
docker pull ssube/onnx-web-gui:v0.1.0-nginx-bullseye
docker pull ssube/onnx-web-gui:v0.1.0-node-alpine
docker pull ssube/onnx-web-gui:v0.1.0-node-bullseye
yarn add @apextoaster/[email protected]
npm install @apextoaster/[email protected]