ENFUGUE is an open-source web app for making studio-grade images and video using generative AI.
GPL-3.0 License
A script is provided for Windows and Linux machines to install, update, and run ENFUGUE. Copy the relevant command below and answer the on-screen prompts to choose your installation type and install optional dependencies.
Access the command prompt from the Start menu by searching for "command." Alternatively, hold the Windows key and press `X`, then press `R` or click Run, then type `cmd` and press Enter or click OK.
curl https://raw.githubusercontent.com/painebenjamin/app.enfugue.ai/main/enfugue.bat -o enfugue.bat
.\enfugue.bat
Access a command shell using your preferred method and execute the following.
curl https://raw.githubusercontent.com/painebenjamin/app.enfugue.ai/main/enfugue.sh -o enfugue.sh
chmod u+x enfugue.sh
./enfugue.sh
Both of these commands accept the same flags.
USAGE: enfugue.(bat|sh) [OPTIONS]
Options:
--help Display this help message.
--conda / --portable Automatically set installation type (do not prompt.)
--update / --no-update Automatically apply or skip updates (do not prompt.)
--mmpose / --no-mmpose Automatically install or skip installing MMPose (do not prompt.)
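Combining these flags gives a fully non-interactive install (for example, `./enfugue.sh --conda --update --no-mmpose`). The parsing pattern behind such paired `--flag` / `--no-flag` options can be sketched as follows; this is a hypothetical illustration, not the actual `enfugue.sh` source:

```shell
# Hypothetical sketch of the --flag / --no-flag parsing pattern used by
# installer scripts like enfugue.sh (NOT the actual script source).
INSTALL_TYPE="" UPDATE="" MMPOSE=""
parse_flags() {
    for arg in "$@"; do
        case "$arg" in
            --conda)     INSTALL_TYPE=conda ;;
            --portable)  INSTALL_TYPE=portable ;;
            --update)    UPDATE=1 ;;
            --no-update) UPDATE=0 ;;
            --mmpose)    MMPOSE=1 ;;
            --no-mmpose) MMPOSE=0 ;;
            --help)      echo "USAGE: enfugue.(bat|sh) [OPTIONS]" ;;
            *)           echo "Unknown option: $arg" >&2; return 1 ;;
        esac
    done
}
# A fully non-interactive run: conda install, apply updates, skip MMPose.
parse_flags --conda --update --no-mmpose
echo "type=$INSTALL_TYPE update=$UPDATE mmpose=$MMPOSE"
```

Each option in a pair simply overwrites the same variable, so the last flag given wins and any unset variable falls back to an interactive prompt.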
If you want to install without using the installation scripts, see this Wiki page.
Automatic installers are coming! For now, please follow this manual installation method.
Download enfugue-server-0.3.3-macos-ventura-mps-x86_64.tar.gz, then double-click it to extract the package. When you run the application using the command below, your Mac will warn you of running downloaded packages, and you will have to perform an administrator override to allow it to run - you will be prompted to do this. To avoid this, you can run an included command like so:
./enfugue-server/unquarantine.sh
This command finds all the files in the installation and removes the `com.apple.quarantine` xattr from each one. This does not require administrator privileges. After doing this (or if you would rather grant the override), run the server with:
./enfugue-server/enfugue.sh
Note: while the MacOS packages are compiled on x86 machines, they are tested and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine code translation system.
DragNUWA is an exciting new way to control Stable Video Diffusion released by ProjectNUWA and Microsoft. It allows you to draw the direction and speed of motion over the course of an animation. An entirely new motion vector interface has been created to allow for easy input into this complicated system.
Review the video below for information on how to use DragNUWA, including controls for creating and modifying motions.
To go along with the above, Stable Video Diffusion has been removed from the "Extras" menu and added to the main sidebar. When you enable animation, you will now be able to select between SVD and AnimateDiff/HotshotXL.
At the moment, SVD is treated as a post-processing step. Because there is no text-to-video yet for SVD, it will be treated as if you are making an image, and the image-to-video portion will be executed afterwards.
To better facilitate sharing between ENFUGUE and other Stable Diffusion web applications, a small handful of changes have been made.
- If pointed at the `models` folder in a `stable-diffusion-webui` installation, the `checkpoint` directory will configure itself to be the same as that application's `Stable-diffusion` directory.
- NOTE: as a result of the new structure, all of the files in the `/cache` folder that begin with `models--` may be deleted.
To help users running ENFUGUE on shared or on-demand server resources, networking has been improved for situations where you must communicate with ENFUGUE through a proxy. You should no longer need to explicitly configure a domain or paths for such situations: ENFUGUE can determine from the headers of the request that you are using a proxy, and the UI will adjust paths accordingly, both with and without SSL.
If you previously configured `server.domain` or `server.cms.path.root` manually, you can set `server.domain` to `null` and remove `server.cms.path.root` to enable flexible domain routing. You should find that simply accessing the reported proxy URL works with no further configuration needed.
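For example, a reverse proxy only needs to pass the standard forwarding headers for this detection to work. A minimal, hypothetical nginx block (hostname, port, and certificate paths are placeholders; adjust to your setup) might look like:

```nginx
# Hypothetical reverse-proxy sketch; names, ports, and TLS are placeholders.
server {
    listen 443 ssl;
    server_name enfugue.example.com;
    ssl_certificate     /etc/ssl/example/fullchain.pem;
    ssl_certificate_key /etc/ssl/example/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:45555;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```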
Full Changelog: https://github.com/painebenjamin/app.enfugue.ai/compare/0.3.2...0.3.3
These are all repeats from the previous release - DragNUWA was a surprise and diverted attention!
The Prompt Travel interface will be expanded to allow images and video to be manually placed on it.
Audio will additionally be added to the timeline, and will be an input for audio-reactive diffusion.
Support Stable123 and other 3D model generators.
Published by painebenjamin 10 months ago
A script is provided for Windows and Linux machines to install, update, and run ENFUGUE. Copy the relevant command below and answer the on-screen prompts to choose your installation type and install optional dependencies.
Access the command prompt from the Start menu by searching for "command." Alternatively, hold the Windows key and press `X`, then press `R` or click Run, then type `cmd` and press Enter or click OK.
curl https://raw.githubusercontent.com/painebenjamin/app.enfugue.ai/main/enfugue.bat -o enfugue.bat
.\enfugue.bat
Access a command shell using your preferred method and execute the following.
curl https://raw.githubusercontent.com/painebenjamin/app.enfugue.ai/main/enfugue.sh -o enfugue.sh
chmod u+x enfugue.sh
./enfugue.sh
Both of these commands accept the same flags.
USAGE: enfugue.(bat|sh) [OPTIONS]
Options:
--help Display this help message.
--conda / --portable Automatically set installation type (do not prompt.)
--update / --no-update Automatically apply or skip updates (do not prompt.)
--mmpose / --no-mmpose Automatically install or skip installing MMPose (do not prompt.)
If you want to install without using the installation scripts, see this Wiki page.
Automatic installers are coming! For now, please follow this manual installation method.
Download enfugue-server-0.3.2-macos-ventura-mps-x86_64.tar.gz, then double-click it to extract the package. When you run the application using the command below, your Mac will warn you of running downloaded packages, and you will have to perform an administrator override to allow it to run - you will be prompted to do this. To avoid this, you can run an included command like so:
./enfugue-server/unquarantine.sh
This command finds all the files in the installation and removes the `com.apple.quarantine` xattr from each one. This does not require administrator privileges. After doing this (or if you would rather grant the override), run the server with:
./enfugue-server/enfugue.sh
Note: while the MacOS packages are compiled on x86 machines, they are tested and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine code translation system.
If `https://app.enfugue.ai:45554` does not work for your networking setup, you can now connect to ENFUGUE using `http://127.0.0.1:45555` or any other IP address/hostname that resolves to the machine running ENFUGUE. The `host`, `domain`, `port`, `secure`, `cert`, and `key` configuration keys can now accept lists/arrays.

https://github.com/painebenjamin/app.enfugue.ai/assets/57536852/78ba6bd8-af48-453c-b6ab-115ac3145cd4
https://github.com/painebenjamin/app.enfugue.ai/assets/57536852/5d26fd0b-656c-4852-b87b-cfb6861e1bae
https://github.com/painebenjamin/app.enfugue.ai/assets/57536852/004d53ee-5c98-4947-97ae-bef8565f66be
The Prompt Travel interface will be expanded to allow images and video to be manually placed on it.
Audio will additionally be added to the timeline, and will be an input for audio-reactive diffusion.
Support Stable123 and other 3D model generators.
Published by painebenjamin 11 months ago
To help ease the difficulties of downloading, installing, and updating ENFUGUE, a new installation and execution method has been developed. This script is a one-and-done shell script that will prompt you for any options you need to set. Installation is as follows:
curl https://raw.githubusercontent.com/painebenjamin/app.enfugue.ai/main/enfugue.sh -o enfugue.sh
chmod u+x enfugue.sh
./enfugue.sh
You will be prompted when a new version of ENFUGUE is available, and it will be automatically downloaded for you. Execute `enfugue.sh -h` to see command-line options. Open the file with a text editor to view configuration options and additional instructions.
Latent Consistency Models are a method for performing inference in only a small handful of steps, with minimal reduction in quality.
To use LCM in Enfugue, take the following steps:
- Set the guidance scale between `1.1` and `1.4` - `1.2` is a good start.
- Set the number of inference steps between `3` and `8` - `4` is a good start.

You may find LCM does not do well with fine structures like faces and hands. To help address this, you can either upscale as I have here, or use the next new feature.
Enfugue now has a version of Automatic1111's ADetailer (After Detailer.) This allows you to configure a detailing pass after each image generation that can:
This works very well when combined with LCM, which can perform the inpainting and final denoising passes in a single step, offsetting the difficulty that LCM sometimes has with these subjects.
Enfugue now has themes. These are always available from the menu.
Select from the original enfugue theme, five different colored themes, two monochrome themes, and the ability to set your own custom theme.
An opacity slider has been added to the layer options menu. When used, this will make the image or video partially transparent in the UI. In addition, if the image is in the visible input layer, it will be made transparent when merged there, as well.
To make it more clear what images are and are not visible to Stable Diffusion, the "Denoising" image role has been replaced with a "Visibility" dropdown. This has three options:
To help illustrate these options and how inpainting/outpainting work, consider the following examples.
To help bridge the gap when it comes to external service integrations, there is now a generic "Download Models" menu in Enfugue. This will allow you to enter a URL to a model hosted anywhere on the internet, and have Enfugue download it to the right location for that model type.
When using any field that allows selecting from different AI models, there is now a magnifying glass icon. When clicked, this will present you with a window containing the CivitAI metadata for that model.
This does not require the metadata be saved prior to viewing. If the model does not exist in CivitAI's database, no metadata will be available.
Next to the scheduler selector is a small gear icon. When clicked, this will present you with a window allowing for advanced scheduler configuration.
These values should not need to be tweaked in general. However, some new animation modules are trained using different values for these configurations, so they have been exposed to allow using these models effectively in Enfugue.
Full Changelog: https://github.com/painebenjamin/app.enfugue.ai/compare/0.3.0...0.3.1
If you're on Linux, it's recommended to use the new automated installer. See the top of this document for those instructions. For Windows users or anyone not using the automated installer, read below.
First decide how you'd like to install, either a portable distribution, or through conda.
Platform | Graphics API | File(s) | CUDA Version | Torch Version |
---|---|---|---|---|
Windows | CUDA | enfugue-server-0.3.1-win-cuda-x86_64.zip.001enfugue-server-0.3.1-win-cuda-x86_64.zip.002 | 11.8.0 | 2.1.0 |
Linux | CUDA | enfugue-server-0.3.1-manylinux-cuda-x86_64.tar.gz.0enfugue-server-0.3.1-manylinux-cuda-x86_64.tar.gz.1enfugue-server-0.3.1-manylinux-cuda-x86_64.tar.gz.2 | 11.8.0 | 2.1.0 |
Download the three files above that make up the entire archive, then extract them. To extract these files, you must concatenate them; rather than taking up extra space in your file system, you can simply stream them together into `tar`. A console command to do that is:
cat enfugue-server-0.3.1* | tar -xvz
You are now ready to run the server with:
./enfugue-server/enfugue.sh
Press `Ctrl+C` to exit.
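The concatenate-and-extract technique above can be sanity-checked with a throwaway archive (illustrative filenames only, standing in for the multi-part release files):

```shell
# Verify that streaming split parts into tar recovers the original archive.
set -e
rm -rf split-demo && mkdir -p split-demo/pkg
echo "hello" > split-demo/pkg/file.txt
tar -czf split-demo/whole.tar.gz -C split-demo pkg
# Split the archive into small parts, like the .0/.1/.2 release files.
split -b 64 split-demo/whole.tar.gz split-demo/whole.tar.gz.
rm split-demo/whole.tar.gz
mkdir split-demo/out
# Stream the parts back together and extract in one pass, no temp file needed.
cat split-demo/whole.tar.gz.* | tar -xz -C split-demo/out
cat split-demo/out/pkg/file.txt
```

This works because `split` names the parts in lexicographic order, so the shell glob feeds them to `cat` in the correct sequence.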
Download the `win64` files here, and extract them using a program that can extract from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Locate the file `enfugue-server.exe` and double-click it to run it. To exit, locate the icon in the bottom-right corner of your screen (the system tray), right-click it, then select Quit.
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
- Choose the file beginning with `windows-`, `linux-`, or `macos-` based on your platform.
- For Nvidia GPUs, choose `cuda`.
- Additional graphics APIs (`rocm` and `directml`) are being added and will be made available as they are developed. Please voice your desire for these to prioritize their development.

Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
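For reference, environment files of this kind generally take the following shape (a hypothetical minimal sketch, not ENFUGUE's actual environment file; names and versions are placeholders):

```yaml
# Hypothetical minimal Conda environment file, for illustration only.
name: enfugue
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      - enfugue
```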
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
conda activate enfugue
python -m enfugue run
To install DW Pose support (a better, faster pose and face detection model), after installing Enfugue, execute the following (MacOS, Linux or Windows):
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
To install dependencies for GPU-accelerated frame interpolation, execute the following command (Linux, Windows):
pip install tensorflow[and-cuda] --ignore-installed
If you would like to manage dependencies yourself, or want to install ENFUGUE into an environment shared with another Stable Diffusion UI, you can install it via `pip`. This is currently the only method available for AMD GPUs.
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.
Published by painebenjamin 12 months ago
ENFUGUE now supports animation. A huge array of changes have been made to accommodate this, including new backend pipelines, new downloadable model support, new interface elements, and a rethought execution planner.
Most importantly, all features available for images work for animation as well. This includes IP adapters, ControlNets, your custom models, LoRA, inversion, and anything else you can think of.
The premiere animation toolkit for Stable Diffusion is AnimateDiff for Stable Diffusion 1.5. When using any Stable Diffusion 1.5 model and enabling animation, AnimateDiff is loaded in the backend.
Motion modules are AI models that are injected into the Stable Diffusion UNet to control how that model interprets motion over time. When using AnimateDiff, you will by default use `mm_sd_15_v2.ckpt`, the latest base checkpoint. However, fine-tuned checkpoints are already available from the community, and these are supported in pre-configured models and on-the-fly configuration.
In addition, these are downloadable through the CivitAI download browser.
Motion LoRA are additional models available to steer AnimateDiff; they were trained on specific camera motions and can replicate those motions when used.
These are always available in the UI; select them from the LoRA menu and they will be downloaded as needed.
HotshotXL is a recently released animation toolkit for Stable Diffusion XL. When you use any Stable Diffusion XL model and enable animation, Hotshot will be loaded in the backend.
AnimateDiff and Hotshot XL both have limitations on how long they can animate for before losing coherence. To mitigate this, we can only ever attempt to animate a certain number of frames at a time, and blend these frame windows into one another to produce longer coherent motions. Use the Frame Window Size parameter to determine how many frames are used at once, and Frame Window Stride to indicate how many frames to step for the next window.
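The sliding-window scheme can be sketched with a bit of shell arithmetic. This is a simplified illustration of how size and stride carve an animation into overlapping windows; ENFUGUE's actual blending logic (including how it handles the final partial window) may differ:

```shell
# Illustrative only: list the frame windows produced by a given
# Frame Window Size and Frame Window Stride over a 64-frame animation.
total=64; size=16; stride=8
start=0
while [ $((start + size)) -le $total ]; do
    echo "window: frames $start-$((start + size - 1))"
    start=$((start + stride))
done
```

With these numbers, each window overlaps the next by half its length, which is what lets adjacent windows blend into a longer coherent motion.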
Both HotshotXL and AnimateDiff use 24-frame position encoding. If we cut that encoding short and interpolate the sliced encoding to a new length, we can effectively "slow down" motion. This is an experimental feature.
The application of motion during the inference process is a distinct step, and as a result of this we can apply a multiplier to how much effect that has on the final output. Using a small bit of math, we can determine at runtime the difference between the trained dimensions of the motion module and the current dimensions of your image, and use that to scale the motion. Enabling this in the UI also gives you access to a motion modifier which you can use to broadly control the "amount" of motion in a resulting video.
Instead of merely offering one prompt during animation, we can interpolate between multiple prompts to change what is being animated at any given moment. Blend action words into one another to steer motion, or use entirely different prompts for morphing effects.
Both HotshotXL and AnimateDiff were trained on 8-frames-per-second animations. In order to get higher framerates, we must create frames in between the AI-generated frames to smooth the motion out. Simply add a multiplication factor to create in-between frames - for example, a factor of `2` will double the total frame count (less one) by adding one frame between every pair of adjacent frames. Adding another factor will interpolate on the already-interpolated images, so a second factor of `2` will double the count again (less one).
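The frame arithmetic can be checked directly: each factor f maps n frames to f*(n-1)+1, since f-1 new frames are inserted between every adjacent pair:

```shell
# Frame counts after successive interpolation factors, starting from
# 16 AI-generated frames: each factor f maps n frames to f*(n-1)+1.
frames=16
for factor in 2 2; do
    frames=$(( factor * (frames - 1) + 1 ))
    echo "after factor $factor: $frames frames"
done
```

Starting from 16 frames, two factors of 2 yield 31 and then 61 frames, matching the "double, less one" behavior described above.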
If you are upscaling and interpolating, the upscaling will be performed first.
There are two options available to make an animation repeat seamlessly.
In order to accommodate animation, and as a refresher over the original design, the GUI has been entirely re-configured. The most significant changes are enumerated below.
The original sidebar has been moved from the right to the left. As the sidebar represented global options, it was decided the left-hand side was the better place for this to follow along the lines of photo manipulation programs like GIMP or Photoshop.
The chooser that allows you to switch between viewing results and viewing the canvas has been moved to its own dedicated bar.
In addition, this bar takes two forms:
A layers menu has been added in the sidebar's place. This contains active options for your current layer.
As all invocations are now performed in a single inference step, there can only be one mask and one denoising strength. These have been moved to the global menu as a result. They will appear when there is any media on the canvas. Check the "Enable Inpainting" option to show the inpainting toolbar.
In addition, inpainting has been inverted from ENFUGUE's previous incarnation: black represents portions of the image left untouched, and white represents portions of the image denoised. This was changed to be more in line with how other UIs display inpainting masks and how they are used in the backend.
Tiling has been added to ENFUGUE. Select between horizontally tiling, vertically tiling, or both. It even works with animation!
Select the "display tiled" icon in the sample chooser to see what the image looks like next to itself.
There are many, many changes in this release, and it is likely that bugs will be encountered on different operating systems, browsers, GPUs, and workflows. Please see this Wiki page for the information requested when submitting bug reports, as well as where logs can be located for self-diagnosis.
TensorRT-specific builds will no longer be released. These have led to significant amounts of confusion over the months, with very few people being able to make use of TensorRT.
It will remain available for the workflows it previously supported, but you will need to install ENFUGUE using one of the provided Conda environments, or into a different latent diffusion Python environment via `pip` - see below for full instructions.
The MacOS build of v0.3.0 is pending. There have been difficulties finding a set of compatible dependencies, but it will be done soon. I apologize for the delay. You are welcome to try installing using the provided Conda environment - full instructions below.
Full Changelog: https://github.com/painebenjamin/app.enfugue.ai/compare/0.2.5...0.3.0
Select a portable distribution if you'd like to avoid having to install other programs, or want to have an isolated executable file that doesn't interfere with other environments on your system.
Platform | Graphics API | File(s) | CUDA Version | Torch Version |
---|---|---|---|---|
Windows | CUDA | enfugue-server-0.3.0-win-cuda-x86_64.zip.001enfugue-server-0.3.0-win-cuda-x86_64.zip.002 | 11.8.0 | 2.1.0 |
Linux | CUDA | enfugue-server-0.3.0-manylinux-cuda-x86_64.tar.gz.0enfugue-server-0.3.0-manylinux-cuda-x86_64.tar.gz.1enfugue-server-0.3.0-manylinux-cuda-x86_64.tar.gz.2 | 11.8.0 | 2.1.0 |
To extract these files, you must concatenate them; rather than taking up extra space in your file system, you can simply stream them together into `tar`. A console command to do that is:
cat enfugue-server-0.3.0* | tar -xvz
You are now ready to run the server with:
./enfugue-server/enfugue.sh
Press `Ctrl+C` to exit.
Download the `win64` files here, and extract them using a program that can extract from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Locate the file `enfugue-server.exe` and double-click it to run it. To exit, locate the icon in the bottom-right corner of your screen (the system tray), right-click it, then select Quit.
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
- Choose the file beginning with `windows-`, `linux-`, or `macos-` based on your platform.
- For Nvidia GPUs, choose `cuda`.
- Additional graphics APIs (`rocm` and `directml`) are being added and will be made available as they are developed. Please voice your desire for these to prioritize their development.

Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
conda activate enfugue
python -m enfugue run
NOTE: the previously recommended command, `enfugue run`, has been observed to fail in certain environments. For this reason, the more universally compatible command above is recommended.
To install DW Pose support (a better, faster pose and face detection model), after installing Enfugue, execute the following (MacOS, Linux or Windows):
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
If you would like to manage dependencies yourself, or want to install ENFUGUE into an environment shared with another Stable Diffusion UI, you can install it via `pip`. This is currently the only method available for AMD GPUs.
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.
Published by painebenjamin almost 1 year ago
- Added support for `sdxl-1.0-inpainting-0.1` and automatic XL inpainting checkpoint merging when enabled (Create Inpainting Checkpoint when Available checked in the settings menu).
- Added new noise types: Blue, Brownian Fractal, Crosshatch, Default (CPU Random), Green, Grey, Pink, Simplex, Velvet, Violet, and White.
- "<Role> Pipeline" phase.
- Added the following schedulers:
- When an `AttributeError` or `KeyError` occurs, the user will be asked to ensure they are using 1.5 adaptations with 1.5 models and XL adaptations with XL models.
- Holding `Ctrl` or `Cmd` and performing a right-click (context menu) will copy the tooltip to the clipboard.
- Changed `sd_xl_base_1.0.safetensors` to `sd_xl_base_1.0_fp16_vae.safetensors`.
- Changed `40` to `20`.
- Changed `true` to `false`.
- Fixed an issue where `.json` files would not work with some browsers.
- Fixed an issue where the Denoising Strength slider would not appear when enabling inpainting.

Full Changelog: https://github.com/painebenjamin/app.enfugue.ai/compare/0.2.4...0.2.5
Select a portable distribution if you'd like to avoid having to install other programs, or want to have an isolated executable file that doesn't interfere with other environments on your system.
Platform | Graphics API | File(s) | CUDA Version | Torch Version |
---|---|---|---|---|
MacOS | MPS | enfugue-server-0.2.5-macos-ventura-x86_64.tar.gz | N/A | 2.2.0.dev20230928 |
Windows | CUDA | enfugue-server-0.2.5-win-cuda-x86_64.zip.001enfugue-server-0.2.5-win-cuda-x86_64.zip.002 | 12.1.1 | 2.2.0.dev20230928 |
Windows | CUDA+TensorRT | enfugue-server-0.2.5-win-tensorrt-x86_64.zip.001enfugue-server-0.2.5-win-tensorrt-x86_64.zip.002 | 11.7.1 | 1.13.1 |
Linux | CUDA | enfugue-server-0.2.5-manylinux-cuda-x86_64.tar.gz.0enfugue-server-0.2.5-manylinux-cuda-x86_64.tar.gz.1enfugue-server-0.2.5-manylinux-cuda-x86_64.tar.gz.2 | 12.1.1 | 2.2.0.dev20230928 |
Linux | CUDA+TensorRT | enfugue-server-0.2.5-manylinux-tensorrt-x86_64.tar.gz.0enfugue-server-0.2.5-manylinux-tensorrt-x86_64.tar.gz.1enfugue-server-0.2.5-manylinux-tensorrt-x86_64.tar.gz.2 | 11.7.1 | 1.13.1 |
The primary differences between the TensorRT and CUDA packages are CUDA version (11.7 vs. 12.1) and Torch version (1.13.1 vs. 2.2.0).
For general operation, Torch 2 and CUDA 12 will outperform Torch 1 and CUDA 11 for almost all operations. However, a TensorRT engine compiled with CUDA 11.7 and Torch 1.13.1 can outperform Torch 2 inference by up to a factor of two.
In essence, choose the TensorRT package only if you plan to compile and reuse TensorRT engines; otherwise, choose the CUDA package.
After choosing TensorRT or CUDA, download the appropriate `manylinux` files here, concatenate them, and extract them. A console command to do that is:
cat enfugue-server-0.2.5* | tar -xvz
You are now ready to run the server with:
./enfugue-server/enfugue.sh
Press `Ctrl+C` to exit.
Download the `win64` files here, and extract them using a program that can extract from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
If you are also choosing to use TensorRT, you must perform some additional steps on Windows. Follow the steps detailed here.
Locate the file `enfugue-server.exe` and double-click it to run it. To exit, locate the icon in the bottom-right corner of your screen (the system tray), right-click it, then select Quit.
Download the `macos` file here, then double-click it to extract the package. When you run the application using the command below, your Mac will warn you about running downloaded packages, and you will have to perform an administrator override to allow it to run - you will be prompted to do this. To avoid this, you can run an included command like so:
./enfugue-server/unquarantine.sh
This command finds all the files in the installation and removes the `com.apple.quarantine` xattr from each one. This does not require administrator privileges. After doing this (or if you would rather grant the override), run the server with:
./enfugue-server/enfugue.sh
Note: while the MacOS packages are compiled on x86 machines, they are tested and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine code translation system.
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
- Choose the file beginning with `windows-`, `linux-`, or `macos-` based on your platform.
- For Nvidia GPUs, select `tensorrt` for all of the capabilities of `cuda` plus the ability to compile TensorRT engines; if you do not plan on using TensorRT, select `cuda` for the most optimized build for this API.
- Additional graphics APIs (`rocm` and `directml`) are being added and will be available soon.

Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
conda activate enfugue
enfugue run
To install DW Pose support (a better, faster pose and face detection model), after installing Enfugue, execute the following:
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
If you would like to manage dependencies yourself, or want to install ENFUGUE into an environment shared with another Stable Diffusion UI, you can install it via `pip`. This is currently the only method available for AMD GPUs.
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.
Published by painebenjamin about 1 year ago
The IP adapter integration has been overhauled with the following:
The backend model merger has been made available in the frontend to use as desired. Select Merge Models under the Models menu to get started.
There are two modes of operation:

- Add difference: computes `(a + (b - c))` for all weights common between the three checkpoints.
- Weighted sum: blends two checkpoints using an `alpha` parameter from 0 to 1, where 0 would produce entirely the first checkpoint, 1 would produce entirely the second checkpoint, and 0.5 would produce the exact mean between the two.

Finally, model loading has been made significantly more flexible, to better facilitate sharing of resources between ENFUGUE and other Stable Diffusion applications. To this end, ENFUGUE will now search configured directories, to an arbitrarily nested level, to find versions of models before attempting to download them itself. The known filenames for each scenario have been expanded as well; see the wiki for more details.
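The arithmetic of the two merge modes can be illustrated on single scalar "weights" (a toy sketch; the real merger applies the same formulas element-wise across every shared tensor in the checkpoints):

```shell
# Toy scalar illustration of the checkpoint-merging formulas.
a=0.8; b=0.6; c=0.2; alpha=0.5
# Add-difference mode: a + (b - c)
add_diff=$(awk -v a="$a" -v b="$b" -v c="$c" 'BEGIN { print a + (b - c) }')
# Weighted-sum mode: (1 - alpha) * a + alpha * b
weighted=$(awk -v a="$a" -v b="$b" -v al="$alpha" 'BEGIN { print (1 - al) * a + al * b }')
echo "add_diff=$add_diff weighted=$weighted"
```

With alpha at 0.5, the weighted sum lands exactly halfway between the two inputs, matching the "exact mean" behavior described above.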
Full Changelog: https://github.com/painebenjamin/app.enfugue.ai/compare/0.2.3...0.2.4
Select a portable distribution if you'd like to avoid having to install other programs, or want to have an isolated executable file that doesn't interfere with other environments on your system.
Platform | Graphics API | File(s) | CUDA Version | Torch Version |
---|---|---|---|---|
MacOS | MPS | enfugue-server-0.2.4-macos-ventura-x86_64.tar.gz | N/A | 2.2.0.dev20230928 |
Windows | CUDA | enfugue-server-0.2.4-win-cuda-x86_64.zip.001enfugue-server-0.2.4-win-cuda-x86_64.zip.002 | 12.1.1 | 2.2.0.dev20230928 |
Windows | CUDA+TensorRT | enfugue-server-0.2.4-win-tensorrt-x86_64.zip.001enfugue-server-0.2.4-win-tensorrt-x86_64.zip.002 | 11.7.1 | 1.13.1 |
Linux | CUDA | enfugue-server-0.2.4-manylinux-cuda-x86_64.tar.gz.0enfugue-server-0.2.4-manylinux-cuda-x86_64.tar.gz.1enfugue-server-0.2.4-manylinux-cuda-x86_64.tar.gz.2 | 12.1.1 | 2.2.0.dev20230928 |
Linux | CUDA+TensorRT | enfugue-server-0.2.4-manylinux-tensorrt-x86_64.tar.gz.0enfugue-server-0.2.4-manylinux-tensorrt-x86_64.tar.gz.1enfugue-server-0.2.4-manylinux-tensorrt-x86_64.tar.gz.2 | 11.7.1 | 1.13.1 |
First, decide which version you want - with or without TensorRT support. TensorRT requires a powerful, modern Nvidia GPU.
Then, download the appropriate `manylinux` files here, concatenate them, and extract them. A console command to do that is:
cat enfugue-server-0.2.4* | tar -xvz
You are now ready to run the server with:
./enfugue-server/enfugue.sh
Press `Ctrl+C` to exit.
Download the `win64` files here, and extract them using a program that can extract from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Locate the file `enfugue-server.exe` and double-click it to run it. To exit, locate the icon in the bottom-right corner of your screen (the system tray), right-click it, then select `Quit`.
Download the `macos` file here, then double-click it to extract the package. When you run the application using the command below, your Mac will warn you about running downloaded packages, and you will be prompted to perform an administrator override to allow it to run. To avoid this, you can run an included command like so:
./enfugue-server/unquarantine.sh
This command finds all the files in the installation and removes the `com.apple.quarantine` xattr from each one. This does not require administrator privileges. After doing this (or if you would rather grant the override), run the server with:
./enfugue-server/enfugue.sh
Note: while the MacOS packages are compiled on x86 machines, they are tested and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine code translation system.
To upgrade any distribution, download and extract the appropriate upgrade package on this release. Copy all files in the upgrade package into your Enfugue installation directory, overwriting any existing files.
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
- Choose `windows-`, `linux-`, or `macos-` based on your platform.
- Choose `tensorrt` for all of the capabilities of `cuda` with the added ability to compile TensorRT engines. If you do not plan on using TensorRT, select `cuda` for the most optimized build for this API.
- Additional graphics APIs (`rocm` and `directml`) are being added and will be available soon.

Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
conda activate enfugue
enfugue run
To install DW Pose support (a better, faster pose and face detection model), after installing Enfugue, execute the following:
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install Enfugue via `pip`. This is the only method available for AMD GPUs at present.
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.
Published by painebenjamin about 1 year ago
Tencent's AI Lab has released Image Prompt (IP) Adapter, a new method for controlling Stable Diffusion with an input image that provides a huge amount of flexibility, with more consistency than standard image-based inference and more freedom than ControlNet images. The best part: it works alongside all other control techniques, giving users dozens of new combinations of control methods to employ.
*DWPose is currently only available for users managing their own environments. Portable and docker users can still use OpenPose as before.
IDEA Research has released DWPose, a new AI model for detecting human poses, including fingers and faces, faster and more accurately than ever before.
In addition, a community member named Thibaud Zamora has released OpenPose ControlNet for SDXL, which is now the third SDXL ControlNet after Canny Edge and Depth.
You only need to select ControlNet pose to use it. In order to use DWPose, users managing their own environments must execute the following:
#!/usr/bin/env sh
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
You can now merge images together on the canvas, giving each one its own assignment(s) toward the overall diffusion plan. Click and drag one image onto another to merge them. You'll be presented with the option to drop the image when you bring their headers together (i.e. bring the top of the dragged image to the top of the target image).
Multi-diffusion speed has been improved by as much as 5 iterations per second, thanks to better algorithms for merging chunks. With this come new options for how these chunks are masked onto each other, blending their edges together. The options available are constant, bilinear, and gaussian, with the default being bilinear. These images were all generated in 40 steps with a chunking size of 64.
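The three masking options can be illustrated with one-dimensional weight ramps (a hedged sketch, not Enfugue's actual code): constant weights every pixel equally, bilinear ramps linearly toward the chunk edges, and gaussian falls off smoothly from the center.

```python
import math

def blend_weights(size: int, mode: str) -> list:
    """1-D edge-feathering weights for merging overlapping chunks (size >= 2)."""
    if mode == "constant":
        return [1.0] * size
    if mode == "bilinear":
        # Linear ramp from 0 at the edges up to 1 at the center and back down
        return [1.0 - abs(2.0 * i / (size - 1) - 1.0) for i in range(size)]
    if mode == "gaussian":
        # Smooth bell curve centered on the chunk; sigma chosen illustratively
        center, sigma = (size - 1) / 2.0, size / 6.0
        return [math.exp(-((i - center) ** 2) / (2 * sigma ** 2)) for i in range(size)]
    raise ValueError(mode)
```

In two dimensions the same ramps are applied per-axis and multiplied, so overlapping chunk edges fade into each other instead of forming visible seams.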
With the rising popularity of UnaestheticXL, a negative textual inversion for SDXL by Aikimi, support has been added to Enfugue for loading SDXL TIs. Add them just as you would any other Textual Inversion.
These are a little slow to load at the moment, as this is a temporary workaround pending official implementation into Diffusers.
Better options have been provided for the refining method. Use the slider at the top to control the step at which the configured refiner takes over denoising, providing a better end result than executing refining as a distinct step.
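Assuming the slider expresses the fraction of denoising handled by the base model before the refiner takes over (a common convention for SDXL pipelines), the cutover arithmetic is simple; `refiner_cutover` is an illustrative name, not an Enfugue function:

```python
def refiner_cutover(total_steps: int, base_fraction: float) -> int:
    """Step index at which the configured refiner takes over denoising."""
    return round(total_steps * base_fraction)

# With 40 steps and the slider at 0.7, the base model runs steps 0-27
# and the refiner handles the remaining steps starting at step 28.
```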
Upscaling has been made much more flexible: you can now select any number of upscaling steps, each with its own configuration, rather than being limited to a single upscaling step.
The upscaling amount has additionally been unconstrained, allowing you to use an upscaling algorithm to modify the dimensions of an image by anywhere between 0.5× and 16×.
Full Changelog: https://github.com/painebenjamin/app.enfugue.ai/compare/0.2.2...0.2.3
Select a portable distribution if you'd like to avoid having to install other programs, or want to have an isolated executable file that doesn't interfere with other environments on your system.
Platform | Graphics API | File(s) | CUDA Version | Torch Version |
---|---|---|---|---|
MacOS | MPS | enfugue-server-0.2.3-macos-ventura-x86_64.tar.gz | N/A | 2.2.0.dev20230910 |
Windows | CUDA | enfugue-server-0.2.3-win-cuda-x86_64.zip.001, enfugue-server-0.2.3-win-cuda-x86_64.zip.002 | 12.1.1 | 2.2.0.dev20230910 |
Windows | CUDA+TensorRT | enfugue-server-0.2.3-win-tensorrt-x86_64.zip.001, enfugue-server-0.2.3-win-tensorrt-x86_64.zip.002 | 11.7.1 | 1.13.1 |
Linux | CUDA | enfugue-server-0.2.3-manylinux-cuda-x86_64.tar.gz.0, enfugue-server-0.2.3-manylinux-cuda-x86_64.tar.gz.1, enfugue-server-0.2.3-manylinux-cuda-x86_64.tar.gz.2 | 12.1.1 | 2.2.0.dev20230910 |
Linux | CUDA+TensorRT | enfugue-server-0.2.3-manylinux-tensorrt-x86_64.tar.gz.0, enfugue-server-0.2.3-manylinux-tensorrt-x86_64.tar.gz.1, enfugue-server-0.2.3-manylinux-tensorrt-x86_64.tar.gz.2 | 11.7.1 | 1.13.1 |
First, decide which version you want - with or without TensorRT support. TensorRT requires a powerful, modern Nvidia GPU.
Then, download the appropriate `manylinux` files here, concatenate them, and extract them. A console command to do that is:
cat enfugue-server-0.2.3* | tar -xvz
You are now ready to run the server with:
./enfugue-server/enfugue.sh
Press `Ctrl+C` to exit.
Download the `win64` files here and extract them using a program that can extract from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Locate the file `enfugue-server.exe` and double-click it to run it. To exit, locate the icon in the bottom-right corner of your screen (the system tray), right-click it, then select `Quit`.
Download the `macos` file here, then double-click it to extract the package. When you run the application using the command below, your Mac will warn you about running downloaded packages, and you will be prompted to perform an administrator override to allow it to run. To avoid this, you can run an included command like so:
./enfugue-server/unquarantine.sh
This command finds all the files in the installation and removes the `com.apple.quarantine` xattr from each one. This does not require administrator privileges. After doing this (or if you would rather grant the override), run the server with:
./enfugue-server/enfugue.sh
Note: while the MacOS packages are compiled on x86 machines, they are tested and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine code translation system.
To upgrade any distribution, download and extract the appropriate upgrade package on this release. Copy all files in the upgrade package into your Enfugue installation directory, overwriting any existing files.
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
- Choose `windows-`, `linux-`, or `macos-` based on your platform.
- Choose `tensorrt` for all of the capabilities of `cuda` with the added ability to compile TensorRT engines. If you do not plan on using TensorRT, select `cuda` for the most optimized build for this API.
- Additional graphics APIs (`rocm` and `directml`) are being added and will be available soon.

Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
conda activate enfugue
enfugue run
If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install Enfugue via `pip`. This is the only method available for AMD GPUs at present.
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.
Published by painebenjamin about 1 year ago
ControlNet Depth XL has been added. Additionally, settings have been exposed that allow users to specify the path to HuggingFace repositories for all ControlNets.
Improved visibility into what the back-end is doing by adding the current task to the UI and API.
Contextual menus have been added for the currently active canvas node and/or the currently visible image sample.
Keyboard shortcuts have also been added for all menu items, including the contextual menus added above.
Iterations has been added as an invocation parameter in the UI and API. This allows you to generate more images using the same settings, without needing the VRAM to generate multiple samples.
Secondary prompts have been added to all prompt inputs. This allows you to enter two separate prompts for the primary and secondary text encoders of SDXL base models.
More inpainting options have been made available to better control how you want to inpaint.
You can now select multiple VAEs for various pipelines, to further enable mixing SD 1.5 and XL. Additionally, an "other" option is provided to allow selecting any VAE hosted on HuggingFace.
Metadata has been added to images generated by Enfugue. Drag and drop the image into Enfugue to load the same settings that generated that image into your UI.
Provided the ability to select the pipeline to use when upscaling.
Added an option for how often to decode latents and generate intermediate images.
Added a button to pause the log view, enabling you to take your time and read the entries rather than having to chase them when something is writing logs.
Further improved memory management, resulting in lower VRAM overhead and overall faster inference. ControlNets are now loaded to the GPU only when required, and the VAE is unloaded when no longer required. This means some users who have had issues using the large XL ControlNets may find them working better in this release.
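The load-on-demand behavior can be sketched with a stand-in model class (all names here are hypothetical; the real implementation manages Torch modules):

```python
class StandInModel:
    """Minimal stand-in for a Torch module that tracks which device it is on."""
    def __init__(self, name: str):
        self.name = name
        self.device = "cpu"

    def to(self, device: str) -> "StandInModel":
        self.device = device
        return self

def run_step(controlnet: StandInModel, use_controlnet: bool) -> str:
    # Load the ControlNet onto the GPU only when this step actually needs it
    if use_controlnet:
        controlnet.to("cuda")
    device_during_step = controlnet.device  # inference would happen here
    # Release it immediately afterwards so peak VRAM stays low
    controlnet.to("cpu")
    return device_during_step
```

The key point is that nothing sits on the GPU between the steps that use it, which is what lowers the VRAM overhead described above.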
Full Changelog: https://github.com/painebenjamin/app.enfugue.ai/compare/0.2.1...0.2.2
Select a portable distribution if you'd like to avoid having to install other programs, or want to have an isolated executable file that doesn't interfere with other environments on your system.
Platform | Graphics API | File(s) | CUDA Version | Torch Version |
---|---|---|---|---|
MacOS | MPS | enfugue-server-0.2.2-macos-ventura-x86_64.tar.gz | N/A | 2.1.0.dev20230720 |
Windows | CUDA | enfugue-server-0.2.2-win-cuda-x86_64.zip.001, enfugue-server-0.2.2-win-cuda-x86_64.zip.002 | 12.1.1 | 2.1.0.dev20230720 |
Windows | CUDA+TensorRT | enfugue-server-0.2.2-win-tensorrt-x86_64.zip.001, enfugue-server-0.2.2-win-tensorrt-x86_64.zip.002 | 11.7.1 | 1.13.1 |
Linux | CUDA | enfugue-server-0.2.2-manylinux-cuda-x86_64.tar.gz.0, enfugue-server-0.2.2-manylinux-cuda-x86_64.tar.gz.1, enfugue-server-0.2.2-manylinux-cuda-x86_64.tar.gz.2 | 12.1.1 | 2.1.0.dev20230720 |
Linux | CUDA+TensorRT | enfugue-server-0.2.2-manylinux-tensorrt-x86_64.tar.gz.0, enfugue-server-0.2.2-manylinux-tensorrt-x86_64.tar.gz.1, enfugue-server-0.2.2-manylinux-tensorrt-x86_64.tar.gz.2 | 11.7.1 | 1.13.1 |
First, decide which version you want - with or without TensorRT support. TensorRT requires a powerful, modern Nvidia GPU.
Then, download the appropriate `manylinux` files here, concatenate them, and extract them. A console command to do that is:
cat enfugue-server-0.2.2* | tar -xvz
You are now ready to run the server with:
./enfugue-server/enfugue.sh
Press `Ctrl+C` to exit.
Download the `win64` files here and extract them using a program that can extract from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Locate the file `enfugue-server.exe` and double-click it to run it. To exit, locate the icon in the bottom-right corner of your screen (the system tray), right-click it, then select `Quit`.
Download the `macos` file here, then double-click it to extract the package. When you run the application using the command below, your Mac will warn you about running downloaded packages, and you will be prompted to perform an administrator override to allow it to run. To avoid this, you can run an included command like so:
./enfugue-server/unquarantine.sh
This command finds all the files in the installation and removes the `com.apple.quarantine` xattr from each one. This does not require administrator privileges. After doing this (or if you would rather grant the override), run the server with:
./enfugue-server/enfugue.sh
Note: while the MacOS packages are compiled on x86 machines, they are tested and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine code translation system.
To upgrade any distribution, download and extract the appropriate upgrade package on this release. Copy all files in the upgrade package into your Enfugue installation directory, overwriting any existing files.
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
- Choose `windows-`, `linux-`, or `macos-` based on your platform.
- Choose `tensorrt` for all of the capabilities of `cuda` with the added ability to compile TensorRT engines; otherwise, choose `cuda`.
- Additional graphics APIs (`rocm` and `directml`) are being added and will be available soon.

Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
conda activate enfugue
enfugue run
If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install Enfugue via `pip`. This is the only method available for AMD GPUs at present.
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.
Published by painebenjamin about 1 year ago
Simply execute the following to pull and run:
docker pull ghcr.io/painebenjamin/app.enfugue.ai:latest
docker run --rm --gpus all --runtime nvidia -p 45554:45554 ghcr.io/painebenjamin/app.enfugue.ai:latest run
See here for more information. Unfortunately for the moment this is Linux-only.
The `/invoke` endpoint has been made more flexible. See here for API documentation.

In the `Offload` pipeline switch mode, there is now no case where the CPU will have two pipelines in memory at once. Pipelines are now swapped one model at a time in order to avoid high peak memory usage.

Select a portable distribution if you'd like to avoid having to install other programs, or want to have an isolated executable file that doesn't interfere with other environments on your system.
Platform | Graphics API | File(s) | CUDA Version | Torch Version |
---|---|---|---|---|
MacOS | MPS | enfugue-server-0.2.1-macos-ventura-x86_64.tar.gz | N/A | 2.1.0.dev20230720 |
Windows | CUDA | enfugue-server-0.2.1-win-cuda-x86_64.zip.001, enfugue-server-0.2.1-win-cuda-x86_64.zip.002 | 12.1.1 | 2.1.0.dev20230720 |
Windows | CUDA+TensorRT | enfugue-server-0.2.1-win-tensorrt-x86_64.zip.001, enfugue-server-0.2.1-win-tensorrt-x86_64.zip.002 | 11.7.1 | 1.13.1 |
Linux | CUDA | enfugue-server-0.2.1-manylinux-cuda-x86_64.tar.gz.0, enfugue-server-0.2.1-manylinux-cuda-x86_64.tar.gz.1, enfugue-server-0.2.1-manylinux-cuda-x86_64.tar.gz.2 | 12.1.1 | 2.1.0.dev20230720 |
Linux | CUDA+TensorRT | enfugue-server-0.2.1-manylinux-tensorrt-x86_64.tar.gz.0, enfugue-server-0.2.1-manylinux-tensorrt-x86_64.tar.gz.1, enfugue-server-0.2.1-manylinux-tensorrt-x86_64.tar.gz.2 | 11.7.1 | 1.13.1 |
First, decide which version you want - with or without TensorRT support. TensorRT requires a powerful, modern Nvidia GPU.
Then, download the appropriate `manylinux` files here, concatenate them, and extract them. A console command to do that is:
cat enfugue-server-0.2.1* | tar -xvz
You are now ready to run the server with:
./enfugue-server/enfugue.sh
Press `Ctrl+C` to exit.
Download the `win64` files here and extract them using a program that can extract from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Locate the file `enfugue-server.exe` and double-click it to run it. To exit, locate the icon in the bottom-right corner of your screen (the system tray), right-click it, then select `Quit`.
Download the `macos` file here, then double-click it to extract the package. When you run the application using the command below, your Mac will warn you about running downloaded packages, and you will be prompted to perform an administrator override to allow it to run. To avoid this, you can run an included command like so:
./enfugue-server/unquarantine.sh
This command finds all the files in the installation and removes the `com.apple.quarantine` xattr from each one. This does not require administrator privileges. After doing this (or if you would rather grant the override), run the server with:
./enfugue-server/enfugue.sh
Note: while the MacOS packages are compiled on x86 machines, they are tested and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine code translation system.
To upgrade any distribution, download and extract the appropriate upgrade package on this release. Copy all files in the upgrade package into your Enfugue installation directory, overwriting any existing files.
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
- Choose `windows-`, `linux-`, or `macos-` based on your platform.
- Choose `tensorrt` for all of the capabilities of `cuda` with the added ability to compile TensorRT engines; otherwise, choose `cuda`.
- Additional graphics APIs (`rocm` and `directml`) are being added and will be available soon.

Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
conda activate enfugue
enfugue run
If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install Enfugue via `pip`. This is the only method available for AMD GPUs at present.
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.
Published by painebenjamin about 1 year ago
Select a portable distribution if you'd like to avoid having to install other programs, or want to have an isolated executable file that doesn't interfere with other environments on your system.
Platform | Graphics API | File(s) | CUDA Version | Torch Version |
---|---|---|---|---|
MacOS | MPS | enfugue-server-0.2.0-macos-ventura-x86_64.tar.gz | N/A | 2.1.0.dev20230720 |
Windows | CUDA | enfugue-server-0.2.0-win-cuda-x86_64.zip.001, enfugue-server-0.2.0-win-cuda-x86_64.zip.002 | 12.1.1 | 2.1.0.dev20230720 |
Windows | CUDA+TensorRT | enfugue-server-0.2.0-win-tensorrt-x86_64.zip.001, enfugue-server-0.2.0-win-tensorrt-x86_64.zip.002 | 11.7.1 | 1.13.1 |
Linux | CUDA | enfugue-server-0.2.0-manylinux-cuda-x86_64.tar.gz.0, enfugue-server-0.2.0-manylinux-cuda-x86_64.tar.gz.1, enfugue-server-0.2.0-manylinux-cuda-x86_64.tar.gz.2 | 12.1.1 | 2.1.0.dev20230720 |
Linux | CUDA+TensorRT | enfugue-server-0.2.0-manylinux-tensorrt-x86_64.tar.gz.0, enfugue-server-0.2.0-manylinux-tensorrt-x86_64.tar.gz.1, enfugue-server-0.2.0-manylinux-tensorrt-x86_64.tar.gz.2 | 11.7.1 | 1.13.1 |
First, decide which version you want - with or without TensorRT support. TensorRT requires a powerful, modern Nvidia GPU.
Then, download the appropriate `manylinux` files here, concatenate them, and extract them. A console command to do that is:
cat enfugue-server-0.2.0* | tar -xvz
You are now ready to run the server with:
./enfugue-server/enfugue.sh
Press `Ctrl+C` to exit.
Download the `win64` files here and extract them using a program that can extract from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Locate the file `enfugue-server.exe` and double-click it to run it. To exit, locate the icon in the bottom-right corner of your screen (the system tray), right-click it, then select `Quit`.
Download the `macos` file here, then double-click it to extract the package. When you run the application using the command below, your Mac will warn you about running downloaded packages, and you will be prompted to perform an administrator override to allow it to run. To avoid this, you can run an included command like so:
./enfugue-server/unquarantine.sh
This command finds all the files in the installation and removes the `com.apple.quarantine` xattr from each one. This does not require administrator privileges. After doing this (or if you would rather grant the override), run the server with:
./enfugue-server/enfugue.sh
Note: while the MacOS packages are compiled on x86 machines, they are tested and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine code translation system.
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
- Choose `windows-`, `linux-`, or `macos-` based on your platform.
- Choose `tensorrt` for all of the capabilities of `cuda` with the added ability to compile TensorRT engines; otherwise, choose `cuda`.
- Additional graphics APIs (`rocm` and `directml`) are being added and will be available soon.

Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
conda activate enfugue
enfugue run
If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install Enfugue via `pip`. This is the only method available for AMD GPUs at present.
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.
Added the following ControlNets and their corresponding image processors:
Added a log glance view that is always visible when there are logs to be read to further improve transparency.
Published by painebenjamin over 1 year ago
Thanks again to everyone who has helped test Enfugue so far. I'm happy to release the third alpha package, which comes with more bug fixes, some hotly requested features, and improved stability and robustness.
First, decide which version you want - with or without TensorRT support. TensorRT requires a powerful, modern Nvidia GPU.
Then, download the appropriate `manylinux` files here (3 for TensorRT, 2 for base), place them in their own folder, then concatenate and extract them. A simple console command to do that is:
cat enfugue-server-0.1.3*.part | tar -xvz
Download the `win64` files here and extract them using a program that can extract from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
To upgrade either distribution, download and extract the appropriate upgrade package on this release. Copy all files in the upgrade package into your Enfugue installation directory, overwriting any existing files.
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
- Choose `windows-` or `linux-` based on your platform.
- Choose `tensorrt` for all of the capabilities of `cuda` with the added ability to compile TensorRT engines; otherwise, choose `cuda`.
- Additional graphics APIs (`rocm`, `mps`, and `directml`) are being added and will be available soon.

Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
conda activate enfugue
enfugue run
To upgrade with the provided environment, use `pip` like so:
conda activate enfugue
pip install enfugue --upgrade
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.
pip install enfugue --upgrade
Fixed the `File > Save` dialog not working.
Published by painebenjamin over 1 year ago
Thank you to everyone who has helped test so far, you've all been extremely helpful.
I hope this release corrects a lot of the issues people have been having!
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.
pip install enfugue --upgrade
Download the `manylinux` files here, concatenate them, and extract them. A simple console command to do that is:
cat enfugue-server-0.1.2*.part | tar -xvz
Download the `win64` files here and extract them using a program that can extract from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Use `System > Installation Manager` to change directories after initialization.
Engine logs are now available under `System > Engine Logs`. This gives you a real-time view of the activities of the diffusion engine, which includes all activities of Stable Diffusion itself, as well as any necessary downloads or longer-running processes like TensorRT engine builds.
`/home/<youruser>/.cache` on Linux, `C:\Users\<youruser>\.cache` on Windows; substitute your drive letter as needed.)
Usage: enfugue dump-config [OPTIONS]
Dumps a copy of the configuration to the console or the specified path.
Options:
-f, --filename TEXT A file to write to instead of stdout.
-j, --json When passed, use JSON instead of YAML.
--help Show this message and exit.
Usage: enfugue run [OPTIONS]
Runs the server synchronously using cherrypy.
Options:
-c, --config TEXT An optional path to a configuration file to use instead
of the default.
--help Show this message and exit.
Documentation regarding what settings are available and what they do is up on the wiki.
Server logs are written to `~/.cache/enfugue.log`, and engine logs are written to `~/.cache/enfugue-engine.log`.
The server log level is set to `ERROR` to hide unhelpful messages, as the server is mostly stable.
The engine log level is set to `DEBUG` to give as much information as possible to the front-end. This may change in the future.
Thank you for trying out Enfugue!
This is the first alpha release, version 0.1.0.
For Linux users, download the `manylinux` files here, concatenate them, and extract them. A simple console command to do that is:
cat enfugue-server-0.1.0*.part | tar -xvz
For Windows users, download the `win64` files here and extract them using a program that can extract from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Checksums:
16b6fcfe4a1e357c6619f55b266ce14b enfugue-server-0.1.0-manylinux.tar.gz.0.part
0adf7fe6b2a378a45212bcc4a4f86939 enfugue-server-0.1.0-manylinux.tar.gz.1.part
c1a226b07fe00aa7825a868254c4fc69 enfugue-server-0.1.0-manylinux.tar.gz.2.part
489ebf6d6a713abc463763a75bff7b4c enfugue-server-0.1.0-win64.zip.001
1d8ce3cdf6e9e5e71747f1e182f27e27 enfugue-server-0.1.0-win64.zip.002
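To check a download against the MD5s above, Python's standard `hashlib` is enough; the `md5_of` helper below is illustrative, not part of Enfugue:

```python
import hashlib

def md5_of(path: str) -> str:
    """Hex MD5 digest of a file, read in 1 MiB chunks to bound memory use."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the printed digest against the list above, e.g.:
# md5_of("enfugue-server-0.1.0-manylinux.tar.gz.0.part")
```

On Linux, `md5sum <file>` gives the same result from the shell.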
After extraction, simply run the server: `enfugue-server.exe` on Windows, or `enfugue.sh` on Linux.
Thank you again!