InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
Apache-2.0 License
Published by Millu about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.
To learn more about InvokeAI, please visit our Documentation or join our Discord server!
To install version 3.3.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script: install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.
Download the installer: InvokeAI-installer-v3.3.0post1.zip
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.2.0...v3.3.0
Published by Millu about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.
To learn more about InvokeAI, please visit our Documentation or join our Discord server!
To install version 3.3.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script: install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.
Download the installer: InvokeAI-installer-v3.3.0rc1.zip
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.2.0...v3.3.0rc1
Published by Millu about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.
To learn more about InvokeAI, please see our Documentation or join the Discord!
New commands: invokeai-db-maintenance and invokeai-metadata.
If you experience the server error TypeError: Invoker.create_execution_state() got an unexpected keyword argument 'queue_id', try clearing your local browser cache or resetting the InvokeAI UI (Settings -> Reset UI) before running a generation.
You might see a red alert icon on your nodes after loading a workflow. This indicates that the node in the workflow is from an older version of InvokeAI, or the node doesn't have a version. If your workflow runs, you may safely ignore this, and we will add functionality to "upgrade" the un-versioned nodes in a future update. If the workflow does not work, you will need to delete and re-add the nodes.
To get started with IP-Adapter, you'll need to download the image encoder and IP-Adapter for the desired base model. Once the models are installed, IP-Adapter can be used under the "Control Adapters" options.
Image Encoders:
IP-Adapter Models:
These can be installed from the Model Manager by choosing "Import Models" and pasting in the repo IDs of the desired models (remember to install both the model and the image encoder!), or from the command line, by starting the "Developer's Console" from the invoke.bat launcher and pasting this command:
invokeai-model-install --add InvokeAI/ip_adapter_sd_image_encoder InvokeAI/ip_adapter_sdxl_image_encoder InvokeAI/ip_adapter_sd15 InvokeAI/ip_adapter_plus_sd15 InvokeAI/ip_adapter_plus_face_sd15 InvokeAI/ip_adapter_sdxl
To install v3.2.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script: install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you already have InvokeAI v3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.
Download the installer: InvokeAI-installer-v3.2.0.zip
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute or reach out to imic on Discord!
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.1...v3.2.0
Published by Millu about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.
To learn more about InvokeAI, please see our Documentation or join the Discord!
New commands: invokeai-db-maintenance and invokeai-metadata.
You might see a red alert icon on your nodes after loading a workflow. This indicates that the node in the workflow is from an older version of InvokeAI, or the node doesn't have a version. If your workflow runs, you may safely ignore this, and we will add functionality to "upgrade" the un-versioned nodes in a future update. If the workflow does not work, you will need to delete and re-add the nodes.
To get started with IP-Adapter, you'll need to download the image encoder and IP-Adapter for the desired base model. These can be downloaded through the Model Manager. Once the models are installed, IP-Adapter can be used under the "Control Adapters" options.
Image Encoders:
IP-Adapter Models:
These can be installed from the Model Manager by choosing "Import Models" and pasting in the following list of repo IDs:
InvokeAI/ip_adapter_sd_image_encoder InvokeAI/ip_adapter_sdxl_image_encoder InvokeAI/ip_adapter_sd15 InvokeAI/ip_adapter_plus_sd15 InvokeAI/ip_adapter_plus_face_sd15 InvokeAI/ip_adapter_sdxl
or from the command line, by starting the "Developer's Console" from the invoke.bat launcher and pasting this command:
invokeai-model-install --add InvokeAI/ip_adapter_sd_image_encoder InvokeAI/ip_adapter_sdxl_image_encoder InvokeAI/ip_adapter_sd15 InvokeAI/ip_adapter_plus_sd15 InvokeAI/ip_adapter_plus_face_sd15 InvokeAI/ip_adapter_sdxl
To install v3.2.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script: install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you already have InvokeAI v3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.
Download the installer: InvokeAI-installer-v3.2.0rc3.zip
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute or reach out to imic on Discord!
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.1...v3.2.0rc3
Published by Millu about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.
To learn more about InvokeAI, please see our Documentation or join the Discord!
New commands: invokeai-db-maintenance and invokeai-metadata.
To install v3.2.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script: install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
Download the installer: InvokeAI-installer-v3.2.0rc2.zip
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.1...v3.2.0rc2
Published by Millu about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.
To learn more about InvokeAI, please see our Documentation, and check out the 0.1 Release Landing Page for the Community Edition!
New commands: invokeai-db-maintenance and invokeai-metadata.
To install v3.2.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script: install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
Download the installer: InvokeAI-installer-v3.2.0rc1.zip
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.1...v3.2.0rc1
Published by Millu about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.
To learn more about InvokeAI, please see our Documentation, and check out the 3.1 Release Landing Page for the Community Edition!
To install v3.1.1, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script: install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
Download the installer: InvokeAI-installer-v3.1.1.zip
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.0...v3.1.1
Published by Millu about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.
To learn more about InvokeAI, please see our Documentation, and check out the 3.1 Release Landing Page for the Community Edition!
To install v3.1.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script: install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
Download the installer: InvokeAI-installer-v3.1.1.zip
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.0...v3.1.1rc1
Published by lstein about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.
To learn more about InvokeAI, please see our Documentation, and check out the 3.1 Release Landing Page for the Community Edition!
Download the installer: InvokeAI-installer-v3.1.0.zip
InvokeAI 3.1.0 introduces a powerful new tool to aid the image generation process: the Workflow Builder. Workflows combine the power of node-based software with the ease of use of a GUI to deliver the best of both worlds.
The Node Editor allows you to build the custom image generation workflows you need, and enables you to create and use custom nodes, making InvokeAI a fully extensible platform.
To get started with nodes in InvokeAI, take a look at our example workflows, or some of the custom Community Nodes.
A zip file of example workflows can be found at the bottom of this page under Assets.
To install v3.1.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script: install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you have InvokeAI 2.3.5 or older installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.
All users can upgrade from 3.0.2 using the launcher's "upgrade" facility. If you are on Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.1 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below): launch invoke.sh or invoke.bat and select the upgrade menu option [9]. Windows users can instead follow this recipe:
Launch invoke.sh or invoke.bat, open the Developer's Console, and run:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.1.0.zip" --use-pep517 --upgrade
invokeai-configure --root .
This will produce a working 3.1.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
To get back to a working 2.3 directory, rename all the "*.orig" files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
models/.cache folder before proceeding.
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.
We provide a script, invokeai-import-images, which will copy images from any previous version of InvokeAI to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-import-images
This will prompt you to select the destination and source directories, and allow you to select which image gallery board to import into.
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading with pip: developers and power users can upgrade to the current version by activating the InvokeAI environment and then running:
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.1.0
invokeai-configure --yes --skip-sd-weights
Important: After doing the pip install, it is necessary to run invokeai-configure in order to download the new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.
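After upgrading, a quick way to confirm which InvokeAI version is active in the environment is to query the installed package metadata from Python. This is a generic standard-library check, not an InvokeAI command; installed_version is a hypothetical helper name.

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(package: str = "InvokeAI"):
    """Return the installed version string of a package, or None if it
    is not installed in the active environment."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None
```

Running this inside the Developer's Console shows whether the pip upgrade actually took effect in the environment the launcher uses.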
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options: launch the model installer from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models; or import them into the Model Manager using these repo IDs:
stabilityai/stable-diffusion-xl-base-1.0
stabilityai/stable-diffusion-xl-refiner-1.0
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images, and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml
:
precision: float16
ram: 12.0
vram: 0.5
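As a sanity check, the recommended values above can be compared against an existing invokeai.yaml with a short sketch. This is illustrative only: check_recommended is a hypothetical helper, and it only understands the flat key: value layout shown above; a real config should be parsed with a YAML library instead.

```python
# Recommended invokeai.yaml values from the release notes.
RECOMMENDED = {"precision": "float16", "ram": "12.0", "vram": "0.5"}

def check_recommended(yaml_text: str) -> dict:
    """Return the recommended settings that are missing or different.

    Handles only flat 'key: value' lines like the snippet above; it is
    not a YAML parser and ignores nesting and comments beyond '#'.
    """
    current = {}
    for line in yaml_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    return {k: v for k, v in RECOMMENDED.items() if current.get(k) != v}
```

An empty result means the file already matches the recommendations; otherwise the returned dict lists the values to set.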
This is a list of known issues in 3.1.0 as well as features that are planned for inclusion in later releases:
The max_vram_cache and ram_cache settings in invokeai.yaml have been deprecated and renamed to vram and ram. To adjust cache sizes, we recommend using the configure script (option [6] in the launcher).
For support, please use this repository's GitHub Issues tracking service, or join our Discord.
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.
Thank you to all of the new and existing contributors to InvokeAI. We appreciate your efforts and contributions!
BlendLatentsInvocation by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/4336
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.2post1...v3.1.0rc1
Published by Millu about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.
To learn more about InvokeAI, please see our Documentation Pages.
New command: invokeai-import-images.
To install 3.0.2post1, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script: install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.
Download the installer: InvokeAI-installer-v3.0.2post1.zip
All users can upgrade from 3.0.1 using the launcher's "upgrade" facility. If you are on Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0.2 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below): launch invoke.sh or invoke.bat and select the upgrade menu option [9]. Windows users can instead follow this recipe:
Launch invoke.sh or invoke.bat, open the Developer's Console, and run:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.2.zip" --use-pep517 --upgrade
invokeai-configure --root .
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
To get back to a working 2.3 directory, rename all the "*.orig" files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
models/.cache folder before proceeding.
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading with pip: developers and power users can upgrade to the current version by activating the InvokeAI environment and then running:
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.2
invokeai-configure --yes --skip-sd-weights
Important: After doing the pip install, it is necessary to run invokeai-configure in order to download the new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high-quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options: launch the model installer from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models; or import them into the Model Manager using these repo IDs:
stabilityai/stable-diffusion-xl-base-1.0
stabilityai/stable-diffusion-xl-refiner-1.0
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images, and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml
:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5
This is a list of known bugs in 3.0.2rc1 as well as features that are planned for inclusion in later releases:
For support, please use this repository's GitHub Issues tracking service, or join our Discord.
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.
Thank you to all of the new contributors to InvokeAI. We appreciate your efforts and contributions!
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.2...v3.0.2post1
Published by Millu about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.
To learn more about InvokeAI, please see our Documentation Pages.
New command: invokeai-import-images.
To install 3.0.2, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script: install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.
All users can upgrade from 3.0.1 using the launcher's "upgrade" facility. If you are on Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0.2 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below): launch invoke.sh or invoke.bat and select the upgrade menu option [9]. Windows users can instead follow this recipe:
Launch invoke.sh or invoke.bat, open the Developer's Console, and run:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.2.zip" --use-pep517 --upgrade
invokeai-configure --root .
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
To get back to a working 2.3 directory, rename all the "*.orig" files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
models/.cache folder before proceeding.
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading with pip: developers and power users can upgrade to the current version by activating the InvokeAI environment and then running:
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.2
invokeai-configure --yes --skip-sd-weights
Important: After doing the pip install, it is necessary to run invokeai-configure in order to download the new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high-quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options: launch the model installer from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models; or import them into the Model Manager using these repo IDs:
stabilityai/stable-diffusion-xl-base-1.0
stabilityai/stable-diffusion-xl-refiner-1.0
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images, and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml
:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5
This is a list of known bugs in 3.0.2rc1 as well as features that are planned for inclusion in later releases:
For support, please use this repository's GitHub Issues tracking service, or join our Discord.
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.
Thank you to all of the new contributors to InvokeAI. We appreciate your efforts and contributions!
- .github/ dir by @SauravMaheshkar in https://github.com/invoke-ai/InvokeAI/pull/4060
- Added --ignore_missing_core_models CLI flag to bypass checking for missing core models by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/4081
- Added app_version to image metadata by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/4198
- Prevent vae: '' from crashing model by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4209
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.1rc3...v3.0.2
Published by lstein about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.
To learn more about InvokeAI, please see our Documentation Pages.
This release contains a proposed hotfix for the Windows install OSError crashes that began appearing in 3.0.1. In addition, the following bugs have been addressed:
- The models_dir configuration variable used to customize the location of the models directory is now working properly.
- --yes flag
To install 3.0.1, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3
instead of the previously-used invokeai
directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
In the event of an aborted install that has left the invokeai
directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.
InvokeAI-installer-v3.0.1post3.zip
All users can upgrade from 3.0.0 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Run invoke.sh or invoke.bat and select the upgrade menu option [9].
Windows users can instead follow this recipe:
- Launch invoke.sh or invoke.bat.
- pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1post3.zip" --use-pep517 --upgrade
- invokeai-configure --root .
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
To get back to a working 2.3 directory, rename all the *.orig files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
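The rename step can be scripted. A minimal sketch (the function name and example path are ours, not part of InvokeAI):

```shell
# restore_origs: strip the .orig suffix from every backed-up file and
# directory in the given root, restoring the pre-upgrade 2.3 layout.
restore_origs() {
  dir="$1"
  for f in "$dir"/*.orig; do
    if [ -e "$f" ]; then
      mv -- "$f" "${f%.orig}"
    fi
  done
  return 0
}

# Example (path is hypothetical):
# restore_origs "$HOME/invokeai"
```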
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.
We provide a script, invokeai-migrate3
, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from
and --to
arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading with pip
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.1post3
invokeai-configure --yes --skip-sd-weights
Important: After running pip install, you must run invokeai-configure in order to download the new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options:
- Run the invoke.bat (or invoke.sh) launcher script and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
- Install by repo ID, using stabilityai/stable-diffusion-xl-base-1.0 and stabilityai/stable-diffusion-xl-refiner-1.0.
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml
:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5
This is a list of known bugs in 3.0.1post3 as well as features that are planned for inclusion in later releases:
For support, please use this repository's GitHub Issues tracking service, or join our Discord.
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.1...v3.0.1post1
Published by lstein about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.
To learn more about InvokeAI, please see our Documentation Pages.
Since RC3, the following has changed:
Since RC2, the following has changed:
Developer changes:
To install 3.0.1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3
instead of the previously-used invokeai
directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
All users can upgrade from 3.0.0 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Run invoke.sh or invoke.bat and select the upgrade menu option [9].
Windows users can instead follow this recipe:
- Launch invoke.sh or invoke.bat.
- pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1.zip" --use-pep517 --upgrade
- invokeai-configure --root .
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
To get back to a working 2.3 directory, rename all the *.orig files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
We provide a script, invokeai-migrate3
, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from
and --to
arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading with pip
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --skip-sd-weights
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.1
invokeai-configure --skip-sd-weights
Important: After running pip install, you must run invokeai-configure in order to download the new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options:
- Run the invoke.bat (or invoke.sh) launcher script and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml
:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0
Users with 12 GB or more VRAM can reduce the time waiting for the image to start generating by setting max_vram_cache_size
to 6 GB or higher.
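For example, on a 12 GB card the cache settings might look like this in invokeai.yaml (the section name is an assumption based on a typical 3.x configuration file; adjust to your file's existing layout):

```yaml
InvokeAI:
  Memory/Performance:
    precision: float16
    max_cache_size: 12.0
    max_vram_cache_size: 6.0   # keeps model weights resident on a 12+ GB GPU
```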
This is a list of known bugs in 3.0.1 as well as features that are planned for inclusion in later releases:
For support, please use this repository's GitHub Issues tracking service, or join our Discord.
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.0...v3.0.1
The files below include the InvokeAI installer zip file, the full source code, and previous release candidates for 3.0.1.
Published by lstein about 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.
To learn more about InvokeAI, please see our Documentation Pages.
Since RC3, the following has changed:
Since RC2, the following has changed:
Developer changes:
Known bugs:
To install 3.0.1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3
instead of the previously-used invokeai
directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
InvokeAI-installer-v3.0.1rc3.zip
All users can upgrade from 3.0.0 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Run invoke.sh or invoke.bat and select the upgrade menu option [9].
Windows users can instead follow this recipe:
- Launch invoke.sh or invoke.bat.
- pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1rc3.zip" --use-pep517 --upgrade
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
To get back to a working 2.3 directory, rename all the *.orig files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
We provide a script, invokeai-migrate3
, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from
and --to
arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading with pip
Once 3.0.1 is released, developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.1rc3
invokeai-configure
Important: After running pip install, you must run invokeai-configure in order to download the new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options:
- Run the invoke.bat (or invoke.sh) launcher script and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml
:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0
Users with 12 GB or more VRAM can reduce the time waiting for the image to start generating by setting max_vram_cache_size
to 6 GB or higher.
This is a list of known bugs in 3.0.1 as well as features that are planned for inclusion in later releases:
For support, please use this repository's GitHub Issues tracking service, or join our Discord.
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.0...v3.0.1rc1
Published by lstein over 1 year ago
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI version 3.0.0 represents a major advance in functionality and ease of use compared with the last official release, 2.3.5.
Please use the 3.0.0 release discussion thread for comments on this version, including feature requests, enhancement suggestions, and other non-critical issues. Report bugs to InvokeAI Issues. For interactive support from the development team, contributors, and user community, you are invited to join the InvokeAI Discord Server.
To learn more about InvokeAI, please see our Documentation Pages.
Quite a lot has changed, both internally and externally.
The WebUI can now be launched from the command line using either invokeai-web
(preferred new way) or invokeai --web
(deprecated old way).
The previous command line tool has been removed and replaced with a new developer-oriented tool invokeai-node-cli
that allows you to experiment with InvokeAI nodes.
The console-based model installer, invokeai-model-install, has been redesigned and now provides tabs for installing checkpoint models, diffusers models, ControlNet models, LoRAs, and Textual Inversion embeddings. You can install models stored locally on disk, or install them using their web URLs or Repo_IDs.
Internally, the code base has been completely rewritten to be much easier to maintain and extend. Importantly, all image generation options are now represented as "nodes": small pieces of code that transform inputs into outputs and can be connected together into a graph of operations. Generation and image manipulation operations can now be easily extended by writing new InvokeAI nodes.
To install 3.0.0 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3
instead of the previously-used invokeai
directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
All users can upgrade from the 3.0 beta releases using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Run invoke.sh or invoke.bat and select the upgrade menu option [9].
Windows users can instead follow this recipe:
- Launch invoke.sh or invoke.bat.
- pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0.zip" --use-pep517 --upgrade
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
To get back to a working 2.3 directory, rename all the *.orig files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
We provide a script, invokeai-migrate3
, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from
and --to
arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading with pip
Once 3.0.0 is released, developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.0
To upgrade to an xformers
version if you are not currently using xformers
, use:
pip install --use-pep517 --upgrade InvokeAI[xformers]
You can see which versions are available by going to the PyPI InvokeAI project page.
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. As of this writing (July 2023), SDXL has not been officially released, but a pre-release 0.9 version is widely available. InvokeAI provides support for SDXL image generation via its Nodes Editor, a user interface that allows you to create and customize complex image generation pipelines using a drag-and-drop interface. Currently SDXL generation is not directly supported in the text2image, image2image, and canvas panels, but we expect to add this feature as soon as SDXL 1.0 is officially released.
SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. For best results, the initial image is handed off from the base to the refiner before all the denoising steps are complete. It is not clear whether SDXL 1.0, when it is released, will require the refiner.
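One way to picture the base-to-refiner handoff is as a split of the denoising schedule between the two models. A sketch (the step count and handoff fraction below are examples, not InvokeAI defaults):

```shell
# Split a denoising schedule between the base and refiner models.
total_steps=30
handoff_pct=80                                    # base model handles the first 80%
base_steps=$(( total_steps * handoff_pct / 100 ))
refiner_steps=$(( total_steps - base_steps ))
echo "base: $base_steps steps, refiner: $refiner_steps steps"
# prints: base: 24 steps, refiner: 6 steps
```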
To experiment with SDXL, you'll need the "base" and "refiner" models. Currently a beta version of SDXL, version 0.9, is available from HuggingFace for research purposes. To obtain access, you will need to register with HF at https://huggingface.co/join, obtain an access token at https://huggingface.co/settings/tokens, and add the access token to your environment. To do this, run the InvokeAI launcher script, activate the InvokeAI virtual environment with option [8], and type the command huggingface-cli login
. Paste in your access token from HuggingFace and hit return (the token will not be echoed to the screen). Alternatively, select launcher option [6] "Change InvokeAI startup options" and paste the HF token into the indicated field.
Now navigate to https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9 and fill out the access request form for research use. You will be granted instant access to download. Next launch the InvokeAI console-based model installer by selecting launcher option [5] or by activating the virtual environment and giving the command invokeai-model-install
. In the STARTER MODELS section, select the checkboxes for stable-diffusion-xl-base-0-9
and stable-diffusion-xl-refiner-0-9
. Press Apply Changes to install the models and keep the installer running, or Apply Changes and Exit to install the models and exit back to the launcher menu.
Alternatively you can install these models from the Web UI Model Manager (cube at the bottom of the left-hand panel) and navigate to Import Models. In the field labeled Location type in the repo id of the base model, which is stabilityai/stable-diffusion-xl-base-0.9
. Press Add Model and wait for the model to download and install. After receiving confirmation that the model installed, repeat with stabilityai/stable-diffusion-xl-refiner-0.9
.
Note that these are large models (12 GB each) so be prepared to wait a while.
To use the installed models you will need to activate the Node Editor, an advanced feature of InvokeAI. Go to the Settings (gear) icon on the upper right of the Web interface, and activate "Enable Nodes Editor". After reloading the page, an inverted "Y" will appear on the left-hand panel. This is the Node Editor.
Enter the Node Editor and click the Upload button to upload either the SDXL base-only or SDXL base+refiner pipelines (right click to save these .json files to disk). This will load and display a flow diagram showing the (many complex) steps in generating an SDXL image.
Ensure that the SDXL Model Loader (leftmost column, bottom) is set to load the SDXL base model on your system, and that the SDXL Refiner Model Loader (third column, top) is set to load the SDXL refiner model on your system. Find the nodes that contain the example prompt and style ("bluebird in a sakura tree" and "chinese classical painting") and replace them with the prompt and style of your choice. Then press the Invoke button. If all goes well, an image will eventually be generated and added to the image gallery. Unlike standard rendering, intermediate images are not (yet) displayed during rendering.
Be aware that SDXL support is an experimental feature and is not 100% stable. When designing your own SDXL pipelines, be aware that certain settings have a disproportionate effect on image quality. In particular, the latents-decode VAE step must be run at fp32 precision (using a slider at the bottom of the VAE node), and images will change dramatically as the denoising threshold used by the refiner is adjusted.
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml
:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0
This is a list of known bugs in 3.0 as well as features that are planned for inclusion in later releases:
For support, please use this repository's GitHub Issues tracking service, or join our Discord.
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.
- test-invoke-pip by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/2892
- list_sessions handler by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3109
- ImageField by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3170
- sampler_name --> scheduler by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3169
- t2i graph by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3318
- context arg in LatentsToLatents by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3387
- .and() concatenating feature by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/3497
- image_origin from most places by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3537
- openapi-fetch; fix upload issue by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3674
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v3.0.0rc2
Published by lstein over 1 year ago
We are pleased to announce a new beta release of InvokeAI 3.0 for user testing.
Please use the 3.0.0 release discussion thread, InvokeAI Issues, or the InvokeAI Discord Server to report bugs and other issues.
invokeai.yaml
Known bug in this beta: If you are installing InvokeAI completely from scratch, the very first image generation may produce a black screen. Just reload the web page and the problem will be resolved for this and subsequent generations.
Quite a lot has changed, both internally and externally.
The WebUI can now be launched from the command line using either invokeai-web
(preferred new way) or invokeai --web
(deprecated old way).
The previous command line tool has been removed and replaced with a new developer-oriented tool, invokeai-node-cli, that allows you to experiment with InvokeAI nodes. The console-based model installer, invokeai-model-install, has been redesigned and now provides tabs for installing checkpoint models, diffusers models, ControlNet models, LoRAs, and Textual Inversion embeddings. You can install models stored locally on disk, or install them using their web URLs or Repo_IDs.
Internally, the code base has been completely rewritten to be much easier to maintain and extend. Importantly, all image generation options are now represented as "nodes": small pieces of code that transform inputs into outputs and can be connected together into a graph of operations. Generation and image manipulation operations can now be easily extended by writing new InvokeAI nodes.
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. As of this writing (July 18, 2023), SDXL has not been officially released, but a pre-release 0.9 version is widely available. InvokeAI provides support for SDXL image generation via its Nodes Editor, a user interface that allows you to create and customize complex image generation pipelines using a drag-and-drop interface. Currently SDXL generation is not directly supported in the text2image, image2image, and canvas panels, but we expect to add this feature in the next few days.
SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. For best results, the initial image is handed off from the base to the refiner before all the denoising steps are complete. It is not clear whether SDXL 1.0, when it is released, will require the refiner.
To experiment with SDXL, you'll need the "base" and "refiner" models. Currently a beta version of SDXL, version 0.9, is available from HuggingFace for research purposes. To obtain access, you will need to register with HF at https://huggingface.co/join, obtain an access token at https://huggingface.co/settings/tokens, and add the access token to your environment. To do this, run the InvokeAI launcher script, activate the InvokeAI virtual environment with option [8], and type the command huggingface-cli login
. Paste in your access token from HuggingFace and hit return (the token will not be echoed to the screen).
Now navigate to https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9 and fill out the access request form for research use. You will be granted instant access to download. Next launch the InvokeAI console-based model installer by selecting launcher option [5] or by activating the virtual environment and giving the command `invokeai-model-install`. In the STARTER MODELS section, select the checkboxes for `stable-diffusion-xl-base-0-9` and `stable-diffusion-xl-refiner-0-9`. Press Apply Changes to install the models and keep the installer running, or Apply Changes and Exit to install the models and exit back to the launcher menu.
Alternatively you can install these models from the Web UI Model Manager (cube at the bottom of the left-hand panel): navigate to Import Models and, in the field labeled Location, type in the repo id of the base model, which is `stabilityai/stable-diffusion-xl-base-0.9`. Press Add Model and wait for the model to download and install (the page will freeze while this is happening). After receiving confirmation that the model installed, repeat with `stabilityai/stable-diffusion-xl-refiner-0.9`.
Note that these are large models (12 GB each) so be prepared to wait a while.
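Before starting the download, it may be worth checking free disk space. The sketch below uses POSIX `df` output (the `$HOME` path is an assumption; point it at the volume holding your InvokeAI root):

```shell
# Sketch: report whole gigabytes free on the filesystem containing a path.
# Uses POSIX `df -P -k`; column 4 of the second line is available KiB.
free_gb() {
  df -P -k "$1" | awk 'NR==2 { print int($4 / 1048576) }'
}

# Both SDXL 0.9 models together are roughly 24 GB, so leave some headroom.
free_gb "$HOME"
```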
To use the installed models enter the Node Editor (inverted "Y" in the left-hand panel) and upload either the SDXL base-only or SDXL base+refiner invocation graphs. This will load and display a flow diagram showing the steps in generating an SDXL image.
Ensure that the SDXL Model Loader (leftmost column, bottom) is set to load the SDXL base model on your system, and that the SDXL Refiner Model Loader (third column, top) is set to load the SDXL refiner model on your system. Find the nodes that contain the example prompt and style ("bluebird in a sakura tree" and "chinese classical painting") and replace them with the prompt and style of your choice. Then press the Invoke button. If all goes well, an image will be generated and added to the image gallery.
Be aware that SDXL support is an experimental feature and is not 100% stable. When designing your own SDXL pipelines, be aware that certain settings have a disproportionate effect on image quality. In particular, the latents-decode VAE step must be run at `fp32` precision (using a slider at the bottom of the VAE node), and images will change dramatically as the denoising threshold used by the refiner is adjusted.
Also be aware that SDXL requires at least 8 GB of VRAM in order to render 1024x1024 images. For best performance, we recommend the following settings in `invokeai.yaml`:

```yaml
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0
```
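As a quick sanity check (a sketch only; the config path varies by install), you can grep your invokeai.yaml for the recommended keys:

```shell
# Sketch: verify that the recommended SDXL settings appear in an
# invokeai.yaml file passed as the first argument. Prints any missing
# key and returns non-zero if one is absent.
check_sdxl_settings() {
  cfg="$1"; missing=0
  for key in "precision: float16" "max_cache_size: 12.0" "max_vram_cache_size: 0.0"; do
    grep -qF "$key" "$cfg" || { echo "missing: $key"; missing=1; }
  done
  return "$missing"
}

# Example (path is an assumption -- use your own root directory):
# check_sdxl_settings "$HOME/invokeai/invokeai.yaml"
```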
Some features are missing or not quite working yet. These include:
The following 2.3 features are not available:
To install 3.0.0 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3
instead of the previously-used invokeai
directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
InvokeAI-installer-v3.0.0+b10.zip
All users can upgrade from previous beta versions using the launcher's "upgrade" facility. If you are on Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see the workaround below):
1. Launch `invoke.sh` or `invoke.bat`.
2. Select the upgrade menu option [9].
3. Choose version `v3.0.0+b10`.
Windows users can instead follow this recipe:
1. Launch `invoke.sh` or `invoke.bat`.
2. From the command line, run:

pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0+b10.zip" --use-pep517 --upgrade

(Replace `v3.0.0+b10` with the current version number.)
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
To get back to a working 2.3 directory, rename all the `*.orig` files and directories to their original names (without the `.orig` suffix), run the update script again, and select [1] "Update to the latest official release".
We provide a script, `invokeai-migrate3`, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command-line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the `--from` and `--to` arguments respectively. This will migrate your models as well as the settings inside `invokeai.init`. You may provide the same `--from` and `--to` directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
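For illustration, a migration from a default-named 2.3 root into a fresh 3.0 root might look like this (both paths are hypothetical; substitute your own directories):

```shell
# Hypothetical paths -- adjust to your actual 2.3 and 3.0 root directories.
OLD_ROOT="$HOME/invokeai"        # existing 2.3-format root
NEW_ROOT="$HOME/invokeai-3"      # new 3.0 root created by the installer

# Passing the same directory to both flags upgrades a root in place;
# the original models and configuration files are backed up first.
if command -v invokeai-migrate3 > /dev/null; then
  invokeai-migrate3 --from "$OLD_ROOT" --to "$NEW_ROOT"
else
  echo "invokeai-migrate3 not on PATH -- activate the InvokeAI venv first"
fi
```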
Upgrading with pip
Once 3.0.0 is released (out of alpha and beta), developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.0+b8
To upgrade to an `xformers` version if you are not currently using `xformers`, use:
pip install --use-pep517 --upgrade InvokeAI[xformers]
You can see which versions are available by going to The PyPI InvokeAI Project Page
Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
If you are getting the message "Server Error" in the web interface, you can help us track down the bug by getting a stack trace from the failed operation. This involves several steps. Please see this Discord thread for a step-by-step guide to generating stack traces.
If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3
source code branch, or use the pre-nodes
tag from the main
branch. Developers seeking to contribute to InvokeAI should use the head of the main
branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.
- `merge_group` trigger to test-invoke-pip.yml by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/2590
- `cuda.get_mem_info` always gets a specific device index by @keturn in https://github.com/invoke-ai/InvokeAI/pull/2700
- `compel` library by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/2729
- `test-invoke-pip` by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/2892
- `list_sessions` handler by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3109
- `ImageField` by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3170
- `sampler_name` --> `scheduler` by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3169
- `t2i` graph by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3318
- `context` arg in LatentsToLatents by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3387
- `.and()` concatenating feature by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/3497
- `image_origin` from most places by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3537
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.0...v3.0.0+a3
Published by lstein over 1 year ago
We are pleased to announce a minor update to InvokeAI with the release of version 2.3.5.post2.
This is a bugfix release. In previous versions, the built-in updating script did not update the Xformers library when the torch library was upgraded, leaving people with a version that ran on CPU only. Install this version so that the issue does not recur when updating to future versions of InvokeAI, such as 3.0.0.
As a bonus, this version allows you to apply a checkpoint VAE, such as `vae-ft-mse-840000-ema-pruned.ckpt`, to a diffusers model, without worrying about finding the diffusers version of the VAE. From within the web Model Manager, choose the diffusers model you wish to change, press the edit button, and enter the Location of the VAE file of your choice. The field will now accept either a .ckpt file or a diffusers directory.
To install 2.3.5.post2 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.5.post2.zip
If you are using the Xformers library, and running v2.3.5.post1 or earlier, please do not use the built-in updater to update, as it will not update xformers properly. Instead, either download the installer and ask it to overwrite the existing invokeai
directory (your previously-installed models and settings will not be affected), or use the following recipe to perform a command-line install:
pip install invokeai[xformers] --use-pep517 --upgrade
If you do not use Xformers, the built-in update option (#9) will work, as will the above command without the "[xformers]" part. From `v2.3.5.post2` onward, the updater script will work properly with Xformers installed.
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==2.3.5.post2
To upgrade to an `xformers` version if you are not currently using `xformers`, use:
pip install --use-pep517 --upgrade InvokeAI[xformers]
You can see which versions are available by going to The PyPI InvokeAI Project Page
These are known bugs in the release.
- Some virus scanners may raise a false alarm on the `codeformer.pth` face restoration model, as well as the `CIDAS/clipseg` and `runwayml/stable-diffusion-v1.5` models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
This is very likely to be the last release on the v2.3
source code branch. All new features are being added to the main
branch. At the current time (mid-May, 2023), the main
branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of "nodes".
If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3
source code branch, or use the pre-nodes
tag from the main
branch. Developers seeking to contribute to InvokeAI should use the head of the main
branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v2.3.5.post2
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5.post1...v2.3.5.post2
Published by lstein over 1 year ago
We are pleased to announce a minor update to InvokeAI with the release of version 2.3.5.post1.
The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.5 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.
Here are the new library versions:
Library | Version |
---|---|
Torch | 2.0.0 |
Diffusers | 0.16.1 |
Xformers | 0.0.19 |
Compel | 1.1.5 |
When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running (thanks to @pedantic79 for this).
To install or upgrade to InvokeAI 2.3.5.post1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.5.post1.zip
If you are using the Xformers library, please do not use the built-in updater to update, as it will not update xformers properly. Instead, either download the installer and ask it to overwrite the existing invokeai
directory (your previously-installed models and settings will not be affected), or use the following recipe to perform a command-line install:
pip install invokeai[xformers] --use-pep517 --upgrade
If you do not use Xformers, the built-in update option (# 9) will work, as will the above command without the "[xformers]" part.
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using `pip install --use-pep517 --upgrade InvokeAI`. You may specify a particular version by adding the version number to the command, as in `InvokeAI==2.3.5.post1`. To upgrade to an `xformers` version if you are not currently using `xformers`, use `pip install --use-pep517 --upgrade InvokeAI[xformers]`. You can see which versions are available by going to The PyPI InvokeAI Project Page.
These are known bugs in the release.
- Some virus scanners may raise a false alarm on the `codeformer.pth` face restoration model, as well as the `CIDAS/clipseg` and `runwayml/stable-diffusion-v1.5` models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
This is very likely to be the last release on the v2.3
source code branch. All new features are being added to the main
branch. At the current time (mid-May, 2023), the main
branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of "nodes".
If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3
source code branch, or use the pre-nodes
tag from the main
branch. Developers seeking to contribute to InvokeAI should use the head of the main
branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.4.post1...v2.3.5-rc1
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v2.3.5.post1
Published by lstein over 1 year ago
We are pleased to announce a features update to InvokeAI with the release of version 2.3.5. This is currently a pre-release for community testing and bug reporting.
This release expands support for additional LoRA and LyCORIS models, upgrades diffusers
to 0.15.1, and fixes a few bugs.
To install or upgrade to InvokeAI 2.3.5 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.5.zip
To update from versions 2.3.1 or higher, select the "update" option (choice 6) in the invoke.sh
/invoke.bat
launcher script and choose the option to update to 2.3.5. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai
directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate "yes".
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using `pip install --use-pep517 --upgrade InvokeAI`. You may specify a particular version by adding the version number to the command, as in `InvokeAI==2.3.5`. To upgrade to an `xformers` version if you are not currently using `xformers`, use `pip install --use-pep517 --upgrade InvokeAI[xformers]`. You can see which versions are available by going to The PyPI InvokeAI Project Page.
These are known bugs in the release.
- Some virus scanners may raise a false alarm on the `codeformer.pth` face restoration model, as well as the `CIDAS/clipseg` and `runwayml/stable-diffusion-v1.5` models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
- When the `xformers` memory-efficient attention module is used, each image generated with the same prompt and settings will be slightly different. `xformers 0.0.19` reduces or eliminates this problem, but hasn't been extensively tested with InvokeAI. If you wish to upgrade, you may do so by entering the InvokeAI "developer's console" and giving the command `pip install xformers==0.0.19`. You may see a message about InvokeAI being incompatible with this version, which you can safely ignore. Be sure to report any unexpected behavior to the Issues pages.

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
This is very likely to be the last release on the v2.3
source code branch. All new features are being added to the main
branch. At the current time (late April, 2023), the main
branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of "nodes".
If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3
source code branch, or use the pre-nodes
tag from the main
branch. Developers seeking to contribute to InvokeAI should use the head of the main
branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.
Many thanks to these individuals, as well as @damian0815 for his contribution to this release.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.4.post1...v2.3.5-rc1
Published by lstein over 1 year ago
We are pleased to announce a features update to InvokeAI with the release of version 2.3.4.
Update, 13 April 2023: `2.3.4.post1` is a hotfix that corrects an installer crash resulting from an update to the upstream `diffusers` library. If you have recently tried to install 2.3.4 and experienced a crash relating to "crossattention," this release will fix the issue.
This features release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.
LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix `.safetensors`. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)
To use LoRA/LyCORIS models in InvokeAI:
Download the `.safetensors` files of your choice and place them in `/path/to/invokeai/loras`. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.
Add `withLora(lora-file,weight)` to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named `loras/sushi.safetensors` is present:
family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)
Multiple `withLora()` prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. Negative weights are also allowed, which can lead to some interesting effects.
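The fragment syntax is simple enough to pick out mechanically. As an illustrative sketch (not InvokeAI code), grep can list the withLora() fragments embedded in a prompt:

```shell
# Sketch: list the withLora(...) fragments in a prompt string.
# Illustration of the syntax only; InvokeAI does its own parsing.
prompt='family at dinner eating sushi withLora(sushi,0.9) withLora(anime)'
echo "$prompt" | grep -o 'withLora([^)]*)'
# -> withLora(sushi,0.9)
# -> withLora(anime)
```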
Generate as you usually would! If you find that the image is too "crisp," try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you'll get the best results when running the LoRA on top of a model similar to, or identical with, the one that was used during the LoRA's training. Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice versa: this will trigger a non-fatal error message and generation will not proceed.
You can change the location of the `loras` directory by passing the `--lora_directory` option to `invokeai`.
This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt as shown in the screenshot below.
Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted withLora()
or <textual-inversion>
prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.
Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.
By default the Textual Inversion menu only shows locally installed models found at startup time in /path/to/invokeai/embeddings
. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (with five or more likes) in the Textual Inversion menu by going to Settings and turning on "Show Textual Inversions from HF Concepts Library." When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.
This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip
utility is kept up to date.
To install or upgrade to InvokeAI 2.3.4 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.4.post1.zip
To update from versions 2.3.1 or higher, select the "update" option (choice 6) in the invoke.sh
/invoke.bat
launcher script and choose the option to update to 2.3.4. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai
directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate "yes".
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using `pip install --use-pep517 --upgrade InvokeAI`. You may specify a particular version by adding the version number to the command, as in `InvokeAI==2.3.4`. To upgrade to an `xformers` version if you are not currently using `xformers`, use `pip install --use-pep517 --upgrade InvokeAI[xformers]`. You can see which versions are available by going to The PyPI InvokeAI Project Page. (Pre-release note: this will only work after the official release.)
These are known bugs in the release.
- The `k_dpmpp_2a` sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
- Some virus scanners may raise a false alarm on the `codeformer.pth` face restoration model, as well as the `CIDAS/clipseg` and `runwayml/stable-diffusion-v1.5` models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
Many thanks to these individuals, as well as @blessedcoolant and @damian0815 for their contributions to this release.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.3...v2.3.4rc1