InvokeAI

InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.

Apache-2.0 License · 30K Downloads · 22.4K Stars · 194 Committers
InvokeAI - InvokeAI 3.3.0post1

Published by Millu about 1 year ago

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please visit our Documentation or join our Discord server!

🌟 What's New in 3.3.0:

  • T2I-Adapter is now supported
    • Models can be downloaded through the Model Manager or the model download function in the launcher script.
  • Multi IP-Adapter Support!
  • New nodes for working with faces
  • Improved model load times from disk
  • Hotkey fixes
  • Expanded translations (for many languages!)
  • Unified Canvas improvements and bug fixes

‼️ Things to Know:

  • Future updates will bring a couple of major changes:
    • Starting with 3.3, InvokeAI will be supported only on Python 3.10 and newer. Please begin preparing to upgrade your Python environment.
    • Community Nodes will need to update their import structure. InvokeAI internal services are being reorganized to better support Community Nodes and future development efforts.
  • T2I-Adapter and ControlNet cannot currently be used at the same time. The regular UI prevents this, but errors will occur if you combine them in workflow development.
  • T2I-Adapters currently require an image output size that is a multiple of 64. The regular UI enforces this, but again, you will need to adhere to this constraint in workflow development.

💿 Installation and Upgrading:

To install version 3.3.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

Download the installer: InvokeAI-installer-v3.3.0post1.zip

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors:

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.2.0...v3.3.0

InvokeAI - v3.3.0rc1

Published by Millu about 1 year ago

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please visit our Documentation or join our Discord server!

🌟 What's New in 3.3.0:

  • T2I-Adapter is now supported
    • Models can be downloaded through the Model Manager or the model download function in the launcher script.
  • Multi IP-Adapter Support!
  • New nodes for working with faces
  • Improved model load times from disk
  • Hotkey fixes
  • Unified Canvas improvements and bug fixes

‼️ Things to Know:

  • InvokeAI v3.4 will bring a couple of major changes:
    • This is the last release that will support Python 3.9. InvokeAI will only be supported for Python 3.10 and newer after InvokeAI v3.4.
    • Community Nodes will need to update their import structure. InvokeAI internal services are being reorganized to better support Community Nodes and future development efforts.
  • T2I-Adapter and ControlNet cannot be used at the same time

💿 Installation and Upgrading:

To install version 3.3.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

Download the installer: InvokeAI-installer-v3.3.0rc1.zip

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors:

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.2.0...v3.3.0rc1

InvokeAI - InvokeAI v3.2.0

Published by Millu about 1 year ago

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation or join the Discord!

What's New in 3.2.0:

  • Queueing
    • This is a powerful new feature that allows you to queue multiple image generations, create batches, manage the queue, and gain insight into generations.
  • IP-Adapter is now supported
    • Instructions on getting started with IP-Adapter are located in the "Things to Know" section below
  • TAESD is now supported. You can download TAESD or TAESDXL through the model manager UI
  • LoRAs and ControlNets can now be recalled with the "Use All" function
  • New nodes! Load prompts from a file, string manipulation, and expanded math functions
  • Node caching - improve performance by using previously cached generation values
  • V-prediction for SD1.5 is now supported
  • Importing images from previous versions of InvokeAI has been fixed
  • Database maintenance script can be run with invokeai-db-maintenance
  • View image metadata with the invokeai-metadata command (see the example after this list)
  • Workflow Editor UI/UX improvements
  • Unified Canvas improvements & bug fixes
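
As an illustration of the invokeai-metadata command mentioned above, it can be pointed at a generated image to print the metadata embedded in it. The path below is a placeholder and the exact arguments accepted are an assumption, so run the command with --help to confirm for your version:

invokeai-metadata /path/to/outputs/image.png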

Things to Know:

  • If you experience the server error, TypeError: Invoker.create_execution_state() got an unexpected keyword argument 'queue_id', try clearing your local browser cache or resetting the InvokeAI UI (Settings -> Reset UI) before running a generation.

  • You might see a red alert icon on your nodes after loading a workflow. This indicates that the node in the workflow is from an older version of InvokeAI, or the node doesn't have a version. If your workflow runs, you may safely ignore this, and we will add functionality to "upgrade" the un-versioned nodes in a future update. If the workflow does not work, you will need to delete and add the nodes.

  • To get started with IP-Adapter, you'll need to download the image encoder and IP-Adapter for the desired base model. Once the models are installed, IP-Adapter can be used under the "Control Adapters" options.

    Image Encoders:

    IP-Adapter Models:

    These can be installed from the Model Manager by choosing "Import Models" and pasting in the repo IDs of the desired models. Remember to install both the model and the image encoder! For example, to get started with IP-Adapter for SD1.5, these are the repo IDs:

    • InvokeAI/ip_adapter_plus_sd15
    • InvokeAI/ip_adapter_sd_image_encoder

    or from the command line by starting the "Developer's Console" from the invoke.bat launcher and pasting this command:

    invokeai-model-install --add InvokeAI/ip_adapter_sd_image_encoder InvokeAI/ip_adapter_sdxl_image_encoder InvokeAI/ip_adapter_sd15 InvokeAI/ip_adapter_plus_sd15 InvokeAI/ip_adapter_plus_face_sd15 InvokeAI/ip_adapter_sdxl
    

Installation and Upgrading:

To install v3.2.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI v3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

Download the installer: InvokeAI-installer-v3.2.0.zip

Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute or reach out to imic on Discord!

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.1...v3.2.0

InvokeAI - InvokeAI v3.2.0rc3

Published by Millu about 1 year ago

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation or join the Discord!

What's New in 3.2.0:

  • Queueing
    • This is a powerful new feature that allows you to queue multiple image generations, create batches, manage the queue, and gain insight into generations.
  • IP-Adapter is now supported
    • Instructions on getting started with IP-Adapter are located in the "Things to Know" section below
  • TAESD is now supported. You can download TAESD or TAESDXL through the model manager UI
  • LoRAs and ControlNets can now be recalled with the "Use All" function
  • New nodes! Load prompts from a file, string manipulation, and expanded math functions
  • Node caching - improve performance by using previously cached generation values
  • V-prediction for SD1.5 is now supported
  • Importing images from previous versions of InvokeAI has been fixed
  • Database maintenance script can be run with invokeai-db-maintenance
  • View image metadata with the invokeai-metadata command
  • Workflow Editor UI/UX improvements
  • Unified Canvas improvements & bug fixes

Things to Know:

  • You might see a red alert icon on your nodes after loading a workflow. This indicates that the node in the workflow is from an older version of InvokeAI, or the node doesn't have a version. If your workflow runs, you may safely ignore this, and we will add functionality to "upgrade" the un-versioned nodes in a future update. If the workflow does not work, you will need to delete and add the nodes.

  • To get started with IP-Adapter, you'll need to download the image encoder and IP-Adapter for the desired base model. These can be downloaded through the Model Manager. Once the models are installed, IP-Adapter can be used under the "Control Adapters" options.

    Image Encoders:

    IP-Adapter Models:

    These can be installed from the Model Manager by choosing "Import Models" and pasting in the following list of repo_ids:

    InvokeAI/ip_adapter_sd_image_encoder InvokeAI/ip_adapter_sdxl_image_encoder InvokeAI/ip_adapter_sd15 InvokeAI/ip_adapter_plus_sd15 InvokeAI/ip_adapter_plus_face_sd15 InvokeAI/ip_adapter_sdxl
    

    or from the command line by starting the "Developer's Console" from the invoke.bat launcher and pasting this command:

    invokeai-model-install --add InvokeAI/ip_adapter_sd_image_encoder InvokeAI/ip_adapter_sdxl_image_encoder InvokeAI/ip_adapter_sd15 InvokeAI/ip_adapter_plus_sd15 InvokeAI/ip_adapter_plus_face_sd15 InvokeAI/ip_adapter_sdxl
    

Installation and Upgrading:

To install v3.2.0, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI v3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

Download the installer: InvokeAI-installer-v3.2.0rc3.zip

Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute or reach out to imic on Discord!

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.1...v3.2.0rc3

InvokeAI - InvokeAI v3.2.0rc2

Published by Millu about 1 year ago

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation or join the Discord!

What's New in 3.2.0:

  • Queueing
    • This is a powerful new feature that allows you to queue multiple image generations, create batches, manage the queue, and have insight into generations.
  • IP-Adapter is now supported.
    • To get started with IP-Adapter, download the model manually or through the model manager and select it under the "Control Adapter" settings. Once you have provided an image, it will use the image to help prompt the model during image generation.
  • TAESD is now supported. You can download TAESD or TAESDXL through the model manager UI as you would any other model from HuggingFace.
  • LoRAs can now be recalled with the "Use All" function
  • New nodes! Load prompts from a file, string manipulation, and expanded math functions
  • Importing images from previous versions of InvokeAI has been fixed
  • Database maintenance script can be run with invokeai-db-maintenance
  • View image metadata with the invokeai-metadata command

Things to Know:

  • You might see a red alert icon on your nodes after loading a workflow. This indicates that the node in the workflow is from an older version of InvokeAI, or the node doesn't have a version. If your workflow runs, you may safely ignore this, and we will add functionality to "upgrade" the un-versioned nodes in a future update. If the workflow does not work, you will need to delete and add the nodes.

Installation and Upgrading

To install v3.2.0 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

Download the installer: InvokeAI-installer-v3.2.0rc2.zip

Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.1...v3.2.0rc2

InvokeAI - InvokeAI v3.2.0rc1

Published by Millu about 1 year ago

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation, and check out the 3.2 Release Landing Page for the Community Edition!

What's New in 3.2.0:

  • Queueing
    • This is a powerful new feature that allows you to queue multiple image generations, create batches, manage the queue, and have insight into generations.
  • IP-Adapter is now supported
  • TAESD is now supported
  • Image Metadata is now preserved with Workflows
  • LoRAs can now be recalled with the "Use All" function
  • New nodes! Load prompts from a file, string manipulation, and expanded math functions
  • Importing images from previous versions of InvokeAI has been fixed
  • Database maintenance script can be run with invokeai-db-maintenance
  • View image metadata with the invokeai-metadata command

Things to Know:

  • You might see a red alert icon on your nodes after loading a workflow. This indicates that the node in the workflow is from an older version of InvokeAI, or the node doesn't have a version. If your workflow runs, you may safely ignore this, and we will add functionality to "upgrade" the un-versioned nodes in a future update. If the workflow does not work, you will need to delete and add the nodes.

Installation and Upgrading

To install v3.2.0 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

Download the installer: InvokeAI-installer-v3.2.0rc1.zip

Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.1...v3.2.0rc1

InvokeAI - InvokeAI 3.1.1

Published by Millu about 1 year ago

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation, and check out the 3.1 Release Landing Page for the Community Edition!

What's New in 3.1.1:

  • Node versioning
  • Nodes now support polymorphic inputs (inputs that accept either a single value of a given type or a list of that type, e.g. Union[str, list[str]]); see the sketch after this list
  • SDXL Inpainting Model is now supported
  • Inpainting & Outpainting Improvements
  • Workflow Editor UI Improvements
  • Model Manager Improvements
  • Fixed configuration script trying to set VRAM on macOS
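
As a quick, framework-agnostic illustration of what a polymorphic input means here (this is plain Python 3.9+ typing, not the InvokeAI node API), the same field can accept either a single value or a list of values and be normalized before use:

from typing import Union

Prompt = Union[str, list[str]]  # a "polymorphic" input: one value or a list of values

def normalize(prompt: Prompt) -> list[str]:
    # Coerce the polymorphic input to a list so downstream code handles a single shape.
    return [prompt] if isinstance(prompt, str) else list(prompt)

assert normalize("a cat") == ["a cat"]
assert normalize(["a cat", "a dog"]) == ["a cat", "a dog"]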

Things to Know:

  • You might see a red alert icon on your nodes after loading a workflow. This indicates that the node in the workflow is from an older version of InvokeAI, or the node doesn't have a version. If your workflow runs, you may safely ignore this, and we will add functionality to "upgrade" the un-versioned nodes in a future update. If the workflow does not work, you will need to delete and add the nodes.

Installation and Upgrading

To install v3.1.1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

Download the installer: InvokeAI-installer-v3.1.1.zip

Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.0...v3.1.1

InvokeAI - InvokeAI 3.1.1

Published by Millu about 1 year ago

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation, and check out the 3.1 Release Landing Page for the Community Edition!

What's New in 3.1.1:

  • Node versioning
  • Nodes now support polymorphic inputs (inputs that accept either a single value of a given type or a list of that type, e.g. Union[str, list[str]])
  • SDXL Inpainting Model is now supported
  • Inpainting & Outpainting Improvements
  • Workflow Editor UI Improvements
  • Model Manager Improvements
  • Fixed configuration script trying to set VRAM on macOS

Things to Know:

  • You might see a red alert icon on your nodes after loading a workflow. This indicates that the node in the workflow is from an older version of InvokeAI, or the node doesn't have a version. If your workflow still works, you may safely ignore this, and we will add functionality to "upgrade" the unversioned nodes in a future update. If the workflow does not work, you will need to delete and add the nodes again.

Installation and Upgrading

To install v3.1.1, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

Download the installer: InvokeAI-installer-v3.1.1.zip

Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.0...v3.1.1rc1

InvokeAI - InvokeAI 3.1.0

Published by lstein about 1 year ago

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation, and check out the 3.1 Release Landing Page for the Community Edition!

Download the installer: InvokeAI-installer-v3.1.0.zip

What's New in v3.1.0

Workflows

InvokeAI 3.1.0 introduces a powerful new tool to aid the image generation process: the Workflow Builder. Workflows combine the power of node-based software with the ease of use of a GUI to deliver the best of both worlds.

The Node Editor allows you to build the custom image generation workflows you need, and also lets you create and use custom nodes, making InvokeAI a fully extensible platform.
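
For readers curious what a custom node looks like, here is a minimal sketch. The node name is hypothetical, and the exact import paths and decorator signature are assumptions based on the 3.x-era node API, which changes between releases, so treat this as an illustration of the shape rather than copy-paste code for any specific version:

# Illustrative only: the node name is hypothetical, and import locations and the
# @invocation signature are assumptions from the 3.x node API that vary by version.
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import StringOutput

@invocation("shout_prompt", title="Shout Prompt", tags=["string"], version="1.0.0")
class ShoutPromptInvocation(BaseInvocation):
    """Upper-cases a prompt string, so the node's effect is easy to see in a workflow."""

    prompt: str = InputField(description="The prompt text to upper-case")

    def invoke(self, context: InvocationContext) -> StringOutput:
        # Return the transformed string as the node's output.
        return StringOutput(value=self.prompt.upper())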

To get started with nodes in InvokeAI, take a look at our example workflows, or some of the custom Community Nodes.

A zip file of example workflows can be found at the bottom of this page under Assets.

Other New Features

  • Expanded SDXL support across all areas of InvokeAI.
  • Enhanced In-painting & Out-painting capabilities.
  • Improved Control Asset Usage, including from the Unified Canvas.
  • Newly added nodes for better functionality.
  • Seamless Tiling is back, with SDXL support!
  • Improved Inpainting & Outpainting
  • Generation statistics can be viewed from the command line after generation
  • Hot-reloading is now available for python files in the application
  • LoRAs are sorted alphabetically
  • Symbolic links to directories in the autoimport folder are now supported
  • UI/UX Improvements
  • Interactively configure image generation options, the attention system, and the VRAM cache
  • ...and so much more! You can view the full change log here

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install v3.1.0 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you have InvokeAI 2.3.5 or older installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.

InvokeAI-installer-v3.1.0.zip

Upgrading in place

All users can upgrade from 3.0.2 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.1 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select "Manually enter the tag name for the version you wish to update to" option [3]
  5. Select option [1] to upgrade to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the "Developer's console" option [8]
  4. Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.1.0.zip" --use-pep517 --upgrade
invokeai-configure --root .

This will produce a working 3.1.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the "*.orig" files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".

Note:

  • If you had issues with inpainting on a previous InvokeAI 3.0 version, delete your models/.cache folder before proceeding.

What to do if problems occur during the install

Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.

In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.

Migrating images from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-import-images, which will copy images from any previous version of InvokeAI to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:

invokeai-import-images

This will prompt you to select the destination and source directories, and allow you to select which image gallery board to import into.

Migrating models and settings from an old InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
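
For example (the paths below are placeholders; substitute your own), migrating into a fresh 3.0 root directory looks like the first command, and passing the same directory twice performs the in-place upgrade described above:

invokeai-migrate3 --from ~/invokeai --to ~/invokeai-3
invokeai-migrate3 --from ~/invokeai --to ~/invokeai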

Upgrading using pip

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade  InvokeAI==3.1.0
invokeai-configure --yes --skip-sd-weights

Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.

To download the base and refiner SDXL models, you have several options:

  1. Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
  2. Use the Web's Model Manager to select "Import Models" and when prompted provide the HuggingFace repo_ids for the two models:
    • stabilityai/stable-diffusion-xl-base-1.0
    • stabilityai/stable-diffusion-xl-refiner-1.0
  3. Download the models manually and cut and paste their paths into the Location field in "Import Models"

Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images, and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
ram: 12.0
vram: 0.5

Known Issues in 3.1

This is a list of known issues in 3.1.0 as well as features that are planned for inclusion in later releases:

  • The max_vram_cache and ram_cache settings in invokeai.yaml have been deprecated and renamed to vram and ram. To adjust cache size, we recommend using the configure script (option [6] in the launcher) to adjust them.
  • Variation generation was not fully functional and did not make it into the release.
  • High-res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find high-res optimization workflows attached and in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the Workflow tool.

Getting Help

For support, please use this repository's GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.

New Contributors

Thank you to all of the new and existing contributors to InvokeAI. We appreciate your efforts and contributions!

Detailed Change Log since 3.0.2

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.2post1...v3.1.0rc1

InvokeAI - InvokeAI v3.0.2post1

Published by Millu about 1 year ago

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation Pages.

What's New in v3.0.2post1

  • Support for LoRA models in diffusers format
  • Warn instead of crashing when a corrupted model is detected
  • Bug fix for auto-adding to a board

What's New in v3.0.2

  • LoRA support for SDXL is now available
  • Multi-select actions are now supported in the Gallery
  • Images are automatically sent to the board that is selected at invocation
  • Images from previous versions of InvokeAI can be imported with the invokeai-import-images command
  • Inpainting models imported from A1111 will now work with InvokeAI (see upgrading note)
  • Model merging functionality has been fixed
  • Improved Model Manager UI/UX
  • InvokeAI 3.0 can be served via HTTPS (see the example configuration after this list)
  • Execution statistics are visible in the terminal after each invocation
  • ONNX models are now supported for use with Text2Image
  • Pydantic errors when upgrading in place have been resolved
  • Code formatting is now part of the CI/CD pipeline
  • ...and lots more! You can view the full change log here
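
To illustrate the new HTTPS option mentioned above, the web server can be pointed at a certificate and key pair through invokeai.yaml. The setting names below (ssl_certfile and ssl_keyfile) are assumptions about the relevant options and the paths are placeholders, so verify against the configuration documentation for your version:

ssl_certfile: /path/to/cert.pem   # placeholder path; assumed setting name
ssl_keyfile: /path/to/key.pem     # placeholder path; assumed setting name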

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.2post1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.

InvokeAI-installer-v3.0.2post1.zip

Upgrading in place

All users can upgrade from 3.0.1 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0.2 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select "Manually enter the tag name for the version you wish to update to" option [3]
  5. Select option [1] to upgrade to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the "Developer's console" option [8]
  4. Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.2.zip" --use-pep517 --upgrade
invokeai-configure --root .

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the "*.orig" files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".

Note:

  • If you had issues with inpainting on a previous InvokeAI 3.0 version, delete your models/.cache folder before proceeding.

What to do if problems occur during the install

Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.

In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade  InvokeAI==3.0.2
invokeai-configure --yes --skip-sd-weights

Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.

To download the base and refiner SDXL models, you have several options:

  1. Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
  2. Use the Web's Model Manager to select "Import Models" and when prompted provide the HuggingFace repo_ids for the two models:
    • stabilityai/stable-diffusion-xl-base-1.0
    • stabilityai/stable-diffusion-xl-refiner-1.0
  3. Download the models manually and cut and paste their paths into the Location field in "Import Models"

Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images, and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5

Known Issues in 3.0

This is a list of known bugs in 3.0.2rc1 as well as features that are planned for inclusion in later releases:

  • Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
  • Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
  • High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.

Getting Help

For support, please use this repository's GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.

New Contributors

Thank you to all of the new contributors to InvokeAI. We appreciate your efforts and contributions!

Detailed Change Log since 3.0.2

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.2...v3.0.2post1

InvokeAI - InvokeAI Version 3.0.2

Published by Millu about 1 year ago

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation Pages.

What's New in v3.0.2

  • LoRA support for SDXL is now available
  • Multi-select actions are now supported in the Gallery
  • Images are automatically sent to the board that is selected at invocation
  • Images from previous versions of InvokeAI can be imported with the invokeai-import-images command
  • Inpainting models imported from A1111 will now work with InvokeAI (see upgrading note)
  • Model merging functionality has been fixed
  • Improved Model Manager UI/UX
  • InvokeAI 3.0 can be served via HTTPS
  • Execution statistics are visible in the terminal after each invocation
  • ONNX models are now supported for use with Text2Image
  • Pydantic errors when upgrading in place have been resolved
  • Code formatting is now part of the CI/CD pipeline
  • ...and lots more! You can view the full change log here

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.2 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.

InvokeAI-installer-v3.0.2.zip

Upgrading in place

All users can upgrade from 3.0.1 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0.2 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select "Manually enter the tag name for the version you wish to update to" option [3]
  5. Select option [1] to upgrade to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the "Developer's console" option [8]
  4. Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.2.zip" --use-pep517 --upgrade
invokeai-configure --root .

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the "*.orig" files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".

Note:

  • If you had issues with inpainting on a previous InvokeAI 3.0 version, delete your models/.cache folder before proceeding.

What to do if problems occur during the install

Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.

In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade  InvokeAI==3.0.2
invokeai-configure --yes --skip-sd-weights

Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.

To download the base and refiner SDXL models, you have several options:

  1. Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
  2. Use the Web's Model Manager to select "Import Models" and when prompted provide the HuggingFace repo_ids for the two models:
    • stabilityai/stable-diffusion-xl-base-1.0
    • stabilityai/stable-diffusion-xl-refiner-1.0
  3. Download the models manually and cut and paste their paths into the Location field in "Import Models"

Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images, and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5

Known Issues in 3.0

This is a list of known bugs in 3.0.2rc1 as well as features that are planned for inclusion in later releases:

  • Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
  • Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
  • High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.

Getting Help

For support, please use this repository's GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.

New Contributors

Thank you to all of the new contributors to InvokeAI. We appreciate your efforts and contributions!

Detailed Change Log

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.1rc3...v3.0.2

InvokeAI - InvokeAI 3.0.1 (hotfix 3)

Published by lstein about 1 year ago

InvokeAI Version 3.0.1

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.

InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.

To learn more about InvokeAI, please see our Documentation Pages.

What's New in v3.0.1

  • Stable Diffusion XL support in the Text2Image and Image2Image panels (but not the Unified Canvas).
  • Can install and run both diffusers-style and .safetensors-style SDXL models.
  • Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
  • Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
  • The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
  • During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several "starter" main models.
  • User interface cleanup to reduce visual clutter and increase usability.

v3.0.1post3 Hotfixes

This release contains a proposed hotfix for the Windows install OSError crashes that began appearing in 3.0.1. In addition, the following bugs have been addressed:

  • Corrected an issue where some SD-1 safetensors models could not be loaded or converted
  • The models_dir configuration variable used to customize the location of the models directory is now working properly (see the example after this list)
  • Fixed crashes of the text-based installer when the number of installed LoRAs and other models exceeded 72
  • SDXL metadata is now set and retrieved properly
  • Corrected post1's crash when running configure with the --yes flag
  • Corrected crashes in the CLI model installer
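
For the models_dir fix mentioned above, the setting lives in invokeai.yaml. The path below is a placeholder; by default, models are kept in the models/ folder inside the InvokeAI root directory:

models_dir: /data/invokeai/models   # placeholder path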

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.

InvokeAI-installer-v3.0.1post3.zip

Upgrading in place

All users can upgrade from 3.0.0 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select "Manually enter the tag name for the version you wish to update to" option [3]
  5. Select option [1] to upgrade to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the "Developer's console" option [8]
  4. Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1post3.zip" --use-pep517 --upgrade
invokeai-configure --root .

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the "*.orig" files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".

What to do if problems occur during the install

Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.

In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade  InvokeAI==3.0.1post3
invokeai-configure --yes --skip-sd-weights

Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.

To download the base and refiner SDXL models, you have several options:

  1. Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
  2. Use the Web UI's Model Manager, select "Import Models", and when prompted provide the HuggingFace repo_ids for the two models:
    • stabilityai/stable-diffusion-xl-base-1.0
    • stabilityai/stable-diffusion-xl-refiner-1.0
  3. Download the models manually and cut and paste their paths into the Location field in "Import Models" (one way to fetch them is sketched after this list)
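
For option 3, one way to fetch the models manually is to clone the HuggingFace repositories with git (a sketch assuming git and git-lfs are installed; the repositories are large, and any other download method that leaves the files on local disk works just as well):

git lfs install
git clone https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
git clone https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0

Then paste the resulting directory paths into the Location field.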

Also be aware that SDXL requires 6-8 GB of VRAM to render 1024x1024 images, as well as a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5

Known Bugs in 3.0

This is a list of known bugs in 3.0.1post3 as well as features that are planned for inclusion in later releases:

  • The merge script isn't working, and crashes during startup (will be fixed soon)
  • Inpainting models generated using the A1111 merge module are not loading properly (will be fixed soon)
  • Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
  • Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
  • Face restoration is no longer needed due to the improvements in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
  • High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
  • There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.

Getting Help

For support, please use this repository's GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.1...v3.0.1post1

InvokeAI - InvokeAI Version 3.0.1

Published by lstein about 1 year ago

InvokeAI Version 3.0.1

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.

InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.

To learn more about InvokeAI, please see our Documentation Pages.

What's New in v3.0.1

  • Stable Diffusion XL support in the Text2Image and Image2Image panels (but not the Unified Canvas).
  • Can install and run both diffusers-style and .safetensors-style SDXL models.
  • Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
  • Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
  • The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
  • During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several "starter" main models.
  • User interface cleanup to reduce visual clutter and increase usability.

Recent Changes

Since RC3, the following has changed:

  • Fixed crash on Macintosh M1 machines when rendering SDXL images
  • Fixed black images when generating on Macintoshes using the Unipc scheduler (falls back to CPU; slow)

Since RC2, the following has changed:

  • Added compatibility with Python 3.11
  • Updated diffusers to 0.19.0
  • Cleaned up console logging - can now change logging level as described in the docs
  • Added download of an updated SDXL VAE "sdxl-vae-fix" that may correct certain image artifacts in SDXL-1.0 models
  • Prevent web crashes during certain resize operations

Developer changes:

  • Reformatted the whole code base with the "black" tool for a consistent coding style
  • Added pre-commit hooks to reformat committed code on the fly

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

InvokeAI-installer-v3.0.1.zip

Upgrading in place

All users can upgrade from 3.0.0 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select "Manually enter the tag name for the version you wish to update to" option [3]
  5. Select option [1] to upgrade to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the "Developer's console" option [8]
  4. Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1.zip" --use-pep517 --upgrade
invokeai-configure --root .

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the "*.orig" files and directories to their original names (without the .orig suffix), run the update script again, and select [1] "Update to the latest official release".

What to do if problems occur during the install

Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --skip-sd-weights

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade  InvokeAI==3.0.1
invokeai-configure --skip-sd-weights

Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.

To download the base and refiner SDXL models, you have several options:

  1. Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
  2. Use the Web UI's Model Manager, select "Import Models", and when prompted provide the HuggingFace repo_ids for the two models:
    • stabilityai/stable-diffusion-xl-base-1.0
    • stabilityai/stable-diffusion-xl-refiner-1.0
  3. Download the models manually and cut and paste their paths into the Location field in "Import Models"

Also be aware that SDXL requires 6-8 GB of VRAM to render 1024x1024 images, as well as a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0

Users with 12 GB or more VRAM can reduce the time waiting for the image to start generating by setting max_vram_cache_size to 6 GB or higher.
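
For example, a card with 12 GB of VRAM might use settings along these lines (the cache sizes shown are illustrative; tune them to your hardware):

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 6.0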


Known Bugs in 3.0

This is a list of known bugs in 3.0.1 as well as features that are planned for inclusion in later releases:

  • Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
  • Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
  • Face restoration is no longer needed due to the improvements in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
  • High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
  • There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.

Getting Help

For support, please use this repository's GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.0...v3.0.1

Source code and previous installer files

The files below include the InvokeAI installer zip file, the full source code, and previous release candidates for 3.0.1

InvokeAI - InvokeAI 3.0.1 Release Candidate 3

Published by lstein about 1 year ago

InvokeAI Version 3.0.1

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.

InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.

To learn more about InvokeAI, please see our Documentation Pages.

What's New in v3.0.1

  • Stable Diffusion XL support in the Text2Image and Image2Image panels (but not the Unified Canvas).
  • Can install and run both diffusers-style and .safetensors-style SDXL models.
  • Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
  • Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
  • The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
  • During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several "starter" main models.
  • User interface cleanup to reduce visual clutter and increase usability.

Recent Changes

Since RC3, the following has changed:

  • Fixed crash on Macintosh M1 machines when rendering SDXL images
  • Fixed black images when generating on Macintoshes using the Unipc scheduler (falls back to CPU; slow)

Since RC2, the following has changed:

  • Added compatibility with Python 3.11
  • Updated diffusers to 0.19.0
  • Cleaned up console logging - can now change logging level as described in the docs
  • Added download of an updated SDXL VAE "sdxl-vae-fix" that may correct certain image artifacts in SDXL-1.0 models
  • Prevent web crashes during certain resize operations

Developer changes:

  • Reformatted the whole code base with the "black" tool for a consistent coding style
  • Added pre-commit hooks to reformat committed code on the fly

Known bugs:

  • Rendering SDXL-1.0 models causes a crash on certain (all?) Macintosh models with MPS chips

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

InvokeAI-installer-v3.0.1rc3.zip

Upgrading in place

All users can upgrade from 3.0.0 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select "Manually enter the tag name for the version you wish to update to" option [3]
  5. Select option [1] to upgrade to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the "Developer's console" option [8]
  4. Type the following command:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1rc3.zip" --use-pep517 --upgrade

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the "*.orig" files and directories to their original names (without the .orig suffix), run the update script again, and select [1] "Update to the latest official release".

What to do if problems occur during the install

Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Once 3.0.1 is released, developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI
invokeai-configure

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade  InvokeAI==3.0.1rc3
invokeai-configure

Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.

To download the base and refiner SDXL models, you have several options:

  1. Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
  2. Use the Web UI's Model Manager, select "Import Models", and when prompted provide the HuggingFace repo_ids for the two models:
    • stabilityai/stable-diffusion-xl-base-1.0
    • stabilityai/stable-diffusion-xl-refiner-1.0
      (note that these are preliminary IDs - these notes are being written before the SDXL release)
  3. Download the models manually and cut and paste their paths into the Location field in "Import Models"

Also be aware that SDXL requires 6-8 GB of VRAM to render 1024x1024 images, as well as a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0

Users with 12 GB or more VRAM can reduce the time waiting for the image to start generating by setting max_vram_cache_size to 6 GB or higher.


Known Bugs in 3.0

This is a list of known bugs in 3.0.1 as well as features that are planned for inclusion in later releases:

  • Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
  • Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
  • Face restoration is no longer needed due to the improvements in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
  • High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
  • There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.

Getting Help

For support, please use this repository's GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.0...v3.0.1rc1

InvokeAI - InvokeAI 3.0.0

Published by lstein over 1 year ago

InvokeAI Version 3.0.0

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.

InvokeAI version 3.0.0 represents a major advance in functionality and ease compared with the last official release, 2.3.5.

Please use the 3.0.0 release discussion thread for comments on this version, including feature requests, enhancement suggestions and other non-critical issues. Report bugs to InvokeAI Issues. For interactive support from the development team, contributors and the user community, you are invited to join the InvokeAI Discord Server.

To learn more about InvokeAI, please see our Documentation Pages.

What's New in v3.0.0

Quite a lot has changed, both internally and externally.

Web User Interface:

  • A ControlNet interface that gives you fine control over such things as the posture of figures in generated images by providing an image that illustrates the end result you wish to achieve.
  • A Dynamic Prompts interface that lets you generate combinations of prompt elements.
  • Preliminary support for Stable Diffusion XL, the latest iteration of Stability AI's image generation models.
  • A redesigned user interface which makes it easier to access frequently-used elements, such as the random seed generator.
  • The ability to create multiple image galleries, allowing you to organize your generated images topically or chronologically.
  • An experimental Nodes Editor that lets you design and execute complex image generation operations using a point-and-click interface. To activate this, please use the settings icon at the upper right of the Web UI.
  • Macintosh users can now load models at half-precision (float16) in order to reduce the amount of RAM used by each model.
  • Advanced users can choose earlier CLIP layers during generation to produce a larger variety of images.
  • Long prompt support (>77 tokens).
  • Memory and speed improvements.

The WebUI can now be launched from the command line using either invokeai-web (preferred new way) or invokeai --web (deprecated old way).
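
For example, from within the activated InvokeAI virtual environment (launcher option [8], the developer's console), the two invocations are simply:

invokeai-web
invokeai --web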

Command Line Tool

The previous command line tool has been removed and replaced with a new developer-oriented tool invokeai-node-cli that allows you to experiment with InvokeAI nodes.

Installer

The console-based model installer, invokeai-model-install has been redesigned and now provides tabs for installing checkpoint models, diffusers models, ControlNet models, LoRAs, and Textual Inversion embeddings. You can install models stored locally on disk, or install them using their web URLs or Repo_IDs.

Internal

Internally the code base has been completely rewritten to be much easier to maintain and extend. Importantly, all image generation options are now represented as "nodes", which are small pieces of code that transform inputs into outputs and can be connected together into a graph of operations. Generation and image manipulation operations can now be easily extended by writing new InvokeAI nodes.


Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.0 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

InvokeAI-installer-v3.0.0.zip

Upgrading in place

All users can upgrade from the 3.0 beta releases using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select "Manually enter the tag name for the version you wish to update to" option [3]
  5. Select option [1] to upgrade to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the "Developer's console" option [8]
  4. Type the following command:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0.zip" --use-pep517 --upgrade

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the "*.orig" files and directories to their original names (without the .orig suffix), run the update script again, and select [1] "Update to the latest official release".

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Once 3.0.0 is released, developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade  InvokeAI==3.0.0

To upgrade to an xformers version if you are not currently using xformers, use:

pip install --use-pep517 --upgrade InvokeAI[xformers]

You can see which versions are available by going to The PyPI InvokeAI Project Page


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. As of this writing (July 2023), SDXL has not been officially released, but a pre-release 0.9 version is widely available. InvokeAI provides support for SDXL image generation via its Nodes Editor, a user interface that allows you to create and customize complex image generation pipelines using a drag-and-drop interface. Currently SDXL generation is not directly supported in the text2image, image2image, and canvas panels, but we expect to add this feature as soon as SDXL 1.0 is officially released.

SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. For best results, the initial image is handed off from the base to the refiner before all the denoising steps are complete. It is not clear whether SDXL 1.0, when it is released, will require the refiner.

To experiment with SDXL, you'll need the "base" and "refiner" models. Currently a beta version of SDXL, version 0.9, is available from HuggingFace for research purposes. To obtain access, you will need to register with HF at https://huggingface.co/join, obtain an access token at https://huggingface.co/settings/tokens, and add the access token to your environment. To do this, run the InvokeAI launcher script, activate the InvokeAI virtual environment with option [8], and type the command huggingface-cli login. Paste in your access token from HuggingFace and hit return (the token will not be echoed to the screen). Alternatively, select launcher option [6] "Change InvokeAI startup options" and paste the HF token into the indicated field.
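
In the developer's console the login step looks like this (a minimal sketch; huggingface-cli whoami is an optional check that the token was saved correctly):

huggingface-cli login
huggingface-cli whoami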

Now navigate to https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9 and fill out the access request form for research use. You will be granted instant access to download. Next launch the InvokeAI console-based model installer by selecting launcher option [5] or by activating the virtual environment and giving the command invokeai-model-install. In the STARTER MODELS section, select the checkboxes for stable-diffusion-xl-base-0-9 and stable-diffusion-xl-refiner-0-9. Press Apply Changes to install the models and keep the installer running, or Apply Changes and Exit to install the models and exit back to the launcher menu.

Alternatively you can install these models from the Web UI Model Manager (cube at the bottom of the left-hand panel) and navigate to Import Models. In the field labeled Location type in the repo id of the base model, which is stabilityai/stable-diffusion-xl-base-0.9. Press Add Model and wait for the model to download and install. After receiving confirmation that the model installed, repeat with stabilityai/stable-diffusion-xl-refiner-0.9.

Note that these are large models (12 GB each) so be prepared to wait a while.

To use the installed models you will need to activate the Node Editor, an advanced feature of InvokeAI. Go to the Settings (gear) icon on the upper right of the Web interface, and activate "Enable Nodes Editor". After reloading the page, an inverted "Y" will appear on the left-hand panel. This is the Node Editor.

Enter the Node Editor and click the Upload button to upload either the SDXL base-only or SDXL base+refiner pipelines (right click to save these .json files to disk). This will load and display a flow diagram showing the (many complex) steps in generating an SDXL image.

Ensure that the SDXL Model Loader (leftmost column, bottom) is set to load the SDXL base model on your system, and that the SDXL Refiner Model Loader (third column, top) is set to load the SDXL refiner model on your system. Find the nodes that contain the example prompt and style ("bluebird in a sakura tree" and "chinese classical painting") and replace them with the prompt and style of your choice. Then press the Invoke button. If all goes well, an image will eventually be generated and added to the image gallery. Unlike standard rendering, intermediate images are not (yet) displayed during rendering.

Be aware that SDXL support is an experimental feature and is not 100% stable. When designing your own SDXL pipelines, note that certain settings have a disproportionate effect on image quality. In particular, the latents decode VAE step must be run at fp32 precision (using a slider at the bottom of the VAE node), and images will change dramatically as the denoising threshold used by the refiner is adjusted.

Also be aware that SDXL requires 6-8 GB of VRAM to render 1024x1024 images, as well as a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0

Known Bugs in 3.0

This is a list of known bugs in 3.0 as well as features that are planned for inclusion in later releases:

  • On Macintoshes with MPS, Stable Diffusion 2 models will not render properly. This will be corrected in the next point release.
  • Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
  • Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
  • Face restoration is no longer needed due to the improvements in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
  • High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
  • There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.
  • The NSFW checker (blurs explicit images) is currently disabled but will be reenabled in time for the next release.

Getting Help

For support, please use this repository's GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute.

What's Changed Since 2.3.5

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v3.0.0rc2

InvokeAI - InvokeAI Version 3.0.0 Beta-10

Published by lstein over 1 year ago

We are pleased to announce a new beta release of InvokeAI 3.0 for user testing.

Please use the 3.0.0 release discussion thread, InvokeAI Issues, or the InvokeAI Discord Server to report bugs and other issues.

Recent fixes

  • Stable Diffusion XL (SDXL) 0.9 support in the node editor. See Getting Started with SDXL
  • Stable Diffusion XL models added to the optional starter models presented by the model installer
  • Memory and performance improvements for XL models (thanks to @StAlKeR7779)
  • Image upscaling using the latest version of RealESRGAN (fixed thanks to @psychedelicious )
  • VRAM optimizations to allow SDXL to run on 8 GB VRAM environments.
  • Feature-complete Model Manager in the Web GUI to provide online model installation, configuration and deletion.
  • Recommended LoRA and ControlNet models added to model installer.
  • UI tweaks, including updated hotkeys.
  • Translation and tooltip fixes
  • Documentation fixes, including description of all options in invokeai.yaml
  • Improved support for half-precision generation on Macintoshes.
  • Improved long prompt support.
  • Fix "Package 'invokeai' requires a different Python:" error

Known bug in this beta: If you are installing InvokeAI completely from scratch, you may get a black screen on the very first image generation. Just reload the web page and the problem will be resolved for this and subsequent generations.

What's New in v3.0.0

Quite a lot has changed, both internally and externally.

Web User Interface:

  • A ControlNet interface that gives you fine control over such things as the posture of figures in generated images by providing an image that illustrates the end result you wish to achieve.
  • A Dynamic Prompts interface that lets you generate combinations of prompt elements.
  • SDXL support
  • A redesigned user interface which makes it easier to access frequently-used elements, such as the random seed generator.
  • The ability to create multiple image galleries, allowing you to organize your generated images topically or chronologically.
  • A graphical node editor that lets you design and execute complex image generation operations using a point-and-click interface (see below for more about nodes)
  • Macintosh users can now load models at half-precision (float16) in order to reduce the amount of RAM used by each model by half.
  • Advanced users can choose earlier CLIP layers during generation to produce a larger variety of images.
  • Long prompt support (>77 tokens)
  • Schedulers that did not work properly for Canvas inpainting have been fixed.

The WebUI can now be launched from the command line using either invokeai-web (preferred new way) or invokeai --web (deprecated old way).

Command Line Tool

  • The previous command line tool has been removed and replaced with a new developer-oriented tool invokeai-node-cli that allows you to experiment with InvokeAI nodes.

Installer

The console-based model installer, invokeai-model-install has been redesigned and now provides tabs for installing checkpoint models, diffusers models, ControlNet models, LoRAs, and Textual Inversion embeddings. You can install models stored locally on disk, or install them using their web URLs or Repo_IDs.

Internal

Internally the code base has been completely rewritten to be much easier to maintain and extend. Importantly, all image generation options are now represented as "nodes", which are small pieces of code that transform inputs into outputs and can be connected together into a graph of operations. Generation and image manipulation operations can now be easily extended by writing new InvokeAI nodes.

Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. As of this writing (July 18, 2023), SDXL has not been officially released, but a pre-release 0.9 version is widely available. InvokeAI provides support for SDXL image generation via its Nodes Editor, a user interface that allows you to create and customize complex image generation pipelines using a drag-and-drop interface. Currently SDXL generation is not directly supported in the text2image, image2image, and canvas panels, but we expect to add this feature in the next few days.

SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. For best results, the initial image is handed off from the base to the refiner before all the denoising steps are complete. It is not clear whether SDXL 1.0, when it is released, will require the refiner.

To experiment with SDXL, you'll need the "base" and "refiner" models. Currently a beta version of SDXL, version 0.9, is available from HuggingFace for research purposes. To obtain access, you will need to register with HF at https://huggingface.co/join, obtain an access token at https://huggingface.co/settings/tokens, and add the access token to your environment. To do this, run the InvokeAI launcher script, activate the InvokeAI virtual environment with option [8], and type the command huggingface-cli login. Paste in your access token from HuggingFace and hit return (the token will not be echoed to the screen).

Now navigate to https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9 and fill out the access request form for research use. You will be granted instant access to download. Next launch the InvokeAI console-based model installer by selecting launcher option [5] or by activating the virtual environment and giving the command invokeai-model-install. In the STARTER MODELS section, select the checkboxes for stable-diffusion-xl-base-0-9 and stable-diffusion-xl-refiner-0-9. Press Apply Changes to install the models and keep the installer running, or Apply Changes and Exit to install the models and exit back to the launcher menu.

Alternatively you can install these models from the Web UI Model Manager (cube at the bottom of the left-hand panel) and navigate to Import Models. In the field labeled Location type in the repo id of the base model, which is stabilityai/stable-diffusion-xl-base-0.9. Press Add Model and wait for the model to download and install (the page will freeze while this is happening). After receiving confirmation that the model installed, repeat with stabilityai/stable-diffusion-xl-refiner-0.9.

Note that these are large models (12 GB each) so be prepared to wait a while.

To use the installed models enter the Node Editor (inverted "Y" in the left-hand panel) and upload either the SDXL base-only or SDXL base+refiner invocation graphs. This will load and display a flow diagram showing the steps in generating an SDXL image.

Ensure that the SDXL Model Loader (leftmost column, bottom) is set to load the SDXL base model on your system, and that the SDXL Refiner Model Loader (third column, top) is set to load the SDXL refiner model on your system. Find the nodes that contain the example prompt and style ("bluebird in a sakura tree" and "chinese classical painting") and replace them with the prompt and style of your choice. Then press the Invoke button. If all goes well, an image will be generated and added to the image gallery.

Be aware that SDXL support is an experimental feature and is not 100% stable. When designing your own SDXL pipelines, note that certain settings have a disproportionate effect on image quality. In particular, the latents decode VAE step must be run at fp32 precision (using a slider at the bottom of the VAE node), and images will change dramatically as the denoising threshold used by the refiner is adjusted.

Also be aware that SDXL requires at least 8 GB of VRAM in order to render 1024x1024 images. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0

What's Missing in v3.0.0

Some features are missing or not quite working yet. These include:

  • SDXL models can only be used in the node editor, and not in the text2img, img2img or unified canvas panels.
  • A migration path to import 2.3-generated images into the 3.0 image gallery
  • Diffusers-style LoRA files (with a HuggingFace repository ID) can be imported but do not run. There are very few of these models and they will not be supported at release time.
  • Various minor glitches in image gallery behavior.

The following 2.3 features are not available:

  • Variation generation (may be added in time for the final release)
  • Perlin Noise (will likely not be added)
  • Noise Threshold (available through Node Editor)
  • Symmetry (will likely not be added)
  • Seamless tiling (will likely not be added)
  • Face restoration (no longer needed, will not be added)

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.0 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

InvokeAI-installer-v3.0.0+b10.zip

Upgrading in place

All users can upgrade from previous beta versions using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select "Manually enter the tag name for the version you wish to update to" option [3]
  5. When prompted for the tag, enter v3.0.0+b10
  6. When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the "Developer's console" option [8]
  4. Type the following command:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0+b10.zip" --use-pep517 --upgrade

(Replace v3.0.0+b10 with the current version number.)

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the "*.orig" files and directories to their original names (without the .orig suffix), run the update script again, and select [1] "Update to the latest official release".

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Once 3.0.0 is released (out of alpha and beta), developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade  InvokeAI==3.0.0+b8

To upgrade to an xformers version if you are not currently using xformers, use:

pip install --use-pep517 --upgrade InvokeAI[xformers]

You can see which versions are available by going to The PyPI InvokeAI Project Page

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Getting a Stack Trace for Bug Reporting

If you are getting the message "Server Error" in the web interface, you can help us track down the bug by getting a stack trace from the failed operation. This involves several steps. Please see this Discord thread for a step-by-step guide to generating stack traces.

Development Roadmap

If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.

Detailed Change Log

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.0...v3.0.0+a3

InvokeAI - InvokeAI 2.3.5.post2

Published by lstein over 1 year ago

We are pleased to announce a minor update to InvokeAI with the release of version 2.3.5.post2.

What's New in 2.3.5.post2

This is a bugfix release. In previous versions, the built-in updating script did not update the Xformers library when the torch library was upgraded, leaving people with a version that ran on CPU only. Install this version to fix the issue so that it does not recur when you later update to InvokeAI 3.0.0 and future versions.

As a bonus, this version allows you to apply a checkpoint VAE, such as vae-ft-mse-840000-ema-pruned.ckpt to a diffusers model, without worrying about finding the diffusers version of the VAE. From within the web Model Manager, choose the diffusers model you wish to change, press the edit button, and enter the Location of the VAE file of your choice. The field will now accept either a .ckpt file, or a diffusers directory.

Installation / Upgrading

To install 2.3.5.post2 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.5.post2.zip

If you are using the Xformers library, and running v2.3.5.post1 or earlier, please do not use the built-in updater to update, as it will not update xformers properly. Instead, either download the installer and ask it to overwrite the existing invokeai directory (your previously-installed models and settings will not be affected), or use the following recipe to perform a command-line install:

  1. Start the launcher script and select option # 8 - Developer's console.
  2. Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade

If you do not use Xformers, the built-in update option (# 9) will work, as will the above command without the "[xformers]" part. From v2.3.5.post2 onward, the updater script will work properly with Xformers installed.
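
In other words, non-Xformers users who prefer the command line can run the same upgrade with:

pip install invokeai --use-pep517 --upgrade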

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade  InvokeAI==2.3.5.post2

To upgrade to an xformers version if you are not currently using xformers, use:

pip install --use-pep517 --upgrade InvokeAI[xformers]

You can see which versions are available by going to The PyPI InvokeAI Project Page

Known Bugs in 2.3.5.post2

These are known bugs in the release.

  1. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Development Roadmap

This is very likely to be the last release on the v2.3 source code branch. All new features are being added to the main branch. At the current time (mid-May, 2023), the main branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of "nodes".

If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v2.3.5.post2

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5.post1...v2.3.5.post2

InvokeAI - InvokeAI Version 2.3.5.post1

Published by lstein over 1 year ago

We are pleased to announce a minor update to InvokeAI with the release of version 2.3.5.post1.

What's New in 2.3.5.post1

The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.5 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.

Here are the new library versions:

Library     Version
Torch       2.0.0
Diffusers   0.16.1
Xformers    0.0.19
Compel      1.1.5

Other Improvements

When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running (thanks to @pedantic79 for this).

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.5.post1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.5.post1.zip

If you are using the Xformers library, please do not use the built-in updater to update, as it will not update xformers properly. Instead, either download the installer and ask it to overwrite the existing invokeai directory (your previously-installed models and settings will not be affected), or use the following recipe to perform a command-line install:

  1. Start the launcher script and select option # 8 - Developer's console.
  2. Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade

If you do not use Xformers, the built-in update option (# 9) will work, as will the above command without the "[xformers]" part.

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade InvokeAI==2.3.5.post1

To upgrade to an xformers version if you are not currently using xformers, use:

pip install --use-pep517 --upgrade InvokeAI[xformers]

You can see which versions are available by going to The PyPI InvokeAI Project Page.

Known Bugs in 2.3.5.post1

These are known bugs in the release.

  1. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Development Roadmap

This is very likely to be the last release on the v2.3 source code branch. All new features are being added to the main branch. At the current time (mid-May, 2023), the main branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of "nodes".

If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v2.3.5.post1

InvokeAI - InvokeAI 2.3.5

Published by lstein over 1 year ago

We are pleased to announce a feature update to InvokeAI with the release of version 2.3.5. This is currently a pre-release for community testing and bug reporting.

What's New in 2.3.5

This release expands support for additional LoRA and LyCORIS models, upgrades diffusers to 0.15.1, and fixes a few bugs.

LoRA and LyCORIS Support Improvement

  • A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.
  • Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 vs those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently-selected Stable Diffusion model.
  • Support for the newer LoKR LyCORIS files has been added.

Diffusers 0.15.1

  • This version updates the diffusers module to version 0.15.1 and is no longer compatible with 0.14. This provides a number of performance improvements and bug fixes.

Performance Improvements

  • When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.

Bug Fixes

  • The "import models from directory" and "import from URL" functionality in the console-based model installer has now been fixed.

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.5 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.5.zip

To update from versions 2.3.1 or higher, select the "update" option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.5. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if it is not the same as the one the installer selects automatically. When the installer asks you to confirm that you want to install into an existing directory, answer "yes".

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.5. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by visiting the PyPI InvokeAI Project Page.

Known Bugs in 2.3.5

These are known bugs in the release.

  1. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
  2. If the xformers memory-efficient attention module is used, each image generated with the same prompt and settings will be slightly different. xformers 0.0.19 reduces or eliminates this problem, but hasn't been extensively tested with InvokeAI. If you wish to upgrade, you may do so by entering the InvokeAI "developer's console" and giving the command pip install xformers==0.0.19. You may see a message about InvokeAI being incompatible with this version, which you can safely ignore. Be sure to report any unexpected behavior to the Issues pages.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Development Roadmap

This is very likely to be the last release on the v2.3 source code branch. All new features are being added to the main branch. At the current time (late April, 2023), the main branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of "nodes".

If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.

Change Log

New Contributors and Acknowledgements

  • @AbdBarho contributed the checksum performance improvements
  • @StAlKeR7779 (Sergey Borisov) contributed the LoKR support, did the diffusers 0.15 port, and cleaned up the code in multiple places.

Many thanks to these individuals, as well as @damian0815 for his contribution to this release.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.4.post1...v2.3.5-rc1

InvokeAI - InvokeAI Version 2.3.4.post1 - A Stable Diffusion Toolkit

Published by lstein over 1 year ago

We are pleased to announce a feature update to InvokeAI with the release of version 2.3.4.

Update: 13 April 2023 - 2.3.4.post1 is a hotfix that corrects an installer crash resulting from an update to the upstream diffusers library. If you have recently tried to install 2.3.4 and experienced a crash relating to "crossattention," this release will fix the issue.

What's New in 2.3.4

This feature release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.

LoRA and LyCORIS Support

LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix .safetensors. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)

To use LoRA/LyCORIS models in InvokeAI:

  1. Download the .safetensors files of your choice and place them in /path/to/invokeai/loras. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.

  2. Add withLora(lora-file,weight) to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named loras/sushi.safetensors is present:

family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)

Multiple withLora() prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. Negative weights are also allowed, which can lead to some interesting effects.
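
For instance, assuming that a second LoRA file named loras/watercolor.safetensors were also installed (a hypothetical example), multiple fragments can be combined and weighted independently:

family sitting at dinner table eating sushi withLora(sushi,0.8) withLora(watercolor,0.6)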

  3. Generate as you usually would! If you find that the image is too "crisp", try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you'll get the best results when running the LoRA on top of a model similar to, or identical with, the one that was used during the LoRA's training. Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice versa. This will trigger a non-fatal error message and generation will not proceed.

  4. You can change the location of the loras directory by passing the --lora_directory option to invokeai, as shown in the example below.
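
For example (the directory path here is only a placeholder):

invokeai --lora_directory /path/to/my/loras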

New WebUI LoRA and Textual Inversion Buttons

This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt as shown in the screenshot below.

[Screenshot: old-sea-captain-annotated]

Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted withLora() or <textual-inversion> prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.

Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.

By default the Textual Inversion menu only shows locally installed models found at startup time in /path/to/invokeai/embeddings. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (with five or more likes) in the Textual Inversion menu by going to Settings and turning on "Show Textual Inversions from HF Concepts Library." When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.

Minor features and fixes

This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip utility is kept up to date.

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.4 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.4.post1.zip

To update from versions 2.3.1 or higher, select the "update" option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.4. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if it is not the same as the one the installer selects automatically. When the installer asks you to confirm that you want to install into an existing directory, answer "yes".

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.4. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by visiting the PyPI InvokeAI Project Page. (Pre-release note: this will only work after the official release.)

Known Bugs in 2.3.4

These are known bugs in the release.

  1. The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
  2. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Change Log

New Contributors and Acknowledgements

Many thanks to these individuals, as well as @blessedcoolant and @damian0815 for their contributions to this release.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.3...v2.3.4rc1