InvokeAI

InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.

Apache-2.0 License · 30K downloads · 22.4K stars · 194 committers


InvokeAI - v5.0.0.a2 Latest Release

Published by brandonrising about 1 month ago

This is an alpha release. Features in this version are still under active development and may not be stable.

Your feedback is particularly important for this release, which makes big changes.

Canvas v2


The Generation & Canvas UIs have been merged into a unified experience as part of our Control Canvas release. This enhances the interaction between all your favorite features for a more intuitive and efficient workflow. Highlighted below are the key improvements and new additions that bring this experience to life.

Control Canvas

To orient existing users, you'll find that the core generation experience is now optimized and geared towards maximizing control. There are two main workflows that users have primarily gravitated towards in the past:

  • Batch Generation: Generating a large number of images/iterations into the Gallery by varying/tweaking different settings.
  • Composition: Working continuously on a single composition, with multiple iterations and edits.

Both of these workflows have increasingly gravitated towards a canvas for control mechanisms like ControlNet, Initial Image, and more. Now, with the power of our Control Canvas, including a full layer system, you’ll be able to use the same Canvas controls in both of these workflows.

The destination of your generations can be set at the top of your Layers & Gallery tab, with Gallery generations saving a new copy of the image to your gallery with each generation, and Canvas generations creating a new Raster layer in the bounding box on the canvas.

This is one of the big changes with v5.0, and a major point we’re looking for feedback on during alpha testing. We ask that you try to approach it with an open mind, and highlight areas where you find sustained friction, as opposed to just managing the initial shock and adjustment of change.

Layers

Carrying forward from the Control Layers release, the full suite of controls is now available on the Canvas, with some notable enhancements.

Layer Types

Each control layer on the canvas is now manageable as a moveable and editable layer. You can create multiple layers, manipulate and transform them, and compose the full set of generation controls before invoking.

The naming of these layers is likely to change. A full write-up of the layers will be published as we work towards a stable release.

Control Editing

When using ControlNet models, the control image can now be manipulated as a layer. Instead of managing processors just for ControlNets, any layer can now have processors applied as Filters. Unless your control layer is a pre-processed image, remember to apply the appropriate filter before generation.

One notable benefit of this approach is that creators are now able to draw and manipulate control images directly. While tablet support is currently limited, we intend to expand it, along with additional pressure sensitivity and brushing options, to streamline that part of the tool. In the meantime, use a white brush and eraser to draw and edit your control images.

Other Updates

We'd be here all day if we were to call out every individual change, so we'll hit the highlights and expand on each point as we get closer to the stable release.

  • Layer Types - Inpaint Mask, Regional Guidance, Raster Layer, Control Layer:
    • Inpaint Mask and Raster Layer map to the Canvas v1 Inpaint Mask and Base Layer.
    • Regional Guidance works the same as it does in the current Control Layers canvas.
    • Control Layer (name TBD) is a Raster Layer with a ControlNet stapled on. You can convert a Raster Layer into a Control Layer and back again.
  • Layer Compositing During Generation: You may have multiple Inpaint Masks and Raster Layers, but internally, generation still needs a single input image and mask. We handle this by virtually flattening all enabled Inpaint Masks into a single mask image, and all enabled Raster Layers into a single input image. This does not affect your layer setup - it happens behind the scenes (see the sketch after this list).
  • Control Layer Auto-Background: When a Control Layer has some transparency, we automatically give it a black background. This means you can create a Control Layer, select a white brush and go to town with a scribble. We'll add a black background automatically, as most ControlNet models require. This allows you to stack multiple Control Layers, even if they are of different sizes, without artifacts at their edges.
  • Layer Type Hiding: When you have even just one of each layer type, the canvas gets pretty hectic. Each layer type has a Hide toggle, which only hides the layers visually. For example, you can hide your Control Layers while you edit a Raster Layer for a cleaner-looking canvas. Hidden layers are still used during generation.
  • Layer Transformation: All layer types may be moved, resized and rotated.
  • Layer Filtering: Raster Layers and Control Layers may have filters applied. You can apply as many filters as you want.
  • Other Layer Operations: Duplicate, lock, disable, hide all of type, arrange. Merge visible for Raster Layers and Inpaint Masks.
  • Layer Quick Switch: Press q to switch between the last two selected layers. Bookmark a layer to instead switch between the bookmarked layer and the last selected non-bookmarked layer.
  • New Rendering Engine: The canvas rendering engine is a ground-up rewrite, based on konvajs.
  • Canvas Caching: Extensive use of caching greatly improves efficiency. For example, on Canvas v1, if you click Invoke twice without changing anything else, we would export and upload the canvas image data twice. On Canvas v2, that export is cached and reused.
  • Color Picker Quick Switch: Hold alt to temporarily switch to the color picker.
  • Revised Graph Builders: Curious nodeologists might find the updated graphs interesting. You can take a peek by setting Send to Gallery, generate, and load up the output image's workflow.
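
For those curious what the behind-the-scenes flattening amounts to, here is a minimal conceptual sketch using Pillow. It illustrates the idea only - it is not Invoke's actual compositing code - and the file names are placeholders.

```python
from PIL import Image, ImageChops

def flatten_raster_layers(layers: list[Image.Image]) -> Image.Image:
    """Composite enabled Raster Layers (bottom to top) into one RGBA input image."""
    canvas = Image.new("RGBA", layers[0].size, (0, 0, 0, 0))
    for layer in layers:
        canvas.alpha_composite(layer)
    return canvas

def flatten_inpaint_masks(masks: list[Image.Image]) -> Image.Image:
    """Union enabled Inpaint Masks into a single grayscale mask image."""
    combined = Image.new("L", masks[0].size, 0)
    for mask in masks:
        combined = ImageChops.lighter(combined, mask)  # pixel-wise max = union
    return combined

# Placeholder file names, assumed to match the bounding box size.
input_image = flatten_raster_layers(
    [Image.open("raster_a.png").convert("RGBA"), Image.open("raster_b.png").convert("RGBA")]
)
input_mask = flatten_inpaint_masks(
    [Image.open("mask_a.png").convert("L"), Image.open("mask_b.png").convert("L")]
)
```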

Installation and Updating

To install or update to v5.0.0.a2, download the installer and follow the installation instructions.
To update, select the same installation location. Your user data (images, models, etc) will be retained.

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.0.0.a1...v5.0.0.a2

InvokeAI - v5.0.0.a1

Published by brandonrising about 1 month ago

This is an alpha release. Features in this version are still under active development and may not be stable.

Your feedback is particularly important for this release, which makes big changes.

Canvas v2


The Generation & Canvas UIs have been merged into a unified experience as part of our Control Canvas release. This enhances the interaction between all your favorite features for a more intuitive and efficient workflow. Highlighted below are the key improvements and new additions that bring this experience to life.

Control Canvas

To orient existing users, you'll find that the core generation experience is now optimized and geared towards maximizing control. There are two main workflows that users have primarily gravitated towards in the past:

  • Batch Generation: Generating a large number of images/iterations into the Gallery by varying/tweaking different settings.
  • Composition: Working continuously on a single composition, with multiple iterations and edits.

Both of these workflows have increasingly gravitated towards a canvas for control mechanisms like ControlNet, Initial Image, and more. Now, with the power of our Control Canvas, including a full layer system, you’ll be able to use the same Canvas controls in both of these workflows.

The destination of your generations can be set at the top of your Layers & Gallery tab, with Gallery generations saving a new copy of the image to your gallery with each generation, and Canvas generations creating a new Raster layer in the bounding box on the canvas.

This is one of the big changes with v5.0, and a major point we’re looking for feedback on during alpha testing. We ask that you try to approach it with an open mind, and highlight areas where you find sustained friction, as opposed to just managing the initial shock and adjustment of change.

Layers

Carrying forward from the Control Layers release, the full suite of controls is now available on the Canvas, with some notable enhancements.

Layer Types

Each control layer on the canvas is now manageable as a moveable and editable layer. You can create multiple layers, manipulate and transform them, and compose the full set of generation controls before invoking.

The naming of these layers is likely to change. A full write-up of the layers will be published as we work towards a stable release.

Control Editing

When using ControlNet models, the control image can now be manipulated as a layer. Instead of managing processors just for ControlNets, any layer can now have processors applied as Filters. Unless your control layer is a pre-processed image, remember to apply the appropriate filter before generation.

One notable benefit of this approach is that creators are now able to draw and manipulate control images directly. While tablet support is currently limited, we intend to expand it, along with additional pressure sensitivity and brushing options, to streamline that part of the tool. In the meantime, use a white brush and eraser to draw and edit your control images.

Other Updates

We'd be here all day if we were to call out every individual change, so we'll hit the highlights and expand on each point as we get closer to the stable release.

  • Layer Types - Inpaint Mask, Regional Guidance, Raster Layer, Control Layer:
    • Inpaint Mask and Raster Layer map to the Canvas v1 Inpaint Mask and Base Layer.
    • Regional Guidance works the same as it does in the current Control Layers canvas.
    • Control Layer (name TBD) is a Raster Layer with a ControlNet stapled on. You can convert a Raster Layer into a Control Layer and back again.
  • Layer Compositing During Generation: You may have multiple Inpaint Masks and Raster Layers, but internally, generation still needs a single input image and mask. We handle this by virtually flattening all enabled Inpaint Masks into a single mask image, and all enabled Raster Layers into a single input image. This does not affect your layer setup - it happens behind the scenes.
  • Control Layer Auto-Background: When a Control Layer has some transparency, we automatically give it a black background. This means you can create a Control Layer, select a white brush and go to town with a scribble. We'll add a black background automatically, as most ControlNet models require. This allows you to stack multiple Control Layers, even if they are of different sizes, without artifacts at their edges.
  • Layer Type Hiding: When you have even just one of each layer type, the canvas gets pretty hectic. Each layer type has a Hide toggle, which only hides the layers visually. For example, you can hide your Control Layers while you edit a Raster Layer for a cleaner-looking canvas. Hidden layers are still used during generation.
  • Layer Transformation: All layer types may be moved, resized and rotated.
  • Layer Filtering: Raster Layers and Control Layers may have filters applied. You can apply as many filters as you want.
  • Other Layer Operations: Duplicate, lock, disable, hide all of type, arrange. Merge visible for Raster Layers and Inpaint Masks.
  • Layer Quick Switch: Press q to switch between the last two selected layers. Bookmark a layer to instead switch between the bookmarked layer and the last selected non-bookmarked layer.
  • New Rendering Engine: The canvas rendering engine is a ground-up rewrite, based on konvajs.
  • Canvas Caching: Extensive use of caching greatly improves efficiency. For example, on Canvas v1, if you click Invoke twice without changing anything else, we would export and upload the canvas image data twice. On Canvas v2, that export is cached and reused.
  • Color Picker Quick Switch: Hold alt to temporarily switch to the color picker.
  • Revised Graph Builders: Curious nodeologists might find the updated graphs interesting. You can take a peek by setting Send to Gallery, generate, and load up the output image's workflow.

Installation and Updating

To install or update to v5.0.0.a1, download the installer and follow the installation instructions.
To update, select the same installation location. Your user data (images, models, etc) will be retained.

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.9...v5.0.0.a1

InvokeAI - v4.2.9

Published by brandonrising about 1 month ago

FLUX

Please note these nodes are still in the prototype stage and are subject to change. This Node API is not stable!

We are supporting both FLUX dev and FLUX schnell at this time in workflows only. These will be incorporated into the rest of the UI in future updates. At this time, this is an initial and developing implementation - we’re bringing this in with the intent of long-term stable support for FLUX.

Default workflows can be found in your workflow tab: FLUX Text to Image and FLUX Image to Image. Please note that we have not added FLUX to the linear UI yet; LoRAs and Img2Img are not yet supported, but will be added soon.

Required Dependencies

In order to run FLUX on Invoke, you will need to download and install several models. We have provided options in the Starter Models (found in your Model Manager tab) for quantized and unquantized versions of both FLUX dev and FLUX schnell. Selecting these will automatically download the dependencies you need, listed below. These dependencies are also available for ad hoc download in the Starter Models list. Currently, Invoke only supports unquantized models and bitsandbytes NF4-quantized models.

  • T5 encoder
  • CLIP-L encoder
  • FLUX transformer/unet
  • FLUX VAE

Considerations

FLUX is a large model, and has significant VRAM requirements. The full models require 24GB of VRAM on Linux — Windows PCs are less efficient, and thus need slightly more, making it difficult to run the full models.

To compensate for this, the community has begun to develop quantized versions of the dev model - these offer slightly lower quality, but significant reductions in VRAM requirements.

Currently, Invoke only supports NVIDIA GPUs. You may be able to work out a way to get an AMD GPU to generate; however, we've not been able to test this, and so can't provide committed support for it. FLUX on MPS is not supported at this time.

Please note that the FLUX dev model is released under a non-commercial license. You will need a commercial license to use the model for any commercial work.

Below are additional details on which model to use based on your system:

  • FLUX dev quantized starter model: non-commercial, >16GB RAM, ≥12GB VRAM
  • FLUX schnell quantized starter model: commercial, faster inference than dev, >16GB RAM, ≥12GB VRAM
  • FLUX dev starter model: non-commercial, >32GB RAM, ≥24GB VRAM, Linux OS
  • FLUX schnell starter model: commercial, >32GB RAM, ≥24GB VRAM, Linux OS

Running the Workflow

You can find a new default workflow in your workflows tab called FLUX Text to Image. This can be run with both FLUX dev and FLUX schnell models, but note that the default step count of 30 is the recommendation for FLUX dev. If running FLUX schnell, we recommend you lower your step count to 4. You will not be able to run this workflow successfully without the required models listed above installed.

  • Navigate to the Workflows tab.
  • Press the Workflow Library button at the top left of your screen.
  • Select Default Workflows and choose the FLUX workflow you’d like to use.

The exposed fields will require you to select a FLUX model, T5 encoder, CLIP Embed model, VAE, prompt, and step count. If you are missing any models, use the "Starter Models" tab in the model manager to download and install FLUX Dev or Schnell.
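
As an aside on the dev vs. schnell step counts, the sketch below generates an image with FLUX schnell directly through the diffusers library, entirely outside Invoke's workflow system. The model IDs and settings are the publicly documented defaults for these models; treat it as an illustration of the step-count difference rather than part of Invoke.

```python
import torch
from diffusers import FluxPipeline

# FLUX schnell is distilled for few-step sampling, hence the 4-step recommendation.
# FLUX dev typically uses ~30 steps and a guidance scale around 3.5.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload submodels to CPU to reduce peak VRAM

image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=4,  # ~30 for FLUX dev
    guidance_scale=0.0,     # schnell is guidance-distilled; dev uses ~3.5
).images[0]
image.save("flux_schnell.png")
```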


We've also added a new default workflow named FLUX Image to Image. This can be run very similarly to the workflow described above, with the additional ability to provide a base image.


Other Changes

  • Enhancement: add fields for CLIPEmbedModel and FluxVAEModel by @maryhipp
  • Enhancement: FLUX memory management improvements by @RyanJDick
  • Feature: Add FLUX image-to-image and inpainting by @RyanJDick
  • Feature: flux preview images by @brandonrising
  • Enhancement: Add install probes for T5_encoder and ClipTextModel by @lstein
  • Fix: support checkpoint bundles containing more than the transformer by @brandonrising

Installation and Updating

To install or update to v4.2.9, download the installer and follow the [installation instructions](https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/).

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.8...v4.2.9

InvokeAI - v4.2.9rc2

Published by brandonrising about 1 month ago

FLUX

Please note these nodes are still in the prototype stage and are subject to change. This Node API is not stable!

We are supporting both FLUX dev and FLUX schnell at this time in workflows only. These will be incorporated into the rest of the UI in future updates. At this time, this is an initial and developing implementation - we’re bringing this in with the intent of long-term stable support for FLUX.

Default workflows can be found in your workflow tab: FLUX Text to Image and FLUX Image to Image. Please note that we have not added FLUX to the linear UI yet; LoRAs and Img2Img are not yet supported, but will be added soon.

Flux denoise nodes now provide preview images.

CLIP embed and T5 encoder models can now be installed outside of the starter models.

Required Dependencies


In order to run FLUX on Invoke, you will need to download and install several models. We have provided options in the Starter Models (found in your Model Manager tab) for quantized and unquantized versions of both FLUX dev and FLUX schnell. Selecting these will automatically download the dependencies you need, listed below. These dependencies are also available for ad hoc download in the Starter Models list.

  • T5 encoder
  • CLIP-L encoder
  • FLUX transformer/unet
  • FLUX VAE

Considerations

FLUX is a large model, and has significant VRAM requirements. The full models require 24GB of VRAM on Linux — Windows PCs are less efficient, and thus need slightly more, making it difficult to run the full models.

To compensate for this, the community has begun to develop quantized versions of the dev model - these offer slightly lower quality, but significant reductions in VRAM requirements.

Currently, Invoke only supports NVIDIA GPUs. You may be able to work out a way to get an AMD GPU to generate; however, we've not been able to test this, and so can't provide committed support for it. FLUX on MPS is not supported at this time.

Please note that the FLUX dev model is released under a non-commercial license. You will need a commercial license to use the model for any commercial work.

Below are additional details on which model to use based on your system:

  • FLUX dev quantized starter model: non-commercial, >16GB RAM, ≥12GB VRAM
  • FLUX schnell quantized starter model: commercial, faster inference than dev, >16GB RAM, ≥12GB VRAM
  • FLUX dev starter model: non-commercial, >32GB RAM, ≥24GB VRAM, Linux OS
  • FLUX schnell starter model: commercial, >32GB RAM, ≥24GB VRAM, Linux OS

Running the Workflow

You can find a new default workflow in your workflows tab called FLUX Text to Image. This can be run with both FLUX dev and FLUX schnell models, but note that the default step count of 30 is the recommendation for FLUX dev. If running FLUX schnell, we recommend you lower your step count to 4. You will not be able to run this workflow successfully without the required models listed above installed.

The exposed fields will require you to select a FLUX model, T5 encoder, CLIP Embed model, VAE, prompt, and step count.


We've also added a new default workflow named FLUX Image to Image. This can be run very similarly to the workflow described above, with the additional ability to provide a base image.


Other Changes

  • Enhancement: add fields for CLIPEmbedModel and FluxVAEModel by @maryhipp
  • Enhancement: FLUX memory management improvements by @RyanJDick
  • Feature: Add FLUX image-to-image and inpainting by @RyanJDick
  • Feature: flux preview images by @brandonrising
  • Enhancement: Add install probes for T5_encoder and ClipTextModel by @lstein
  • Fix: support checkpoint bundles containing more than the transformer by @brandonrising

Installation and Updating

To install or update to v4.2.9rc2, download the installer and follow the [installation instructions](https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/).

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.9rc1...v4.2.9rc2

InvokeAI - v4.2.9rc1

Published by maryhipp about 2 months ago

v4.2.9rc1 brings the initial FLUX workflow implementation to Invoke. Please note these nodes are still in the prototype stage and are subject to change. This Node API is not stable!

FLUX

We are supporting both FLUX dev and FLUX schnell at this time in workflows only. These will be incorporated into the rest of the UI in future updates. At this time, this is an initial and developing implementation - we’re bringing this in with the intent of long-term stable support for FLUX.

A default workflow can be found in your workflow tab called FLUX Text to Image. Please note that we have not added FLUX to the linear UI yet; LoRAs and Img2Img are not yet supported, but will be added soon.

Thanks to @RyanJDick and @brandonrising for their hard work bringing FLUX support to Invoke.

Required Dependencies


In order to run FLUX on Invoke, you will need to download and install several models. We have provided options in the Starter Models (found in your Model Manager tab) for quantized and unquantized versions of both FLUX dev and FLUX schnell. Selecting these will automatically download the dependencies you need, listed below. These dependencies are also available for ad hoc download in the Starter Models list. We strongly recommend using the CLIP-L encoder and FLUX VAE provided in our starter models for this initial implementation to work seamlessly.

  • T5 encoder
  • CLIP-L encoder
  • FLUX transformer/unet
  • FLUX VAE

Considerations

FLUX is a large model, and has significant VRAM requirements. The full models require 24GB of VRAM on Linux — Windows PCs are less efficient, and thus need slightly more, making it difficult to run the full models.

To compensate for this, the community has begun to develop quantized versions of the dev model - these offer slightly lower quality, but significant reductions in VRAM requirements.

Currently, Invoke only supports NVIDIA GPUs. You may be able to work out a way to get an AMD GPU to generate; however, we've not been able to test this, and so can't provide committed support for it. FLUX on MPS is not supported at this time.

Please note that the FLUX dev model is released under a non-commercial license. You will need a commercial license to use the model for any commercial work.

Below are additional details on which model to use based on your system:

  • FLUX dev quantized starter model: non-commercial, >16GB RAM, ≥12GB VRAM
  • FLUX schnell quantized starter model: commercial, faster inference than dev, >16GB RAM, ≥12GB VRAM
  • FLUX dev starter model: non-commercial, >32GB RAM, ≥24GB VRAM, Linux OS
  • FLUX schnell starter model: commercial, >32GB RAM, ≥24GB VRAM, Linux OS

Running the Workflow

You can find a new default workflow in your workflows tab called FLUX Text to Image. This can be run with both FLUX dev and FLUX schnell models, but note that the default step count of 30 is the recommendation for FLUX dev. If running FLUX schnell, we recommend you lower your step count to 4. You will not be able to run this workflow successfully without the required models listed above installed.

The exposed fields will require you to select a FLUX model, a T5 encoder, a prompt, and your step count.


Other Changes

  • Fix: Follow-up docker readme fixes by @ebr
  • Fix: use empty string fallback if unable to parse prompts when creating style preset from existing image by @maryhipp
  • Chore: bump version v4.2.8post1 by @psychedelicious
  • Enhancement: Added support for bounding boxes in the Invocation API by @JPPhoto
  • Fix: disable export button if no non-default presets by @maryhipp
  • Build: remove broken scripts by @psychedelicious
  • Fix: missing translation keys for new model types by @maryhipp

Installation and Updating

To install or update to v4.2.9rc1, download the installer and follow the [installation instructions](https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/).

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.8...v4.2.9rc1

InvokeAI - v4.2.8

Published by psychedelicious about 2 months ago

v4.2.8 brings Prompt Templates to Invoke, new schedulers and a number of minor fixes and enhancements.

Prompt Templates

Prompt templates are often used to store commonly-used style keywords, letting you focus on subject and composition in your prompts - but you can use them in other creative ways.

Thanks to @maryhipp for implementing Prompt Templates!

Creating a Prompt Template

Create a prompt template from an existing image generated with Invoke. We'll add the positive and negative prompts from the image's metadata as the template, and the image will be used as a cover image for the template.

You can also create a prompt template from scratch, uploading a cover image.

How it Works

Add a positive and/or negative prompt to your template. Use the {prompt} placeholder in the template to indicate where your prompt should be inserted into the template:

  • Template: highly detailed photo of {prompt}, award-winning, nikon dslr
  • Prompt: a super cute fennec fox cub
  • Result: highly detailed photo of a super cute fennec fox cub, award-winning, nikon dslr

If you omit the placeholder, the template will be appended to the end of your prompt:

  • Template: turtles
  • Prompt: i like
  • Result: i like turtles
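
The placeholder rule boils down to a few lines; here is a simplified sketch of the behavior described above, not Invoke's implementation.

```python
def apply_template(template: str, prompt: str) -> str:
    """Insert the prompt at {prompt}, or append the template if there is no placeholder."""
    if "{prompt}" in template:
        return template.replace("{prompt}", prompt)
    return f"{prompt} {template}"

assert apply_template(
    "highly detailed photo of {prompt}, award-winning, nikon dslr",
    "a super cute fennec fox cub",
) == "highly detailed photo of a super cute fennec fox cub, award-winning, nikon dslr"

assert apply_template("turtles", "i like") == "i like turtles"
```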

Default Prompt Templates

We're shipping a number of templates with the app, many of which were contributed by community members (thanks y'all!). We'll update these as we continue developing Invoke with improvements and new templates.

Import and Export

You can import templates from other SD apps. We support CSV and JSON files with these columns/keys:

  • name
  • prompt or positive_prompt
  • negative_prompt

Export your prompt templates to share with others. When you export prompt templates, only your own templates are exported.
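
To make the import format concrete, the sketch below writes a compatible CSV with Python's standard library. The column names match those listed above; the template contents themselves are just examples.

```python
import csv

templates = [
    {
        "name": "Detailed Photo",
        "positive_prompt": "highly detailed photo of {prompt}, award-winning, nikon dslr",
        "negative_prompt": "blurry, low quality",
    },
]

with open("prompt_templates.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "positive_prompt", "negative_prompt"])
    writer.writeheader()
    writer.writerows(templates)
```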

Preview and Flatten

Use the Preview button to see the prompt that will be used for generation. Flatten the prompt template to bake it into your prompts.

Compatible with Dynamic Prompts

You can use dynamic prompts in prompt templates, and they will work with dynamic prompts in your positive prompt box.

Other Changes

  • Enhancement: Added DPM++ 3M, DPM++ 3M Karras, DEIS Karras, KDPM 2 Karras, KDPM 2 Ancestral Karras and UniPC Karras schedulers @StAlKeR7779
  • Enhancement: Updated translations - Italian is 100%! Thanks @Harvester62!
  • Enhancement: Grounded SAM node (text prompt image segmentation) @RyanJDick
  • Enhancement: Update DepthAnything to V2 (small variant only) @blessedcoolant
  • Fix: Image downloads with correct filename
  • Fix: Delays with events (progress images will be smoother)
  • Fix: Jank with board selection when hiding or deleting boards
  • Fix: Error deleting images on systems without a "trash bin"
  • Fix: Upscale metadata included in SDXL Multidiffusion upscales @maryhipp
  • Fix: invoke.sh works with symlinks @max-maag
  • Internal: Continued work on the modular backend refactor @StAlKeR7779

Installation and Updating

To install or update to v4.2.8, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.7post1...v4.2.8

InvokeAI - v4.2.8rc2

Published by psychedelicious 2 months ago

v4.2.8rc2 brings Prompt Templates to Invoke, plus a number of minor fixes and enhancements.

This second RC fixes an issue where the default prompt templates were not packaged correctly, causing an error on startup.

Prompt Templates

We've added the ability to create, import and export prompt templates. These are saved prompts that you may add to your existing prompt.

How it Works

Add a positive and/or negative prompt to your template. Use the {prompt} placeholder in the template to indicate where your prompt should be inserted into the template:

  • Template: highly detailed photo of {prompt}, award-winning, nikon dslr
  • Prompt: a super cute fennec fox cub
  • Result: highly detailed photo of a super cute fennec fox cub, award-winning, nikon dslr

If you omit the placeholder, the template will be appended to the end of your prompt:

  • Template: turtles
  • Prompt: i like
  • Result: i like turtles

Creating a Prompt Template

You can create a prompt template from within Invoke in two ways:

  • Directly, by providing the name, positive prompt and negative prompt. You can upload an image to be the preview image for the template.
  • Via metadata from an image generated with Invoke. We'll use the positive and negative prompts from the image's metadata, and that image will be the preview image for that template.

Default Prompt Templates

We're shipping a number of templates with the app. We'll update these as we continue developing Invoke with improvements and new templates.

Import and Export

You can import templates from other SD apps. We support CSV and JSON files with these columns/keys:

  • name
  • prompt or positive_prompt
  • negative_prompt

Export your prompt templates to share with others. When you export prompt templates, only your own templates are exported.

Preview and Flatten

Use the Preview button to see the prompt that will be used for generation. Flatten the prompt template to bake it into your prompts.

Thanks to @maryhipp for implementing Prompt Templates!

Other Changes

  • Enhancement: Added DPM++ 3M, DPM++ 3M Karras, DEIS Karras, KDPM 2 Karras, KDPM 2 Ancestral Karras and UniPC Karras schedulers @StAlKeR7779
  • Enhancement: Updated translations - Italian is 100%! Thanks @Harvester62!
  • Enhancement: Grounded SAM node (text prompt image segmentation) @RyanJDick
  • Enhancement: Update DepthAnything to V2 (small variant only) @blessedcoolant
  • Fix: Image downloads with correct filename
  • Fix: Delays with events (progress images will be smoother)
  • Fix: Jank with board selection when hiding or deleting boards
  • Fix: Error deleting images on systems without a "trash bin"
  • Fix: Upscale metadata included in SDXL Multidiffusion upscales @maryhipp
  • Fix: invoke.sh works with symlinks @max-maag
  • Internal: Continued work on the modular backend refactor @StAlKeR7779

Installation and Updating

To install or update to v4.2.8rc2, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.7...v4.2.8rc2

InvokeAI - v4.2.8rc1

Published by psychedelicious 2 months ago

v4.2.8rc1 brings Prompt Templates to Invoke, plus a number of minor fixes and enhancements.

Prompt Templates

We've added the ability to create, import and export prompt templates. These are saved prompts that you may add to your existing prompt.

How it Works

Add a positive and/or negative prompt to your template. Use the {prompt} placeholder in the template to indicate where your prompt should be inserted into the template:

  • Template: highly detailed photo of {prompt}, award-winning, nikon dslr
  • Prompt: a super cute fennec fox cub
  • Result: highly detailed photo of a super cute fennec fox cub, award-winning, nikon dslr

If you omit the placeholder, the template will be appended to the end of your prompt:

  • Template: turtles
  • Prompt: i like
  • Result: i like turtles

Creating a Prompt Template

You can create a prompt template from within Invoke in two ways:

  • Directly, by providing the name, positive prompt and negative prompt. You can upload an image to be the preview image for the template.
  • Via metadata from an image generated with Invoke. We'll use the positive and negative prompts from the image's metadata, and that image will be the preview image for that template.

Default Prompt Templates

We're shipping a number of templates with the app. We'll update these as we continue developing Invoke with improvements and new templates.

Import and Export

You can import templates from other SD apps. We support CSV and JSON files with these columns/keys:

  • name
  • prompt or positive_prompt
  • negative_prompt

Export your prompt templates to share with others. When you export prompt templates, only your own templates are exported.

Preview and Flatten

Use the Preview button to see the prompt that will be used for generation. Flatten the prompt template to bake it into your prompts.

Thanks to @maryhipp for implementing Prompt Templates!

Other Changes

  • Enhancement: Added DPM++ 3M, DPM++ 3M Karras, DEIS Karras, KDPM 2 Karras, KDPM 2 Ancestral Karras and UniPC Karras schedulers @StAlKeR7779
  • Enhancement: Updated translations - Italian is 100%! Thanks @Harvester62!
  • Enhancement: Grounded SAM node (text prompt image segmentation) @RyanJDick
  • Enhancement: Update DepthAnything to V2 (small variant only) @blessedcoolant
  • Fix: Image downloads with correct filename
  • Fix: Delays with events (progress images will be smoother)
  • Fix: Jank with board selection when hiding or deleting boards
  • Fix: Error deleting images on systems without a "trash bin"
  • Fix: Upscale metadata included in SDXL Multidiffusion upscales @maryhipp
  • Fix: invoke.sh works with symlinks @max-maag
  • Internal: Continued work on the modular backend refactor @StAlKeR7779

Installation and Updating

To install or update to v4.2.8rc1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.7...v4.2.8rc1

InvokeAI - v4.2.7post1

Published by psychedelicious 3 months ago

🚨 v4.2.7post1 resolves an issue with Windows installs. 🚨

v4.2.7 includes gallery improvements and some major features focused on upscaling.

Upscaling

We've added a dedicated upscaling tab, support for custom upscaling models, and some new nodes.

Thanks to @RyanJDick (backend implementation), @chainchompa (frontend) and @maryhipp (frontend) for working on this!

Dedicated Upscaling Tab

The new upscaling tab provides a simple and powerful UI to Invoke's MultiDiffusion implementation. This builds on the workflow released in v4.2.6, allowing for memory-efficient upscaling to huge output image sizes.

We're pretty happy with the results!

4x scale, 4x_NMKD-Siax_200k upscale model, Deliberate_v5 SD1.5 model, KDPM 2 scheduler @ 30 steps, all other settings default

Requirements

You need 3 models installed to use this feature:

  • An upscale model for the first pass upscale
  • A main SD model (SD1.5 or SDXL) for the image-to-image
  • A tile ControlNet model of the same model architecture as your main SD model

If you are missing any of these, you'll see a warning directing you to the model manager to install them. You can search the starter models for upscale, main, and tile to get you started.


Tips

  • The main SD model architecture has the biggest impact on VRAM usage. For example, SD1.5 @ 2k needs just under 4GB, while SDXL @ 2k needs just under 9GB. VRAM usage increases a small amount as output size increases - SD1.5 @ 8k needs ~4.5GB while SDXL @ 8k needs ~10.5GB.
  • The upscale and main SD model choices matter. Choose models best suited to your input image or desired output characteristics.
  • Some schedulers work better than others. KDPM 2 is a good choice.
  • LoRAs - like a detail-adding LoRA - can make a big impact.
  • Higher Creativity values give the SD model more leeway in creating new details. This parameter controls denoising start and end percentages.
  • Higher Structure values tell the SD model to stick closer to the input image's structure. This parameter controls the tile ControlNet.

Custom Upscaling Models

You can now install and use custom upscaling models in Invoke. The excellent spandrel library handles loading and running the models.

spandrel can do a lot more than upscaling - it supports a wide range of "image to image" models. This includes single-image super resolution like ESRGAN (upscalers) but also things like GFPGAN (face restoration) and DeJPEG (cleans up JPEG compression artifacts).

A complete list of supported architectures can be found here.

Note: We have not enabled the restrictively-licensed architectures, which are denoted with a + symbol in the list.
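
If you want to run one of these models outside Invoke, loading it with spandrel directly looks roughly like this. The model path is a placeholder, and the input is assumed to be a standard NCHW float tensor in [0, 1].

```python
import torch
from spandrel import ImageModelDescriptor, ModelLoader
from torchvision.io import ImageReadMode, read_image
from torchvision.utils import save_image

# Load a model from a single checkpoint file; spandrel detects the architecture.
model = ModelLoader().load_from_file("4x_NMKD-Siax_200k.pth")
assert isinstance(model, ImageModelDescriptor)  # make sure it's an image-to-image model
model.cuda().eval()

# Read the input as an NCHW float tensor in [0, 1].
image = read_image("input.png", ImageReadMode.RGB).float().div(255).unsqueeze(0).cuda()

with torch.no_grad():
    upscaled = model(image)

save_image(upscaled.clamp(0, 1), "output.png")
```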

Installing Models

We've added a few popular upscaling models to the Starter Models tab in the Model Manager - search for "upscale" to find them.

You can install models found online via the Model Manager, just like any other model. OpenModelDB is a popular place to get these models. For most of them, you can copy the model's download link and paste it into the Model Manager to install.

Nodes

Two nodes have been added to support processing images with spandrel - be that upscaling or any of the other tasks these models support.

  • Image-to-Image - Runs the selected model without any extra processing.
  • Image-to-Image (Autoscale) - Runs the selected model repeatedly until the desired scale is reached. This node is intended for upscaling models specifically, providing some useful extra functionality:
    • If the model overshoots the target scale, the final image will be downscaled to the target scale with Lanczos resampling.
    • As a convenience, the output image width and height can be fit to a multiple of 8, as is required for SD. This will only resize down, and may change the aspect ratio slightly.
    • If the model doesn't actually upscale the image, the scale parameter will be ignored.
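
Conceptually, the Autoscale node's behavior is close to the following loop. This is a simplified sketch with Pillow, not the node's actual code, and run_model stands in for one pass of the selected upscaling model.

```python
from PIL import Image

def autoscale(image: Image.Image, run_model, model_scale: int, target_scale: int,
              fit_to_multiple_of_8: bool = False) -> Image.Image:
    """Run an upscaling model repeatedly until the desired scale is reached."""
    if model_scale <= 1:
        return run_model(image)  # model doesn't upscale: run once, ignore the scale parameter

    target_size = (image.width * target_scale, image.height * target_scale)
    while image.width < target_size[0]:
        image = run_model(image)  # each pass enlarges the image by the model's own scale

    if image.size != target_size:  # overshot the target: downscale with Lanczos resampling
        image = image.resize(target_size, Image.LANCZOS)

    if fit_to_multiple_of_8:  # only resizes down; may change the aspect ratio slightly
        image = image.resize((image.width - image.width % 8,
                              image.height - image.height % 8), Image.LANCZOS)
    return image
```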

Gallery Improvements

Thanks to @maryhipp and @chainchompa for continued iteration on the gallery!

  • Cleaner boards UI.
  • Improved boards and image search UI.
  • Fixed issues where board counts don't update when images are moved between boards.
  • Added a "Jump" button to allow you to skip pages of the gallery.

Other Changes

  • Enhancement: When installing starter models, the description is carried over. Thanks @lstein!
  • Enhancement: Updated translations.
  • Fix: Model unpatching when running on CPU, causing bad/no outputs.
  • Fix: Occasional visible seams on images with smooth textures, like skies. MultiDiffusion tiling now uses gradient blending to mitigate this issue.
  • Fix: Model names overflow the model selection drop-downs.
  • Internal: Backend SD pipeline refactor (WIP). This will allow contributors to add functionality to Invoke more easily. This will be behind a feature flag until the refactor is complete and tested. Thanks to @StAlKeR7779 for leading the effort, with major contributions from @dunkeroni and @RyanJDick.

Installation and Updating

To install or update to v4.2.7post1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.6...v4.2.7post1

InvokeAI - v4.2.7

Published by brandonrising 3 months ago

v4.2.7 includes gallery improvements and some major features focused on upscaling.

Upscaling

We've added a dedicated upscaling tab, support for custom upscaling models, and some new nodes.

Thanks to @RyanJDick (backend implementation), @chainchompa (frontend) and @maryhipp (frontend) for working on this!

Dedicated Upscaling Tab

The new upscaling tab provides a simple and powerful UI to Invoke's MultiDiffusion implementation. This builds on the workflow released in v4.2.6, allowing for memory-efficient upscaling to huge output image sizes.

We're pretty happy with the results!

4x scale, 4x_NMKD-Siax_200k upscale model, Deliberate_v5 SD1.5 model, KDPM 2 scheduler @ 30 steps, all other settings default

Requirements

You need 3 models installed to use this feature:

  • An upscale model for the first pass upscale
  • A main SD model (SD1.5 or SDXL) for the image-to-image
  • A tile ControlNet model of the same model architecture as your main SD model

If you are missing any of these, you'll see a warning directing you to the model manager to install them. You can search the starter models for upscale, main, and tile to get you started.


Tips

  • The main SD model architecture has the biggest impact on VRAM usage. For example, SD1.5 @ 2k needs just under 4GB, while SDXL @ 2k needs just under 9GB. VRAM usage increases a small amount as output size increases - SD1.5 @ 8k needs ~4.5GB while SDXL @ 8k needs ~10.5GB.
  • The upscale and main SD model choices matter. Choose models best suited to your input image or desired output characteristics.
  • Some schedulers work better than others. KDPM 2 is a good choice.
  • LoRAs - like a detail-adding LoRA - can make a big impact.
  • Higher Creativity values give the SD model more leeway in creating new details. This parameter controls denoising start and end percentages.
  • Higher Structure values tell the SD model to stick closer to the input image's structure. This parameter controls the tile ControlNet.

Custom Upscaling Models

You can now install and use custom upscaling models in Invoke. The excellent spandrel library handles loading and running the models.

spandrel can do a lot more than upscaling - it supports a wide range of "image to image" models. This includes single-image super resolution like ESRGAN (upscalers) but also things like GFPGAN (face restoration) and DeJPEG (cleans up JPEG compression artifacts).

A complete list of supported architectures can be found here.

Note: We have not enabled the restrictively-licensed architectures, which are denoted with a + symbol in the list.

Installing Models

We've added a few popular upscaling models to the Starter Models tab in the Model Manager - search for "upscale" to find them.

You can install models found online via the Model Manager, just like any other model. OpenModelDB is a popular place to get these models. For most of them, you can copy the model's download link and paste it into the Model Manager to install.

Nodes

Two nodes have been added to support processing images with spandrel - be that upscaling or any of the other tasks these models support.

  • Image-to-Image - Runs the selected model without any extra processing.
  • Image-to-Image (Autoscale) - Runs the selected model repeatedly until the desired scale is reached. This node is intended for upscaling models specifically, providing some useful extra functionality:
    • If the model overshoots the target scale, the final image will be downscaled to the target scale with Lanczos resampling.
    • As a convenience, the output image width and height can be fit to a multiple of 8, as is required for SD. This will only resize down, and may change the aspect ratio slightly.
    • If the model doesn't actually upscale the image, the scale parameter will be ignored.

Gallery Improvements

Thanks to @maryhipp and @chainchompa for continued iteration on the gallery!

  • Cleaner boards UI.
  • Improved boards and image search UI.
  • Fixed issues where board counts don't update when images are moved between boards.
  • Added a "Jump" button to allow you to skip pages of the gallery.

Other Changes

  • Enhancement: When installing starter models, the description is carried over. Thanks @lstein!
  • Enhancement: Updated translations.
  • Fix: Model unpatching when running on CPU, causing bad/no outputs.
  • Fix: Occasional visible seams on images with smooth textures, like skies. MultiDiffusion tiling now uses gradient blending to mitigate this issue.
  • Fix: Model names overflow the model selection drop-downs.
  • Internal: Backend SD pipeline refactor (WIP). This will allow contributors to add functionality to Invoke more easily. This will be behind a feature flag until the refactor is complete and tested. Thanks to @StAlKeR7779 for leading the effort, with major contributions from @dunkeroni and @RyanJDick.

Installation and Updating

To install or update to v4.2.7, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.6post1...v4.2.7

InvokeAI - v4.2.7rc1

Published by psychedelicious 3 months ago

v4.2.7rc1 includes gallery improvements and some major features focused on upscaling.

Upscaling

We've added a dedicated upscaling tab, support for custom upscaling models, and some new nodes.

Thanks to @RyanJDick (backend implementation), @chainchompa (frontend) and @maryhipp (frontend) for working on this!

Dedicated Upscaling Tab

The new upscaling tab provides a simple and powerful UI to Invoke's MultiDiffusion implementation. This builds on the workflow released in v4.2.6, allowing for memory-efficient upscaling to huge output image sizes.

We're pretty happy with the results!

4x scale, 4x_NMKD-Siax_200k upscale model, Deliberate_v5 SD1.5 model, KDPM 2 scheduler @ 30 steps, all other settings default

Requirements

You need 3 models installed to use this feature:

  • An upscale model for the first pass upscale
  • A main SD model (SD1.5 or SDXL) for the image-to-image
  • A tile ControlNet model of the same model architecture as your main SD model

If you are missing any of these, you'll see a warning directing you to the model manager to install them. You can search the starter models for upscale, main, and tile to get you started.


Tips

  • The main SD model architecture has the biggest impact on VRAM usage. For example, SD1.5 @ 2k needs just under 4GB, while SDXL @ 2k needs just under 9GB. VRAM usage increases a small amount as output size increases - SD1.5 @ 8k needs ~4.5GB while SDXL @ 8k needs ~10.5GB.
  • The upscale and main SD model choices matter. Choose models best suited to your input image or desired output characteristics.
  • Some schedulers work better than others. KDPM 2 is a good choice.
  • LoRAs - like a detail-adding LoRA - can make a big impact.
  • Higher Creativity values give the SD model more leeway in creating new details. This parameter controls denoising start and end percentages.
  • Higher Structure values tell the SD model to stick closer to the input image's structure. This parameter controls the tile ControlNet.

Custom Upscaling Models

You can now install and use custom upscaling models in Invoke. The excellent spandrel library handles loading and running the models.

spandrel can do a lot more than upscaling - it supports a wide range of "image to image" models. This includes single-image super resolution like ESRGAN (upscalers) but also things like GFPGAN (face restoration) and DeJPEG (cleans up JPEG compression artifacts).

A complete list of supported architectures can be found here.

Note: We have not enabled the restrictively-licensed architectures, which are denoted with a + symbol in the list.

Installing Models

We've added a few popular upscaling models to the Starter Models tab in the Model Manager - search for "upscale" to find them.

You can install models found online via the Model Manager, just like any other model. OpenModelDB is a popular place to get these models. For most of them, you can copy the model's download link and paste it into the Model Manager to install.

Nodes

Two nodes have been added to support processing images with spandrel - be that upscaling or any of the other tasks these models support.

  • Image-to-Image - Runs the selected model without any extra processing.
  • Image-to-Image (Autoscale) - Runs the selected model repeatedly until the desired scale is reached. This node is intended for upscaling models specifically, providing some useful extra functionality:
    • If the model overshoots the target scale, the final image will be downscaled to the target scale with Lanczos resampling.
    • As a convenience, the output image width and height can be fit to a multiple of 8, as is required for SD. This will only resize down, and may change the aspect ratio slightly.
    • If the model doesn't actually upscale the image, the scale parameter will be ignored.

Gallery Improvements

Thanks to @maryhipp and @chainchompa for continued iteration on the gallery!

  • Cleaner boards UI.
  • Improved boards and image search UI.
  • Fixed issues where board counts don't update when images are moved between boards.

Other Changes

  • Enhancement: When installing starter models, the description is carried over. Thanks @lstein!
  • Enhancement: Updated translations.
  • Fix: Model unpatching when running on CPU, causing bad/no outputs.
  • Fix: Occasional visible seams on images with smooth textures, like skies. MultiDiffusion tiling now uses gradient blending to mitigate this issue.
  • Fix: Model names overflow the model selection drop-downs.
  • Internal: Backend SD pipeline refactor (WIP). This will allow contributors to add functionality to Invoke more easily. This will be behind a feature flag until the refactor is complete and tested. Thanks to @StAlKeR7779 for leading the effort, with major contributions from @dunkeroni and @RyanJDick.

Installation and Updating

To install or update to v4.2.7rc1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.6post1...v4.2.7rc1

InvokeAI - v4.2.6post1

Published by psychedelicious 3 months ago

v4.2.6post1 fixes issues some users may experience with memory management and sporadic black image outputs.

Please see the v4.2.6 release for full release notes.

💾 Installation and Updating

To install or update to v4.2.6post1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.6...v4.2.6post1

InvokeAI - v4.2.6

Published by psychedelicious 3 months ago

v4.2.6 includes a handful of fixes and improvements, plus three major changes:

  • Gallery updates
  • Tiled upscaling via MultiDiffusion
  • Checkpoint models work without conversion to diffusers

Gallery Updates

We've made some changes to the gallery, adding features, improving the performance of the app and reducing memory usage. The changes also fix a number of bugs relating to stale data - for example, a board not updating as expected after moving an image to it.

Thanks to @chainchompa and @maryhipp for working on this major effort.

Pagination & Selection

Infinite scroll is dead, long live infinite scroll!

The gallery is now paginated. Selection logic has been updated to work with pagination. An indicator shows how many images are selected and allows you to clear the selection entirely. Arrow keys still navigate.

https://github.com/invoke-ai/InvokeAI/assets/4822129/128c998a-efac-41e5-8639-b346da78ca5b

The number of images per page is dynamically calculated as the panel is resized, ensuring the panel is always filled with images.

Boards UI Refresh

The bulky tiled boards grid has been replaced by a scrollable list. The boards list panel is now a resizable, collapsible panel.

https://github.com/invoke-ai/InvokeAI/assets/4822129/2dd7c316-36e3-4f8d-9d0c-d38d7de1d423

Boards and Image Search

Search for boards by name and images by metadata. The search term is matched against the image's metadata as a string. We landed on full-text search as a flexible yet simple implementation after considering a few methods for search.

https://github.com/invoke-ai/InvokeAI/assets/4822129/ebe2ecfe-edb4-4e09-aef8-212495b32d65

Archived Boards

Archive a board to hide it from the main boards list. This is purely an organizational enhancement. You can still interact with archived boards as you would any other board.

https://github.com/invoke-ai/InvokeAI/assets/4822129/7033b7a1-1cb7-4fa0-ae30-5e1037ba3261

Image Sorting

You can now change the sort for images to show oldest first. A switch allows starred images to be placed in the list according to their age, instead of always showing them first.

https://github.com/invoke-ai/InvokeAI/assets/4822129/f1ec68d0-3ba5-4ed0-b1e8-8e8bc9ceb957

Tiled Upscaling via MultiDiffusion

MultiDiffusion is a fairly straightforward technique for tiled denoising. The gist is similar to other tiled upscaling methods - split the input image up into tiles, process each independently, and stitch them back together. The main innovation for MultiDiffusion is to do this in latent space, blending the tensors together continuously. This results in excellent consistency across the output image, with no seams.
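
At its core, the blending step is a weighted average of overlapping tile outputs, performed in latent space at every denoising step. The sketch below illustrates that averaging on a plain tensor; it is a simplification of the idea, not Invoke's implementation.

```python
import torch

def blend_tiles(tile_outputs, positions, full_shape):
    """Average overlapping denoised tiles back into one full latent tensor.

    tile_outputs: list of (C, h, w) tensors; positions: matching list of (top, left) offsets.
    """
    blended = torch.zeros(full_shape)
    counts = torch.zeros(full_shape[-2:])
    for tile, (top, left) in zip(tile_outputs, positions):
        _, h, w = tile.shape
        blended[:, top:top + h, left:left + w] += tile
        counts[top:top + h, left:left + w] += 1.0
    return blended / counts.clamp(min=1.0)
```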

This feature is exposed as a Tiled MultiDiffusion Denoise Latents node, currently classified as a beta version. It works much the same as the OG Denoise Latents node. You can find an example workflow in the workflow library's default workflows.

We are still thinking about how to expose this in the linear UI. Most likely, we will expose it with very minimal settings. If you want to tweak it, use the workflow.

Thanks to @RyanJDick for designing and implementing MultiDiffusion.

How to use it

This technique is fundamentally the same as normal img2img. Appropriate use of conditioning and control will greatly improve the output. The one hard requirement is to use the Tile ControlNet model.

Besides that, here are some tips from our initial testing:

  • Use detail-adding or style LoRAs.
  • Use a base model best suited for the desired output style.
  • Prompts make a difference.
  • The initial upscaling method makes a difference.
  • Scheduler makes a difference. Some produce softer outputs.

VRAM Usage

This technique can upscale images to very large sizes without substantially increasing VRAM usage beyond what you'd see for a "normal" sized generation. The VRAM bottlenecks then become the first VAE encode (Image to Latents) and final VAE decode (Latents to Image) steps.

You may run into OOM errors during these steps. The solution is to enable tiling using the toggle on the Image to Latents and Latents to Image nodes. This allows the VAE operations to be done piecewise, similar to the tiled denoising process, without using gobs of VRAM.
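Under the hood this maps onto the tiled VAE support in the diffusers library. As a rough standalone sketch (not InvokeAI code; the model ID and latent shape are placeholders), the same effect looks like this:

    import torch
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained(
        "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
    ).to("cuda")
    vae.enable_tiling()  # encode/decode the latents piecewise instead of all at once

    with torch.no_grad():
        # In practice, latents would come from the tiled denoising pass.
        latents = torch.randn(1, 4, 256, 256, dtype=torch.float16, device="cuda")
        image = vae.decode(latents / vae.config.scaling_factor).sample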

There's one caveat - VAE tiling often introduces inconsistency across tiles. Textures and colors may differ from tile to tile. This is a function of diffusers' handling of VAE tiling, not the new tiled denoising process. We are investigating ways to improve this.

Takeaway: If your GPU can handle non-tiled VAE encode and decode for a given output size, use that for best results.

Checkpoint models work without conversion to diffusers

The required conversion of checkpoint format models to diffusers format has long been a pain point. The diffusers library now supports loading single-file (checkpoint) models directly, and we have removed the mandatory checkpoint-to-diffusers conversion step.

The main user-facing change is that there is no longer a conversion cache directory.
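At the library level, loading a single-file checkpoint now looks roughly like the following (a hedged sketch; the checkpoint path is a placeholder, and InvokeAI handles this internally):

    from diffusers import StableDiffusionXLPipeline

    # Load a single-file checkpoint directly - no conversion to a diffusers folder layout.
    pipe = StableDiffusionXLPipeline.from_single_file("/path/to/model.safetensors")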

Major thanks to @lstein for getting this working.

📈 Patch Notes for v4.2.6

Enhancements

  • When downloading image metadata, graphs or workflows, the JSON file includes the image name and type of data. Thanks @jstnlowe!
  • Add clear_queue_on_startup config setting to clear problematic queues. This is useful for a rare edge case where your queue is full of items that somehow crash the app. Set this to true, and the queue will clear on startup, before the app attempts to execute the problematic item (a sample config excerpt follows this list). Thanks @steffy-lo!
  • Performance and memory efficiency improvements for LoRA patching and model offloading.
  • Addition of simplified model installation methods to the Invocation API: download_and_cache_model, load_local_model and load_remote_model. These methods allow models to be used without needing to add them to the model manager. For example, we are now using these methods to load ESRGAN models.
  • Support for probing and loading SDXL VAE checkpoints.
  • Updated gallery UI.
  • Checkpoint models work without conversion to diffusers.
  • When using a VAE in tiled mode, you may now select the tile size.
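As referenced above, the clear_queue_on_startup setting goes alongside the other options in your invokeai.yaml (excerpt below; the rest of the file is unchanged):

    # invokeai.yaml (excerpt)
    clear_queue_on_startup: true  # wipe any stuck queue items before the app starts processing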

Fixes

  • Fixed handling of 0-step denoising processes.
  • If a control image's processed version is missing when the app loads, it is now re-processed.
  • Fixed an issue where a model's size could be misreported as 0, possibly causing memory issues.
  • Fixed an issue where images - especially large images - could fail to delete.

Performance improvements

  • Improved LoRA patching.
  • Improved RAM <-> VRAM model transfer performance.

Internal changes

  • The DenoiseLatentsInvocation has had its internal methods split up to support tiled upscaling via MultiDiffusion. This included some amount of file shuffling and renaming. The invokeai package's exported classes should still be the same. Please let us know if this has broken an import for you.
  • Internal cleanup intended to eliminate circular import issues. There's a lot left to do for this issue, but we are making progress.

💾 Installation and Updating

To install or update to v4.2.6, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.4...v4.2.6

InvokeAI - v4.2.6rc1

Published by psychedelicious 3 months ago

v4.2.6 includes a handful of fixes and improvements, plus three major changes:

  • Gallery updates
  • Tiled upscaling via MultiDiffusion
  • Checkpoint models work without conversion to diffusers

Known Issues

Our last release, v4.2.5, was quickly pulled after a black image issue on MPS (macOS) was discovered. We also had reports of CUDA (Nvidia) GPUs getting unexpected OOM (Out of Memory) errors.

The MPS issue is resolved in this release, but we haven't been able to replicate unexpected OOMs on Linux or Windows. We did fix one issue that may have been a factor.

If you get OOMs on this release with settings that worked fine on v4.2.4 - or have any other issues - please let us know via GH issues or Discord.

Gallery Updates

We've made some changes to the gallery, adding features, improving the performance of the app and reducing memory usage. The changes also fix a number of bugs relating to stale data - for example, a board not updating as expected after moving an image to it.

Thanks to @chainchompa and @maryhipp for working on this major effort.

Pagination & Selection

Infinite scroll is dead, long live infinite scroll!

The gallery is now paginated. Selection logic has been updated to work with pagination. An indicator shows how many images are selected and allows you to clear the selection entirely. Arrow keys still navigate.

https://github.com/invoke-ai/InvokeAI/assets/4822129/128c998a-efac-41e5-8639-b346da78ca5b

The number of images per page is dynamically calculated as the panel is resized, ensuring the panel is always filled with images.

Boards UI Refresh

The bulky tiled boards grid has been replaced by a scrollable list. The boards list panel is now a resizable, collapsible panel.

https://github.com/invoke-ai/InvokeAI/assets/4822129/2dd7c316-36e3-4f8d-9d0c-d38d7de1d423

Boards and Image Search

Search for boards by name and images by metadata. The search term is matched against the image's metadata as a string. We landed on full-text search as a flexible yet simple implementation after considering a few methods for search.

https://github.com/invoke-ai/InvokeAI/assets/4822129/ebe2ecfe-edb4-4e09-aef8-212495b32d65

Archived Boards

Archive a board to hide it from the main boards list. This is purely an organizational enhancement. You can still interact with archived boards as you would any other board.

https://github.com/invoke-ai/InvokeAI/assets/4822129/7033b7a1-1cb7-4fa0-ae30-5e1037ba3261

Image Sorting

You can now change the image sort order to show oldest first. A switch allows starred images to be placed in the list according to their age, instead of always showing them first.

https://github.com/invoke-ai/InvokeAI/assets/4822129/f1ec68d0-3ba5-4ed0-b1e8-8e8bc9ceb957

Tiled Upscaling via MultiDiffusion

MultiDiffusion is a fairly straightforward technique for tiled denoising. The gist is similar to other tiled upscaling methods - split the input image up into tiles, process each independently, and stitch them back together. The main innovation for MultiDiffusion is to do this in latent space, blending the tensors together continuously. This results in excellent consistency across the output image, with no seams.

This feature is exposed as a Tiled MultiDiffusion Denoise Latents node, currently classified as a beta version. It works much the same as the OG Denoise Latents node. Here's a workflow to get you started: sd15_multi_diffusion_esrgan_x2_upscale.json

image

We are still thinking about how to expose this in the linear UI. Most likely, we will expose it with very minimal settings. If you want to tweak it, use the workflow.

Thanks to @RyanJDick for designing and implementing MultiDiffusion.

How to use it

This technique is fundamentally the same as normal img2img. Appropriate use of conditioning and control will greatly improve the output. The one hard requirement is to use the Tile ControlNet model.

Besides that, here are some tips from our initial testing:

  • Use detail-adding or style LoRAs.
  • Use a base model best suited for the desired output style.
  • Prompts make a difference.
  • The initial upscaling method makes a difference.
  • Scheduler makes a difference. Some produce softer outputs.

VRAM Usage

This technique can upscale images to very large sizes without substantially increasing VRAM usage beyond what you'd see for a "normal" sized generation. The VRAM bottlenecks then become the first VAE encode (Image to Latents) and final VAE decode (Latents to Image) steps.

You may run into OOM errors during these steps. The solution is to enable tiling using the toggle on the Image to Latents and Latents to Image nodes. This allows the VAE operations to be done piecewise, similar to the tiled denoising process, without using gobs of VRAM.

There's one caveat - VAE tiling often introduces inconsistency across tiles. Textures and colors may differ from tile to tile. This is a function of diffusers' handling of VAE tiling, not the tiled denoising process introduced in v4.2.5. We are investigating ways to improve this.

Takeaway: If your GPU can handle non-tiled VAE encode and decode for a given output size, use that for best results.

Checkpoint models work without conversion to diffusers

The required conversion of checkpoint format models to diffusers format has long been a pain point. Diffusers now supports loading single-file (checkpoint) models directly, and we have removed the mandatory checkpoint-to-diffusers conversion step.

The main user-facing change is that there is no longer a conversion cache directory!

Major thanks to @lstein for getting this working.

📈 Patch Notes for v4.2.6

Enhancements

  • When downloading image metadata, graphs or workflows, the JSON file includes the image name and type of data. Thanks @jstnlowe!
  • Add clear_queue_on_startup config setting to clear problematic queues. This is useful for a rare edge case where your queue is full of items that somehow crash the app. Set this to true, and the queue will clear before it has time to attempt to execute the problematic item. Thanks @steffy-lo!
  • Performance and memory efficiency improvements for LoRA patching and model offloading.
  • Addition of simplified model installation methods to the Invocation API: download_and_cache_model, load_local_model and load_remote_model. These methods allow models to be used without needing to add them to the model manager. For example, we are now using these methods to load ESRGAN models.
  • Support for probing and loading SDXL VAE checkpoints.
  • Updated gallery UI.
  • Checkpoint models work without conversion to diffusers.
  • When using a VAE in tiled mode, you may now select the tile size.

Fixes

  • Fixed handling of 0-step denoising processes.
  • If a control image's processed version is missing when the app loads, it is now re-processed.
  • Fixed an issue where a model's size could be misreported as 0, possibly causing memory issues.
  • Fixed an issue where images - especially large images - could fail to delete.

Performance improvements

  • Improved LoRA patching.
  • Improved RAM <-> VRAM model transfer performance.

Internal changes

  • The DenoiseLatentsInvocation has had its internal methods split up to support tiled upscaling via MultiDiffusion. This included some amount of file shuffling and renaming. The invokeai package's exported classes should still be the same. Please let us know if this has broken an import for you.
  • Internal cleanup intended to eliminate circular import issues. There's a lot left to do for this issue, but we are making progress.

💾 Installation and Updating

To install or update to v4.2.6rc1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.4...v4.2.6rc1

InvokeAI - v4.2.6a1

Published by psychedelicious 3 months ago

v4.2.6a1 includes a handful of fixes and improvements, plus three major changes:

  • Gallery updates
  • Tiled upscaling via MultiDiffusion
  • Checkpoint models work without conversion to diffusers

Known Issues

Our last release, v4.2.5, was quickly pulled after a black image issue on MPS (macOS) was discovered. We also had reports of CUDA (Nvidia) GPUs getting unexpected OOM (Out of Memory) errors.

The MPS issue is resolved in this release, but we haven't been able to replicate unexpected OOMs on Linux or Windows. We did fix one issue that may have been a factor.

If you get OOMs on this alpha release with settings that worked fine on v4.2.4 - or have any other issues - please let us know via GH issues or discord.

Gallery Updates

We've made some changes to the gallery, adding features, improving the performance of the app and reducing memory usage. The changes also fix a number of bugs relating to stale data - for example, a board not updating as expected after moving an image to it.

Thanks to @chainchompa and @maryhipp for working on this major effort.

Pagination & Selection

Infinite scroll is dead, long live infinite scroll!

The gallery is now paginated. Selection logic has been updated to work with pagination. An indicator shows how many images are selected and allows you to clear the selection entirely. Arrow keys still navigate.

https://github.com/invoke-ai/InvokeAI/assets/4822129/128c998a-efac-41e5-8639-b346da78ca5b

The number of images per page is dynamically calculated as the panel is resized, ensuring the panel is always filled with images.

Boards UI Refresh

The bulky tiled boards grid has been replaced by a scrollable list. The boards list panel is now a resizable, collapsible panel.

https://github.com/invoke-ai/InvokeAI/assets/4822129/2dd7c316-36e3-4f8d-9d0c-d38d7de1d423

Boards and Image Search

Search for boards by name and images by metadata. The search term is matched against the image's metadata as a string. We landed on full-text search as a flexible yet simple implementation after considering a few methods for search.

https://github.com/invoke-ai/InvokeAI/assets/4822129/ebe2ecfe-edb4-4e09-aef8-212495b32d65

Archived Boards

Archive a board to hide it from the main boards list. This is purely an organizational enhancement. You can still interact with archived boards as you would any other board.

https://github.com/invoke-ai/InvokeAI/assets/4822129/7033b7a1-1cb7-4fa0-ae30-5e1037ba3261

Image Sorting

You can now change the image sort order to show oldest first. A switch allows starred images to be placed in the list according to their age, instead of always showing them first.

https://github.com/invoke-ai/InvokeAI/assets/4822129/f1ec68d0-3ba5-4ed0-b1e8-8e8bc9ceb957

Tiled Upscaling via MultiDiffusion

MultiDiffusion is a fairly straightforward technique for tiled denoising. The gist is similar to other tiled upscaling methods - split the input image up into tiles, process each independently, and stitch them back together. The main innovation for MultiDiffusion is to do this in latent space, blending the tensors together continuously. This results in excellent consistency across the output image, with no seams.

This feature is exposed as a Tiled MultiDiffusion Denoise Latents node, currently classified as a beta version. It works much the same as the OG Denoise Latents node. Here's a workflow to get you started: sd15_multi_diffusion_esrgan_x2_upscale.json

image

We are still thinking about how to expose this in the linear UI. Most likely, we will expose it with very minimal settings. If you want to tweak it, use the workflow.

Thanks to @RyanJDick for designing and implementing MultiDiffusion.

How to use it

This technique is fundamentally the same as normal img2img. Appropriate use of conditioning and control will greatly improve the output. The one hard requirement is to use the Tile ControlNet model.

Besides that, here are some tips from our initial testing:

  • Use detail-adding or style LoRAs.
  • Use a base model best suited for the desired output style.
  • Prompts make a difference.
  • The initial upscaling method makes a difference.
  • Scheduler makes a difference. Some produce softer outputs.

VRAM Usage

This technique can upscale images to very large sizes without substantially increasing VRAM usage beyond what you'd see for a "normal" sized generation. The VRAM bottlenecks then become the first VAE encode (Image to Latents) and final VAE decode (Latents to Image) steps.

You may run into OOM errors during these steps. The solution is to enable tiling using the toggle on the Image to Latents and Latents to Image nodes. This allows the VAE operations to be done piecewise, similar to the tiled denoising process, without using gobs of VRAM.

There's one caveat - VAE tiling often introduces inconsistency across tiles. Textures and colors may differ from tile to tile. This is a function of diffusers' handling of VAE tiling, not the tiled denoising process introduced in v4.2.5. We are investigating ways to improve this.

Takeaway: If your GPU can handle non-tiled VAE encode and decode for a given output size, use that for best results.

Checkpoint models work without conversion to diffusers

The required conversion of checkpoint format models to diffusers format has long been a pain point. Diffusers now supports loading single-file (checkpoint) models directly, and we have removed the mandatory checkpoint-to-diffusers conversion step.

The main user-facing change is that there is no longer a conversion cache directory!

Major thanks to @lstein for getting this working.

📈 Patch Notes for v4.2.6a1

Enhancements

  • When downloading image metadata, graphs or workflows, the JSON file includes the image name and type of data. Thanks @jstnlowe!
  • Add clear_queue_on_startup config setting to clear problematic queues. This is useful for a rare edge case where your queue is full of items that somehow crash the app. Set this to true, and the queue will clear before it has time to attempt to execute the problematic item. Thanks @steffy-lo!
  • Performance and memory efficiency improvements for LoRA patching and model offloading.
  • Addition of simplified model installation methods to the Invocation API: download_and_cache_model, load_local_model and load_remote_model. These methods allow models to be used without needing to add them to the model manager. For example, we are now using these methods to load ESRGAN models.
  • Support for probing and loading SDXL VAE checkpoints.
  • Updated gallery UI.
  • Checkpoint models work without conversion to diffusers.
  • When using a VAE in tiled mode, you may now select the tile size.

Fixes

  • Fixed handling of 0-step denoising processes.
  • If a control image's processed version is missing when the app loads, it is now re-processed.
  • Fixed an issue where a model's size could be misreported as 0, possibly causing memory issues.

Performance improvements

  • Improved LoRA patching.
  • Improved RAM <-> VRAM model transfer performance.

Internal changes

  • The DenoiseLatentsInvocation has had its internal methods split up to support tiled upscaling via MultiDiffusion. This included some amount of file shuffling and renaming. The invokeai package's exported classes should still be the same. Please let us know if this has broken an import for you.
  • Internal cleanup intended to eliminate circular import issues. There's a lot left to do for this issue, but we are making progress.

💾 Installation and Updating

To install or update to v4.2.6a1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.4...v4.2.6a1

InvokeAI - v4.2.5

Published by psychedelicious 4 months ago

🚨 macOS users may get black images when using LoRAs or IP Adapters. Users with CUDA GPUs may get unexpected OOMs. We are investigating. 🚨

v4.2.5 includes a handful of fixes and improvements, plus one exciting beta node - tiled upscaling via MultiDiffusion.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

Tiled Upscaling via MultiDiffusion

MultiDiffusion is a fairly straightforward technique for tiled denoising. The gist is similar to other tiled upscaling methods - split the input image up into tiles, process each independently, and stitch them back together. The main innovation for MultiDiffusion is to do this in latent space, blending the tensors together continuously. This results in excellent consistency across the output image, with no seams.

This feature is exposed as a Tiled MultiDiffusion Denoise Latents node, currently classified as a beta version. It works much the same as the OG Denoise Latents node. Here's a workflow to get you started: sd15_multi_diffusion_esrgan_x2_upscale.json

image

We are still thinking about how to expose this in the linear UI. Most likely, we will expose it with very minimal settings. If you want to tweak it, use the workflow.

How to use it

This technique is fundamentally the same as normal img2img. Appropriate use of conditioning and control will greatly improve the output. The one hard requirement is to use the Tile ControlNet model.

Besides that, here are some tips from our initial testing:

  • Use detail-adding or style LoRAs.
  • Use a base model best suited for the desired output style.
  • Prompts make a difference.
  • The initial upscaling method makes a difference.
  • Scheduler makes a difference. Some produce softer outputs.

VRAM Usage

This technique can upscale images to very large sizes without substantially increasing VRAM usage beyond what you'd see for a "normal" sized generation. The VRAM bottlenecks then become the first VAE encode (Image to Latents) and final VAE decode (Latents to Image) steps.

You may run into OOM errors during these steps. The solution is to enable tiling using the toggle on the Image to Latents and Latents to Image nodes. This allows the VAE operations to be done piecewise, similar to the tiled denoising process, without using gobs of VRAM.

There's one caveat - VAE tiling often introduces inconsistency across tiles. Textures and colors may differ from tile to tile. This is a function of diffusers' handling of VAE tiling, not the tiled denoising process introduced in v4.2.5. We are investigating ways to improve this.

Takeaway: If your GPU can handle non-tiled VAE encode and decode for a given output size, use that for best results.

📈 Patch Notes for v4.2.5

Enhancements

  • When downloading image metadata, graphs or workflows, the JSON file includes the image name and type of data. Thanks @jstnlowe!
  • Add clear_queue_on_startup config setting to clear problematic queues. This is useful for a rare edge case where your queue is full of items that somehow crash the app. Set this to true, and the queue will clear before it has time to attempt to execute the problematic item. Thanks @steffy-lo!
  • Performance and memory efficiency improvements for LoRA patching and model offloading.
  • Addition of simplified model installation methods to the Invocation API: download_and_cache_model, load_local_model and load_remote_model. These methods allow models to be used without needing to add them to the model manager. For example, we are now using these methods to load ESRGAN models.
  • Support for probing and loading SDXL VAE checkpoints.

Fixes

  • Fixed handling of 0-step denoising processes.
  • If a control image's processed version is missing when the app loads, it is now re-processed.

Performance improvements

  • Improved LoRA patching.
  • Improved RAM <-> VRAM model transfer performance.

Internal changes

  • The DenoiseLatentsInvocation has had its internal methods split up to support tiled upscaling via MultiDiffusion. This included some amount of file shuffling and renaming. The invokeai package's exported classes should still be the same. Please let us know if this has broken an import for you.

💾 Installation and Updating

To install or update to v4.2.5, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.4...v4.2.5

InvokeAI - v4.2.4

Published by psychedelicious 5 months ago

v4.2.4 brings one frequently requested feature and a host of fixes and improvements, mostly focused on performance and internal code quality.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

Image Comparison

The image viewer now supports comparing two images using a Slider, Side-by-Side or Hover UI.

To enter the comparison UI, select a compare image using one of these methods:

  • Right click an image and click Select for Compare.
  • Hold alt (option on mac) while clicking a gallery image to select it as the compare image.
  • Hold alt (option on mac) and use the arrow keys to select the comparison image.

Press C to swap the images and M to cycle through the comparison modes. Press Escape or Z to exit the comparison UI and return to the single image viewer.

When comparing images of different aspect ratios or sizes, the compare image will be stretched to fit the viewer image. Disable the toggle button at the top-left to instead contain the compare image within the viewer image.

https://github.com/invoke-ai/InvokeAI/assets/4822129/4bcfb9c4-c31c-4e62-bfa4-510ab34b15c9

📈 Patch Notes for v4.2.4

Enhancements

  • The queue item detail view now updates when the queue item finishes. The finished (completed, failed or canceled) session is displayed.
  • Updated translations. @Harvester62 @Vasyanator @BrunoCdot @gallegonovato @Atalanttore @hugoalh
  • Docs updates. @hsm207 @cdpath

Fixes

  • Fixed a problem where using latents from the Blend Latents node for denoising with certain schedulers made images drastically different, even with an alpha of 0.
  • Fixed unnecessarily strict constraints for ControlNet and IP Adapter weights in the Control Layers UI. This prevented layers with weights outside the range of 0-1 from being recalled.
  • Fixed error when editing non-main models (e.g. LoRAs).
  • Fixed the SDXL prompt concat flag not being set when recalling prompts.
  • Fixed model metadata recall not working when a model has a different key. This can happen if the model was uninstalled and reinstalled. When recalling, we fall back on the model's name, base and type if the key doesn't match an existing model.

Performance improvements

Big thanks to @lstein for these very impactful improvements!

  • Substantially improved performance when moving models between RAM and VRAM. For example, an SDXL model RAM -> VRAM -> RAM roundtrip tested at ~0.8s, down from ~3s. That's about 75% faster!
  • Fixed bug with VRAM lazy offloading which caused inefficient VRAM cache usage.
  • Reduced VRAM requirements when using IP Adapter.

Internal changes

  • Modularize the queue processor.
  • Use pydantic models for events instead of plain dicts.
  • Improved handling of pydantic invocation unions.
  • Updated ML dependencies. @Malrama

💾 Installation and Updating

To install or update to v4.2.4, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.3...v4.2.4

InvokeAI - v4.2.3

Published by psychedelicious 5 months ago

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

📈 Patch Notes for v4.2.3

  • Spellcheck is re-enabled on prompt boxes

  • DB maintenance script removed from launcher (it currently does not work)

  • Reworked toasts. When a toast of a given type is triggered, if another toast of that type is already being displayed, it is updated instead of creating another toast. The old behaviour was painful in situations where you queue up many generations that all immediately fail, or install a lot of models at once. In these situations, you'd get a wall of toasts. Now you get only 1.

  • Fixed: Control layer checkbox correctly indicates that it enables or disables the layer

  • Fixed: Disabling Regional Guidance layers didn't work

  • Fixed: Excessive warnings in terminal when uploading images

  • Fixed: When loading a workflow, if an image, board or model referenced by one of the workflow's inputs no longer exists, the workflow would still execute but then error.

    For example, say you save a workflow that has a certain model set for a node, then delete the model. When you load that workflow, the model is missing but the workflow doesn't detect this. You can run the workflow, and it will fail when it attempts to use the nonexistent model.

    With this fix, when a workflow is loaded, we check for the existence of all images, boards and models referenced by the workflow. If something is missing, that input is reset (a rough sketch of this check follows the list).

  • Docs updates @hsm207

  • Translations updates @gallegonovato @Harvester62 @dvanzoerlandt
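As a rough sketch of the workflow-load check described above (illustrative Python only; the field names and helper functions are assumptions, not InvokeAI's actual code):

    def sanitize_workflow(workflow, image_exists, board_exists, model_exists):
        """Reset any workflow input that points at an image, board or model
        which no longer exists, so the loaded workflow can't fail mid-execution."""
        for node in workflow["nodes"]:
            for name, value in list(node.get("inputs", {}).items()):
                if not isinstance(value, dict):
                    continue
                missing = (
                    ("image_name" in value and not image_exists(value["image_name"]))
                    or ("board_id" in value and not board_exists(value["board_id"]))
                    or ("key" in value and not model_exists(value["key"]))
                )
                if missing:
                    node["inputs"][name] = None  # cleared input; the UI will ask for a new value
        return workflow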

💾 Installation and Updating

To install or update to v4.2.3, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

What's Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.2post1...v4.2.3

InvokeAI - v4.2.2post1

Published by psychedelicious 5 months ago

This release brings many fixes and enhancements, including two long-awaited features: undo/redo in workflows and load workflow from any image.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

📈 Patch Notes for v4.2.2post1

v4.2.2 had a critical bug related to notes nodes & missing templates in workflows. That is fixed in v4.2.2post1.

✨ Undo/redo in Workflows

Undo/redo is now available in the workflow editor. There's some amount of tuning to be done with how actions are grouped.

For example, when you move a node around, do we allow you to undo each pixel of movement, or do we group the position changes as one action? When you are typing a prompt, do we undo each letter, word, or the whole change at once?

Currently, we group like changes together. It's possible some things are grouped when they shouldn't be, or should be grouped but are not. Your feedback will be very useful in tuning the behaviour so it undoes the right changes.

✨ Load Workflow from Any Image

Starting with v4.2.2, graphs are embedded in all images generated by Invoke. Images generated in the workflow editor also have the enriched workflow embedded separately. The Load Workflow button will load the enriched workflow if it exists, else it will load the graph.

You'll see a new Graph tab in the metadata viewer showing the embedded graph.

Graph vs Workflow

Graphs are used by the backend and contain minimal data. Workflows are an enriched data format that includes a representation of the graph plus extra information, including things like:

  • Title, description, author, etc
  • Node positions
  • Custom node and field labels

This new feature embeds the graph in every image - including images generated on the Generation or Canvas tabs.
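If you want to pull the embedded data back out yourself, a hedged sketch with Pillow looks like this (the metadata key names below are assumptions for illustration - inspect img.info to see the exact keys your version writes):

    import json
    from PIL import Image

    img = Image.open("invoke_output.png")
    workflow_json = img.info.get("invokeai_workflow")  # enriched workflow, when present (assumed key)
    graph_json = img.info.get("invokeai_graph")        # raw execution graph (assumed key)

    if workflow_json or graph_json:
        data = json.loads(workflow_json or graph_json)
        print(sorted(data.keys()))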

Canvas Caveat

This functionality is available only for individual canvas generations - not the full composition. Why is that?

Consider what goes into a full canvas composition. It's the product of any number of graphs, with any amount of drawing and erasing between each graph execution. It's not possible to consolidate this into a single graph.

When you generate on canvas, your images for the given bounding box are added to a staging area, which allows you to cycle through images and commit or discard the image. The staging area also allows you to save a candidate generation. It is these images that can be loaded as a workflow, because they are the product of a single graph execution.

👷 Other Fixes and Enhancements

  • Min/max LoRA weight values extended (-10 to +10) @H0onnn
  • Denoising strength and layer opacity are retained when sending image to initial image @steffy-lo
  • SDXL T2I Adapter only blocks invoking when dimensions aren't a multiple of 32 (was erroneously 64)
  • Improved UX when manipulating edges in workflows
  • Connected inputs on nodes collapse, hiding the nonfunctional UI component
  • Use ctrl/cmd-shift-v to paste copied nodes with input edges
  • Docs updates @hsm207
  • Fix: visible seams when outpainting
  • Fix: edge case that could prevent workflows from loading if user hadn't opened the workflows tab yet
  • Fix: minor jank/inefficiency with control adapter auto-process (control layers only)
  • Internal: utility to create graph objects without going crazy
  • Internal: rewritten connection validation logic for workflows with full test coverage
  • Internal: rewritten edge connection interactions
  • Internal: revised field type format

💾 Installation and Updating

To install or update to v4.2.2post1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.1...v4.2.2post1

InvokeAI - v4.2.2

Published by psychedelicious 5 months ago

This release brings many fixes and enhancements, including two long-awaited features: undo/redo in workflows and load workflow from any image.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

📈 Patch Notes for v4.2.2

✨ Undo/redo in Workflows

Undo/redo is now available in the workflow editor. There's some amount of tuning to be done with how actions are grouped.

For example, when you move a node around, do we allow you to undo each pixel of movement, or do we group the position changes as one action? When you are typing a prompt, do we undo each letter, word, or the whole change at once?

Currently, we group like changes together. It's possible some things are grouped when they shouldn't be, or should be grouped but are not. Your feedback will be very useful in tuning the behaviour so it undoes the right changes.

✨ Load Workflow from Any Image

Starting with v4.2.2, graphs are embedded in all images generated by Invoke. Images generated in the workflow editor also have the enriched workflow embedded separately. The Load Workflow button will load the enriched workflow if it exists, else it will load the graph.

You'll see a new Graph tab in the metadata viewer showing the embedded graph.

Graph vs Workflow

Graphs are used by the backend and contain minimal data. Workflows are an enriched data format that includes a representation of the graph plus extra information, including things like:

  • Title, description, author, etc
  • Node positions
  • Custom node and field labels

This new feature embeds the graph in every image - including images generated on the Generation or Canvas tabs.

Canvas Caveat

This functionality is available only for individual canvas generations - not the full composition. Why is that?

Consider what goes into a full canvas composition. It's the product of any number of graphs, with any amount of drawing and erasing between each graph execution. It's not possible to consolidate this into a single graph.

When you generate on canvas, your images for the given bounding box are added to a staging area, which allows you to cycle through images and commit or discard the image. The staging area also allows you to save a candidate generation. It is these images that can be loaded as a workflow, because they are the product of a single graph execution.

👷 Other Fixes and Enhancements

  • Min/max LoRA weight values extended (-10 to +10) @H0onnn
  • Denoising strength and layer opacity are retained when sending image to initial image @steffy-lo
  • SDXL T2I Adapter only blocks invoking when dimensions aren't a multiple of 32 (was erroneously 64)
  • Improved UX when manipulating edges in workflows
  • Connected inputs on nodes collapse, hiding the nonfunctional UI component
  • Use ctrl/cmd-shift-v to paste copied nodes with input edges
  • Docs updates @hsm207
  • Fix: visible seams when outpainting
  • Fix: edge case that could prevent workflows from loading if user hadn't opened the workflows tab yet
  • Fix: minor jank/inefficiency with control adapter auto-process (control layers only)
  • Internal: utility to create graph objects without going crazy
  • Internal: rewritten connection validation logic for workflows with full test coverage
  • Internal: rewritten edge connection interactions
  • Internal: revised field type format

💾 Installation and Updating

To install or update to v4.2.2, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

What's Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.1...v4.2.2