InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
APACHE-2.0 License
Published by psychedelicious 5 months ago
This patch release brings a handful of fixes, plus docs and translation updates.
If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.
To install or update to v4.2.1, download the installer and follow the installation instructions.
To update, select the same installation location. Your user data (images, models, etc) will be retained.
See this FAQ.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.0...v4.2.1
Published by psychedelicious 6 months ago
Since the very beginning, Invoke has been innovating where it matters for creatives. Today, we're excited to do it again with Control Layers.
Invoke 4.2 brings a number of enhancements and fixes, with the addition of a major new feature - Control Layers.
Integrating some of the latest open-source research, creatives can use Control Adapters, Image Prompts, and regional guidance to articulate and control the generation process from a single panel. With regional guidance, you can define specific regions and apply a positive prompt, negative prompt, or any number of IP Adapters to each masked region. Control Adapters (ControlNet & T2I Adapters) and an Initial Image are visualized on the new Control Layers canvas.
You can read more about how to use Control Layers in the Control Layers documentation.
Also known as the "who moved my 🧀?" section, this list details where certain features have moved.
To install or update to v4.2.0, download the installer and follow the installation instructions.
To update, select the same installation location. Your user data will not be touched.
See this FAQ.
`outputs/tensors` at startup time by @lstein in https://github.com/invoke-ai/InvokeAI/pull/6246
`data-testid`s, fix canvas toolbar align, add invert scroll checkbox to CL settings by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/6324
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.1.0...v4.2.0
Published by psychedelicious 6 months ago
This is a beta release. There may be some hiccups, but overall, it is purring along nicely.
Control Layers give you control over specific areas of the image. Draw a mask and set a positive prompt, negative prompt, or any number of IP Adapters to be applied to the masked region. Control Adapters (ControlNet & T2I Adapters) and the Initial Image are visualized on the canvas.
Full documentation to be included with the full release.
Your feedback is greatly appreciated as we continue to iterate on Control Layers.
You may get a white screen on first launch if you were testing the alpha release. This won't be a problem for users updating from the last stable release (v4.1.0). If you encounter this, reset the browser storage by opening your browser's developer tools console on the Invoke tab and running:

```js
indexedDB.deleteDatabase('invoke')
```

Then reload the page.
To install or update to v4.2.0b2, download the installer and follow the installation instructions.
To update, select the same installation location. Your user data will not be touched.
See this FAQ.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.0b1...v4.2.0b2
Published by psychedelicious 6 months ago
This is a beta release. There may be some hiccups, but overall, it is purring along nicely.
Generation, Canvas, Workflows, Models, and Queue
Control Layers give you control over specific areas of the image. Draw a mask and set a positive prompt, negative prompt, or any number of IP Adapters to be applied to the masked region. Control Adapters (ControlNet & T2I Adapters) and the Initial Image are visualized on the canvas.
Full documentation to be included with the full release.
Your feedback is greatly appreciated as we continue to iterate on Control Layers.
After using Send to Image to Image, Send to Unified Canvas, or doing anything that adds a layer, the UI can get stuck. If you run into this, you'll need to reset the UI to fix it. This will be fixed in the next release.

You may get a white screen on first launch if you were testing the alpha release. This won't be a problem for users updating from the last stable release (v4.1.0). If you encounter this, reset the browser storage by opening your browser's developer tools console on the Invoke tab and running:

```js
indexedDB.deleteDatabase('invoke')
```

Then reload the page.
To install or update to v4.2.0b1, download the installer and follow the installation instructions.
To update, select the same installation location. Your user data will not be touched.
See this FAQ.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.0a4...v4.2.0b1
Published by psychedelicious 6 months ago
This is an alpha release. We suggest backing up your database in case there are any issues and you need to roll back.
Control Layers give you control over specific areas of the image. Draw a mask and set a positive prompt, negative prompt, or any number of IP Adapters to be applied to the masked region. Control Adapters (ControlNet & T2I Adapters) are visualized on the canvas.
Full documentation to be included with the full release.
Your feedback is greatly appreciated as we continue to iterate on Control Layers.
These issues will be fixed for the full release.
To install or update to v4.2.0a4, download the installer and follow the installation instructions.
To update, select the same installation location. Your user data will not be touched.
See this FAQ.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.0a3...v4.2.0a4
Published by psychedelicious 6 months ago
This is an alpha release. We suggest backing up your database in case there are any issues and you need to roll back.
Regional Control (name may change) gives you control over specific areas of the image. Draw a mask and set a positive prompt, negative prompt, or any number of IP Adapters to be applied to the masked region.
To support this powerful feature, we are introducing a new canvas editor. Here's a brief demo:
https://github.com/invoke-ai/InvokeAI/assets/4822129/4bf5ee96-126d-4048-ab0f-54c62b664403
Full documentation to be included with the full release.
Your feedback is greatly appreciated as we continue to iterate on Regional Control.
These issues will be fixed for the full release.
To install or update to v4.2.0a3, download the installer and follow the installation instructions.
To update, select the same installation location. Your user data will not be touched.
See this FAQ.
outputs/tensors
at startup time by @lstein in https://github.com/invoke-ai/InvokeAI/pull/6246
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.0a2...v4.2.0a3
Published by psychedelicious 6 months ago
This is an alpha release. We suggest backing up your database in case there are any issues and you need to roll back.
Regional Control (name may change) gives you control over specific areas of the image. Draw a mask and set a positive prompt, negative prompt, or any number of IP Adapters to be applied to the masked region.
To support this powerful feature, we are introducing a new canvas editor. Here's a brief demo:
https://github.com/invoke-ai/InvokeAI/assets/4822129/4bf5ee96-126d-4048-ab0f-54c62b664403
Full documentation to be included with the full release.
Your feedback is greatly appreciated as we continue to iterate on Regional Control.
These issues will be fixed for the full release.
To install or update to v4.2.0a2, download the installer and follow the installation instructions.
To update, select the same installation location. Your user data will not be touched.
See this FAQ.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.1.0...v4.2.0a2
Published by psychedelicious 6 months ago
Invoke v4.1.0 brings many fixes and enhancements. The big-ticket item is the Style and Composition IP Adapter.
IP Adapter uses an image as a prompt. Images have two major components - their style and their composition - and you can choose either or both when using IP Adapter.
Use the new `IP Adapter Method` dropdown to select Full, Style, or Composition. The setting is applied per IP Adapter. You may need to delete and re-add active IP Adapters to see the dropdown.
"a fierce wolf in an alpine forest", all using the same seed - note how the Full method turns the wolf into a mouse-canine hybrid
Shout-out to @blessedcoolant for this feature!
`multipleOf` for invocations (for example, the Noise invocation's width and height have a step of 8)
`context.images.get_path(image_name: str, thumbnail: bool)`
@fieldOfView
`knip` config @webpro

To install or update to v4.1.0, download the installer and follow the installation instructions.
To update, select the same installation location. Your user data will not be touched.
See this FAQ.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.0.4...v4.1.0
Published by psychedelicious 7 months ago
🚨 v4 has some major changes. Please read the patch notes. 🚨
This patch release includes the following changes:
To install or update to v4.0.4, download the installer and follow the installation instructions. To update, select the same installation location.
We've simplified and streamlined installation, making it much faster and more reliable:
invokeai.yaml
The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:
`<` key in any prompt box
`Scan Folder` instead
When you first run v4, it may take a few minutes to start up as it does a one-time hash of all of your model files.
Do not panic.
Hashes provide a stable identifier for a model that is the same across every platform.
🚨 If you don't care about this, you can press Ctrl+C to interrupt the process and disable hashing by setting `hashing_algorithm: random` in `invokeai.yaml`.
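For reference, the override is a one-line fragment in `invokeai.yaml` (the rest of your config stays as-is):

```yaml
# invokeai.yaml - skip the one-time model hashing pass
hashing_algorithm: random
```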
The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple "passes", greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.
The compositing settings on canvas allow for control over the gradient denoising process.
Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.
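To give a rough intuition for the idea (an illustrative sketch, not Invoke's actual implementation): instead of denoising the masked region at full strength and compositing in separate passes, a soft per-pixel mask can scale how strongly each pixel takes the edit, so the result blends smoothly into the surrounding image.

```python
def blend_with_gradient(original, edited, mask):
    """Blend an edited image into the original using a soft mask.

    Each value in `mask` is in [0, 1]: 1.0 means "fully use the edit",
    0.0 means "keep the original", and intermediate values produce the
    smooth falloff that avoids visible seams at region boundaries.
    All three inputs are flat lists of pixel intensities.
    """
    return [o * (1.0 - m) + e * m for o, e, m in zip(original, edited, mask)]

# A hard 0/1 mask produces an abrupt seam; a gradient mask ramps smoothly.
original = [10.0, 10.0, 10.0, 10.0]
edited = [90.0, 90.0, 90.0, 90.0]
gradient = [0.0, 0.25, 0.75, 1.0]
print(blend_with_gradient(original, edited, gradient))  # [10.0, 30.0, 70.0, 90.0]
```

The same falloff idea applied to denoising strength (rather than to a simple pixel blend) is what lets a single generation pass stay coherent with its surroundings.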
🚨 Inpainting models on Canvas sometimes kinda give up and output mush. We have a fix en route, but it will need to wait for 4.1.0.
Scan Folder
Many small bug fixes, resolved papercuts, and warm fuzzies. Shouting out just a few notable goodies from the community:
`torch` and `diffusers` deps @Malrama

As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:
Learn more on the Invoke Training repo, as well as our YT video on getting started
Follow these steps. If you are still missing some models, please create an issue on GitHub or ask for help on discord.
v4.0.0 is versioned as a major release due to breaking changes:
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.0.2...v4.0.4
Published by psychedelicious 7 months ago
🚨 v4 has some major changes. Please read the patch notes. 🚨
🚨 🚨 🚨 Yes - those patch notes 👆 🚨 🚨 🚨
This patch release includes these changes:
Scan Folder
It also includes one notable feature:
We've simplified and streamlined installation, making it much faster and more reliable:
invokeai.yaml
The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:
`<` key in any prompt box
`Scan Folder` instead
When you first run v4, it may take a few minutes to start up as it does a one-time hash of all of your model files.
Do not panic.
Hashes provide a stable identifier for a model that is the same across every platform.
🚨 If you don't care about this, you can press Ctrl+C to interrupt the process and disable hashing by setting `hashing_algorithm: random` in `invokeai.yaml`.
The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple "passes", greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.
The compositing settings on canvas allow for control over the gradient denoising process.
Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.
🚨 Inpainting models on Canvas sometimes kinda give up and output mush. We have a fix en route, but it will need to wait for 4.1.0.
Scan Folder
Many small bug fixes, resolved papercuts, and warm fuzzies. Shouting out just a few notable goodies from the community:
`torch` and `diffusers` deps @Malrama

As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:
Learn more on the Invoke Training repo, as well as our YT video on getting started
🚨 To install or upgrade to version 4.0, download the zip file from the release notes ("Assets" section), unzip it, and follow the installation instructions. For upgrades, select the same installation location.
Follow these steps. If you are still missing some models, please create an issue on GitHub or ask for help on discord.
v4.0.0 is versioned as a major release due to breaking changes:
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.0.1...v4.0.2
Published by hipsterusername 7 months ago
🚨 4.0.0 has some major changes. Please read the patch notes. 🚨
🚨 🚨 🚨 Yes - those patch notes 👆 🚨 🚨 🚨
We've simplified and streamlined installation, making it much faster and more reliable:
invokeai.yaml
The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:
`<` key in any prompt box
`Scan Folder` instead
When you first run v4.0.0, it may take a few minutes to start up as it does a one-time hash of all of your model files.
Do not panic.
Hashes provide a stable identifier for a model that is the same across every platform.
🚨 If you don't care about this, you can press Ctrl+C to interrupt the process and disable hashing by setting `hashing_algorithm: random` in `invokeai.yaml`.
The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple "passes", greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.
The compositing settings on canvas allow for control over the gradient denoising process.
Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.
🚨 Inpainting models on Canvas sometimes kinda give up and output mush. We have a fix en route, but it will need to wait for 4.1.0.
Many small bug fixes, resolved papercuts, and warm fuzzies. Shouting out just a few notable goodies from the community:
`torch` and `diffusers` deps @Malrama

4.0.1 Fixes
As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:
Learn more on the Invoke Training repo, as well as our YT video on getting started
🚨 To install or upgrade to version 4.0, download the zip file from the release notes ("Assets" section), unzip it, and follow the installation instructions. For upgrades, select the same installation location.
Follow these steps. If you are still missing some models, please create an issue on GitHub or ask for help on discord.
v4.0.0 is versioned as a major release due to breaking changes:
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!
`defaultModel` by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5866
`config_path` in yaml -> DB migration by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5905
`always_run` input to checks & tests, use this on release workflow by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5929
`ruff` by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/6012
`.LoRA` by @skunkworxdark in https://github.com/invoke-ai/InvokeAI/pull/6031
`MALLOC_MMAP_THRESHOLD_` env var by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/6059
`session_processor_default._process()` by @lstein in https://github.com/invoke-ai/InvokeAI/pull/6095
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/3.7.0...v4.0.1
Published by psychedelicious 7 months ago
🚨 4.0.0 has some major changes. Please read the patch notes. 🚨
🚨 🚨 🚨 Yes - those patch notes 👆 🚨 🚨 🚨
We've simplified and streamlined installation, making it much faster and more reliable:
invokeai.yaml
The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:
`<` key in any prompt box
`Scan Folder` instead
When you first run v4.0.0, it may take a few minutes to start up as it does a one-time hash of all of your model files.
Do not panic.
Hashes provide a stable identifier for a model that is the same across every platform.
🚨 If you don't care about this, you can press Ctrl+C to interrupt the process and disable hashing by setting `hashing_algorithm: random` in `invokeai.yaml`.
The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple "passes", greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.
The compositing settings on canvas allow for control over the gradient denoising process.
Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.
🚨 Inpainting models on Canvas sometimes kinda give up and output mush. We have a fix en route, but it will need to wait for 4.1.0.
Many small bug fixes, resolved papercuts, and warm fuzzies. Shouting out just a few notable goodies from the community:
`torch` and `diffusers` deps @Malrama

As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:
Learn more on the Invoke Training repo, as well as our YT video on getting started
🚨 To install or upgrade to version 4.0, download the zip file from the release notes ("Assets" section), unzip it, and follow the installation instructions. For upgrades, select the same installation location.
v4.0.0 is versioned as a major release due to breaking changes:
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!
`defaultModel` by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5866
`config_path` in yaml -> DB migration by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5905
`always_run` input to checks & tests, use this on release workflow by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5929
`ruff` by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/6012
`.LoRA` by @skunkworxdark in https://github.com/invoke-ai/InvokeAI/pull/6031
`MALLOC_MMAP_THRESHOLD_` env var by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/6059
`session_processor_default._process()` by @lstein in https://github.com/invoke-ai/InvokeAI/pull/6095
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/3.7.0...v4.0.0
Published by psychedelicious 7 months ago
This is a Release Candidate. We strongly suggest backing up your database before testing to prevent data loss in case of any issues.
Please let us know if you run into anything unexpected.
🎉 Barring any major issues, this will be the last RC before v4.0.0! 🎉
`Scan Folder` instead
We've simplified and streamlined installation, making it much faster and more reliable:
invokeai.yaml
The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:
`<` key in any prompt box
`Scan Folder` instead
When you first run v4.0.0, it may take a few minutes to start up as it does a one-time hash of all of your model files.
Do not panic.
Hashes provide a stable identifier for a model that is the same across every platform.
If you don't care about this, you can press Ctrl+C to interrupt the process and disable hashing by setting `hashing_algorithm: random` in `invokeai.yaml`.
The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple "passes", greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.
The compositing settings on canvas allow for control over the gradient denoising process.
Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.
Many small bug fixes, resolved papercuts, and warm fuzzies. Shouting out some notable goodies from the community:
`torch` and `diffusers` deps @Malrama

As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:
Learn more on the Invoke Training repo.
To install or upgrade to version 4.0, download the zip file from the release notes ("Assets" section), unzip it, and follow the installation instructions. For upgrades, select the same installation location.
💾 Download Installer
v4.0.0 is versioned as a major release due to breaking changes:
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!
`.LoRA` by @skunkworxdark in https://github.com/invoke-ai/InvokeAI/pull/6031
`MALLOC_MMAP_THRESHOLD_` env var by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/6059
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.0.0rc5...v4.0.0rc6
Published by hipsterusername 7 months ago
This is a Release Candidate. We strongly suggest backing up your database before testing to prevent data loss in case of any issues.
Please let us know if you run into anything unexpected.
RC5 has improved the default hashing experience, updated default ControlNet Processor quality for SDXL outputs, and addressed other minor bugs/issues found in RC testing.
A new node has also been added for masking by ID.
The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:
When you first run v4.0.0, it will take a while to start up as it does a one-time hash of all of your model files.
Do not panic.
Hashes provide a stable identifier for a model that is the same across every platform.
If you don't care about this, you can disable the hashing using the `hashing_algorithm` setting in `invokeai.yaml`.
The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple "passes", greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.
The compositing settings on canvas allow for control over the gradient denoising process.
Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.
As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:
You can learn more about Invoke Training at https://github.com/invoke-ai/invoke-training
To install or upgrade to version 4.0, download the zip file from the release notes ("Assets" section), unpack it, and follow the installation instructions. For upgrades, select the same installation location.
💾 Download Installer
v4.0.0 is versioned as a major release due to breaking changes:
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!
`ruff` by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/6012
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.0.0rc4...v4.0.0rc5
Published by brandonrising 7 months ago
This is a Release Candidate. We strongly suggest backing up your database before testing to prevent data loss in case of any issues.
Please let us know if you run into anything unexpected.
We are now updated to use diffusers 0.27.0 and PyTorch 2.2.1!
In RC4, the configs managed in `invokeai.yaml` are handled differently within the app. As a consequence, we will no longer support passing all configs as args on the `invokeai-web` CLI command. Instead, configs can be passed in via environment variables in the form `INVOKEAI_<name_of_config>`.
For example:
```sh
INVOKEAI_REMOTE_API_TOKENS="[{\"url_regex\":\"huggingface.co/.*\", \"token\":\"example\"}]" invokeai-web
```
As seen in the example, JSON notation can be used for any config properties that are more complicated than a standard string.
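The pattern can be sketched in a few lines (a minimal illustration, not Invoke's actual config loader): a config key maps to an upper-cased, `INVOKEAI_`-prefixed environment variable, and values more complex than a plain string are supplied as JSON.

```python
import json
import os

def read_invokeai_setting(name, default=None):
    """Look up a config value from an INVOKEAI_-prefixed environment variable.

    Plain strings are returned as-is; values that parse as JSON (lists,
    dicts, numbers, booleans) are decoded, so complex settings can be
    passed on the command line as shown in the example above.
    """
    raw = os.environ.get(f"INVOKEAI_{name.upper()}")
    if raw is None:
        return default
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return raw  # not JSON - treat it as an ordinary string

# Mirrors the remote_api_tokens example above:
os.environ["INVOKEAI_REMOTE_API_TOKENS"] = '[{"url_regex": "huggingface.co/.*", "token": "example"}]'
tokens = read_invokeai_setting("remote_api_tokens")
print(tokens[0]["url_regex"])  # huggingface.co/.*
```

The JSON fallback is what lets a single env-var mechanism cover both simple scalars and structured settings like the token list.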
Along with revamping how we manage configs, we've also removed the need for the `invokeai-configure` script, which was previously required before installation.
RC1 supported the setting `skip_model_hash` in `invokeai.yaml`. In RC2, this is replaced by a more flexible setting, `hashing_algorithm`.
The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:
When you first run v4.0.0, it will take a while to start up as it does a one-time hash of all of your model files.
Do not panic.
Hashes provide a stable identifier for a model that is the same across every platform.
If you don't care about this, you can disable the hashing using the `hashing_algorithm` setting in `invokeai.yaml`.
The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple "passes", greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.
The compositing settings on canvas allow for control over the gradient denoising process.
Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.
As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:
You can learn more about Invoke Training at https://github.com/invoke-ai/invoke-training
To install or upgrade to version 4.0, download the zip file from the release notes ("Assets" section), unpack it, and follow the installation instructions. For upgrades, select the same installation location.
💾 Download Installer
v4.0.0 is versioned as a major release due to breaking changes:
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!
`defaultModel` by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5866
`config_path` in yaml -> DB migration by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5905
`always_run` input to checks & tests, use this on release workflow by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5929
Full Changelog Since Last Release Candidate: https://github.com/invoke-ai/InvokeAI/compare/v4.0.0rc2...v4.0.0rc4
Published by brandonrising 7 months ago
This is a Release Candidate. We strongly suggest backing up your database before testing to prevent data loss in case of any issues.
Please let us know if you run into anything unexpected.
RC1 supported the setting `skip_model_hash` in `invokeai.yaml`. In RC2, this is replaced by a more flexible setting, `hashing_algorithm`.
The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:
`<` key in any prompt box

When you first run v4.0.0, it will take a while to start up as it does a one-time hash of all of your model files.
Do not panic.
Hashes provide a stable identifier for a model that is the same across every platform.
If you don't care about this, you can disable the hashing using the `hashing_algorithm` setting in `invokeai.yaml`.
The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple "passes", greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.
The compositing settings on canvas allow for control over the gradient denoising process.
Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.
As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:
You can learn more about Invoke Training at https://github.com/invoke-ai/invoke-training
To install or upgrade to version 4.0, download the zip file from the release notes ("Assets" section), unpack it, and follow the installation instructions. For upgrades, select the same installation location.
💾 Download Installer
v4.0.0 is versioned as a major release due to breaking changes:
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!
always_run input to checks & tests, use this on release workflow by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5929
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.0.0rc1...v4.0.0rc2
Published by hipsterusername 7 months ago
This is a Release Candidate. We strongly suggest backing up your database before testing to prevent data loss in case of any issues.
Please let us know if you run into anything unexpected.
The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:
< key in any prompt box.

When you first run v4.0.0, it will take a while to start up as it does a one-time hash of all of your model files. Do not panic. Hashes provide a stable identifier for a model that is the same across every platform. If you don't care about this, you can disable the hashing using the skip_model_hash setting in invokeai.yaml.
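A minimal invokeai.yaml fragment for this release candidate might look like the following. The boolean type is an assumption based on the setting's name — verify it against the config reference for your version:

```yaml
# invokeai.yaml — user settings (sketch; other keys omitted)
# Assumed boolean; skips the one-time model hashing at startup.
skip_model_hash: true
```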
The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple "passes", greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.
The compositing settings on canvas allow for control over the gradient denoising process.
Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.
As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:
You can learn more about Invoke Training at https://github.com/invoke-ai/invoke-training
To install or upgrade to version 4.0, download the zip file from the release notes ("Assets" section), unpack it, and follow the installation instructions. For upgrades, select the same installation location.
Download Installer
v4.0.0 is versioned as a major release due to breaking changes:
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!
defaultModel by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5866
config_path in yaml -> DB migration by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5905
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/3.7.0...v4.0.0rc1
Published by Millu 8 months ago
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry-leading web interface and also serves as the foundation for multiple commercial products.
You can learn more about Invoke and our mission by visiting https://www.invoke.com/about, or by joining our Discord server!
{"detail":"Not Found"}. To fix this error, download the installer and re-run it in the same location as your existing installation.

pip install --force-reinstall torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
pip install -U typing-extensions
pip install -U fsspec==2023.5.0
To install version 3.6.4, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you already have Invoke version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting "Update Invoke" to upgrade, or you can download and run the installer in your existing Invoke installation location.
Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade.
Download the installer: InvokeAI-installer-v3.7.0.zip
There are a number of important changes for contributors to be aware of.
The Model Manager is partway through a redesign, to make it more capable and maintainable. The redesign will support a much better user experience for downloading, installing and managing models. The changes are in the repo, but implemented separately from the user-facing app.
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/3.6.3...3.7.0
Published by Millu 8 months ago
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry-leading web interface and also serves as the foundation for multiple commercial products.
You can learn more about Invoke and our mission by visiting https://www.invoke.com/about, or by joining our Discord server!
{"detail":"Not Found"}. To fix this error, download the installer and re-run it in the same location as your existing installation.

pip install --force-reinstall torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
pip install -U typing-extensions
pip install -U fsspec==2023.5.0
Set png_compress_level to 1 in your invoke.yaml file.

To install version 3.6.3, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
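For reference, a sketch of the relevant invoke.yaml fragment (surrounding keys omitted; the trade-off described in the comment is the usual PNG behavior, not something stated in these notes):

```yaml
# invoke.yaml — sketch; controls PNG compression when images are saved.
# 1 = fastest save with larger files; higher levels compress more but save slower.
png_compress_level: 1
```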
If you already have Invoke version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting "Update Invoke" to upgrade, or you can download and run the installer in your existing Invoke installation location.
Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade.
Download the installer: InvokeAI-installer-v3.6.3.zip
There are a number of important changes for contributors to be aware of.
The Model Manager is partway through a redesign, to make it more capable and maintainable. The redesign will support a much better user experience for downloading, installing and managing models. The changes are in the repo, but implemented separately from the user-facing app.
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.6.2...3.6.3
Published by Millu 9 months ago
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry-leading web interface and also serves as the foundation for multiple commercial products.
You can learn more about Invoke and our mission by visiting https://www.invoke.com/about, or by joining our Discord server!
{"detail":"Not Found"}. To fix this error, download the installer and re-run it in the same location as your existing installation.

pip install --force-reinstall torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
pip install -U typing-extensions
pip install -U fsspec==2023.5.0
Set png_compress_level to 1 in your invoke.yaml file.

To install version 3.6.3, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you already have Invoke version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting "Update Invoke" to upgrade, or you can download and run the installer in your existing Invoke installation location.
Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade.
Download the installer: InvokeAI-installer-v3.6.3rc1.zip
There are a number of important changes for contributors to be aware of.
The Model Manager is partway through a redesign, to make it more capable and maintainable. The redesign will support a much better user experience for downloading, installing and managing models. The changes are in the repo, but implemented separately from the user-facing app.
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.6.0...v3.6.3