InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
APACHE-2.0 License
Published by lstein over 1 year ago
We are pleased to announce a bugfix update to InvokeAI with the release of version 2.3.3.
This is a bugfix and minor feature release.
Since version 2.3.2 the following bugs have been fixed:

- InvokeAI now reports whether `xformers` is active, and will default to `xformers` enabled if the library is detected.
- A corrupted `.next_prefix` file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
- An error in the `invoke.sh` script has been corrected.
- An issue involving the `embeddings` directory has been fixed.
- Unless `--model` is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
- The `invoke.sh` launcher now uses a prettier console-based interface. To take advantage of it, install the `dialog` package using your package manager (e.g. `sudo apt install dialog`).
- A VAE may now be paired with a checkpoint model by giving it the same basename, for example:

```
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt  # or my-favorite-model.vae.safetensors
```
To install or upgrade to InvokeAI 2.3.3 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
To update from 2.3.1 or 2.3.2 you may use the "update" option (choice 6) in the invoke.sh
/invoke.bat
launcher script and choose the option to update to 2.3.3.
Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai
directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate "yes".
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI
. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.3
. To upgrade to an xformers
version if you are not currently using xformers
, use pip install --use-pep517 --upgrade InvokeAI[xformers]
. You can see which versions are available by going to The PyPI InvokeAI Project Page
These are known bugs in the release:

- The Ancestral (`k_dpmpp_2a`) sampler is not yet implemented for `diffusers` models and will disappear from the WebUI Sampler menu when a `diffusers` model is selected.
- Windows Defender will sometimes raise a Trojan or backdoor alert for the `codeformer.pth` face restoration model, as well as the `CIDAS/clipseg` and `runwayml/stable-diffusion-v1.5` models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
- `negativeprompts.safetensors` by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3045
- A new dialog-based console interface for `invoke.sh` on Linux systems with "dialog" installed, by Joshua Kimsey.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.2.post1...v2.3.3-rc1
Many thanks to @psychedelicious, @blessedcoolant (Vic), @JPPhoto (Jonathan Pollack), @ebr (Eugene Brodsky), @JoshuaKimsey, @EgoringKosmos, and our crack team of Discord moderators, @gogurtenjoyer and @whosawhatsis, for all their contributions to this release.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.2.post1...v2.3.3
Published by lstein over 1 year ago
We are pleased to announce a bugfix update to InvokeAI with the release of version 2.3.2.
This is a bugfix and minor feature release.
Since version 2.3.1 the following bugs have been fixed:

- A problem that occurred when both `--no-nsfw_checker` and `--ckpt_convert` were turned on.
- We have upgraded the `diffusers`, `transformers`, `safetensors` and `accelerate` libraries upstream. We hope that this will fix the `assertion NDArray > 2**32` issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.

As part of the upgrade to `diffusers`, the location of the diffusers-based models has changed from `models/diffusers` to `models/hub`. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your `models/diffusers` directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.
2.3.2 introduces a new command-line only script called invokeai-batch
that can be used to generate hundreds of images from prompts and settings that vary systematically. This can be used to try the same prompt across multiple combinations of models, steps, CFG settings and so forth. It also allows you to template prompts and generate a combinatorial list like:
```
a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...
```
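The expansion above is a Cartesian product of the template fields. A generic illustration (the field lists are hypothetical examples, and this is not the `invokeai-batch` implementation itself):

```python
from itertools import product

# Hypothetical template fields; invokeai-batch reads these from a template file.
subjects = ["a shack", "a chalet"]
places = ["in the mountains", "in the desert"]
styles = ["photograph", "watercolor", "oil painting"]

# Every combination of the three fields, matching the expansion shown above.
prompts = [f"{s} {p}, {style}" for s, p, style in product(subjects, places, styles)]
```

With 2 subjects, 2 places, and 3 styles, the product yields 12 prompts; adding another option to any field multiplies the total accordingly.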
If you have a system with multiple GPUs, or a single GPU with lots of VRAM, you can parallelize generation across the combinatorial set, reducing wait times and using your system's resources efficiently (make sure you have good GPU cooling).
To try `invokeai-batch` out, launch the "developer's console" using the `invoke` launcher script, or activate the invokeai virtual environment manually. From the console, give the command `invokeai-batch --help` to learn how the script works and to create your first template file for dynamic prompt generation.
To install or upgrade to InvokeAI 2.3.2 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.2.post1.zip
To update from 2.3.1 you may use the "update" option (choice 6) in the invoke.sh
/invoke.bat
launcher script. Alternatively, you may use the installer. When it asks you to confirm the location of the invokeai
directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate "yes".
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI
. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.2
. To upgrade to an xformers
version if you are not currently using xformers
, use pip install --use-pep517 --upgrade InvokeAI[xformers]
. You can see which versions are available by going to The PyPI InvokeAI Project Page
These are known bugs in the release:

- The Ancestral (`k_dpmpp_2a`) sampler is not yet implemented for `diffusers` models and will disappear from the WebUI Sampler menu when a `diffusers` model is selected.
- Windows Defender will sometimes raise a Trojan alert for the `codeformer.pth` face restoration model. As far as we have been able to determine, this is a false positive and can be safely whitelisted.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.1...v2.3.2
Many thanks to @mauwii (Matthias Wilde), @psychedelicious, @blessedcoolant (Vic), @blhook (Pull Shark), and our crack team of Discord moderators, @gogurtenjoyer and @whosawhatsis, for all their contributions to this release.
Published by lstein over 1 year ago
We are pleased to announce a bugfix and quality of life update to InvokeAI with the release of version 2.3.1.
This is primarily a bugfix release, but it does provide several new features that will improve the user experience.
InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
There are three ways of accessing the model management features:
Choose option (5) download and install models from the invoke
launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import.
Command-line users can start this app using the command invokeai-model-install
.
The !install_model
and !convert_model
commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second one converts them into the faster diffusers format before installation.
Internally InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do not need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time you use them.
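The probe can be pictured as shape inspection on well-known tensors. The sketch below is illustrative only (not InvokeAI's actual code): it takes a mapping of parameter names to shape tuples rather than real tensors, and relies on the conventional Stable Diffusion checkpoint layout, where inpainting UNets have a 9-channel input convolution and v2 models cross-attend to 1024-dimensional text embeddings.

```python
def classify_checkpoint(shapes):
    """Infer the model variant from tensor shapes in a Stable Diffusion
    state dict. `shapes` maps parameter names to shape tuples."""
    # Inpainting UNets take 9 input channels: 4 latent + 4 masked image + 1 mask.
    if shapes["model.diffusion_model.input_blocks.0.0.weight"][1] == 9:
        return "inpainting"
    # v2 models cross-attend to 1024-dim text embeddings; v1 models use 768.
    key = "model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight"
    return "v2.x" if shapes[key][1] == 1024 else "v1.x"
```

Because the decision rests on tensor shapes rather than file names, a model named without "inpaint" is still recognized correctly.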
Please see INSTALLING MODELS for more information on model management.
The installer now launches a console-based UI for setting and changing commonly-used startup options:
After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching invoke.sh
/invoke.bat
and entering option (6) change InvokeAI startup options
Command-line users can launch the new configure app using invokeai-configure
.
This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch invoke.sh
or invoke.bat
and choose option (9) update InvokeAI . This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or any released or unreleased version you choose by selecting the tag or branch of the desired version.
Command-line users can run this interface by typing invokeai-update
There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the options --h_symmetry_time_pct
and --v_symmetry_time_pct
(these can be abbreviated to --h_sym
and --v_sym
like all other options).
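Conceptually, the symmetry option is a mirroring step that kicks in once a chosen fraction of the sampling steps has elapsed. A minimal sketch (illustrative only, operating on a plain pixel grid rather than on latents, and assuming an even width):

```python
def mirror_rows(image, step, total_steps, h_sym_time_pct):
    """Once `h_sym_time_pct` of the steps have elapsed, reflect each row's
    left half onto its right half to enforce horizontal symmetry."""
    if step / total_steps < h_sym_time_pct:
        return image  # too early: leave the image untouched
    return [row[: len(row) // 2] + row[: len(row) // 2][::-1] for row in image]
```

A lower `h_sym_time_pct` mirrors earlier, giving the sampler more steps to blend the reflection into a coherent image; a higher value produces a more literal mirror.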
This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout:
Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:
The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh
/invoke.bat
scripts.
We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.
This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0
, and using the compel
library for prompt parsing. See Detailed Change Log for a detailed list of bugs caught and squished.
| Command | Description |
|---|---|
| `invokeai` | Command line interface |
| `invokeai --web` | Web interface |
| `invokeai-model-install` | Model installer with console forms-based front end |
| `invokeai-ti --gui` | Textual inversion, with a console forms-based front end |
| `invokeai-merge --gui` | Model merging, with a console forms-based front end |
| `invokeai-configure` | Startup configuration; can also be used to reinstall support models |
| `invokeai-update` | InvokeAI software updater |
To install or upgrade to InvokeAI 2.3.1, please download the zip file below, unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.1.post2.zip
If you are upgrading from an earlier version of InvokeAI, run the installer and when it asks you to confirm the location of the invokeai
directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate "yes".
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI
. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.1
. To upgrade to an xformers
version if you are not currently using xformers
, use pip install --use-pep517 --upgrade InvokeAI[xformers]
. You can see which versions are available by going to The PyPI InvokeAI Project Page
This will be the last feature release on the 2.3.x branch. The development team is migrating to a new software architecture called Nodes, which will provide enhanced workflow management features as well as a much easier way for community developers to contribute to the project. We anticipate the transition taking 4-8 weeks (spring 2023). Until that time, we will be releasing bugfixes and other minor updates only.
These are known bugs in the release:

- MacOS users generating with `diffusers` models may experience a hard crash with the assertion `NDArray > 2**32`. This appears to be an issue in an upstream library, and currently the only workaround is to install and use legacy `.ckpt/.safetensors` models instead of `diffusers` models. For more information on this bug, see this Issue.
- The Ancestral (`k_dpmpp_2a`) sampler is not yet implemented for `diffusers` models and will disappear from the WebUI Sampler menu when a `diffusers` model is selected. Support will be added in the next `diffusers` library release.
- Windows Defender will sometimes raise a Trojan alert for the `codeformer.pth` face restoration model. As far as we have been able to determine, this is a false positive and can be safely whitelisted.

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
InvokeAI is the product of the loving attention of a large number of Contributors. For this release in particular, we'd like to recognize the combined efforts of @blessedcoolant, who worked tirelessly on the model management interface despite multiple changes in the backend, and Jonathan Pollack (@JPPhoto) for working deep in the bowels of memory management and image generation. Kudos to @damian0815 and Kevin Turner (@keturn) for their improvements on model memory management and prompt parsing, respectively, and many thanks to Matthias Wild (@mauwii) and Eugene Brodsky (@ebr) for their work on package management and installation.
Last but not least, we acknowledge the tireless efforts of Kent Keirsey (@hipsterusername) for his amazing videos, outreach and team management.
- `merge_group` trigger to `test-invoke-pip.yml` by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/2590
- `cuda.get_mem_info` always gets a specific device index, by @keturn in https://github.com/invoke-ai/InvokeAI/pull/2700
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.0...v.2.3.1-rc1
Published by lstein over 1 year ago
We are pleased to announce a features and performance update to InvokeAI with the release of version 2.3.0.
There are multiple internal and external changes in this version of InvokeAI which enhance both the developer and the user experience.
`diffusers` models

Previous versions of InvokeAI supported the original model file format introduced with Stable Diffusion 1.4. In the original format, known variously as "checkpoint", or "legacy" format, there is a single large weights file ending with .ckpt
or .safetensors
. Though this format has served the community well, it has a number of disadvantages, including file size, slow loading times, and a variety of non-standard variants that require special-case code to handle. In addition, because checkpoint files are actually a bundle of multiple machine learning sub-models, it is hard to swap different sub-models in and out, or to share common sub-models. A new format, introduced by the StabilityAI company in collaboration with HuggingFace, is called diffusers
and consists of a directory of individual models. The most immediate benefit of diffusers
is that they load from disk very quickly. A longer term benefit is that in the near future diffusers
models will be able to share common sub-models, dramatically reducing disk space when you have multiple fine-tune models derived from the same base.
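For reference, a `diffusers` model is typically laid out on disk as a directory of sub-models plus an index file (a typical layout following the standard diffusers convention; exact contents vary by pipeline):

```
stable-diffusion-v1-5/
├── model_index.json
├── scheduler/
├── text_encoder/
├── tokenizer/
├── unet/
└── vae/
```

It is this per-component layout that makes fast loading and future sub-model sharing possible.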
When you perform a new install of version 2.3.0, you will be offered the option to install the diffusers
versions of a number of popular SD models, including Stable Diffusion versions 1.5 and 2.1 (including the 768x768 pixel version of 2.1). These will act and work just like the checkpoint versions. Do not be concerned if you already have a lot of ".ckpt" or ".safetensors" models on disk! InvokeAI 2.3.0 can still load these and generate images from them without any extra intervention on your part.
To take advantage of the optimized loading times of diffusers
models, InvokeAI offers options to convert legacy checkpoint models into optimized diffusers
models. If you use the invokeai
command line interface, the relevant commands are:
- `!convert_model` -- Take the path to a local checkpoint file or a URL that is pointing to one, convert it into a `diffusers` model, and import it into InvokeAI's models registry file.
- `!optimize_model` -- If you already have a checkpoint model in your InvokeAI models file, this command will accept its short name and convert it into a like-named `diffusers` model, optionally deleting the original checkpoint file.
- `!import_model` -- Take the local path of either a checkpoint file or a `diffusers` model directory and import it into InvokeAI's registry file. You may also provide the ID of any diffusers model that has been published on the HuggingFace models repository and it will be downloaded and installed automatically.

The WebGUI offers similar functionality for model management.
For advanced users, new command-line options provide additional functionality. Launching invokeai
with the argument --autoconvert <path to directory>
takes the path to a directory of checkpoint files, automatically converts them into diffusers
models and imports them. Each time the script is launched, the directory will be scanned for new checkpoint files to be loaded. Alternatively, the --ckpt_convert
argument will cause any checkpoint or safetensors model that is already registered with InvokeAI to be converted into a diffusers
model on the fly, allowing you to take advantage of future diffusers-only features without explicitly converting the model and saving it to disk.
Please see INSTALLING MODELS for more information on model management in both the command-line and Web interfaces.
`XFormers` Memory-Efficient Crossattention Package

On CUDA (Nvidia) systems, version 2.3.0 supports the `XFormers` library. Once installed, the `xformers` package dramatically reduces the memory footprint of loaded Stable Diffusion model files and modestly increases image generation speed. `xformers` will be installed and activated automatically if you specify a CUDA system at install time.
The caveat with using xformers
is that it introduces slightly non-deterministic behavior, and images generated using the same seed and other settings will be subtly different between invocations. Generally the changes are unnoticeable unless you rapidly shift back and forth between images, but to disable xformers
and restore fully deterministic behavior, you may launch InvokeAI using the --no-xformers
option. This is most conveniently done by opening the file invokeai/invokeai.init
with a text editor, and adding the line --no-xformers
at the bottom.
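Assuming a default installation, the end of `invokeai/invokeai.init` would then look something like this (the first line stands in for whatever options are already present in your file):

```
# ...options already in invokeai/invokeai.init...
--no-xformers
```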
There is now a separate text input box for negative prompts in the WebUI. This is convenient for stashing frequently-used negative prompts ("mangled limbs, bad anatomy"). The [negative prompt]
syntax continues to work in the main prompt box as well.
To see exactly how your prompts are being parsed, launch invokeai
with the --log_tokenization
option. The console window will then display the tokenization process for both positive and negative prompts.
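The `[negative prompt]` bracket syntax can be illustrated with a toy parser. This is a sketch only; the real parsing is done by the compel library and handles far more than this:

```python
import re

def split_prompt(prompt):
    """Separate [bracketed] spans into a negative prompt, leaving the rest
    of the text as the positive prompt."""
    negatives = re.findall(r"\[([^\]]*)\]", prompt)
    positive = re.sub(r"\s*\[[^\]]*\]\s*", " ", prompt).strip()
    return positive, ", ".join(negatives)
```

Either way of supplying negatives, the separate WebUI box or the bracket syntax, ends up steering generation away from the negative terms.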
Version 2.3.0 offers an intuitive user interface for merging up to three Stable Diffusion models. Model merging allows you to mix the behavior of models to achieve very interesting effects. To use this, each of the models must already be imported into InvokeAI and saved in `diffusers` format. Launch the merger using a new menu item in the InvokeAI launcher script (`invoke.sh`, `invoke.bat`) or directly from the command line with `invokeai-merge --gui`. You will be prompted to select the models to merge, the proportions in which to mix them, and the mixing algorithm. The script will create a new merged `diffusers` model and import it into InvokeAI for your use.
See MODEL MERGING for more details.
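The simplest mixing algorithm, a weighted sum, can be sketched as follows. This is illustrative only: real merging operates tensor-by-tensor over entire model state dicts, not over plain floats.

```python
def weighted_sum_merge(theta_a, theta_b, alpha):
    """Blend two models' parameters: alpha=0 keeps model A unchanged,
    alpha=1 yields model B, values in between interpolate linearly."""
    return {key: (1 - alpha) * theta_a[key] + alpha * theta_b[key] for key in theta_a}
```

The merge proportion you choose in the UI plays the role of `alpha` here; intermediate values trade off the two models' behaviors.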
Textual Inversion (TI) is a technique for training a Stable Diffusion model to emit a particular subject or style when triggered by a keyword phrase. You can perform TI training by placing a small number of images of the subject or style in a directory, and choosing a distinctive trigger phrase, such as "pointillist-style". After successful training, the subject or style will be activated by including <pointillist-style>
in your prompt.
Previous versions of InvokeAI were able to perform TI, but it required using a command-line script with dozens of obscure command-line arguments. Version 2.3.0 features an intuitive TI frontend that will build a TI model on top of any diffusers
model. To access training you can launch from a new item in the launcher script or from the command line using invokeai-ti --gui
.
See TEXTUAL INVERSION for further details.
The InvokeAI installer has been upgraded in order to provide a smoother and hopefully more glitch-free experience. In addition, InvokeAI is now packaged as a PyPi project, allowing developers and power-users to install InvokeAI with the command pip install InvokeAI --use-pep517
. Please see Installation for details.
Developers should be aware that the pip
installation procedure has been simplified and that the conda
method is no longer supported at all. Accordingly, the environments_and_requirements
directory has been deleted from the repository.
To install or upgrade to InvokeAI 2.3.0, please download the zip file below, unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you are upgrading from an earlier version of InvokeAI, all you have to do is to run the installer for your platform. When the installer asks you to confirm the location of the invokeai
directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate "yes".
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI
. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.1
. You can see which versions are available by going to The PyPI InvokeAI Project Page
All of InvokeAI's functionality, including the WebUI, command-line interface, textual inversion training and model merging, can all be accessed from the invoke.sh
and invoke.bat
launcher scripts. The menu of options has been expanded to add the new functionality. For the convenience of developers and power users, we have normalized the names of the InvokeAI command-line scripts:
- `invokeai` -- Command-line client
- `invokeai --web` -- Web GUI
- `invokeai-merge --gui` -- Model merging script with graphical front end
- `invokeai-ti --gui` -- Textual inversion script with graphical front end
- `invokeai-configure` -- Configuration tool for initializing the `invokeai` directory and selecting popular starter models

For backward compatibility, the old command names are also recognized, including `invoke.py` and `configure-invokeai.py`. However, these are deprecated and will eventually be removed.
Developers should be aware that the locations of the scripts' source code have changed. The new locations are:
- `invokeai` => `ldm/invoke/CLI.py`
- `invokeai-configure` => `ldm/invoke/config/configure_invokeai.py`
- `invokeai-ti` => `ldm/invoke/training/textual_inversion.py`
- `invokeai-merge` => `ldm/invoke/merge_diffusers`
Developers are strongly encouraged to perform an "editable" install of InvokeAI using pip install -e . --use-pep517
in the Git repository, and then to call the scripts using their 2.3.0 names, rather than executing the scripts directly. Developers should also be aware that several important data files have been relocated into a new directory named invokeai
. This includes the WebGUI's frontend
and backend
directories, and the INITIAL_MODELS.yaml
files used by the installer to select starter models. Eventually all InvokeAI modules will be in subdirectories of invokeai
.
These are known bugs that will not be fixed prior to the release:

- The Ancestral (`k_dpmpp_2a`) sampler is not yet implemented for `diffusers` models and will disappear from the WebUI Sampler menu when a `diffusers` model is selected. Support will be added in the next `diffusers` library release.
- The `k_heun` and `k_dpm_2` schedulers will appear to perform twice as many sampling steps as were requested. This is an artifact of the fact that these schedulers perform two samplings per step and is a cosmetic issue only.

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
InvokeAI is the product of the loving attention of a large number of Contributors. For this release in particular, we'd like to recognize the combined efforts of Kevin Turner (@keturn), who got the diffusers
port past the finish line, Eugene Brodsky (@ebr), for his work on the new installer, and Matthias Wild (@mauwii), for his many significant improvements to the testing pipeline and for setting up the system that uploads releases to the PyPi Python module repository.
We'd also like to call out Jonathan Pollack (@JPPhoto) for tirelessly testing each release candidate, @blessedcoolant and @psychedelicious for their work on the Web UI Model Manager and other UI features, and Kent Keirsey (@hipsterusername) for his amazing videos, outreach and team management.
- `test-invoke-pip.yml` by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1971
- `.dockerignore` and add patchmatch by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1970
- `Dockerfile` by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/2036
- `uname -m` instead of `arch` by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/2110
- `percent_through` calculation for cross-attention control by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/2342
- `.swap()` against diffusers 0.12 by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/2385
- `PYTORCH_ENABLE_MPS_FALLBACK` not set correctly by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/2508
- `build-container.yml` by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/2537
- `pypi_helper.py` by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/2533
- `test-invoke-pip.yml` by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/2524
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.2.4...v2.3.0-rc5
Published by lstein almost 2 years ago
We are pleased to announce a features and bugfix update to InvokeAI with the release of version 2.2.5.
If you are interested in translating InvokeAI to your language, please feel free to reach out to us on Discord.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.2.4...latest
To install InvokeAI 2.2.5 on a new system, please download the zip file below, unzip it, and run the script install.sh
(Macintosh, Linux) or install.bat
(Windows). A walkthrough can be found at Installation Overview .
InvokeAI-installer-v2.2.5p2-linux.zip
InvokeAI-installer-v2.2.5p2-mac.zip
InvokeAI-installer-v2.2.5p2-windows.zip
If you have InvokeAI 2.2.4 installed, you can upgrade it quickly using an update script. Download the zip file below, and unpack it. Place the file update.bat
(Windows) or update.sh
(Linux/Mac) into your invokeai
folder, replacing the update
script that was previously there. Then launch the new update
script from the command line or by double-clicking.
Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
Published by lstein almost 2 years ago
With InvokeAI 2.2, this project now provides enthusiasts and professionals a robust workflow solution for creating AI-generated and human facilitated compositions. Additional enhancements have been made as well, improving safety, ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what's new for InvokeAI.
Version 2.2.4 is a bugfix release. The major user-visible change is that we have overhauled the installation experience to make it faster and more stable. Please see Installation Overview for instructions on using the new installer, and see the .zip files in the Assets section below for the installer for your preferred platform. Note that you will need to install Python 3.9 or 3.10 to use the new installation method.
The new installers are located here. They have been updated 13 December in order to prevent a segfault crash on certain Macintosh systems.
There are a number of installation-related changes that previous InvokeAI users should be aware of:
Everything now lives in a single `invokeai` directory. Previously there were two directories to worry about: the directory that contained the InvokeAI source code and the launcher scripts, and the `invokeai` directory that contained the model files, embeddings, configuration and outputs. With the 2.2.4 release, this dual system is done away with, and everything, including the `invoke.bat` and `invoke.sh` launcher scripts, now lives in a directory named `invokeai`. By default this directory is located in your home directory (e.g. `\Users\yourname` on Windows), but you can select where it goes at install time.
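As a sketch, the consolidated layout described above looks roughly like this (built here in a throwaway temporary directory; the real directory is created by the installer, and file names inside it may differ):

```shell
# Sketch of the single-directory 2.2.4 layout: launchers, init file,
# models, embeddings and outputs all under one invokeai/ root.
root="$(mktemp -d)/invokeai"
mkdir -p "$root/models" "$root/embeddings" "$root/outputs"
touch "$root/invoke.sh" "$root/invoke.bat" "$root/invokeai.init"
find "$root" | sort
```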
InvokeAI-installer-2.2.4-p5-linux.zip
InvokeAI-installer-2.2.4-p5-mac.zip
InvokeAI-installer-2.2.4-p5-windows.zip
After installation, you can delete the install directory (the one that the zip file creates when it unpacks). Do not delete or move the `invokeai` directory!
The `.invokeai` initialization file has been renamed `invokeai/invokeai.init`. You can place frequently-used startup options in this file, such as the default number of steps or your preferred sampler. To keep everything in one place, this file has been moved into the `invokeai` directory and renamed `invokeai.init`.
The easiest route is to download and unpack one of the 2.2.4 installer files. When it asks you for the location of the `invokeai` runtime directory, respond with the path to the directory that contains your 2.2.3 `invokeai`. That is, if `invokeai` lives at `C:\Users\fred\invokeai`, then answer with `C:\Users\fred` and answer "Y" when asked if you want to reuse the directory.
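Put differently, the answer the installer wants is the parent of your existing runtime directory; in shell terms, for the example path above:

```shell
# The installer wants the parent of the existing runtime directory.
dirname /Users/fred/invokeai   # prints /Users/fred
```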
The `update.sh` (`update.bat`) script that came with the 2.2.3 source installer does not know about the new directory layout and won't be fully functional. As they become available, you can update to more recent versions of InvokeAI using the `update.sh` (`update.bat`) script located in the `invokeai` directory. Running it without any arguments installs the most recent version of InvokeAI. Alternatively, you can install specific releases by passing the script the path to the desired release's zip file, which you can find by clicking on the green "Code" button on this repository's home page. Here are some examples:
# 2.2.4 release
update.sh https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v2.2.4.zip
# 2.2.5 release (don't try; it doesn't exist yet!)
update.sh https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v2.2.5.zip
# current development version
update.sh https://github.com/invoke-ai/InvokeAI/archive/main.zip
# feature branch 3d-movies (don't try; it doesn't exist yet!)
update.sh https://github.com/invoke-ai/InvokeAI/archive/3d-movies.zip
HUGGINGFACE_TOKEN) by @ebr in https://github.com/invoke-ai/InvokeAI/pull/1578
cp scripts instead of linking by @tildebyte in https://github.com/invoke-ai/InvokeAI/pull/1765
docker push github action and expand with additional metadata by @ebr in https://github.com/invoke-ai/InvokeAI/pull/1837
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.2.3...v2.2.4
Published by lstein almost 2 years ago
With InvokeAI 2.2, this project now provides enthusiasts and professionals with a robust workflow solution for creating AI-generated and human-facilitated compositions. Additional enhancements have also been made to improve safety, ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what's new for InvokeAI.
The Unified Canvas: The Web UI now features a fully fitted infinite canvas that is capable of outpainting, inpainting, img2img and txt2img so you can streamline and extend your creative workflow. The canvas was rewritten to improve performance greatly and bring support for a variety of features like Paint Brushing, Unlimited History, Real-Time Progress displays and more.
Embedding Management: Easily pull from the top embeddings on Huggingface directly within Invoke, using the embed token to generate the exact style you want. With the ability to use multiple embeds simultaneously, you can easily import and explore different styles within the same session!
Viewer: The Web UI now also features a Viewer that lets you inspect your invocations in greater detail. No more opening the images in your external file explorer, even with large upscaled images!
1 Click Installer Launch: With our official 1-click installation launch, using our tool has never been easier. Our OS specific bundles (Mac M1/M2, Windows, and Linux) will get everything set up for you. Click and get going - It’s now simple to get started with InvokeAI. See Installation.
Model Safety: A checkpoint scanner (picklescan) has been added to the initialization process for new models, helping protect against malicious pickle files.
DPM++2 Experimental Samplers: New samplers have been added! Please note that these are experimental, and are subject to change in the future as we continue to enhance our backend system.
For those installing InvokeAI for the first time, please use this recipe:
For automated installation, open up the "Assets" section below and download one of the InvokeAI-*.zip files. The instructions in the Installation section of the InvokeAI docs will provide you with a guide to which file to download and what to do with it when you get it.
For manual installation download one of the "Source Code" archive files located in the Assets below.
Unpack the file, and enter the InvokeAI directory that it creates. Alternatively, you may clone the source code repository using the command git clone http://github.com/invoke-ai/InvokeAI
and follow the instructions in Manual Installation.
For those wishing to upgrade from an earlier version, please use this recipe:
Download one of the "Source Code" archive files located in the Assets below.
Unpack the file, and enter the InvokeAI directory that it creates.
Alternatively, if you have previously cloned the InvokeAI repository, you may update it by entering the InvokeAI directory and running `git checkout main`, followed by `git pull`.
Select the appropriate environment file for your operating system and GPU hardware. A number of files can be found in a new environments-and-requirements directory:
environment-lin-amd.yml # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml # Linux with an NVIDIA CUDA GPU
environment-mac.yml # Macintoshes with MPS acceleration
environment-win-cuda.yml # Windows with an NVIDIA CUDA GPU
An important step that developers tend to miss! Either copy this environment file to the root directory with the name `environment.yml`, or make a symbolic link from `environment.yml` to the selected environment file:
Macintosh and Linux using a symbolic link:
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
Replace `xxx` and `yyy` with the appropriate OS and GPU codes.
Windows:
copy environments-and-requirements\environment-win-cuda.yml environment.yml
When this is done, confirm that a file environment.yml has been created in the InvokeAI root directory and that it points to the correct file in the environments-and-requirements directory.
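One quick way to perform that check is to read the link back with `readlink`. A sketch in a scratch directory, using the Linux/CUDA file as an arbitrary example (in practice, run the last two commands from the InvokeAI root with your chosen environment file):

```shell
# Demonstration in a throwaway directory: create the link, then verify
# that environment.yml resolves to the intended environment file.
cd "$(mktemp -d)"
mkdir -p environments-and-requirements
touch environments-and-requirements/environment-lin-cuda.yml
ln -sf environments-and-requirements/environment-lin-cuda.yml environment.yml
readlink environment.yml   # prints the file the link points at
```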
Now run the following commands in the InvokeAI directory.
conda env update
conda activate invokeai
python scripts/preload_models.py
Additional installation information, including recipes for installing without Conda, can be found in Manual Installation
Known Bugs
Contributing
Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
Getting Started Guide. Unlike previous versions of InvokeAI we have now moved all development to the main
branch, so please make your pull requests against this branch.
Support
For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.
Published by lstein almost 2 years ago
Read below for the old 2.1.3 release.
The invoke-ai team is excited to be able to share the release of InvokeAI 2.1 - A Stable Diffusion Toolkit, a project that aims to provide enthusiasts and professionals both a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged community driven and open source stable diffusion toolkit. Version 2.1 of the tool introduces multiple new features and performance enhancements.
This 14-minute YouTube video introduces you to some of the new features contained in this release. The following sections describe what's new in the Web interface (WebGUI) and the command-line interface (CLI).
Version 2.1.3 is primarily a bug fix release that improves the installation process and provides enhanced stability and usability.
Update 22 November - updated invokeAI-src-installer-mac.zip
to correct an error downloading the micromamba distirbution.
Startup options can now be placed in the `.invokeai` file. See the Client documentation.
For those installing InvokeAI for the first time, please use this recipe:
For automated installation, open up the "Assets" section below and download one of the `InvokeAI-*.zip` files. The instructions in the Installation section of the [InvokeAI docs](https://invoke-ai.github.io/InvokeAI) will provide you with a guide to which file to download and what to do with it when you get it.
For manual installation, download one of the "Source Code" archive files located in the Assets below. Unpack the file and enter the `InvokeAI` directory that it creates. Alternatively, you may clone the source code repository using the command `git clone http://github.com/invoke-ai/InvokeAI` and follow the instructions in Manual Installation.
For those wishing to upgrade from an earlier version, please use this recipe:
Download one of the "Source Code" archive files located in the Assets below. Unpack the file and enter the `InvokeAI` directory that it creates. Alternatively, if you have previously cloned the InvokeAI repository, you may update it by entering the InvokeAI directory and running `git checkout main`, followed by `git pull`.
Select the appropriate environment file for your operating system and GPU hardware. A number of files can be found in the new `environments-and-requirements` directory:
environment-lin-amd.yml # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml # Linux with an NVIDIA CUDA GPU
environment-mac.yml # Macintoshes with MPS acceleration
environment-win-cuda.yml # Windows with an NVIDIA CUDA GPU
An important step that developers tend to miss! Either copy this environment file to the root directory with the name `environment.yml`, or make a symbolic link from `environment.yml` to the selected environment file.
Macintosh and Linux using a symbolic link:
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
# Replace `xxx` and `yyy` with the appropriate OS and GPU codes.
Windows:
copy environments-and-requirements\environment-win-cuda.yml environment.yml
When this is done, confirm that a file `environment.yml` has been created in the InvokeAI root directory and that it points to the correct file in the `environments-and-requirements` directory.
conda env update
conda activate invokeai
python scripts/preload_models.py
Additional installation information, including recipes for installing without Conda, can be found in Manual Installation
Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
Getting Started Guide.
The most important thing to know about contributing code is to make your pull request against the "development" branch, not against "main". This will help keep public breakage to a minimum and will allow you to propose more radical changes.
For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.
Published by lstein almost 2 years ago
The invoke-ai team is excited to be able to share the release of InvokeAI 2.1 - A Stable Diffusion Toolkit, a project that aims to provide enthusiasts and professionals both a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged community driven and open source stable diffusion toolkit. Version 2.1 of the tool introduces multiple new features and performance enhancements.
This 14-minute YouTube video introduces you to some of the new features contained in this release. The following sections describe what's new in the Web interface (WebGUI) and the command-line interface (CLI).
The model installation script (`scripts/preload_models.py`) now lets you select among several popular Stable Diffusion models and downloads and installs them on your behalf. Among other models, this script will install the current Stable Diffusion 1.5 model as well as a StabilityAI variable autoencoder (VAE) which improves face generation.
High-resolution image generation can be activated with the `--hires` option in the CLI, or by selecting the corresponding toggle in the WebGUI.
Custom embeddings can be loaded with the `--embedding_path` option. (The next version will support merging and loading of multiple simultaneous models.)
To install InvokeAI from scratch, please see the Installation section of the InvokeAI docs.
For those wishing to upgrade from an earlier version, please use the following recipe from within the InvokeAI directory:
Macintosh:
conda deactivate
git checkout main
git pull
rm -rf src
conda env update -f environment-mac.yml
conda activate invokeai
python scripts/preload_models.py
Windows:
conda deactivate
git checkout main
git pull
rmdir src /s
conda env update
conda activate invokeai
python scripts\preload_models.py
Linux:
conda deactivate
git checkout main
git pull
rm -rf src
conda env update
conda activate invokeai
python scripts/preload_models.py
Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
Getting Started Guide.
The most important thing to know about contributing code is to make your pull request against the "development" branch, not against "main". This will help keep public breakage to a minimum and will allow you to propose more radical changes.
For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.0.1...2.1.0-rc1
Published by lstein about 2 years ago
The invoke-ai team is excited to be able to share the release of InvokeAI 2.0 - A Stable Diffusion Toolkit, a project that aims to provide enthusiasts and professionals both a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged community driven and open source stable diffusion toolkit named InvokeAI. Version 2.0.0 of the tool introduces an entirely new WebUI Front-end with a Desktop mode, and an optimized back-end server that can be interacted with via CLI or extended with your own fork.
Release 2.0.2 updates three Python dependencies that were recently reported to have critical security holes, and enhances documentation. Otherwise, the feature set is identical to 2.0.1.
This version of the app improves in-app workflows leveraging GFPGAN and CodeFormer for face restoration, and Real-ESRGAN upscaling. Additionally, the CLI supports a large variety of features:
Planned future updates include UI-driven outpainting/inpainting, robust Cross Attention support, and an advanced node workflow for automating and sharing your workflows with the community.
dream.py as legacy_api.py by @CapableWeb in https://github.com/invoke-ai/InvokeAI/pull/1070
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.0.0...v2.0.1
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.0.1...v2.0.2
Published by lstein about 2 years ago
The invoke-ai team is excited to be able to share the release of InvokeAI 2.0 - A Stable Diffusion Toolkit, a project that aims to provide enthusiasts and professionals both a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged community driven and open source stable diffusion toolkit named InvokeAI. Version 2.0.0 of the tool introduces an entirely new WebUI Front-end with a Desktop mode, and an optimized back-end server that can be interacted with via CLI or extended with your own fork.
Release 2.0.1 corrects an error that was causing the k* samplers to produce noisy images at high step counts. Otherwise the feature set is the same as 2.0.0.
This version of the app improves in-app workflows leveraging GFPGAN and CodeFormer for face restoration, and Real-ESRGAN upscaling. Additionally, the CLI supports a large variety of features:
Planned future updates include UI-driven outpainting/inpainting, robust Cross Attention support, and an advanced node workflow for automating and sharing your workflows with the community.
dream.py as legacy_api.py by @CapableWeb in https://github.com/invoke-ai/InvokeAI/pull/1070
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.0.0...v2.0.1
Published by lstein about 2 years ago
The invoke-ai team is excited to be able to share the release of InvokeAI 2.0 - A Stable Diffusion Toolkit, a project that aims to provide enthusiasts and professionals both a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged community driven and open source stable diffusion toolkit named InvokeAI. Version 2.0.0 of the tool introduces an entirely new WebUI Front-end with a Desktop mode, and an optimized back-end server that can be interacted with via CLI or extended with your own fork.
This version of the app improves in-app workflows leveraging GFPGAN and CodeFormer for face restoration, and Real-ESRGAN upscaling. Additionally, the CLI supports a large variety of features:
Planned future updates include UI-driven outpainting/inpainting, robust Cross Attention support, and an advanced node workflow for automating and sharing your workflows with the community.
Published by lstein about 2 years ago
This is identical to release 1.14 except that it reverts the name of the conda environment from "sd-ldm" back to the original "ldm".
Features from 1.14:
Published by lstein about 2 years ago
Published by lstein about 2 years ago
New features and bug fixes: