Streamlined interface for generating images with AI in Krita. Inpaint and outpaint with optional text prompt, no tweaking required.
GPL-3.0 License
Published by Acly 6 months ago
There was quite a bit of confusion about the changes to samplers in the last release, and a few bugs too. This release will hopefully improve the situation!
The preset name now contains both:
There are two new entries:
Related changes:
Please check out the documentation on the Wiki for more information about Samplers and how to customize them!
Published by Acly 6 months ago
There are two new Control layer modes: Style and Composition.
They work similarly to the "Reference" mode: the selected image acts as a sort of inspiration and doesn't have to match the resolution or aspect ratio of your canvas. The difference is that Style mostly extracts art style and colors, while Composition focuses on the structural layout.
Composition | Style | Result |
---|---|---|
Works best with SDXL. A different approach will probably be needed for SD 1.5.
[!IMPORTANT]
If you are using a custom ComfyUI installation, make sure to update. Latest version of ComfyUI_IPAdapter_plus is required.
Redesigned the LoRA selection into a searchable drop-down.
There is also an option to filter via folders (applies to all added LoRA).
And adding LoRA in the text prompt will auto-complete now.
This release adds Presets for some Control layer and Sampler settings. The idea is to have a lean, easy to use UI where users can switch between a few recommended configuration options. Presets can be customized for those who want full control (but it may require editing a text file).
Control layer strength is now adjusted with a single slider, from low influence to very strong effect. This will influence the weight as well as the start and end of the interval in which ControlNet or IP-Adapter are applied.
If you want, you can also modify those individually. The presets are stored in `presets/control.json`; it is recommended to copy the file to the user data folder before editing.
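Copying the bundled file before editing can be sketched as below. The folder names come from the release notes, but the exact layout (and the helper names) are assumptions for illustration:

```python
import shutil
from pathlib import Path

def user_preset_path(user_data_dir: str, name: str = "control.json") -> Path:
    """Location for a user-editable copy of a preset file (hypothetical layout)."""
    return Path(user_data_dir) / "presets" / name

def copy_preset_for_editing(plugin_dir: str, user_data_dir: str) -> Path:
    """Copy the bundled preset into the user data folder,
    without clobbering an existing customized copy."""
    src = Path(plugin_dir) / "presets" / "control.json"
    dst = user_preset_path(user_data_dir)
    dst.parent.mkdir(parents=True, exist_ok=True)
    if not dst.exists():
        shutil.copy(src, dst)
    return dst
```

Editing the copy rather than the original means a plugin update won't overwrite your changes.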
Samplers can now be fully configured by choosing from a few pre-defined options.
As before, the number of steps and guidance strength can be tweaked in the UI. New: you can edit the presets or add your own, and choose from any sampler / scheduler which ComfyUI supports.
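As a rough illustration, a custom sampler preset might carry fields like the ones below. This is written as a Python dict for readability; the real preset file is JSON and its exact schema may differ, so consult the Wiki before editing:

```python
# Purely illustrative preset entry -- field names are assumptions,
# not the plugin's actual schema.
custom_sampler_preset = {
    "name": "DPM++ 2M Karras (custom)",
    "sampler": "dpmpp_2m",   # any sampler name ComfyUI supports
    "scheduler": "karras",   # any scheduler name ComfyUI supports
    "steps": 24,             # default step count for this preset
    "cfg": 6.0,              # default guidance strength
}
```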
Published by Acly 7 months ago
[!IMPORTANT]
This release includes an upgrade for ComfyUI and introduces new models.
- If you are using the managed server, the installer will do an upgrade.
- If you are using a custom ComfyUI installation, please update to latest versions!
This release slightly changes how selection masks are interpreted.
Previously they would be either 1 (selected) or 0 (not selected), with smooth transitions at the edges for blending.
Now you can create masks with any values in between to customize how much the image changes per pixel.
Original | Mask | Result |
---|---|---|
"spooky dense forest"
For example, if you paint a selection mask 50% white in a certain region, and set strength to 80%, the actual strength in that region will be half of 80% = 40%. This is also known as "Differential Diffusion" or "Soft Inpainting". You can allow more drastic changes in some regions and only small adjustments in others - with smooth transitions - in one generation.
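The arithmetic from the example above is a simple linear scaling per pixel, roughly:

```python
def effective_strength(mask_value: float, strength: float) -> float:
    """Per-pixel denoising strength under soft inpainting: the mask value
    (0.0 = black, 1.0 = white) scales the global strength linearly."""
    return mask_value * strength

# A region painted 50% white with strength set to 80% changes at 40%.
assert abs(effective_strength(0.5, 0.8) - 0.4) < 1e-9
```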
Original | Mask | Result |
---|---|---|
"toxic waste, acid lake"
Hint: enable global selection masks in Krita to easily edit selection masks.
I've been working on a streamlined online service for those who don't want to install or lack the hardware. It is not complete, but can be tested already. Please look for more information and leave your feedback in this discussion!
Note: A local offline setup will always be the option with the greatest flexibility. But it's not available to everyone, and sometimes convenience is nice.
This project is happily eating all of my time, and it feels like it is only getting hungrier! 😵
So I've had to think about sustainability. Cloud GPU is part of that, but it's an experiment and won't appeal to everyone. If you like the project, please consider donating via GitHub ♥. Much appreciated!
- `--force-fp16` as default option for MPS (macOS only) #474

Published by Acly 8 months ago
For those images we'd prefer to have never happened. You can multi-select with Ctrl/Shift!
User files such as settings, custom styles and logs are no longer inside the plugin installation directory. This has some advantages:
The settings are now located in a subfolder of Krita's user data. Typical paths:
C:\Users\<your-name>\AppData\Roaming\krita\ai_diffusion
~/.local/share/krita/ai_diffusion
~/Library/Application Support/krita/ai_diffusion
Depending on your system the location may be different. You can find it via the "View Logs" link in the settings.
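The typical per-platform locations can be expressed as a small helper. This is only a sketch of the paths listed above; the actual location may differ (portable installs, custom data folders), which is why the "View Logs" link is the authoritative source:

```python
import sys
from pathlib import Path

def krita_user_data_dir() -> Path:
    """Typical location of the plugin's user data per platform (sketch;
    real installs can differ -- check "View Logs" in the settings)."""
    home = Path.home()
    if sys.platform == "win32":
        return home / "AppData" / "Roaming" / "krita" / "ai_diffusion"
    if sys.platform == "darwin":
        return home / "Library" / "Application Support" / "krita" / "ai_diffusion"
    return home / ".local" / "share" / "krita" / "ai_diffusion"
```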
SDXL Lightning is a way to speed up generation of images with SDXL, similar to LCM and Turbo. Instructions and Discussion
This version contains a first draft for working with animations. Currently it's just a UI for batch processing, but it may become a place to integrate animation checkpoints in the future. Overview and Discussion
Published by Acly 8 months ago
[!IMPORTANT]
This release includes an upgrade for ComfyUI and introduces new models.
- If you are using the managed server, the installer will do an upgrade.
- If you are using a custom ComfyUI installation, find the new requirements at the bottom.
While SD XL has been supported for a long time, its ability to seamlessly fill selections was very limited, and SD 1.5 almost always produced superior results. This release adds the inpaint model developed by Fooocus to dramatically improve results. All SD XL checkpoints, including custom downloads, automatically benefit from the change.
The new model will be downloaded as part of the "Stable Diffusion XL" workload, make sure to select it in the installer if you want to make use of it!
This release introduces more nuanced actions for filling and expanding areas of the image.
This nudges generation in a certain direction depending on what you want to do. It works especially well for SD XL, but also improves consistency for SD 1.5. Some examples are below; you can read the full documentation here.
Selection | Result | |
---|---|---|
Fill | ||
Expand | ||
Add Content | ||
Remove Content | ||
Replace Background |
Feel free to ask questions, discuss ideas, and share workflows, in the Discussions.
Documentation can now be found in the Wiki. It's a work in progress, contributions are appreciated! (I believe not everybody can edit yet, make a post if you have some content.)
- Place the new inpaint models in `models/inpaint` (create the folder if needed)
As usual you can use the download script to fetch models, and the full list is here.
Published by Acly 9 months ago
[!IMPORTANT]
This release includes an upgrade for ComfyUI and custom nodes.
- If you are using the managed server, the installer will do an upgrade.
- If you are using a custom ComfyUI installation, please update to latest versions!
You can now provide a reference image of a face and generate images with close likeness.
Generate entirely new images with flexibility to change style, lighting, etc:
Hint: Use a portrait as reference that includes hair and shoulders. The size of the reference image doesn't have to match your canvas; it's okay to leave transparent areas.
It also works for changing faces in existing images:
Installation:
Stable Diffusion often struggles with generating hands. Manually sketching hand posture can often be the most reliable solution.
This release adds the Hand control layer as an alternative. It automatically detects hands in the image or selected area and
tries to generate a plausible depth map. This can then be used to guide generation.
Hints and limitations:
Installation:
There is a new record button in the Live tab which imports results as animation. Maybe it's useful?
https://user-images.githubusercontent.com/6485914/298176809-49348ecc-977b-4639-a964-2ca60ee69a6b.mp4
High resolution version on YouTube
Published by Acly 10 months ago
[!IMPORTANT]
This release introduces new upscale models (Omni-SR):
- If you are using the managed server, the installer will download the new models
- If you are using a custom ComfyUI installation, download them here and place them in
models/upscale_models
The following information will now be stored in `.kra` documents:
How much of the history is stored can be configured in performance settings. Keep in mind that increasing the limit also makes saving and opening documents slower! The history is compressed, so the default values are actually enough for a lot of images.
The queue button menu has been extended:
Preview thumbnails now have a context menu button. Alternatively, you can always open the menu with right-click. Here you can:
Hardware limitations are still a big problem, especially when working on a high-resolution canvas. This release provides some more options to avoid running into out-of-memory situations:
- A value of `0.5` will generate images at half the resolution

Stable Diffusion checkpoints have a "preferred" resolution, and the plugin will automatically take it into account. For most checkpoints this is detected automatically and nothing has to be changed!
In some cases a checkpoint may be trained with a different resolution than its base model suggests - for cases where this cannot be detected you can now set it manually. See the example below for the recommended way to setup SDXL Turbo.
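The resolution scaling described above boils down to a simple multiplication. A minimal sketch (the function name and rounding behavior are assumptions, not the plugin's actual code):

```python
def generation_resolution(width: int, height: int, multiplier: float = 1.0) -> tuple[int, int]:
    """Resolution used for generation: values below 1.0 reduce memory use
    at the cost of detail (sketch of the setting described above)."""
    return round(width * multiplier), round(height * multiplier)

# A 0.5 multiplier on a 1024x1024 canvas generates at 512x512.
assert generation_resolution(1024, 1024, 0.5) == (512, 512)
```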
Published by Acly 10 months ago
The way thumbnails and previews work is motivated by the following ideas:
The preview looks exactly like the result would after accepting it, which is great to judge how it fits in -- but has the unfortunate side effect that it's really easy to forget it's only a preview and still has to be applied! The respective button was also easily missed at the bottom of the UI, and not intuitive for new users to find.
By re-designing the button and making it more prominent right next to the thumbnail I hope to at least mitigate those issues.
There are a few more small tweaks:
Published by Acly 10 months ago
This version updates the managed ComfyUI server to the latest versions. To those who manage their own ComfyUI install, please make sure you are up-to-date (including custom nodes) to avoid issues!
This release adds convenient shortcuts for editing text prompts:
<lora:filename>
Thank you @huchenlei and @Danamir for the extensions.
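A tag like `<lora:filename>` could be picked out of a prompt with a small parser like the sketch below. The optional `:weight` suffix is a common convention in other tools and is an assumption here, not something this release confirms:

```python
import re

# Hypothetical parser for <lora:filename> tags; also accepts an
# optional <lora:filename:weight> form (assumed, not confirmed above).
LORA_TAG = re.compile(r"<lora:([^:>]+)(?::([\d.]+))?>")

def extract_loras(prompt: str) -> list[tuple[str, float]]:
    """Return (name, weight) pairs, defaulting the weight to 1.0."""
    return [(m.group(1), float(m.group(2) or 1.0)) for m in LORA_TAG.finditer(prompt)]

# e.g. extract_loras("a castle <lora:fantasy_style:0.8> at dawn")
# -> [("fantasy_style", 0.8)]
```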
There are a number of small fixes and changes to iron out how preview layers are handled, and avoid confusion when results are buried somewhere in the layer stack:
- `download_models.py` script to make custom ComfyUI installs easier #113 #165
Published by Acly 11 months ago
You can now use selections to control the target area for live painting. This is useful to get good performance even on a larger canvas, or to avoid affecting parts of the image.
https://github.com/Acly/krita-ai-diffusion/assets/6485914/dc135403-1b16-4a95-a2ab-e28d93ae1849
Note that it is not as good for inpainting as the "traditional" workflow. The full inpainting pipeline is rather heavy, and costs too much performance for live mode.
The number of samples now scales with the strength. Lower values require fewer samples but still yield results that are very similar to before (and generally just as good). This means such images are now generated much faster. Some examples (SDXL at 1024px, RTX 4070):
Strength | Before | New (1.9.0) | Difference |
---|---|---|---|
100% | 20 samples (7s) | 20 samples (7s) | No change |
50% | 20 samples (7s) | 10 samples (3.5s) | 🟢 2x faster |
30% | 20 samples (7s) | 6 samples (2.3s) | 🟢 3x faster |
Note that the separate "Upscaling steps" setting has been removed, as steps now scale automatically. Thanks @Danamir for this improvement!
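The scaling in the table above is straightforward to reproduce; a minimal sketch (rounding details are an assumption, not the plugin's exact code):

```python
def steps_for_strength(total_steps: int, strength: float) -> int:
    """Sampling steps actually run at a given denoising strength
    (sketch of the strength-scaled sampling described above)."""
    return max(1, round(total_steps * strength))

assert steps_for_strength(20, 1.0) == 20  # 100% strength: no change
assert steps_for_strength(20, 0.5) == 10  # 50%: 2x faster
assert steps_for_strength(20, 0.3) == 6   # 30%: 3x faster
```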
Published by Acly 11 months ago
- `--listen` argument #129 (by @miabrahams)

Published by Acly 11 months ago
Published by Acly 11 months ago
Also known as QRCode Monster, this control layer allows you to imprint QR codes creatively into images. Its release was quickly followed by a wave of images which used it to hide not only QR codes, but all sorts of patterns and subtle (or not so subtle) messages. It can produce interesting results :)
Thank you @jellydreams for the implementation.
There is now preliminary support to configure at which point in the sampling process the influence of a control layer should end. Previously it was active for the whole duration - now it can be set to stop early. In some cases this improves quality in a way that the existing strength parameter cannot. For more information and a comparison, see the PR.
For now this option is hidden by default, it must be activated in the plugin settings "Interface" tab first. The goal is to eventually merge this into one parameter that adjusts influence while retaining optimal quality.
Thank you @Danamir for the implementation.
Since by now various people have successfully used the plugin on macOS, I shall declare it supported :)
If you are having trouble check out this issue.
Thanks to @yantoz, users with Apple silicon can now select the MPS (Metal Performance Shader) option to install and launch the ComfyUI server from within the plugin with hardware support.
Since last version, a great (and sometimes overwhelming) number of new users have tried out the plugin. This resulted in a lot of issues with installation or onboarding existing Comfy installs. I'm not entirely surprised - there is a huge combination of OS, hardware and system setups out there, and way too much that can go wrong. Still, I hope there are also those for which it "just worked" and made everything more accessible. As for the rest, I rely on your feedback and support to improve robustness.
Improvements:
Published by Acly 11 months ago
The features of this release require new server versions and extensions. If you used an old version of the plugin, you will have the opportunity to upgrade the server automatically after installing the latest plugin version. (The plugin itself is still updated by extracting & overwriting.)
If you maintain your own ComfyUI installation make sure to update to the latest version! See here for model downloads.
The upgrade will remove & reinstall the server code. Models will be migrated and do not have to be re-downloaded.
This release implements the LCM sampler, which requires very few steps and is therefore much faster - albeit at a loss of quality. If you happen to own a high-end GPU (RTX 4070+) this allows for generating images in less than a second. You can select the sampler in the style settings, but more excitingly, there is also a new workspace/tab which gives you a "live" update while you paint.
The AI generated version on the right automatically reflects any changes you make in the canvas. This mode is compatible with all styles, uses LCM sampler by default, and also supports control layers! I'm not sure how useful it is yet, but it's a lot of fun :)
Published by Acly 11 months ago
Stable Diffusion XL has been quietly supported for a while, but because it still lacks good inpaint/outpaint capabilities it wasn't integrated as tightly. This release provides installation support, filtering, and better documentation for SD XL. You can read more about SD 1.5 vs SD XL here.
Server installation packages were condensed, and now offer two "workloads": SD 1.5 and SD XL. You need at least one! Recommended checkpoints and control models have also been added for XL.
Style selection now includes default styles for XL, and filters them depending on the installed workloads and checkpoints. That is, you won't see styles if you can't run them.
The settings still show all styles, and show warnings if the respective workload hasn't been installed.