🧩 (Checkpoint Mergers) A latent diffusion model for generating anime-style images.
GPL-2.0 License
Welcome to Animistatics - a latent diffusion model for weebs. This model is intended to produce high-quality, highly detailed anime-style images from just a few prompts. Like other anime-style Stable Diffusion models, it also supports Danbooru tags for image generation.
e.g. girl, cafe, plants, coffee, lighting, steam, blue eyes, brown hair
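Since prompts are just comma-separated Danbooru-style tags, they can be assembled programmatically. A minimal sketch (the `build_prompt` helper and tag list are illustrative, not part of the model):

```python
# Build a comma-separated Danbooru-style prompt from a list of tags.
def build_prompt(tags):
    # Strip whitespace and drop empty entries before joining.
    cleaned = [t.strip() for t in tags if t.strip()]
    return ", ".join(cleaned)

tags = ["girl", "cafe", "plants", "coffee", "lighting",
        "steam", "blue eyes", "brown hair"]
prompt = build_prompt(tags)
print(prompt)  # girl, cafe, plants, coffee, lighting, steam, blue eyes, brown hair
```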
We provide a Gradio Web UI to run Animistatics:
We also provide a Google Colab notebook to run Animistatics:
This model can be used just like any other Stable Diffusion model. For more information, please have a look at the Stable Diffusion documentation.
You can also export the model to ONNX, or run it with the MPS and Flax/JAX backends.
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch

repo_id = "Maseshi/Animistatics"

# Load the pipeline in half precision to reduce GPU memory usage.
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)

# Swap in the multistep DPM-Solver scheduler for fast, high-quality sampling.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "girl, cafe, plants, coffee, lighting, steam, blue eyes, brown hair"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("girl.png")
Below are some examples of images generated using this model:
Anime Girl:
girl, cafe, plants, coffee, lighting, steam, blue eyes, brown hair
Steps: 50, Sampler: DDIM, CFG scale: 12
Anime Boy:
boy, blonde hair, blue eyes, colorful, cumulonimbus clouds, lighting, medium hair, plants, city, hoodie, cool
Steps: 50, Sampler: DDIM, CFG scale: 12
City:
cityscape, concept art, sun shining through clouds, crepuscular rays, trending on art station, 8k
Steps: 50, Sampler: DDIM, CFG scale: 12
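The settings listed with each example map onto diffusers pipeline arguments: "Steps" corresponds to `num_inference_steps`, "CFG scale" to `guidance_scale`, and "Sampler" selects the scheduler class set on the pipeline (DDIMScheduler for DDIM). A small sketch of that mapping (the `settings_to_kwargs` helper is illustrative, not part of the library):

```python
# Translate the human-readable settings line used above into
# keyword arguments for a diffusers pipeline call.
def settings_to_kwargs(settings):
    # e.g. "Steps: 50, Sampler: DDIM, CFG scale: 12"
    kwargs = {}
    for part in settings.split(","):
        key, value = part.split(":")
        key = key.strip().lower()
        value = value.strip()
        if key == "steps":
            kwargs["num_inference_steps"] = int(value)
        elif key == "cfg scale":
            kwargs["guidance_scale"] = float(value)
        # "Sampler" is not a call argument; it names the scheduler
        # class to install on the pipeline before generating.
    return kwargs

print(settings_to_kwargs("Steps: 50, Sampler: DDIM, CFG scale: 12"))
# {'num_inference_steps': 50, 'guidance_scale': 12.0}
```

With these kwargs you would call `pipe(prompt, **kwargs)` after setting the matching scheduler on the pipeline.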
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: