📰 2024-10-05 : 🔥 This week's updates 🔥 >
📰 2024-09-28 : 🔥 This week's updates 🔥 >
📰 2024-09-22 : 🔥 Enhancement of model lists for some modules 🔥 > Handling of local models (manually downloaded .safetensors and .gguf models) has been modified for the Stable Diffusion-based modules, the Chatbot module and LoRA models. These models are now listed at the bottom of the model lists, in the "Local models" category, instead of at the top of these lists.
📰 2024-09-21 : 🔥 This week's updates 🔥 >
📰 2024-09-14 : 🔥 This week's updates 🔥 >
Text generation using :
Image generation and modification using :
Audio generation using :
Video generation and modification using :
3D objects generation using :
Other features
Minimal hardware :
Recommended hardware :
Operating system :
Note : biniou supports CUDA or ROCm but does not require a dedicated GPU to run. You can install it in a virtual machine.
sh <(curl https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-opensuse.sh || wget -O - https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-opensuse.sh)
sh <(curl https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-rhel.sh || wget -O - https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-rhel.sh)
sh <(curl https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-debian.sh || wget -O - https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-debian.sh)
apt install git pip python3 python3-venv gcc perl make ffmpeg openssl
git clone https://github.com/Woolverine94/biniou.git
cd ./biniou
./install.sh
apt install google-perftools
Windows installation has more prerequisites than the GNU/Linux one, and requires the following software (which will be installed automatically) :
⚠️ You should really make a backup of your system and data before starting the installation process. ⚠️
OR
The installation is fully automated, but Windows UAC will ask you to confirm each piece of software installed during the "prerequisites" phase. You can avoid this by running the chosen installer as administrator.
⚠️ Since commit 8d2537b, Windows users can define a custom path for the biniou directory when installing with install_win.cmd ⚠️
Proceed as follows :
- Modify set DEFAULT_BINIOU_DIR="%userprofile%" to set DEFAULT_BINIOU_DIR="E:\datas\somedir" (for example)
- Only use an absolute path (e.g. E:\datas\somedir and not .\datas\somedir)
- Don't add a trailing slash (e.g. E:\datas\somedir and not E:\datas\somedir\)
- Don't append a "biniou" suffix (e.g. not E:\datas\somedir\biniou), as the biniou directory will be created by the git clone command
⚠️ Homebrew install is theoretically compatible with macOS Intel, but has not been tested. Use at your own risk. Also note that biniou is currently incompatible with Apple silicon. Any feedback on this procedure through discussions or an issue ticket will be really appreciated. ⚠️
⚠️ Update 01/09/2024 : Thanks to @lepicodon, there's a workaround for Apple Silicon users : you can install biniou in a virtual machine using OrbStack. See this comment for explanations. ⚠️
Install Homebrew for your operating system
Install required homebrew "bottles" :
brew install git python3 gcc gcc@11 perl make ffmpeg openssl
python3 -m pip install virtualenv
git clone https://github.com/Woolverine94/biniou.git
cd ./biniou
./install.sh
These instructions assume that you already have a configured and working Docker environment.
docker build -t biniou https://github.com/Woolverine94/biniou.git
or, for CUDA support :
docker build -t biniou https://raw.githubusercontent.com/Woolverine94/biniou/main/CUDA/Dockerfile
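To confirm the build completed before launching a container, you can list the image locally (a quick optional check, not part of the official instructions) :
# list the locally built biniou image
docker image ls biniou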
docker run -it --restart=always -p 7860:7860 \
-v biniou_outputs:/home/biniou/biniou/outputs \
-v biniou_models:/home/biniou/biniou/models \
-v biniou_cache:/home/biniou/.cache/huggingface \
-v biniou_gfpgan:/home/biniou/biniou/gfpgan \
biniou:latest
or, for CUDA support :
docker run -it --gpus all --restart=always -p 7860:7860 \
-v biniou_outputs:/home/biniou/biniou/outputs \
-v biniou_models:/home/biniou/biniou/models \
-v biniou_cache:/home/biniou/.cache/huggingface \
-v biniou_gfpgan:/home/biniou/biniou/gfpgan \
biniou:latest
Note : to save storage space, the previous container launch command defines common shared volumes for all biniou containers and ensures that the container auto-restarts in case of an OOM crash. Remove the --restart and -v arguments if you don't want these behaviors.
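If you later want to check or clean up these shared volumes, the standard Docker volume commands apply; the volume names below are the ones used in the launch commands above :
# list the named volumes created for biniou
docker volume ls --filter name=biniou
# show where a volume's data is stored on the host
docker volume inspect biniou_models
# remove a volume you no longer need (only possible when no container uses it)
docker volume rm biniou_models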
biniou is natively cpu-only, to ensure compatibility with a wide range of hardware, but you can easily activate CUDA support through Nvidia CUDA (if you have a functional CUDA 12.1 environment) or AMD ROCm (if you have a functional ROCm 5.6 environment) by selecting the type of optimization to activate (CPU, CUDA or ROCm for Linux) in the WebUI control module.
Currently, all modules except the Chatbot, Llava and faceswap modules can benefit from CUDA optimization.
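Before enabling CUDA or ROCm in the WebUI control module, you may want to verify that the host GPU environment is functional. The commands below are a quick sanity check, not part of the official procedure :
# Nvidia : the driver and CUDA runtime version should be reported
nvidia-smi
# AMD : the ROCm stack should report your GPU
rocm-smi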
cd /home/$USER/biniou
./webui.sh
Note : the first start can be very slow on Windows 11 (compared to other OSes).
Access the webui at the url : https://127.0.0.1:7860 or https://127.0.0.1:7860/?__theme=dark for dark theme (recommended). You can also access biniou from any device (including smartphones) on the same LAN/Wi-Fi network by replacing 127.0.0.1 in the url with the biniou host IP address.
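For example, to find the address to use from another device, look up the host's LAN IP; the 192.168.1.50 below is only a placeholder, not a value biniou defines :
# show the IPv4 addresses of the biniou host (GNU/Linux)
ip -4 addr show
# then open, from any device on the same network, for example :
# https://192.168.1.50:7860/?__theme=dark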
Quit by using the keyboard shortcut CTRL+C in the Terminal
Update this application (biniou + python virtual environment) by using the WebUI control updates options.
The most frequent cause of a crash is insufficient memory on the host. The symptom is the biniou program closing and returning to/closing the terminal without a specific error message. You can use biniou with 8GB of RAM, but at least 16GB is recommended to avoid OOM (out of memory) errors.
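If you suspect an OOM crash, checking free memory on the host before and during a generation can confirm it. A simple way to do this on GNU/Linux :
# show total, used and available memory (and swap)
free -h
# refresh the same view every 5 seconds while biniou is generating
watch -n 5 free -h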
biniou uses a lot of different AI models, which require a lot of space : if you want to use all the modules in biniou, you will need around 200GB of disk space just for the default model of each module. Models are downloaded on the first run of each module, or when you select a new model in a module and generate content. Models are stored in the /models directory of the biniou installation. Unused models can be deleted to save some space.
... consequently, you will need fast internet access to download the models.
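To see how much space the downloaded models actually use, you can inspect the models directory; the path below assumes a default install in your home directory, so adjust it to your setup :
# overall size of the models directory
du -sh ~/biniou/models
# size of each model, to spot candidates for deletion
du -sh ~/biniou/models/*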
A backup of all generated content is available inside the /outputs directory of the biniou folder.
biniou natively relies only on the CPU for all operations, and uses a specific CPU-only version of PyTorch. The result is better compatibility with a wide range of hardware, but degraded performance. Depending on your hardware, expect slowness. See here for Nvidia CUDA support and experimental AMD ROCm support (GNU/Linux only).
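A quick way to confirm which PyTorch build is installed, assuming the python virtual environment created by the installer is activated (CPU-only wheels usually carry a "+cpu" suffix in their version string) :
# print the installed PyTorch version, e.g. 2.x.x+cpu for the CPU-only build
python -c "import torch; print(torch.__version__)"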
Default settings are selected to permit generation of content on low-end computers, with the best performance/quality ratio. If you have a configuration above the minimal requirements, you can try using other models, increasing media dimensions or duration, or modifying inference parameters or other settings (like token merging for images) to obtain better quality content.
biniou is licensed under GNU GPL3, but each model used in biniou has its own license. Please consult each model license to know what you can and cannot do with the models. For each model, you can find a link to the huggingface page of the model in the "About" section of the associated module.
Don't have too high expectations : biniou is at an early stage of development, and most of the open source software it uses is still in development (some of it still experimental).
Every biniou module offers 2 accordion elements, About and Settings :
This application uses the following software and technologies :
StableDiffusionPipeline-based modules
... and all their dependencies
GNU General Public License v3.0
GitHub @Woolverine94