The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many other model architectures. It can generate text, audio, video, and images, and includes voice-cloning capabilities.
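"Drop-in replacement" means LocalAI exposes the same REST surface as the OpenAI API, so existing clients only need a different base URL. A minimal sketch of a chat-completions request, assuming a local instance listening on port 8080 (as in the docker examples below) and a model installed under the name phi-2 (both are assumptions, not fixed defaults):

```python
import json
import urllib.request

# Assumptions: LocalAI running on localhost:8080 with a model named "phi-2".
BASE_URL = "http://localhost:8080/v1"

payload = {
    "model": "phi-2",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Same request shape the OpenAI API uses; only BASE_URL differs.
request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# with urllib.request.urlopen(request) as resp:  # requires a running server
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request format is unchanged, OpenAI client libraries can typically be pointed at LocalAI by overriding their base URL.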
MIT License
Published by mudler 9 months ago
Patch release that creates /build/models in the container images.
Full Changelog: https://github.com/mudler/LocalAI/compare/v2.5.0...v2.5.1
Published by mudler 10 months ago
This release adds more embedded models and shrinks image sizes.
You can now run phi-2 (see here for the full list) locally by starting LocalAI with:
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core phi-2
LocalAI now accepts as arguments a list of model short-hands and/or URLs pointing to valid YAML files. A popular way to host those files is GitHub Gists.
For instance, you can run llava by starting local-ai with:
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/llava.yaml
Full Changelog: https://github.com/mudler/LocalAI/compare/v2.4.1...v2.5.0
Published by mudler 10 months ago
Full Changelog: https://github.com/mudler/LocalAI/compare/v2.4.0...v2.4.1
Published by mudler 10 months ago
Full Changelog: https://github.com/mudler/LocalAI/compare/v2.3.1...v2.4.0
Published by mudler 10 months ago
Full Changelog: https://github.com/mudler/LocalAI/compare/v2.3.0...v2.3.1
Published by mudler 10 months ago
Full Changelog: https://github.com/mudler/LocalAI/compare/v2.2.0...v2.3.0
Published by mudler 10 months ago
This release brings updates to the backends and includes a fix for recompilation of LocalAI with go-rwkv (https://github.com/mudler/LocalAI/issues/1473). It also reduces image size by allowing some transformers-based backends to share the same environment.
With this release, inline templates and models as URLs are supported, for example:
name: mixtral
parameters:
  model: https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/resolve/main/mixtral-8x7b-v0.1.Q2_K.gguf
  # or huggingface://TheBloke/Mixtral-8x7B-v0.1-GGUF/mixtral-8x7b-v0.1.Q2_K.gguf@main
template:
  completion: |
    Complete the following: {{.Input}}
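The completion template above uses Go text/template syntax, where {{.Input}} is replaced with the user's prompt at inference time. Purely to illustrate that substitution, here is a sketch in Python (this is not LocalAI's actual templating engine, which is Go's text/template):

```python
# Illustrative stand-in for Go's text/template rendering: only the
# {{.Input}} placeholder from the example above is handled.
def render_completion_template(template: str, user_input: str) -> str:
    return template.replace("{{.Input}}", user_input)

prompt = render_completion_template(
    "Complete the following: {{.Input}}",
    "The capital of France is",
)
print(prompt)  # Complete the following: The capital of France is
```

The rendered string is what the backend ultimately receives as the prompt, so the template controls how user input is framed for the model.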
Full Changelog: https://github.com/mudler/LocalAI/compare/v2.1.0...v2.2.0
Published by mudler 10 months ago
Full Changelog: https://github.com/mudler/LocalAI/compare/v2.0.0...v2.1.0
Published by mudler 11 months ago
Full Changelog: https://github.com/mudler/LocalAI/compare/v1.40.0...v2.0.0
Published by mudler 11 months ago
Full Changelog: https://github.com/mudler/LocalAI/compare/v1.40.0...v2.0.0_beta
Published by mudler 12 months ago
This release is a preparation for v2: efforts will now focus on refactoring, polishing, and adding new backends. Follow up on: https://github.com/mudler/LocalAI/issues/1126
This release brings the llama-cpp backend, a C++ backend tied to llama.cpp that follows and tracks recent versions of llama.cpp more closely. It is not feature-compatible with the current llama backend, but the plan is to sunset the current llama backend in favor of this one. This will probably be the last release containing the older llama backend written in Go and C++. The major improvement with this change is that there are fewer layers that could expose potential bugs, and it also eases maintenance.
This release brings support for AMD thanks to @65a. See more details in https://github.com/mudler/LocalAI/pull/1100
Thanks to @jespino, the local-ai binary now has more subcommands, allowing you to manage the gallery or try out inference directly. Check it out!
examples/ models and starter .env files by @jamesbraza in https://github.com/mudler/LocalAI/pull/1124
Full Changelog: https://github.com/mudler/LocalAI/compare/v1.30.0...v1.40.0
Published by mudler about 1 year ago
This is an exciting LocalAI release! Besides bug fixes and enhancements, this release brings the new backends to a whole new level by extending support to vllm and vall-e-x for audio generation!
/models/jobs endpoint by @Jirubizu in https://github.com/go-skynet/LocalAI/pull/983
Full Changelog: https://github.com/go-skynet/LocalAI/compare/v1.25.0...v2.0.0
Published by mudler about 1 year ago
Full Changelog: https://github.com/go-skynet/LocalAI/compare/v1.24.1...v1.25.0
Published by mudler about 1 year ago
This is a patch release: images were not correctly pushed by the CI in 1.24.0.
Full Changelog: https://github.com/go-skynet/LocalAI/compare/v.1.24.0...v1.24.1
Published by mudler about 1 year ago
Full Changelog: https://github.com/go-skynet/LocalAI/compare/v1.23.2...v.1.24.0
Published by mudler about 1 year ago
Full Changelog: https://github.com/go-skynet/LocalAI/compare/v1.23.1...v1.23.2
Published by mudler about 1 year ago
io/ioutil by @dave-gray101 in https://github.com/go-skynet/LocalAI/pull/837
Full Changelog: https://github.com/go-skynet/LocalAI/compare/v1.23.0...v1.23.1
Published by mudler about 1 year ago
Full Changelog: https://github.com/go-skynet/LocalAI/compare/v1.22.0...v1.23.0
Published by mudler about 1 year ago
Full Changelog: https://github.com/go-skynet/LocalAI/compare/v1.21.0...v1.22.0
Published by mudler over 1 year ago
gRPC-based backends by @mudler in https://github.com/go-skynet/LocalAI/pull/743
ggllm.cpp by @mudler in https://github.com/go-skynet/LocalAI/pull/743
Full Changelog: https://github.com/go-skynet/LocalAI/compare/v1.20.1...v1.21.0