Chat with your favourite LLaMA models in a native macOS app
MIT License
Published by alexrozanski over 1 year ago
Happy Friday! This is the v1.2.0 release of LlamaChat, and the big update this week is support for configuring ✨ model hyperparameters ✨, alongside a bunch of other tweaks and improvements.
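For context, the hyperparameters exposed by llama.cpp-based apps typically include sampling controls such as temperature and top-k. The following is a rough, illustrative sketch of what those two knobs do during token sampling — the function and its defaults are assumptions for illustration, not LlamaChat's actual code:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, top_k=40, rng=random):
    """Pick a token index from `logits` using temperature + top-k sampling,
    the style of hyperparameters exposed by llama.cpp-based apps."""
    # Keep only the top_k highest-scoring tokens.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
    # Temperature rescales the logits: lower => greedier, higher => more random.
    scaled = [logits[i] / max(temperature, 1e-8) for i in top]
    # Softmax over the surviving tokens (shifted by the max for stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sample from the resulting distribution.
    r = rng.random()
    acc = 0.0
    for tok, p in zip(top, probs):
        acc += p
        if acc >= r:
            return tok
    return top[-1]
```

With a very low temperature the distribution collapses onto the highest-scoring token, which is why low temperatures feel deterministic and high ones feel creative.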
- Support for the `--mlock` parameter. (#4)
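llama.cpp's `--mlock` option asks the OS to pin the model's pages in physical memory via the POSIX `mlock(2)` call, so weights aren't swapped out mid-inference. A minimal illustrative sketch of that underlying call (the helper name is hypothetical; this is not LlamaChat's implementation):

```python
import ctypes
import ctypes.util

# Load the C library; mlock(2) is POSIX, so this works on macOS and Linux.
_libc = ctypes.CDLL(ctypes.util.find_library("c") or None, use_errno=True)

def pin_in_memory(buf) -> bool:
    """Try to pin a ctypes buffer into physical RAM, as llama.cpp's
    --mlock option does for model weights. Returns False if the kernel
    refuses (e.g. RLIMIT_MEMLOCK is too low for the requested size)."""
    addr = ctypes.addressof(buf)
    size = ctypes.sizeof(buf)
    return _libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(size)) == 0
```

Locked pages are released with `munlock(2)` or automatically when the process exits; whether the lock succeeds depends on the process's memory-lock resource limit.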
- You can now load `.ggml` files directly into LlamaChat without conversion, thanks to some upstream changes made to llama.cpp. (#3)
- In some cases a stale `.ggml` artefact was left on the filesystem. This has now been fixed, and any of these artefacts left by previous versions of LlamaChat are automatically cleaned up on launch. (#10)

Published by alexrozanski over 1 year ago
Scripts/
README
Published by alexrozanski over 1 year ago
This release fixes a few niggly issues, as well as an issue related to Chat Sources (#1):
v1.0 tag

Published by alexrozanski over 1 year ago
This is the v1.0 release of LlamaChat, which allows you to run LLaMA-compatible model files in a native macOS chat-style app.
LlamaChat currently supports models from:
LlamaChat supports models in both the raw PyTorch checkpoint format (`.pth`) and the `.ggml` format, since LlamaChat is powered by the ggml, llama.cpp and llama.swift libraries.
You can use both `.pth` and `.ggml` models, with support for pre-converting `.pth` files directly within the app. Note that some manual intervention may be necessary in the case of outdated `.ggml` model files; please see the llama.cpp repository for more.
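The note about outdated `.ggml` files stems from llama.cpp having changed its on-disk format over time, with each revision identified by a different leading magic number (the values below are taken from llama.cpp's history; the helper itself is an illustrative sketch, not part of LlamaChat):

```python
import struct

# File magics used by llama.cpp's model formats over time (values from
# the llama.cpp source; shown here for illustration).
_KNOWN_MAGICS = {
    0x67676D6C: "ggml (unversioned; needs re-conversion for newer llama.cpp)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, mmap-friendly)",
}

def describe_model_format(path: str) -> str:
    """Read a model file's leading 32-bit magic and describe its format."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return _KNOWN_MAGICS.get(magic, f"unknown magic 0x{magic:08x}")
```

Checking the magic up front is a cheap way to tell whether a file predates the format a given llama.cpp build expects, before attempting a full load.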