Ghudsavar (Horse rider) - a quick llama.cpp server for CPU-only runtimes
MIT License
[Work In Progress] Server/Cloud-ready FastChat Docker images.
Finetuning of Gemma-2 2B for structured output
🚀 This project aims to develop an app using an existing open-source LLM with data collected for d...
A simple "Be My Eyes" web app with a llama.cpp/llava backend
Search your favorite websites and chat with them, on your desktop🌐
100% Private & Simple. OSS 🐍 Code Interpreter for LLMs 🦙
llama_cpp provides Ruby bindings for llama.cpp
Bootstrap a server from llama-cpp in a few lines of python
Run any Large Language Model behind a unified API
LLaMA-2 in native Go
Ampere optimized llama.cpp
Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llam...
LocalAGI: Locally run AGI powered by LLaMA, ChatGLM and more. | A locally running AGI based on the ChatGLM and LLaMA large models
llama.cpp with the BakLLaVA model, describing what it sees