A native macOS app for chatting with a local LLM, with retrieval-augmented generation (RAG) capabilities. It can draw context from resources including folders, files, and websites.
MIT License
- User-friendly Desktop Client App for AI Models/LLMs (GPT, Claude, Gemini, Ollama...)
- The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in ...
- telegram bot for self-hosted local inference of stable diffusion, text-to-speech and large langua...
- LocalChat is a ChatGPT-like chat that runs on your computer
- Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema ...
- FreeGenius AI, an advanced AI assistant that can talk and take multi-step actions. Supports numer...
- Lightweight, standalone, multi-platform, and privacy focused local LLM chat interface with option...
- Search your favorite websites and chat with them, on your desktop🌐
- Swift powered native macOS client for Ollama, ChatGPT and compatible API-backends
- 🦙 Free and Open Source Large Language Model (LLM) chatbot web UI and API. Self-hosted, offline ca...
- llama_cpp provides Ruby bindings for llama.cpp
- Chat with your favourite LLaMA models in a native macOS app
- AI-powered assistant to help you with your daily tasks, powered by Llama 3.2. It can recognize yo...
- Wingman is the fastest and easiest way to run Llama models on your PC or Mac.
- A simple-to-use, open source GUI for local AI chat on desktop and mobile. Powered by llama.cpp.