LLM-powered code documentation generation
MIT License
Python bindings for llama.cpp
100% Private & Simple. OSS 🐍 Code Interpreter for LLMs 🦙
Yet another `llama.cpp` Rust wrapper
♾️ toolkit for air-gapped LLMs on consumer-grade hardware
Run open-source LLMs, such as Llama 2 and Mistral, as an OpenAI-compatible API endpoint, locally an...
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable fo...
Local-first semantic code search and chat powered by vector embeddings and LLMs
A simple "Be My Eyes" web app with a llama.cpp/llava backend
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema ...
Run any Large Language Model behind a unified API
Run LLMs locally. A Clojure wrapper for llama.cpp.
Run any Llama 2 locally with a Gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llam...
🚀 This project aims to develop an app using an existing open-source LLM with data collected for d...
LLaMa 7b with CUDA acceleration implemented in Rust. Minimal GPU memory needed!
Running Llama 2 and other open-source LLMs locally with CPU inference for document Q&A