A RAG LLM co-pilot for browsing the web, powered by local LLMs
🦙 Free and Open Source Large Language Model (LLM) chatbot web UI and API. Self-hosted, offline ca...
A NodeJS RAG framework to easily work with LLMs and embeddings
A Web Interface for chatting with your local LLMs via the ollama API
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable fo...
WebAssembly binding for llama.cpp - Enabling in-browser LLM inference
A simple, intuitive toolkit for quickly implementing LLM powered applications.
Chrome Extension to Summarize or Chat with Web Pages/Local Documents Using locally running LLMs. ...
A simple RAG chatbot that can retrieve from a mediawiki data dump
Simple LLM library for JavaScript
Telegram bot for self-hosted local inference of Stable Diffusion, text-to-speech and large langua...
The TypeScript library for building AI applications.
A Ruby gem for interacting with Ollama's API that allows you to run open source AI LLMs (Large La...
Running local Large Language Models (LLMs) to perform Retrieval-Augmented Generation (RAG)
VT.ai - Multimodal AI Chatbot
Desktop AI Assistant powered by GPT-4, GPT-4 Vision, GPT-3.5, Gemini, Claude, Llama 3, DALL-E, La...