LSP server leveraging LLMs for code completion (and more?)
Telegram bot for self-hosted local inference of Stable Diffusion, text-to-speech and large language models
LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1
LSP-AI is an open-source language server that serves as a backend for AI-powered functionality, designed to assist and empower software engineers, not replace them
Experimental front-end client library for interacting with llama.cpp
WebAssembly binding for llama.cpp - Enabling in-browser LLM inference
Rack API application for Llama.cpp
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance
A simple "Be My Eyes" web app with a llama.cpp/llava backend
Integrating LLMs into structured NLP pipelines
LLM inference in C/C++
A simple, intuitive toolkit for quickly implementing LLM powered applications.
A high-performance inference system for large language models, designed for production environments.
Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
`llm-chain` is a powerful Rust crate for building chains in large language models, allowing you to summarise text and complete complex tasks
Yet another `llama.cpp` Rust wrapper