This LLM generates code from tests and makes sure they pass.
Get up and running with Llama 3.2, Mistral, Gemma 2, and other large language models.
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable fo...
Python library for instructing and reliably validating the structured outputs (JSON) of Large ...
The simplest way to run LLaMA on your local machine
LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI.
LLM Benchmark for Throughput via Ollama (Local LLMs)
Yet another `llama.cpp` Rust wrapper
World’s first and simplest AI-oriented programming language using Ollama.
A simple, intuitive toolkit for quickly implementing LLM powered applications.
Running Llama 2 and other open-source LLMs locally on CPU for document Q&A
Query LLMs with Chain-of-Thought prompting
Experimental front-end client library for interacting with llama.cpp
Local first semantic code search and chat powered by vector embeddings and LLMs
LLM-powered code documentation generation
Self-host an llmapi server, making it really easy to access LLMs!