This tool uses an LLM to generate code from tests, then verifies that those tests pass.
Published by JamesVorder 3 months ago
This release is the first functional version of the tool; it was used to generate a basic rock-paper-scissors game engine.
Full Changelog: https://github.com/JamesVorder/python-tddpp/commits/v0.0.1
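The workflow described above (tests in, passing code out) can be sketched as a small generate-and-verify loop. This is a hedged illustration only, not the python-tddpp API: `ask_llm` is a hypothetical stand-in for a real LLM call, and here it simply returns a hard-coded candidate so the loop is runnable.

```python
# Minimal sketch of a test-driven generation loop, assuming nothing
# about the real python-tddpp internals.
import textwrap

def ask_llm(feedback: str = "") -> str:
    # Placeholder: a real implementation would prompt an LLM with the
    # tests plus `feedback` (prior failures) and return candidate code.
    return textwrap.dedent("""
        BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

        def play(p1, p2):
            if p1 == p2:
                return "tie"
            return "player1" if BEATS[p1] == p2 else "player2"
    """)

def run_tests(namespace: dict) -> None:
    # The tests that drive generation: the loop only stops when these pass.
    play = namespace["play"]
    assert play("rock", "scissors") == "player1"
    assert play("paper", "paper") == "tie"
    assert play("scissors", "rock") == "player2"

def run_until_green(max_attempts: int = 3) -> bool:
    feedback = ""
    for _ in range(max_attempts):
        candidate = {}
        try:
            exec(ask_llm(feedback), candidate)  # load the generated code
            run_tests(candidate)                # verify against the tests
            return True                         # all tests pass
        except Exception as exc:
            feedback = repr(exc)                # feed the failure back in
    return False
```

In the real tool the feedback string would carry the failing-test output back into the next LLM prompt; here a single attempt already satisfies the tests.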
Experimental front-end client library for interacting with llama.cpp
LLM Benchmark for Throughput via Ollama (Local LLMs)
LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI.
Get up and running with Llama 3.2, Mistral, Gemma 2, and other large language models.
Self-hosted llmapi server that makes it easy to access LLMs!
Python library for the instruction and reliable validation of structured outputs (JSON) of Large ...
Query LLMs with Chain-of-Thought
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable fo...
Local-first semantic code search and chat powered by vector embeddings and LLMs
Running Llama 2 and other open-source LLMs locally on CPU for document Q&A
LLM-powered code documentation generation
Yet another `llama.cpp` Rust wrapper
The simplest way to run LLaMA on your local machine
World’s first and simplest AI-oriented programming language using Ollama.
A simple, intuitive toolkit for quickly implementing LLM powered applications.