LLM Benchmark for Throughput via Ollama (Local LLMs)
Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
Local CLI Copilot, powered by CodeLLaMa. 💻🦙
Benchmark your local LLMs.
LLMCompiler: An LLM Compiler for Parallel Function Calling
A Ruby gem for interacting with Ollama's API that allows you to run open source AI LLMs (Large La...
Ollama provides a few embedding models. This plugin enables using those models through Ollama.
This LLM generates code based on tests and makes sure they pass.
WikiRag is a Retrieval-Augmented Generation (RAG) system designed for question answering; it redu...
Go manage your Ollama models
Ollamark! Benchmarking for Ollama. GUI and CLI client in one.
Easy "1-line" calling of all LLMs from OpenAI, MS Azure, AWS Bedrock, GCP Vertex, and Ollama
Go package and example utilities for using Ollama / LLMs
R library to run Ollama language models
A programming framework for knowledge management
Yet another operator for running large language models on Kubernetes with ease. Powered by Ollama! 🐫