A low code Python package for crafting GenAI applications quickly
Apache-2.0 License
WikiRag is a Retrieval-Augmented Generation (RAG) system designed for question answering; it redu...
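The core RAG idea behind tools like this is to retrieve the most relevant passages for a question and prepend them to the prompt. A minimal sketch of that retrieval step, using plain bag-of-words cosine similarity instead of real vector embeddings (all names here are illustrative, not WikiRag's API):

```python
from collections import Counter
from math import sqrt


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by similarity to the question; a real system
    would use learned embeddings and a vector index instead."""
    q = Counter(question.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]


docs = [
    "Mount Everest is the highest mountain above sea level.",
    "The Nile is a major river in northeastern Africa.",
]
context = retrieve("Which mountain is the highest?", docs)[0]
# The retrieved passage is then stuffed into the LLM prompt as grounding context.
prompt = f"Answer using this context:\n{context}\n\nQuestion: Which mountain is the highest?"
```

The retrieved context is what lets the model answer from source material rather than from memory alone, which is how RAG reduces hallucination.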
Run any Llama 2 model locally with a Gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llam...
A command-line productivity tool powered by AI large language models like GPT-4 that will help you ac...
Local-first semantic code search and chat powered by vector embeddings and LLMs
Function-calling API for LLM from multiple providers
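Function-calling APIs generally work the same way across providers: the model emits a JSON payload naming a tool and its arguments, and the client dispatches it to a registered function. A minimal sketch of that dispatch loop (the registry, decorator, and `get_weather` stub are all hypothetical, not this package's actual interface):

```python
import json

# Hypothetical registry mapping tool names to Python callables.
TOOLS = {}


def tool(fn):
    """Register a function so the LLM can request it by name."""
    TOOLS[fn.__name__] = fn
    return fn


@tool
def get_weather(city: str) -> str:
    # Stub for illustration; a real tool would call a weather API.
    return f"Sunny in {city}"


def dispatch(model_output: str) -> str:
    """Parse an LLM function-call payload and invoke the matching tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])


# A provider-agnostic client would normalize each provider's payload
# into this common {"name": ..., "arguments": ...} shape before dispatching.
result = dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}')
```

Normalizing every provider's tool-call format into one shape like this is what makes a multi-provider function-calling layer possible.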
A terminal chatbot, powered by Groq Cloud API (Windows / macOS / Linux / Android / iOS)
TUI for Ollama
Desktop AI Assistant powered by GPT-4, GPT-4 Vision, GPT-3.5, Gemini, Claude, Llama 3, DALL-E, La...
Easy "1-line" calling of all LLMs from OpenAI, MS Azure, AWS Bedrock, GCP Vertex, and Ollama
AI chatbots in the terminal without needing API keys
Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench
simple, type-safe, isomorphic LLM interactions (with power)
Your Local AI Assistant: Executes commands, interprets code, integrates vision models, and conver...
Ruby Implementation of Nano Bots: small, AI-powered bots that can be easily shared as a single fi...
Agentic components of the Llama Stack APIs