OpenAI-style, fast & lightweight local language model inference with documents
Explore large language models in 512MB of RAM
LLM inference benchmark
Plugin for LLM adding support for the GPT4All collection of models
CLI helper tool to look up commands based on a description
A lightweight library that leverages large language models (LLMs) to enable natural language interactio...
LLM plugin for running models using llama.cpp
Home of StarCoder: fine-tuning & inference!
100% Local AGI with LocalAI
Run any Large Language Model behind a unified API
Chat language model that can use tools and interpret the results
Embed arbitrary modalities (images, audio, documents, etc.) into large language models.
Running large language models on a single GPU for throughput-oriented scenarios.
MiniCPM-Llama3-V 2.5: A GPT-4V Level Multimodal LLM on Your Phone
Access large language models from the command-line