A server for vacancy generation using an LLM (Saiga3)
Published by AlekseyScorpi 5 months ago
LLaMA 7B with CUDA acceleration, implemented in Rust. Minimal GPU memory needed!
A fully contained, ready-to-run environment to fine-tune a Llama 3 model with a custom dataset and run...
A Rack API application for llama.cpp
A simple "Be My Eyes" web app with a llama.cpp/llava backend
🚀 This project aims to develop an app using an existing open-source LLM with data collected for d...
UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click...
This project creates a real-time conversational AI, either serverless via SvelteKit/Static or usi...
Ghudsavar (horse rider) is a quick llama.cpp server for CPU-only runtimes
Bootstrap a server from llama-cpp in a few lines of Python
Run any Large Language Model behind a unified API
✨ Fully autonomous AI Agent that can perform complicated tasks and projects using terminal, brows...
A chatbot built from a pretrained LLaMA-2 model, fine-tuned on medical research papers using RAG (Ret...
100% Private & Simple. OSS 🐍 Code Interpreter for LLMs 🦙
LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI.