Fault-tolerant, highly scalable GPU orchestration and a machine learning framework designed for training models with billions to trillions of parameters
Apache-2.0 License
Inference code for Llama models
AirLLM 70B inference with single 4GB GPU
A high-performance inference system for large language models, designed for production environments.
LLM Finetuning with peft
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.g...
Utilities intended for use with Llama models.
KoAlpaca: an open-source language model that understands Korean instructions
🦖 X—LLM: Cutting Edge & Easy LLM Finetuning
🚀 Easy, open-source LLM finetuning with one-line commands, seamless cloud integration, and popula...
Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provi...
The official Meta Llama 3 GitHub site
Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports...
Easy and efficient finetuning of LLMs. (Supported: Llama, Llama2, Llama3, Qwen, Baichuan, GLM, Fal...
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover single/multi-nod...
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable fo...
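Several of the entries above (peft, X—LLM, xTuring, LLaMA-Factory-style tools) revolve around parameter-efficient finetuning. The core idea behind LoRA-style adapters, which PEFT popularized, can be sketched in plain NumPy: the frozen pretrained weight `W` is augmented with a trainable low-rank product `B @ A`, so only a small fraction of parameters are updated. This is an illustrative sketch of the technique, not code from any of the listed repositories; all names (`lora_forward`, dimensions, scale) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4  # layer size and adapter rank (r << d)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x, scale=1.0):
    # Frozen path plus low-rank adapter path: (W + scale * B @ A) @ x,
    # computed without ever materializing the full d_out x d_in update.
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted layer matches the base layer exactly,
# so finetuning starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)
```

Only `A` and `B` would receive gradients during finetuning, which is why LoRA trains r·(d_in + d_out) parameters per layer instead of d_in·d_out.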