Opinionated GPT implementation and finetuning harness.
Apache-2.0 License
Video+code lecture on building nanoGPT from scratch
Chat with your documents on your local device using GPT models. No data leaves your device.
Pretrain, finetune, and deploy AI models on multiple GPUs and TPUs with zero code changes.
Python sftune, qmerge, and dpo scripts with Unsloth
4-bit quantization of LLaMA using GPTQ
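To make the 4-bit idea concrete, here is a minimal sketch of the round-to-nearest, per-group quantization baseline that GPTQ improves upon (GPTQ adds Hessian-aware error correction on top of a scheme like this). The function names and group size are illustrative, not from any of the listed repos.

```python
import numpy as np

def quantize_4bit(w, group_size=8):
    # Symmetric round-to-nearest 4-bit quantization with one scale per group.
    # Illustrative baseline only: GPTQ itself corrects quantization error
    # column-by-column using second-order (Hessian) information.
    w = w.reshape(-1, group_size)
    # signed 4-bit integers span [-8, 7]; map each group's max |w| to 7
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero groups
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    # Recover approximate float weights from the 4-bit codes and scales.
    return (q.astype(np.float32) * scale).reshape(-1)

w = np.random.randn(64).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
max_err = np.abs(w - w_hat).max()  # bounded by half a quantization step
```

Each group stores sixteen 4-bit codes plus one float scale, which is where the roughly 4x memory saving over fp16 weights comes from.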
TorchX is a universal job launcher for PyTorch applications.
Ongoing research training transformer language models at scale, including BERT & GPT-2
Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch.
Home of StarCoder: fine-tuning & inference!
PyTorch Lightning code guidelines for conferences
20+ high-performance LLM implementations with recipes to pretrain, finetune and deploy at scale.
High-performance model preprocessing library on PyTorch
CodeAssist is an advanced code completion tool that provides high-quality code completions.
The simplest, fastest repository for training/finetuning medium-sized GPTs.