🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
Learn about the Neuromorphic engineering process of creating very-large-scale integration (VLSI) systems...
The simplest, fastest repository for training/finetuning medium-sized GPTs.
A flexible package for multimodal-deep-learning to combine tabular data with text and images usin...
Unsupervised Language Modeling at scale for robust sentiment classification
In this repository, I will share some useful notes and references about deploying deep learning-b...
The Incredible PyTorch: a curated list of tutorials, papers, projects, communities and more relat...
Ongoing research training transformer language models at scale, including: BERT & GPT-2
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating poin...
The fastai deep learning library
The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video ...
Minimalistic large language model 3D-parallelism training
Train fastai models faster (and other useful tools)
End-to-end recipes for optimizing diffusion models with torchao and diffusers (inference and FP8 ...
High-level library to help with training and evaluating neural networks in PyTorch flexibly and t...
The PyTorch Implementation based on YOLOv4 of the paper: "Complex-YOLO: Real-time 3D Object Detec...