Simply, faster, sentence-transformers
Fine-tuning experiments for the GPT-2 model by OpenAI.
Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer ...
Implementation of Fast Transformer in PyTorch
Easy-to-use text representations extraction library based on the Transformers library.
Transformer models implementation for training from scratch.
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating poin...
Home of StarCoder: fine-tuning & inference!
Home of StarCoder2!
LLM inference benchmark
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed conf...
Transformer with Untied Positional Encoding (TUPE). Code of paper "Rethinking Positional Encoding...
Naively combining transformers and Kolmogorov-Arnold Networks to learn and experiment
Train fastai models faster (and other useful tools)