Explorations into the recently proposed Taylor Series Linear Attention
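A minimal sketch of the core idea (not the repo's actual implementation; names are illustrative): the softmax kernel `exp(q·k)` is replaced by its second-order Taylor expansion `1 + q·k + (q·k)²/2`, which factors as a dot product of feature maps `φ(q)·φ(k)` with `φ(x) = [1, x, vec(x xᵀ)/√2]`. That factorization lets attention be computed in time linear in sequence length, since key/value statistics can be accumulated once:

```python
import torch

def taylor_feature_map(x):
    # phi(x) = [1, x, vec(x x^T)/sqrt(2)], so that
    # phi(q) . phi(k) = 1 + q.k + (q.k)^2 / 2  (2nd-order Taylor series of exp)
    outer = torch.einsum('...i,...j->...ij', x, x).flatten(-2) / (2 ** 0.5)
    ones = torch.ones(*x.shape[:-1], 1, dtype=x.dtype)
    return torch.cat((ones, x, outer), dim=-1)

def taylor_linear_attention(q, k, v):
    # q, k: (n, d); v: (n, e). Accumulate sum of phi(k_t) v_t^T and sum of
    # phi(k_t) once, then each query is O(1) in sequence length.
    fq, fk = taylor_feature_map(q), taylor_feature_map(k)
    kv = torch.einsum('nd,ne->de', fk, v)   # running key-value summary
    z = fk.sum(dim=0)                       # normalizer terms
    return torch.einsum('nd,de->ne', fq, kv) / (fq @ z).unsqueeze(-1)
```

Note the Taylor similarity `1 + s + s²/2` is always positive, so the normalization is well defined; the actual repo adds the usual head/batch dimensions and efficiency tricks on top of this.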
An implementation of Performer, a linear attention-based transformer, in Pytorch
Implementation of Voicebox, a new SOTA text-to-speech network from Meta AI, in Pytorch
Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Re...
Implementation of Q-Transformer, Scalable Offline Reinforcement Learning via Autoregressive Q-Fun...
Implementation of the Equiformer, SE3/E3 equivariant attention network that reaches new SOTA, and...
Pytorch implementation of Compressive Transformers, from Deepmind
Unofficial implementation of iTransformer - SOTA Time Series Forecasting using Attention networks...
Implementation of SoundStorm, Efficient Parallel Audio Generation from Google Deepmind, in Pytorch
Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and ...
An implementation of local windowed attention for language modeling
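As a hedged sketch of the idea behind windowed attention (illustrative only, not the repo's API): each position attends only to a fixed-size window of preceding positions, which is easily expressed as a banded causal mask applied before the softmax:

```python
import torch

def local_attention_mask(seq_len, window_size):
    # True where attention is allowed: position i may attend to
    # positions j with i - window_size < j <= i (causal, banded)
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (j > i - window_size)

def local_attention(q, k, v, window_size):
    # q, k: (n, d); v: (n, e). Masked positions get -inf before softmax.
    sim = q @ k.T / q.shape[-1] ** 0.5
    mask = local_attention_mask(q.shape[0], window_size)
    sim = sim.masked_fill(~mask, float('-inf'))
    return sim.softmax(dim=-1) @ v
```

This dense-mask version is O(n²) in memory; the repo's point is to compute only the in-window scores by bucketing the sequence, which brings the cost down to O(n · window_size).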
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architectu...
Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-E...
Implementation of Recurrent Memory Transformer, Neurips 2022 paper, in Pytorch
Implementation of NÜWA, state-of-the-art attention network for text-to-video synthesis, in Pytorch
Implementation of MeshGPT, SOTA Mesh generation using Attention, in Pytorch