Explorations into the recently proposed Taylor Series Linear Attention
MIT License
Published by lucidrains 10 months ago