Implementation of TransGanFormer, an all-attention GAN that combines the findings from the recent GanFormer and TransGan papers
Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
Implementation of SoundStorm, Efficient Parallel Audio Generation from Google DeepMind, in Pytorch
Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch
Implementation of GigaGAN, new SOTA GAN out of Adobe. Culmination of nearly a decade of research ...
Implementation of Deformable Attention in Pytorch from the paper "Vision Transformer with Deforma...
Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time"
Implementation of Agent Attention in Pytorch
An implementation of Performer, a linear attention-based transformer, in Pytorch
Implementation of Block Recurrent Transformer - Pytorch
A Transformer made of Rotation-equivariant Attention using Vector Neurons
Implementation of Q-Transformer, Scalable Offline Reinforcement Learning via Autoregressive Q-Fun...
Generative Adversarial Transformers
Implementation of SE3-Transformers for Equivariant Self-Attention, in Pytorch. This specific repo...
An implementation of local windowed attention for language modeling
Unofficial implementation of iTransformer - SOTA Time Series Forecasting using Attention networks...
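As a toy illustration of the local windowed attention mentioned in the list above: each query position attends only to a fixed-size window of recent positions instead of the full sequence. The sketch below is a plain-Python assumption of how causal windowed attention works in principle, not code from any of the repositories listed; the function name `local_attention` and the `window` parameter are hypothetical.

```python
import math

def local_attention(q, k, v, window=2):
    # q, k, v: lists of equal-length float vectors, one per sequence position.
    # Causal local attention: position i attends to at most `window` positions,
    # namely itself and the (window - 1) positions immediately before it.
    n = len(q)
    dim = len(q[0])
    out = []
    for i in range(n):
        lo = max(0, i - window + 1)  # left edge of the attention window
        # scaled dot-product scores against keys inside the window
        scores = [
            sum(a * b for a, b in zip(q[i], k[j])) / math.sqrt(dim)
            for j in range(lo, i + 1)
        ]
        # numerically stable softmax over the window
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # weighted sum of the values inside the window
        ctx = [
            sum(w * v[lo + j][d] for j, w in enumerate(weights))
            for d in range(dim)
        ]
        out.append(ctx)
    return out
```

With `window=1` each position attends only to itself, so the output equals the values; larger windows blend in nearby context while keeping cost linear in sequence length, which is the point of windowed attention for language modeling.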