Implementation of Slot Attention from GoogleAI
A Pytorch implementation of Attention on Attention module (both self and guided variants), for Vi...
Implementation of gMLP, an all-MLP replacement for Transformers, in Pytorch
Implementation of 'lightweight' GAN, proposed in ICLR 2021, in Pytorch. High resolution image gen...
Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attenti...
Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch
Implementation of Deformable Attention in Pytorch from the paper "Vision Transformer with Deforma...
Fast and memory-efficient exact attention
A simple cross attention that updates both the source and target in one step
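One way such a bidirectional cross attention can work is to build a single similarity matrix between source and target tokens and softmax it along each axis, so both sequences are updated in one step. A minimal single-head sketch under that assumption (function name and the absence of learned projections are illustrative, not the repo's actual API):

```python
import torch

def bidirectional_cross_attention(src, tgt):
    # src: (batch, n_src, dim), tgt: (batch, n_tgt, dim)
    # one shared similarity matrix, softmaxed along each axis,
    # so source and target attend to each other in a single step
    sim = torch.einsum('bid,bjd->bij', src, tgt) / src.shape[-1] ** 0.5
    attn_src = sim.softmax(dim=-1)                   # source rows attend over target
    attn_tgt = sim.transpose(1, 2).softmax(dim=-1)   # target rows attend over source
    src_out = torch.einsum('bij,bjd->bid', attn_src, tgt)
    tgt_out = torch.einsum('bji,bid->bjd', attn_tgt, src)
    return src_out, tgt_out
```

Both outputs are computed from the pre-update tensors, so neither sequence sees the other's updated state within the step.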
Implementation of Make-A-Video, new SOTA text to video generator from Meta AI, in Pytorch
An implementation of (Induced) Set Attention Block, from the Set Transformers paper
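The Induced Set Attention Block from Set Transformer reduces quadratic self-attention over a set to two cross attentions through a small number of learned inducing points: the inducing points first attend to the set to summarize it, then the set attends back through that summary. A stripped-down sketch (single head, no projections, residuals, or layer norm, which the paper's block does include):

```python
import torch

def attention(q, k, v):
    # plain scaled dot-product attention over the last dim
    sim = torch.einsum('bid,bjd->bij', q, k) / q.shape[-1] ** 0.5
    return torch.einsum('bij,bjd->bid', sim.softmax(dim=-1), v)

def isab(x, inducing):
    # x: (batch, n, dim) set elements; inducing: (m, dim) learned points
    i = inducing.unsqueeze(0).expand(x.shape[0], -1, -1)
    h = attention(i, x, x)   # m inducing points summarize the n-element set
    return attention(x, h, h)  # the set attends back through the summary
```

With m fixed, the cost is O(n·m) rather than O(n²) in the set size.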
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architectu...
An implementation of local windowed attention for language modeling
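The core idea of local windowed attention is that each token attends only to tokens within a fixed-size window, making the cost linear in sequence length. A minimal sketch using non-overlapping windows (the repo itself supports more, e.g. look-back into neighboring windows and causal masking; names here are illustrative):

```python
import torch

def local_window_attention(x, window_size):
    # x: (batch, seq_len, dim); seq_len must divide evenly into windows
    b, n, d = x.shape
    assert n % window_size == 0
    # split the sequence into non-overlapping windows
    xw = x.reshape(b, n // window_size, window_size, d)
    # full attention, but only within each window
    sim = torch.einsum('bwid,bwjd->bwij', xw, xw) / d ** 0.5
    out = torch.einsum('bwij,bwjd->bwid', sim.softmax(dim=-1), xw)
    return out.reshape(b, n, d)
```

Because attention is confined to windows of size w, the similarity matrix is (n/w) blocks of w×w instead of one n×n matrix.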
Implementation of OmniNet, Omnidirectional Representations from Transformers, in Pytorch
(Unofficial) Implementation of dilated attention from "LongNet: Scaling Transformers to 1,000,000...
Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Paramete...