A repository exploring the use of transformers to predict DNA ↔ transcription factor binding
Implementation of Graph Transformer in Pytorch, for potential use in replicating Alphafold2
Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch
Implementation of the Equiformer, SE3/E3 equivariant attention network that reaches new SOTA, and adopted for use by EquiFold for protein folding
Implementation of NÜWA, state of the art attention network for text to video synthesis, in Pytorch
Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch
Implementation and replication of ProGen, Language Modeling for Protein Generation, in Jax
To eventually become an unofficial Pytorch implementation / replication of Alphafold2, as details of the architecture get released
Implementation of RETRO, Deepmind's Retrieval-based Attention net, in Pytorch
Implementation of Phenaki Video, which uses Mask GIT to produce text guided videos of up to 2 minutes in length, in Pytorch
A simple but complete full-attention transformer with a set of promising experimental features from various papers
Skeletonize densely labeled 3D image segmentations with TEASAR. (Medial Axis Transform)
Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network
Implementation of Enformer, Deepmind's attention network for predicting gene expression, in Pytorch
DALL·E Mini - Generate images from a text prompt