To eventually become an unofficial Pytorch implementation / replication of Alphafold2, as details of the architecture get released
DALL·E Mini - Generate images from a text prompt
Gibbs sampling for generating protein sequences
A repository with exploration into using transformers to predict DNA ↔ transcription factor binding
Replication attempt for the Protein Folding Model described in https://www.biorxiv.org/content/10...
Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch
Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-E...
Skeletonize densely labeled 3D image segmentations with TEASAR. (Medial Axis Transform)
Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
Implementation of Enformer, Deepmind's attention network for predicting gene expression, in Pytorch
Implementation of SE3-Transformers for Equivariant Self-Attention, in Pytorch. This specific repo...
A simple but complete full-attention transformer with a set of promising experimental features fr...
Implementation of Phenaki Video, which uses Mask GIT to produce text guided videos of up to 2 min...
Implementation of the Equiformer, SE3/E3 equivariant attention network that reaches new SOTA, and...
Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Re...
Implementation of Graph Transformer in Pytorch, for potential use in replicating Alphafold2
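One entry above mentions Gibbs sampling for generating protein sequences. As a rough illustration of the idea (not that repository's actual method), the sketch below resamples one residue at a time from a conditional distribution over the 20 amino acids; the `toy_conditional` scoring function here is a hypothetical stand-in for a learned model's conditionals.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_conditional(seq, pos):
    # Hypothetical stand-in for a learned model's conditional
    # p(residue at pos | rest of sequence). Here we simply favor
    # residues that match a neighbor, otherwise stay uniform.
    neighbors = [seq[i] for i in (pos - 1, pos + 1) if 0 <= i < len(seq)]
    weights = [3.0 if aa in neighbors else 1.0 for aa in AMINO_ACIDS]
    total = sum(weights)
    return [w / total for w in weights]

def gibbs_sample(length=12, sweeps=50, seed=0):
    rng = random.Random(seed)
    # start from a random sequence
    seq = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    for _ in range(sweeps):
        for pos in range(length):
            # resample each position conditioned on all the others
            probs = toy_conditional(seq, pos)
            seq[pos] = rng.choices(AMINO_ACIDS, weights=probs, k=1)[0]
    return "".join(seq)

print(gibbs_sample())
```

In practice the conditional would come from a masked protein language model (mask a position, take the model's predicted distribution there), but the sweep structure is the same.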