CaptionBot: Sequence-to-sequence modelling where the encoder is a CNN (ResNet-50) and the decoder is an LSTMCell with a soft attention mechanism
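The architecture above (CNN encoder, LSTMCell decoder attending over image regions) hinges on the soft attention step. Below is a minimal sketch of additive soft attention in PyTorch; the class and dimension names are illustrative assumptions, not taken from the repository itself:

```python
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    """Additive (Bahdanau-style) soft attention over CNN region features.

    Note: module/parameter names are hypothetical, for illustration only.
    """
    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)    # project image regions
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)  # project decoder state
        self.score = nn.Linear(attn_dim, 1)               # scalar score per region

    def forward(self, feats, hidden):
        # feats: (batch, num_regions, feat_dim); hidden: (batch, hidden_dim)
        e = self.score(
            torch.tanh(self.feat_proj(feats) + self.hidden_proj(hidden).unsqueeze(1))
        ).squeeze(-1)                                     # (batch, num_regions)
        alpha = torch.softmax(e, dim=1)                   # attention weights sum to 1
        context = (alpha.unsqueeze(-1) * feats).sum(dim=1)  # weighted sum of regions
        return context, alpha
```

At each decoding step, the returned context vector would be concatenated with the word embedding and fed to the LSTMCell.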
Text-to-image generation (reverse image captioning): this task is simply the reverse of image captioning...
Sequence to Sequence from Scratch Using PyTorch
My solution to Kaggle challenge "IEEE Camera Model Identification" [top 3%]
The Incredible PyTorch: a curated list of tutorials, papers, projects, communities and more relating to PyTorch
Explainability for Vision Transformers
A PyTorch implementation of the Transformer model in "Attention is All You Need".
This repository contains demos I made with the Transformers library by HuggingFace.
PyTorch and TensorFlow/Keras image models with automatic weight conversions and equal API/implementations
A recurrent attention module consisting of an LSTM cell which can query its own past cell states ...
Transformer based on a variant of attention that has linear complexity with respect to sequence length
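Linear-complexity attention of this kind typically replaces the softmax with a kernel feature map so the key-value summary can be computed once in O(N). A minimal sketch using the phi(x) = elu(x) + 1 feature map (as in "Transformers are RNNs", Katharopoulos et al.); function and argument names are my own, not from the repository:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """O(N) attention via the kernel trick phi(x) = elu(x) + 1.

    q, k: (batch, seq, dim); v: (batch, seq, val_dim).
    Names are illustrative, not from the repository.
    """
    q = F.elu(q) + 1                              # positive feature map
    k = F.elu(k) + 1
    kv = torch.einsum('bnd,bne->bde', k, v)       # (dim, val_dim) summary, O(N)
    z = 1 / (torch.einsum('bnd,bd->bn', q, k.sum(dim=1)) + eps)  # normaliser
    return torch.einsum('bnd,bde,bn->bne', q, kv, z)
```

Because the (dim, val_dim) summary is independent of sequence length, memory and compute scale linearly in N rather than quadratically, and the output matches the equivalent quadratic kernel attention.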
Transformer: PyTorch Implementation of "Attention Is All You Need"
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Network-to-Network Translation with Conditional Invertible Neural Networks
Transformers are Graph Neural Networks!
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.