Implementation of the Transformer architecture described by Vaswani et al. in "Attention Is All You Need"
MIT License
Probing the representations of Vision Transformers.
Keras implementation of ViT (Vision Transformer)
This repository provides a Colab Notebook that shows how to use Spatial Transformer Networks insi...
An implementation of "Fastformer: Additive Attention Can Be All You Need", a Transformer variant in...
Transformer-based models to flash-simulate the LHCb ECAL detector
A non-exhaustive collection of vision transformer models implemented in TensorFlow.
This repository presents a Python-based implementation of the Transformer architecture on Keras T...
A TensorFlow-compatible Python library that provides models and layers to implement custom Transf...
Attention block for a Keras Functional API model with a TensorFlow-only backend.
Keras Attention Layer (Luong and Bahdanau scores).
Collection of custom layers and utility functions for Keras which are missing in the main framework.
TensorFlow and Deep Learning Tutorials
Sentence reconstruction using a Transformer model.
Contains additional materials for two keras.io blog posts.
Keras implementation of the "Show, Attend and Tell" paper
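Several of the repositories above implement attention scoring (notably the Luong and Bahdanau variants). As a quick orientation, not the API of any listed project, here is a minimal NumPy sketch of the two scoring functions; the shapes, weight names (`W_k`, `W_q`, `v`), and random initialization are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4   # key/query dimension (illustrative)
T = 3   # number of encoder timesteps (illustrative)

keys = rng.standard_normal((T, d))    # stand-in for encoder hidden states
query = rng.standard_normal(d)        # stand-in for the decoder state

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

# Luong (multiplicative) score: s_t = k_t . q
luong_weights = softmax(keys @ query)

# Bahdanau (additive) score: s_t = v . tanh(W_k k_t + W_q q)
W_k = rng.standard_normal((d, d))     # hypothetical learned projections
W_q = rng.standard_normal((d, d))
v = rng.standard_normal(d)
bahdanau_weights = softmax(np.tanh(keys @ W_k.T + query @ W_q.T) @ v)

# Either weighting produces a context vector as a convex combination
# of the encoder states.
context = bahdanau_weights @ keys
```

In a real Keras layer the projections would be trainable weights and the scores would be computed batch-wise, but the distinction shown here is the essential one: Luong scoring is a dot product, Bahdanau scoring passes the combined states through a small feed-forward network.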