MASS: Masked Sequence to Sequence Pre-training for Language Generation
JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Unified-Modal Speech-Text Pre-Training for Spoken Language Processing
Large Language-and-Vision Assistant for Biomedicine, built towards multimodal GPT-4 level capabilities.
CodeBERT
Machine learning for sequences.
Dedicated to building industrial foundation models for universal data intelligence across industries.
Grounded Language-Image Pre-training
This repository contains resources for accessing the official benchmarks, codes, and checkpoints ...
[CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language.
ProbTS is a benchmarking toolkit for time series forecasting.
This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning.
The implementation of DeBERTa
Large-scale pretraining for dialogue
An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks.