Unified-Modal Speech-Text Pre-Training for Spoken Language Processing
JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
MASS: Masked Sequence to Sequence Pre-training for Language Generation (see the span-masking sketch after this list)
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" (see the low-rank update sketch after this list)
This repository contains resources for accessing the official benchmarks, code, and checkpoints ...
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Grounded Language-Image Pre-training
AI-powered ab initio biomolecular dynamics simulation
The implementation of DeBERTa
NOTSOFAR-1 Challenge: Distant Diarization and ASR
Foundation Architecture for (M)LLMs
Large-scale pretraining for dialogue
An efficient implementation of the popular sequence models for text generation, summarization, an...
Large Language-and-Vision Assistant for Biomedicine, built towards multimodal GPT-4 level capabil...
CodeBERT
[CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and la...
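
For readers unfamiliar with the MASS objective, here is a minimal sketch of its span-masking data preparation under the assumption of a simple token-list interface; the helper name `mass_mask` and the mask token are illustrative, not the repo's actual API:

```python
import random

def mass_mask(tokens, mask_token="[MASK]"):
    """Sketch of MASS-style span masking (hypothetical helper, not the repo's API).

    A contiguous span (roughly half the sentence, as in the paper) is masked
    on the encoder side; the decoder is trained to reconstruct exactly that
    span, conditioned on the unmasked context.
    """
    k = max(1, len(tokens) // 2)                  # span length: about half the sentence
    start = random.randrange(len(tokens) - k + 1)  # random span start
    enc_input = tokens[:start] + [mask_token] * k + tokens[start + k:]
    target = tokens[start:start + k]               # decoder predicts the masked span
    dec_input = [mask_token] + target[:-1]         # span shifted right, standard seq2seq style
    return enc_input, dec_input, target

enc, dec, tgt = mass_mask("the quick brown fox jumps over".split())
```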
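Likewise, a minimal sketch of the low-rank update behind loralib, assuming PyTorch; the class `LoRALinear` and its parameter names are hypothetical and do not reflect loralib's actual interface:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch of a LoRA-adapted linear layer (illustrative, not loralib's API).

    The frozen pretrained weight W is augmented with a trainable low-rank
    update B @ A, scaled by alpha / r: h = W x + (alpha / r) * B A x.
    """
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: update starts at zero
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(768, 768)
out = layer(torch.randn(2, 768))  # only lora_A and lora_B receive gradients
```

Because the update B @ A starts at zero, fine-tuning begins from exactly the pretrained model, and only the two small factor matrices are trained.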