Together Mixture-Of-Agents (MoA) – 65.1% on AlpacaEval with OSS models
LL3M: Large Language and Multi-Modal Model in Jax
OpenChat: Advancing Open-source Language Models with Imperfect Data
MiniCPM3-4B: An edge-side LLM that surpasses GPT-3.5-Turbo.
Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)
An open platform for training, serving, and evaluating large language models. Release repo for Vi...
Code Release of F-LMM: Grounding Frozen Large Multimodal Models
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
Code and documentation to train Stanford's Alpaca models, and generate the data.
Mixture-of-Experts for Large Vision-Language Models
A family of open-sourced Mixture-of-Experts (MoE) Large Language Models
Dromedary: towards helpful, ethical and reliable LLMs.
Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment
Official implementation for "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tune...
LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA a...