Tinyllamas🦙 is an extensible, advanced language model framework, inspired by the original Llama model.
Apache-2.0 License
Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
AirLLM 70B inference with single 4GB GPU
TextGen: Implementation of text generation models, including LLaMA, BLOOM, GPT2, BART, T5, SongNet ...
[ICLR 2024] Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language M...
Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model —— a low-resource Chinese llama+lora approach, with a structure modeled on alpaca
Inference code for Llama models
KoAlpaca: An open-source language model that understands Korean instructions
[ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
Utilities intended for use with Llama models.
LLaVA-NeXT-Image-Llama3-Lora, Modified from https://github.com/arielnlee/LLaVA-1.6-ft
Finetune llama2-70b and codellama on MacBook Air without quantization
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and b...
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable fo...
Easy and efficient finetuning of LLMs. (Supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Fal...
Llama Chinese community. Llama3 online demos and fine-tuned models are now available, with a continuously updated roundup of the latest Llama3 learning resources. All code has been updated for Llama3. Building the best Chinese Llama large model, fully open source and commercially usable.