An open-source implementation for fine-tuning the Llama3.2-Vision series by Meta.
Apache-2.0 License
An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qw...
Multimodal-GPT
Training the LLaMA language model with MMEngine! It supports LoRA fine-tuning!
LLaVA-NeXT-Image-Llama3-Lora, Modified from https://github.com/arielnlee/LLaVA-1.6-ft
LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1
Simple UI for LLM Model Finetuning
Finetune llama2-70b and codellama on MacBook Air without quantization
An Efficient "Factory" to Build Multiple LoRA Adapters
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable fo...
[CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and b...
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Underst...
Easy and efficient fine-tuning of LLMs. (Supported: LLama, LLama2, LLama3, Qwen, Baichuan, GLM, Fal...
EAGLE: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders
Multimodal Instruction Tuning for Llama 3
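Several of the projects above center on LoRA fine-tuning. The core idea, freezing the pretrained weight matrix and learning only a low-rank additive update, can be sketched in a few lines of NumPy. The dimensions, init scheme, and `alpha` scaling below are illustrative assumptions (following the common LoRA convention), not taken from any specific repository listed here:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 8  # illustrative sizes; real models use dims in the thousands
alpha = 16                     # LoRA scaling hyperparameter (assumed value)

W = rng.standard_normal((d_in, d_out))         # frozen pretrained weight
A = rng.standard_normal((rank, d_out)) * 0.01  # trainable down-projection, small random init
B = np.zeros((d_in, rank))                     # trainable up-projection, zero init

def lora_forward(x):
    # Base path plus low-rank update, scaled by alpha / rank.
    # Only A and B (d_in*rank + rank*d_out params) would be trained.
    return x @ W + (x @ B @ A) * (alpha / rank)

x = rng.standard_normal((4, d_in))
# With B zero-initialized, the adapter is a no-op: output equals the frozen model.
assert np.allclose(lora_forward(x), x @ W)
```

The zero init of `B` is what makes the adapter start as an identity perturbation, so fine-tuning begins exactly from the pretrained model's behavior; the toolkits above wrap this same pattern around attention and MLP projections inside the transformer.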