GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest
Shilong Zhang*, Peize Sun*, Shoufa Chen*, Min Xiao, Wenqi Shao, Wenwei Zhang, Kai Chen, Ping Luo (*Equal Contribution)
Clone the GPT4RoI repository and set up the environment:
git clone https://github.com/jshilong/gpt4roi.git
cd gpt4roi
conda create -n gpt4roi python=3.10 -y
conda activate gpt4roi
pip install --upgrade pip # enable PEP 660 support
pip install setuptools_scm
pip install --no-cache-dir -e .
# please re-install torch via conda; pip may miss some runtime libs
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
Install the flash-attn package:
pip install ninja
pip install flash-attn --no-build-isolation
Install the mmcv-1.4.7 package. Make sure the output of nvcc -V is consistent with the cudatoolkit version reported by python -c "import torch;print(torch.version.cuda)".
cd mmcv-1.4.7
MMCV_WITH_OPS=1 pip install -e .
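The version-consistency check above can be scripted. The helper below is a hypothetical sketch (not part of the repo) that compares the release version from the nvcc banner against torch.version.cuda as plain strings, so it runs without CUDA or PyTorch installed:

```python
import re

def cuda_versions_match(nvcc_output: str, torch_cuda: str) -> bool:
    """Compare the release version in `nvcc -V` output with torch.version.cuda.

    Only major.minor are compared, since patch levels routinely differ.
    """
    m = re.search(r"release\s+(\d+)\.(\d+)", nvcc_output)
    if not m:
        raise ValueError("could not find a release version in nvcc output")
    t = re.match(r"(\d+)\.(\d+)", torch_cuda)
    if not t:
        raise ValueError("unexpected torch.version.cuda format")
    return m.groups() == t.groups()

# The nvcc banner ends with a line like:
# "Cuda compilation tools, release 11.7, V11.7.64"
print(cuda_versions_match("Cuda compilation tools, release 11.7, V11.7.64", "11.7"))  # True
```

If the two versions disagree, rebuilding mmcv against the matching toolkit is usually the fix.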
Our dataset includes RefCOCO, RefCOCO+, RefCOCOg, Visual Genome, Flickr30K entities, and the VCR dataset. We are sincerely grateful to the creators of these datasets, especially the VCR dataset, for their forward-thinking in creating them.
The dataset section of this repository may appear somewhat messy, especially the VCR part (still being finalized), which may make GPT4RoI less user-friendly. We are currently working on formulating the datasets into a unified format and will accompany them with stronger models. Please stay tuned for updates.
You can download each dataset from its official website and organize it as follows. Afterwards, modify the gpt4roi/configs/dataset_config.json file to select the specific datasets you want to use:
GPT4RoI
├── data
│   ├── coco_det
│   │   ├── annotations
│   │   │   └── instances_train2017.json
│   │   └── train2017/
│   ├── mdetr_annotations
│   │   ├── finetune_refcoco_train.json
│   │   ├── finetune_refcoco+_train.json
│   │   ├── finetune_refcocog_train.json
│   │   └── final_flickr_mergedGT_train.json
│   ├── coco_imgs/
│   ├── flickr30k-images/
│   ├── visual_genome
│   │   ├── train.json
│   │   └── vg_all/
│   ├── llava
│   │   ├── llava_instruct_150k.json
│   │   └── llava_150k_bbox_pred_results.pkl
│   └── vcr
│       ├── train.jsonl
│       └── vcr1images/
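With this many downloads it is easy to miss a folder. The snippet below is an unofficial sketch that reports which of the expected paths are absent under a data root; the path list mirrors the layout above:

```python
from pathlib import Path

# Expected entries from the layout above, relative to the data/ root.
EXPECTED = [
    "coco_det/annotations/instances_train2017.json",
    "coco_det/train2017",
    "mdetr_annotations/finetune_refcoco_train.json",
    "mdetr_annotations/finetune_refcoco+_train.json",
    "mdetr_annotations/finetune_refcocog_train.json",
    "mdetr_annotations/final_flickr_mergedGT_train.json",
    "coco_imgs",
    "flickr30k-images",
    "visual_genome/train.json",
    "visual_genome/vg_all",
    "llava/llava_instruct_150k.json",
    "llava/llava_150k_bbox_pred_results.pkl",
    "vcr/train.jsonl",
    "vcr/vcr1images",
]

def missing_paths(data_root: str) -> list[str]:
    """Return the expected dataset paths that do not exist under data_root."""
    root = Path(data_root)
    return [p for p in EXPECTED if not (root / p).exists()]

if __name__ == "__main__":
    for p in missing_paths("data"):
        print("missing:", p)
```

Datasets you have disabled in dataset_config.json can of course be ignored in the report.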
Due to the licensing restrictions of LLaMA, the GPT4RoI-7B delta weights are produced from LLaMA-7B. To acquire the GPT4RoI weights, you need to combine our delta with the original LLaMA weights.
The original LLaMA weights are available for download. Use the following commands:
git lfs install
git clone https://huggingface.co/decapoda-research/llama-7b-hf ./llama-7b
Alternatively, access the webpage to download the file.
The delta weights for GPT4RoI-7B can be downloaded using the following commands:
git lfs install
git clone https://huggingface.co/shilongz/GPT4RoI-7B-delta-V0 ./GPT4RoI-7B-delta
You can also directly download the file from this webpage.
Apply the delta weights to the original LLaMA-7B weights. Note that this conversion command requires approximately 30 GB of CPU RAM.
export PYTHONPATH=`pwd`:$PYTHONPATH
python3 -m scripts.apply_delta \
--base ./llama-7b \
--target ./GPT4RoI-7B \
--delta ./GPT4RoI-7B-delta
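Conceptually, a delta release stores delta = target − base, so applying the delta recovers the target by element-wise addition. The toy sketch below illustrates the idea on plain Python lists; the real scripts.apply_delta operates on the LLaMA checkpoint tensors:

```python
def make_delta(base: list[float], target: list[float]) -> list[float]:
    """What the release process computes: delta[i] = target[i] - base[i]."""
    return [t - b for t, b in zip(target, base)]

def apply_delta(base: list[float], delta: list[float]) -> list[float]:
    """Recover the target weights: target[i] = base[i] + delta[i]."""
    assert len(base) == len(delta), "weight shapes must match"
    return [b + d for b, d in zip(base, delta)]

# Toy example with a single "layer" of three weights.
base = [0.1, -0.2, 0.3]       # stands in for LLaMA-7B weights
target = [0.15, -0.25, 0.4]   # stands in for GPT4RoI-7B weights
delta = make_delta(base, target)
recovered = apply_delta(base, delta)
print(recovered)  # close to [0.15, -0.25, 0.4], up to float rounding
```

The 30 GB RAM requirement comes from holding both the base and delta checkpoints in memory while adding them.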
GPT4RoI is trained on 8 A100 GPUs with the following code.
Vicuna-v0, an instruction-tuned chatbot, is the base model for this setup. In order to prepare it, first download the delta weights available here. To obtain the original weights, follow the instructions provided here to integrate these delta weights into LLaMA-7B.
Ensure to download the following projector weight file: LLaVA-7b-pretrain-projector-v0-CC3M-595K-original_caption.bin.
Additionally, you have the flexibility to choose a different version of Vicuna (such as the 13B version or a LLaMA v2 chatbot) and the corresponding projector weights from LLaVA to meet your specific requirements.
exp/stage1 is the work directory.
bash train_stage1.sh exp/stage1
# Resume training in stage1
# bash train_stage1.sh exp/stage1
exp/stage2 is the work directory, and you should also pass the stage1 work directory so the corresponding weights can be loaded as the pretrained model.
# At the beginning of stage2
bash train_stage2.sh exp/stage2 exp/stage1
# Resume training in stage2
# bash train_stage2.sh exp/stage2
Please install Gradio Box first.
python gpt4roi/app.py
Prompt format in GPT4RoI: use <region1>, <region2>... to refer to new bounding boxes in the image when you first draw them. Afterwards, you can use the plain region 1 in the conversation to refer to that instance. Click the clear all button and wait for the clear process to finish before you start a new conversation.

If you find GPT4RoI useful for your research and applications, please cite using this BibTeX:
@article{zhang2023gpt4roi,
title={Gpt4roi: Instruction tuning large language model on region-of-interest},
author={Zhang, Shilong and Sun, Peize and Chen, Shoufa and Xiao, Min and Shao, Wenqi and Zhang, Wenwei and Chen, Kai and Luo, Ping},
journal={arXiv preprint arXiv:2307.03601},
year={2023}
}