This is the official repository for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions".
Zhengfeng Lai*, Haotian Zhang*, Bowen Zhang, Wentao Wu, Haoping Bai, Aleksei Timofeev, Xianzhi Du, Zhe Gan, Jiulong Shan, Chen-Nee Chuah, Yinfei Yang, Meng Cao [*: equal contribution]
```bash
git clone https://github.com/apple/ml-veclip
cd ml-veclip
conda create -n veclip python=3.9 -y
conda activate veclip
pip install -r requirements.txt
```
See the example notebook for details on how to load the different checkpoints with Hugging Face Transformers.
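As a rough sketch of what that looks like (the checkpoint directory below is a placeholder, not a released asset; follow the notebook for the exact loading steps):

```python
from transformers import CLIPModel, CLIPProcessor

# "path/to/veclip_checkpoint" is a placeholder local directory; see the
# example notebook for how the released checkpoints are actually loaded.
model = CLIPModel.from_pretrained("path/to/veclip_checkpoint")
processor = CLIPProcessor.from_pretrained("path/to/veclip_checkpoint")
```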
We split our 300M dataset into 10 JSON files; for each image we store its web link and our caption. Download them with:
```bash
wget -i vecap300m.txt -b -c
```
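Once the JSON files are downloaded, a minimal sketch of reading one shard and fetching its images might look like the following. The shard filename and the field names (`url`, `caption`) are illustrative assumptions; check the released JSON files for the actual names and schema.

```python
import json
import os
import urllib.request

os.makedirs("images", exist_ok=True)

# "vecap_shard_0.json" and the "url"/"caption" keys are placeholders assumed
# for illustration; inspect the downloaded JSON files for the real schema.
with open("vecap_shard_0.json") as f:
    records = json.load(f)

for i, record in enumerate(records[:10]):  # a few records as a demo
    urllib.request.urlretrieve(record["url"], os.path.join("images", f"{i}.jpg"))
    print(record["caption"])
```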
We release the checkpoints for VeCLIP trained from scratch on the visual-enriched captions VeCap 3M/12M/100M/200M/300M, as reported in the paper. The models are evaluated in a zero-shot fashion on COCO/Flickr30k image-text retrieval and ImageNet/ImageNetV2 classification. Use `wget` or `curl` to download the checkpoints below.
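For reference, zero-shot classification with a CLIP-style checkpoint scores an image against text prompts built from class names. A minimal sketch under that assumption (checkpoint path, image file, and class names below are placeholders):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder checkpoint path, image, and class names for illustration only.
model = CLIPModel.from_pretrained("path/to/veclip_checkpoint")
processor = CLIPProcessor.from_pretrained("path/to/veclip_checkpoint")

image = Image.open("example.jpg")
prompts = [f"a photo of a {name}" for name in ["dog", "cat", "car"]]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-text similarity scores gives per-prompt probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```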
We further find that VeCap is complementary to other well-established filtering methods, e.g., the Data Filtering Network (DFN). We also provide those checkpoints (referred to as VeCap-DFN) and report their performance below.
If you find VeCLIP useful, please cite using this BibTeX:
```bibtex
@misc{lai2024veclip,
      title={VeCLIP: Improving CLIP Training via Visual-enriched Captions},
      author={Zhengfeng Lai and Haotian Zhang and Bowen Zhang and Wentao Wu and Haoping Bai and Aleksei Timofeev and Xianzhi Du and Zhe Gan and Jiulong Shan and Chen-Nee Chuah and Yinfei Yang and Meng Cao},
      year={2024},
      eprint={2310.07699},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@article{fang2023data,
      title={Data Filtering Networks},
      author={Fang, Alex and Jose, Albin Madappally and Jain, Amit and Schmidt, Ludwig and Toshev, Alexander and Shankar, Vaishaal},
      journal={arXiv preprint arXiv:2309.17425},
      year={2023}
}
```