mmdetection

OpenMMLab Detection Toolbox and Benchmark

Apache-2.0 License

Downloads: 181.3K · Stars: 28K · Committers: 450

MMDetection v3.3.0 Release (Latest)

Published by hhaAndroid 10 months ago

MM Grounding DINO

An Open and Comprehensive Pipeline for Unified Object Grounding and Detection

Grounding-DINO is a state-of-the-art open-set detection model that tackles multiple vision tasks, including Open-Vocabulary Detection (OVD), Phrase Grounding (PG), and Referring Expression Comprehension (REC). Its effectiveness has led to its widespread adoption as a mainstream architecture for various downstream applications. However, despite its significance, the original Grounding-DINO model lacks comprehensive public technical details because its training code has not been released. To bridge this gap, we present MM-Grounding-DINO: an open-source, comprehensive, and user-friendly baseline built with the MMDetection toolbox. It adopts abundant vision datasets for pre-training and various detection and grounding datasets for fine-tuning. We give a comprehensive analysis of each reported result and detailed settings for reproduction. Extensive experiments on the benchmarks mentioned demonstrate that our MM-Grounding-DINO-Tiny outperforms the Grounding-DINO-Tiny baseline. We release all our models to the research community.

Details: https://github.com/open-mmlab/mmdetection/tree/main/configs/mm_grounding_dino
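
For quick experimentation, the same models can be driven from Python via DetInferencer, which the demo/image_demo.py script wraps. A minimal sketch, assuming an MMDetection 3.3.0 environment whose DetInferencer accepts the texts argument used by the demo script; the config path and weight file below are placeholders for whichever MM Grounding DINO variant you download from the model zoo:

from mmdet.apis import DetInferencer

# Placeholder config/weights; substitute a real pair from
# configs/mm_grounding_dino and its model zoo download link.
inferencer = DetInferencer(
    model='configs/mm_grounding_dino/grounding_dino_swin-t_pretrain_obj365.py',
    weights='mm_grounding_dino_swin-t.pth')

# Categories are separated by ' . ', matching the --texts CLI convention.
results = inferencer('demo/demo.jpg', texts='bench . car .', out_dir='outputs/')
print(results['predictions'][0]['labels'][:5])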

MMDetection v3.2.0 Release

Published by hhaAndroid about 1 year ago

Highlight

v3.2.0 was released on October 12, 2023:

1. Detection Transformer SOTA Model Collection
(1) Supported four updated and stronger SOTA Transformer models: DDQ, CO-DETR, AlignDETR, and H-DINO.
(2) Based on CO-DETR, MMDet released a model with a COCO performance of 64.1 mAP.
(3) Algorithms such as DINO support AMP/Checkpoint/FrozenBN, which can effectively reduce memory usage (a config sketch follows below).
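
As a rough illustration of how these switches appear in a config (a minimal sketch; the base file name and exact keys are assumptions, so check the shipped DINO configs for authoritative settings):

# Memory-saving overrides for a DINO-style config; illustrative only.
_base_ = ['./dino-4scale_r50_8xb2-12e_coco.py']

model = dict(
    backbone=dict(
        with_cp=True,  # activation (gradient) checkpointing in the backbone
        norm_cfg=dict(type='BN', requires_grad=False),  # FrozenBN-style frozen stats
        norm_eval=True))

# AMP: swap in the AMP optimizer wrapper (passing --amp to tools/train.py has
# the same effect).
optim_wrapper = dict(type='AmpOptimWrapper')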

2. Comprehensive Performance Comparison between CNN and Transformer
RF100 is a collection of 100 real-world datasets spanning 7 domains. It can be used to assess the performance differences between Transformer models such as DINO and CNN-based algorithms across different scenarios and data volumes. Users can use this benchmark to quickly evaluate the robustness of their algorithms across scenarios.

3. Support for GLIP and Grounding DINO fine-tuning; MMDet is the only algorithm library that supports Grounding DINO fine-tuning
The Grounding DINO implementation in MMDet is the only one that supports fine-tuning, and its fine-tuned performance is about one mAP point higher than the official version; GLIP likewise outperforms the official results.
We also provide a detailed walkthrough for training and evaluating Grounding DINO on custom datasets; a config sketch follows the table below. Everyone is welcome to give it a try.

Model               Backbone  Style      COCO mAP     Official COCO mAP
Grounding DINO-T    Swin-T    Zero-shot  48.5         48.4
Grounding DINO-T    Swin-T    Finetune   58.1 (+0.9)  57.2
Grounding DINO-B    Swin-B    Zero-shot  56.9         56.7
Grounding DINO-B    Swin-B    Finetune   59.7         -
Grounding DINO-R50  R50       Scratch    48.9 (+0.8)  48.1
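
As a companion to the walkthrough mentioned above, here is a minimal sketch of a custom-dataset fine-tuning config, assuming a COCO-format dataset; the base config name, paths, and class names below are placeholders:

_base_ = ['./grounding_dino_swin-t_finetune_16xb2_1x_coco.py']  # illustrative base

data_root = 'data/my_dataset/'      # hypothetical dataset layout
class_name = ('cat', 'dog')         # your categories
metainfo = dict(classes=class_name)

train_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file='annotations/train.json',
        data_prefix=dict(img='images/')))

val_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file='annotations/val.json',
        data_prefix=dict(img='images/')))

val_evaluator = dict(ann_file=data_root + 'annotations/val.json')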

4. Support for the open-vocabulary detection algorithm Detic and multi-dataset joint training.
5. Training detection models using FSDP and DeepSpeed; see the benchmark table below and the strategy sketch that follows it.

ID  AMP  GC of Backbone  GC of Encoder  FSDP  Peak Mem (GB)  Iter Time (s)
1   -    -               -              -     49 (A100)      0.9
2   ✓    -               -              -     39 (A100)      1.2
3   -    ✓               -              -     33 (A100)      1.1
4   ✓    ✓               -              -     25 (A100)      1.3
5   -    ✓               ✓              -     18             2.2
6   ✓    ✓               ✓              -     13             1.6
7   -    ✓               ✓              ✓     14             2.9
8   ✓    ✓               ✓              ✓     8.5            2.4
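
One way to select these strategies is through MMEngine's FlexibleRunner. A minimal sketch, assuming MMEngine >= 0.8; the options shown are illustrative, not necessarily the exact setup used for the table:

# Choose FSDP (or DeepSpeed) via the runner strategy in the config.
runner_type = 'FlexibleRunner'
strategy = dict(type='FSDPStrategy')

# Alternatively, DeepSpeed ZeRO (use one strategy per run):
# strategy = dict(type='DeepSpeedStrategy', zero_optimization=dict(stage=3))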

6. Support for the V3Det dataset, a large-scale detection dataset with over 13,000 categories.

MMDetection v3.1.0 Release

Published by hhaAndroid over 1 year ago

Highlights

  • Supports tracking algorithms, including the multi-object tracking (MOT) algorithms SORT, DeepSORT, StrongSORT, OCSORT, ByteTrack, and QDTrack, and the video instance segmentation (VIS) algorithms MaskTrackRCNN and Mask2Former-VIS.
  • Supports ViTDet.
  • Supports inference and evaluation of the multimodal algorithms GLIP and XDecoder, as well as datasets such as COCO semantic segmentation, COCO Caption, ADE20k general segmentation, and RefCOCO. GLIP fine-tuning will be supported in the future.
  • Provides a Gradio demo for the image-input tasks of MMDetection, making it easy for users to try them out.

Exciting Features

GLIP inference and evaluation

As multimodal vision algorithms continue to evolve, MMDetection has added support for them. This section demonstrates how to use the demo and evaluation scripts of multimodal algorithms, using the GLIP algorithm and model as an example. Moreover, MMDetection integrates a gradio_demo project, which allows developers to quickly try all image-input tasks in MMDetection on their local devices. Check the document for more details.

Preparation

Please first make sure that you have the correct dependencies installed:

# if source
pip install -r requirements/multimodal.txt

# if wheel
mim install mmdet[multimodal]

MMDetection has already implemented GLIP and provides the pre-trained weights; you can download them directly:

cd mmdetection
wget https://download.openmmlab.com/mmdetection/v3.0/glip/glip_tiny_a_mmdet-b3654169.pth

Inference

Once the model is successfully downloaded, you can use the demo/image_demo.py script to run inference.

python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts bench

The demo result will be similar to this:

If you would like to detect multiple targets, declare them in the format xx . xx . after --texts, as in the following example.

python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts 'bench . car .'

And the result will be like this one:

You can also use a sentence as the input prompt for the --texts field, for example:

python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts 'There are a lot of cars here.'

The result will be similar to this:

Evaluation

The GLIP implementation in MMDetection has no performance degradation; our benchmark is as follows:

Model                    official mAP  mmdet mAP
glip_A_Swin_T_O365.yaml  42.9          43.0
glip_Swin_T_O365.yaml    44.9          44.9
glip_Swin_L.yaml         51.4          51.3

Users can use the test script we provided to run evaluation as well. Here is a basic example:

# 1 GPU
python tools/test.py configs/glip/glip_atss_swin-t_fpn_dyhead_pretrain_obj365.py glip_tiny_a_mmdet-b3654169.pth

# 8 GPUs
./tools/dist_test.sh configs/glip/glip_atss_swin-t_fpn_dyhead_pretrain_obj365.py glip_tiny_a_mmdet-b3654169.pth 8

The result will be similar to this:

Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.428
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.594
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.466
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.300
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.477
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.534
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.634
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300 ] = 0.634
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.634
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.473
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.690
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.789

XDecoder

Installation

# if source
pip install -r requirements/multimodal.txt

# if wheel
mim install mmdet[multimodal]

How to use it?

For convenience, you can download the weights to the mmdetection root directory:

wget https://download.openmmlab.com/mmdetection/v3.0/xdecoder/xdecoder_focalt_last_novg.pt
wget https://download.openmmlab.com/mmdetection/v3.0/xdecoder/xdecoder_focalt_best_openseg.pt

The above two weights are copied directly from the official release without any modification; the specific source is https://github.com/microsoft/X-Decoder.

For convenience of demonstration, please download the folder and place it in the root directory of mmdetection.

(1) Open Vocabulary Semantic Segmentation

cd projects/XDecoder
python demo.py ../../images/animals.png configs/xdecoder-tiny_zeroshot_open-vocab-semseg_coco.py --weights ../../xdecoder_focalt_last_novg.pt --texts zebra.giraffe

(2) Open Vocabulary Instance Segmentation

cd projects/XDecoder
python demo.py ../../images/owls.jpeg configs/xdecoder-tiny_zeroshot_open-vocab-instance_coco.py --weights ../../xdecoder_focalt_last_novg.pt --texts owl

(3) Open Vocabulary Panoptic Segmentation

cd projects/XDecoder
python demo.py ../../images/street.jpg configs/xdecoder-tiny_zeroshot_open-vocab-panoptic_coco.py --weights ../../xdecoder_focalt_last_novg.pt  --text car.person --stuff-text tree.sky

(4) Referring Expression Segmentation

cd projects/XDecoder
python demo.py ../../images/fruit.jpg configs/xdecoder-tiny_zeroshot_open-vocab-ref-seg_refcocog.py --weights ../../xdecoder_focalt_last_novg.pt  --text "The larger watermelon. The front white flower. White tea pot."

(5) Image Caption

cd projects/XDecoder
python demo.py ../../images/penguin.jpeg configs/xdecoder-tiny_zeroshot_caption_coco2014.py --weights ../../xdecoder_focalt_last_novg.pt

(6) Referring Expression Image Caption

cd projects/XDecoder
python demo.py ../../images/fruit.jpg configs/xdecoder-tiny_zeroshot_ref-caption.py --weights ../../xdecoder_focalt_last_novg.pt --text 'White tea pot'

(7) Text Image Region Retrieval

cd projects/XDecoder
python demo.py ../../images/coco configs/xdecoder-tiny_zeroshot_text-image-retrieval.py --weights ../../xdecoder_focalt_last_novg.pt --text 'pizza on the plate'
The image that best matches the given text is ../../images/coco/000.jpg and probability is 0.998

We have also prepared a gradio program in the projects/gradio_demo directory, with which you can interactively run all of the inference tasks supported by MMDetection in your browser.

Models and results

Semantic segmentation on ADE20K

Prepare your dataset according to the docs.

Test Command

Since semantic segmentation is a pixel-level task, we don't need to use a threshold to filter out low-confidence predictions. So we set model.test_cfg.use_thr_for_mc=False in the test command.

./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-semseg_ade20k.py xdecoder_focalt_best_openseg.pt 8 --cfg-options model.test_cfg.use_thr_for_mc=False
Model                            mIoU   mIoU (official)  Config
xdecoder_focalt_best_openseg.pt  25.24  25.13            config

Instance segmentation on ADE20K

Prepare your dataset according to the docs.

./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-instance_ade20k.py xdecoder_focalt_best_openseg.pt 8
Model                            mIoU  mIoU (official)  Config
xdecoder_focalt_best_openseg.pt  10.1  10.1             config

Panoptic segmentation on ADE20K

Prepare your dataset according to the docs.

./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-panoptic_ade20k.py xdecoder_focalt_best_openseg.pt 8
Model                            mIoU   mIoU (official)  Config
xdecoder_focalt_best_openseg.pt  19.11  18.97            config

Semantic segmentation on COCO2017

Prepare your dataset according to the docs of (2) use panoptic dataset part.

./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-semseg_coco.py xdecoder_focalt_last_novg.pt 8 --cfg-options model.test_cfg.use_thr_for_mc=False
Model                                          mIoU  mIoU (official)  Config
xdecoder-tiny_zeroshot_open-vocab-semseg_coco  62.1  62.1             config

Instance segmentation on COCO2017

Prepare your dataset according to the docs.

./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-instance_coco.py xdecoder_focalt_last_novg.pt 8
Model                                            Mask mAP  Mask mAP (official)  Config
xdecoder-tiny_zeroshot_open-vocab-instance_coco  39.8      39.7                 config

Panoptic segmentation on COCO2017

Prepare your dataset according to the docs.

./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-panoptic_coco.py xdecoder_focalt_last_novg.pt 8
Model                                            PQ     PQ (official)  Config
xdecoder-tiny_zeroshot_open-vocab-panoptic_coco  51.42  51.16          config

Referring segmentation on RefCOCO

Prepare your dataset according to the docs.

./tools/dist_test.sh  projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-ref-seg_refcocog.py xdecoder_focalt_last_novg.pt 8  --cfg-options test_dataloader.dataset.split='val'
Model                         text mode     cIoU     cIoU (official)  Config
xdecoder_focalt_last_novg.pt  select_first  58.8415  57.85            config
xdecoder_focalt_last_novg.pt  original      60.0321  -                config
xdecoder_focalt_last_novg.pt  concat        60.3551  -                config

Note:

  1. If you set the scale of Resize to (1024, 512), the result will be 57.69.
  2. text mode is a parameter of RefCoCoDataset in MMDetection; it determines which texts are loaded into the data list (a config sketch follows this list). It can be set to select_first, original, concat, or random.
    • select_first: use the first text in the text list as the description of an instance.
    • original: use all texts in the text list as the description of an instance.
    • concat: concatenate all texts in the text list as the description of an instance.
    • random: randomly select one text from the text list as the description of an instance; usually used for training.
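
For example, assuming the parameter is exposed as text_mode on RefCoCoDataset, switching modes is a one-line config override (the same override can be passed on the command line via --cfg-options):

# Sketch: evaluate with concatenated referring expressions; `text_mode` is
# assumed to be the config key behind the "text mode" column above.
test_dataloader = dict(
    dataset=dict(text_mode='concat'))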

Image Caption on COCO2014

Prepare your dataset according to the docs.

Before testing, you need to install JDK 1.8; otherwise, the evaluation process will report that java does not exist.

./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_caption_coco2014.py xdecoder_focalt_last_novg.pt 8
Model                                    BLEU-4  CIDEr   Config
xdecoder-tiny_zeroshot_caption_coco2014  35.26   116.81  config

Gradio Demo

Please refer to https://github.com/open-mmlab/mmdetection/blob/dev-3.x/projects/gradio_demo/README.md for details.

Contributors

A total of 30 developers contributed to this release.

Thanks @jjjkkkjjj, @lovelykite, @minato-ellie, @freepoet, @wufan-tb, @yalibian, @keyakiluo, @gihanjayatilaka, @i-aki-y, @xin-li-67, @RangeKing, @JingweiZhang12, @MambaWong, @lucianovk, @tall-josh, @xiuqhou, @jamiechoi1995, @YQisme, @yechenzhi, @bjzhb666, @xiexinch, @yarkable, @Renzhihan, @nijkah, @amaizr, @Lum1104, @zwhus, @Czm369, @hhaAndroid

MMDetection v3.0.0 Release

Published by hhaAndroid over 1 year ago

v3.0.0 (6/4/2023)

We have released the official version of MMDetection v3.0.0.

New Features

  • File I/O migration and reconstruction (#9709)
  • Release DINO Swin-L 36e model (#9927)

Bug Fixes

  • Fix benchmark script (#9865)
  • Fix the crop method of PolygonMasks (#9858)
  • Fix Albu augmentation with the mask shape (#9918)
  • Fix RTMDetIns prior generator device error (#9964)
  • Fix img_shape in data pipeline (#9966)
  • Fix cityscapes import error (#9984)
  • Fix solov2_r50_fpn_ms-3x_coco.py config error (#10030)
  • Fix Conditional DETR AP and Log (#9889)
  • Fix accepting an unexpected argument local-rank in PyTorch 2.0 (#10050)
  • Fix common/ms_3x_coco-instance.py config error (#10056)
  • Fix compute flops error (#10051)
  • Delete data_root in CocoOccludedSeparatedMetric to fix bug (#9969)
  • Unifying metafile.yml (#9849)

Improvements

  • Added BoxInst r101 config (#9967)
  • Added config migration guide (#9960)
  • Added more social networking links (#10021)
  • Added an introduction to the RTMDet config (#10042)
  • Added visualization docs (#9938, #10058)
  • Refined data_prepare docs (#9935)
  • Added support for setting the cache_size_limit parameter of dynamo in PyTorch 2.0 (#10054)
  • Updated coco_metric.py (#10033)
  • Update type hint (#10040)

Contributors

A total of 19 developers contributed to this release.

Thanks @IRONICBo, @vansin, @RangeKing, @Ghlerrix, @okotaku, @JosonChan1998, @zgzhengSEU, @bobo0810, @yechenzhi, @Zheng-LinXiao, @LYMDLUT, @yarkable, @xiejiajiannb, @chhluo, @BIGWangYuDong, @RangiLyu, @zwhus, @hhaAndroid, @ZwwWayne

MMDetection v2.28.2 Release

Published by ZwwWayne over 1 year ago

New Features and Improvements

  • Add Twitter, Discord, Medium and YouTube link (#9774)
  • Update customize_runtime.md (#9797)

Bug Fixes

  • Fix WIDERFace SSD loss for Nan problem (#9734)
  • Fix missing API documentation in Readthedoc (#9729)
  • Fix the configuration file and log path of CenterNet (#9791)

Contributors

A total of 4 developers contributed to this release.
Thanks @co63oc, @Ginray, @vansin, @RangiLyu

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.28.1...v2.28.2

MMDetection v3.0.0rc6 Release

Published by ZwwWayne over 1 year ago

New Features

  • Support BoxInst (#9525)
  • Support Objects365 Dataset (#9600)
  • Support ConvNeXt-V2 in Projects (#9619)
  • Support DiffusionDet in Projects (#9639, #9768)
  • Support Detic inference in Projects (#9645)
  • Support EfficientDet inference in Projects (#9645)
  • Support Separated and Occluded COCO metric (#9710)
  • Support auto import modules from registry (#9143)
  • Refactor DETR series and support Conditional-DETR, DAB-DETR and DINO (#9646)
  • Support DetInferencer for inference (#9561)
  • Support Test Time Augmentation (#9452)
  • Support calculating FLOPs of detectors (#9777)

Bug Fixes

  • Fix deprecating old type alias due to new version of numpy (#9625, #9537)
  • Fix VOC metrics (#9784)
  • Fix the wrong link of RTMDet-x log (#9549)
  • Fix RTMDet link in README (#9575)
  • Fix MMDet get flops error (#9589)
  • Fix use_depthwise in RTMDet (#9624)
  • Fix albumentations augmentation post process with masks (#9551)
  • Fix DETR series Unit Test (#9647)
  • Fix LoadPanopticAnnotations bug (#9703)
  • Fix isort CI (#9680)
  • Fix amp pooling overflow (#9670)
  • Fix docstring about noise in DINO (#9747)
  • Fix potential bug in MultiImageMixDataset (#9764)

Improvements

  • Replace NumPy transpose with PyTorch permute to speed-up (#9762)
  • Deprecate sklearn (#9725)
  • Add RTMDet-Ins deployment guide (#9823)
  • Update RTMDet config and README (#9603)
  • Replace the models used in the tutorial document with RTMDet (#9843)
  • Adjust the minimum supported python version to 3.7 (#9602)
  • Support modifying palette through configuration (#9445)
  • Update README document in Project (#9599)
  • Replace github with gitee in .pre-commit-config-zh-cn.yaml file (#9586)
  • Use official isort in .pre-commit-config.yaml file (#9701)
  • Change MMCV minimum version to 2.0.0rc4 for dev-3.x (#9695)
  • Add Chinese version of single_stage_as_rpn.md and test_results_submission.md (#9434)
  • Add OpenDataLab download link (#9605, #9738)
  • Add type hints of several layers (#9346)
  • Add typehint for DarknetBottleneck (#9591)
  • Add dockerfile that is easier to use in China (#9659)
  • Add twitter, discord, medium, and youtube link (#9775)
  • Prepare for merging refactor-detr (#9656)
  • Add metafile to ConditionalDETR, DABDETR and DINO (#9715)
  • Support to modify non_blocking parameters (#9723)
  • Comment repeater visualizer register (#9740)
  • Update user guide: finetune.md and inference.md (#9578)

Contributors

A total of 27 developers contributed to this release.

Thanks @JosonChan1998, @RangeKing, @NoFish-528, @likyoo, @Xiangxu-0103, @137208, @PeterH0323, @tianleiSHI, @wufan-tb, @lyviva, @zwhus, @jshilong, @Li-Qingyun, @sanbuphy, @zylo117, @triple-Mu, @KeiChiTse, @LYMDLUT, @nijkah, @chg0901, @DanShouzhu, @zytx121, @vansin, @BIGWangYuDong, @hhaAndroid, @RangiLyu, @ZwwWayne

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v3.0.0rc5...v3.0.0rc6

MMDetection v2.28.1 Release

Published by RangiLyu over 1 year ago

Bug Fixes

  • Enable to set float mlp_ratio in SwinTransformer (#8670)
  • Fix import error that causes training failure (#9694)
  • Fix isort version in lint (#9685)
  • Fix init_cfg of YOLOF (#8243)

Contributors

A total of 4 developers contributed to this release.
Thanks @triple-Mu, @i-aki-y, @twmht, @RangiLyu

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.28.0...v2.28.1

MMDetection v2.28.0 Release

Published by ZwwWayne over 1 year ago

Highlights

  • Support Objects365 Dataset and Separated and Occluded COCO metric
  • Support acceleration of RetinaNet and SSD on Ascend
  • Deprecate the support of Python 3.6

New Features and Improvements

  • Support Objects365 Dataset (#7525)
  • Support Separated and Occluded COCO metric (#9574)
  • Support acceleration of RetinaNet and SSD on Ascend with documentation (#9648, #9614)
  • Added missing - to --format-only in documentation.

Deprecations

  • Upgrade the minimum Python version to 3.7, the support of Python 3.6 is no longer guaranteed (#9604)

Bug Fixes

  • Fix validation loss logging (#9663)
  • Fix inconsistent float precision between mmdet and mmcv (#9570)
  • Fix argument name for fp32 in DeformableDETRHead (#9607)
  • Fix typo of all config file path in Metafile.yml (#9627)

Contributors

A total of 11 developers contributed to this release.
Thanks @eantono, @akstt, @lpizzinidev, @RangiLyu, @kbumsik, @tianleiSHI, @nijkah, @BIGWangYuDong, @wangjiangben-hw, @jamiechoi1995, @ZwwWayne

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.27.0...v2.28.0

MMDetection v2.27.0 Release

Published by RangiLyu almost 2 years ago

Bug Fixes

  • Fix deadlock issue related with MMDetWandbHook (#9476)

Improvements

  • Add minimum GitHub token permissions for workflows (#8928)
  • Delete compatible code for parrots in roi extractor (#9503)
  • Deprecate np.bool Type Alias (#9498)
  • Replace numpy transpose with torch permute to speed-up data pre-processing (#9533)

Documents

  • Fix typo in docs/zh_cn/tutorials/config.md (#9416)
  • Fix Faster RCNN FP16 config link in README (#9366)

Contributors

A total of 12 developers contributed to this release.
Thanks @Min-Sheng, @gasvn, @lzyhha, @jbwang1997, @zachcoleman, @chenyuwang814, @MilkClouds, @Fizzez, @boahc077, @apatsekin, @zytx121, @DonggeunYu

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.26.0...v2.27.0

MMDetection v3.0.0rc5 Release

Published by ZwwWayne almost 2 years ago

Bug Fixes

  • Fix CondInst predict error when batch_size is greater than 1 in inference (#9400)
  • Fix the bug of visualization when the dtype of the pipeline output image is not uint8 in browse dataset (#9401)
  • Fix analyze_logs.py to plot mAP and calculate train time correctly (#9409)
  • Fix backward inplace error with PAFPN (#9450)
  • Fix config import links in model converters (#9441)
  • Fix DeformableDETRHead object has no attribute loss_single (#9477)
  • Fix the logic of pseudo bboxes predicted by teacher model in SemiBaseDetector (#9414)
  • Fix demo API in instance segmentation tutorial (#9226)
  • Fix analyze_results (#9380)
  • Fix the error that Readthedocs API cannot be displayed (#9510)

Improvements

  • Remove legacy builder.py (#9479)
  • Make sure the pipeline argument shape is in (width, height) order (#9324)
  • Add .pre-commit-config-zh-cn.yaml file (#9388)
  • Refactor dataset metainfo to lowercase (#9469)
  • Add PyTorch 1.13 checking in CI (#9478)
  • Adjust FocalLoss and QualityFocalLoss to allow different kinds of targets (#9481)
  • Refactor setup.cfg (#9370)
  • Clip saturation value to valid range [0, 1] (#9391)
  • Only keep meta and state_dict when publishing model (#9356)
  • Add segm evaluator in ms-poly_3x_coco_instance config (#9524)
  • Update deployment guide (#9527)
  • Update zh_cn faq.md (#9396)
  • Update get_started (#9480)
  • Update the zh_cn user_guides of useful_tools.md and useful_hooks.md (#9453)
  • Add type hints for bfp and channel_mapper (#9410)
  • Add type hints of several losses (#9397)
  • Add type hints and update docstring for task modules (#9468)

Contributors

A total of 20 developers contributed to this release.

Thanks @liuyanyi, @RangeKing, @lihua199710, @MambaWong, @sanbuphy, @Xiangxu-0103, @twmht, @JunyaoHu, @Chan-Sun, @tianleiSHI, @zytx121, @kitecats, @QJC123654, @JosonChan1998, @lvhan028, @Czm369, @BIGWangYuDong, @RangiLyu, @hhaAndroid, @ZwwWayne

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v3.0.0rc4...v3.0.0rc5

MMDetection v3.0.0rc4 Release

Published by ZwwWayne almost 2 years ago

Highlights

  • Support CondInst
  • Add projects/ folder, which will be a place for some experimental models/features.
  • Support SparseInst in projects

New Features

  • Support CondInst (#9223)
  • Add projects/ folder, which will be a place for some experimental models/features (#9341)
  • Support SparseInst in projects (#9377)

Bug Fixes

  • Fix pixel_decoder_type discrimination in MaskFormer Head. (#9176)
  • Fix wrong padding value in cached MixUp (#9259)
  • Rename utils/typing.py to utils/typing_utils.py to fix collect_env error (#9265)
  • Fix resume arg conflict (#9287)
  • Fix the configs of Faster R-CNN with caffe backbone (#9319)
  • Fix torchserve and update related documentation (#9343)
  • Fix bbox refine bug with sigmoid activation (#9538)

Improvements

  • Update the docs of GIoU Loss in README (#8810)
  • Handle dataset wrapper in inference_detector (#9144)
  • Update the type of counts in COCO’s compressed RLE (#9274)
  • Support saving config file in print_config (#9276)
  • Update docs about video inference (#9305)
  • Update guide about model deployment (#9344)
  • Fix doc typos of useful tools (#9177)
  • Allow to resume from specific checkpoint in CLI (#9284)
  • Update FAQ about windows installation issues of pycocotools (#9292)

Contributors

A total of 13 developers contributed to this release.

Thanks @JunyaoHu, @sanbuphy, @Czm369, @Daa98, @jbwang1997, @BIGWangYuDong, @JosonChan1998, @lvhan028, @RunningLeon, @RangiLyu, @ZwwWayne, @hhaAndroid

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v3.0.0rc3...v3.0.0rc4

MMDetection v2.26.0 Release

Published by ZwwWayne almost 2 years ago

Highlights

  • Support training on NPU (#9267)

Bug Fixes

  • Fix RPN visualization (#9151)
  • Fix readthedocs by freezing the dependency versions (#9154)
  • Fix device argument error in MMDet_Tutorial.ipynb (#9112)
  • Fix SOLOv2 being unable to handle images with empty GT (#9185)
  • Fix random flipping ratio comparison of mixup image (#9336)

Improvements

  • Complement necessary argument of seg_suffix of cityscapes (#9330)
  • Support copy paste based on bbox when there is no gt mask (#8905)
  • Make scipy as a default dependency in runtime (#9186)

Documents

  • Delete redundant Chinese characters in docs (#9175)
  • Add MMEval in README (#9217)

Contributors

A total of 11 developers contributed to this release.
Thanks @wangjiangben-hw, @motokimura, @AdorableJiang, @BainOuO, @JarvisKevin, @wanghonglie, @zytx121, @BIGWangYuDong, @hhaAndroid, @RangiLyu, @ZwwWayne

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.25.3...v2.26.0

MMDetection v3.0.0rc3 Release

Published by ZwwWayne almost 2 years ago

Highlights

  • Support CrowdDet and EIoU Loss
  • Support training detection models in Detectron2
  • Refactor Fast R-CNN
  • Note: In this version, we upgrade the minimum version requirement of MMEngine to 0.3.0 to use ignore_key of ConcatDataset for training VOC datasets (#9058)

New Features

  • Support CrowdDet (#8744)
  • Support training detection models in Detectron2 with examples of Mask R-CNN, Faster R-CNN, and RetinaNet (#8672)
  • Support EIoU Loss (#9086)

Bug Fixes

  • Fix XMLDataset image size error (#9216)
  • Fix bugs of empty_instances when predicting without nms in roi_head (#9015)
  • Fix the config file of DETR (#9158)
  • Fix SOLOv2 being unable to handle images with empty GT (#9192)
  • Fix inference demo (#9153)
  • Add ignore_key in VOC ConcatDataset (#9058)
  • Fix dumping results issue in test scripts. (#9241)
  • Fix configs of training coco subsets on MMDet 3.x (#9225)
  • Fix corner2hbox of HorizontalBoxes for supporting empty bboxes (#9140)

Improvements

  • Refactor Fast R-CNN (#9132)
  • Clean requirements of mmcv-full due to SyncBN (#9207)
  • Support training detection models in detectron2 (#8672)
  • Add box_type support for DynamicSoftLabelAssigner (#9179)
  • Make scipy as a default dependency in runtime (#9187)
  • Update eval_metric (#9062)
  • Add seg_map_suffix in BaseDetDataset (#9088)

Contributors

A total of 13 developers contributed to this release.

Thanks @wanghonglie, @Wwupup, @sanbuphy, @BIGWangYuDong, @liuyanyi, @cxiang26, @jbwang1997, @ZwwWayne, @yuyoujiang, @RangiLyu, @hhaAndroid, @JosonChan1998, @Czm369

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v3.0.0rc2...v3.0.0rc3

MMDetection v2.25.3 Release

Published by RangiLyu almost 2 years ago

Bug Fixes

  • Skip remote sync when wandb is offline (#8755)
  • Fix jpg to png bug when using seg maps (#9078)

Improvements

  • Fix typo in warning (#8844)
  • Fix CI for timm, pycocotools, onnx (#9034)
  • Upgrade pre-commit hooks (#8964)

Documents

  • Update BoundedIoULoss config in readme (#8808)
  • Fix Faster R-CNN Readme (#8803)
  • Update location of test_cfg and train_cfg (#8792)
  • Fix issue template (#8966)
  • Update random sampler docstring (#9033)
  • Fix wrong image link (#9054)
  • Fix FPG readme (#9041)

Contributors

A total of 13 developers contributed to this release.
Thanks @Zheng-LinXiao, @i-aki-y, @fbagci, @sudoAimer, @Czm369, @DrRyanHuang, @RangiLyu, @wanghonglie, @shinya7y, @Ryoo72, @akshaygulabrao, @gy-7, @Neesky

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.25.2...v2.25.3

MMDetection v3.0.0rc2 Release

Published by ZwwWayne almost 2 years ago

New Features

  • Support imagenet pre-training for RTMDet's backbone (#8887)
  • Add CrowdHumanDataset and Metric (#8430)
  • Add FixShapeResize to support resize of fixed shape (#8665)

Bug Fixes

  • Fix ConcatDataset Import Error (#8909)
  • Fix CircleCI and readthedoc build failed (#8980, #8963)
  • Fix bitmap mask translate when out_shape is different (#8993)
  • Fix inconsistency in Conv2d weight channels (#8948)
  • Fix bugs when plotting loss curve by analyze_logs.py (#8944)
  • Fix type change of labels in albumentations (#9074)
  • Fix some docs and types error (#8818)
  • Update memory occupation of RTMDet in metafile (#9098)
  • Fix wrong arguments of OpenImageMetrics in the config (#9061)

Improvements

  • Refactor standard roi head with box type (#8658)
  • Support mask concatenation in BitmapMasks and PolygonMasks (#9006)
  • Update PyTorch and dependencies' version in dockerfile (#8845)
  • Update robustness_eval.py and print_config (#8452)
  • Make compatible with ConfigDict and dict in dense_heads (#8942)
  • Support logging coco metric copypaste (#9012)
  • Remove Normalize transform (#8913)
  • Support jittering the color of different instances of the same class (#8988)
  • Add assertion for missing key in PackDetInputs (#8982)

Contributors

A total of 13 developers contributed to this release.

Thanks @RangiLyu, @jbwang1997, @wanghonglie, @Chan-Sun, @RangeKing, @chhluo, @MambaWong, @yuyoujiang, @hhaAndroid, @sltlls, @Nioolek, @ZwwWayne, @wufan-tb

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v3.0.0rc1...v3.0.0rc2

MMDetection v3.0.0rc1 Release

Published by ZwwWayne about 2 years ago

Highlights

  • Release a high-precision, low-latency single-stage object detector RTMDet.

Bug Fixes

  • Fix UT to be compatible with PyTorch 1.6 (#8707)
  • Fix NumClassCheckHook bug when model is wrapped (#8794)
  • Update the right URL of R-50-FPN with BoundedIoULoss (#8805)
  • Fix potential bug of indices in RandAugment (#8826)
  • Fix some types and links (#8839, #8820, #8793, #8868)
  • Fix incorrect background fill values in FSAF and RepPoints Head (#8813)

Improvements

  • Refactored anchor head and base head with box type (#8625)
  • Refactored SemiBaseDetector and SoftTeacher (#8786)
  • Add list to dict keys to avoid modify loss dict (#8828)
  • Update analyze_results.py, analyze_logs.py, and loading.py (#8430, #8402, #8784)
  • Support dump results in test.py (#8814)
  • Check empty predictions in DetLocalVisualizer._draw_instances (#8830)
  • Fix floordiv warning in SOLO (#8738)

Contributors

A total of 16 developers contributed to this release.

Thanks @ZwwWayne, @jbwang1997, @Czm369, @ice-tong, @Zheng-LinXiao, @chhluo, @RangiLyu, @liuyanyi, @wanghonglie, @levan92, @JiayuXu0, @nye0, @hhaAndroid, @xin-li-67, @shuxp, @zytx121

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v3.0.0rc0...v3.0.0rc1

MMDetection v2.25.2 Release

Published by ZwwWayne about 2 years ago

Bug Fixes

  • Fix DyDCNv2 RuntimeError (#8485)
  • Fix repeated import of CascadeRPNHead (#8578)
  • Fix absolute positional embedding of swin backbone (#8127)
  • Fix get train_pipeline method of val workflow (#8575)

Improvements

  • Upgrade onnxsim to at least 0.4.0 (#8383)
  • Support tuple format in analyze_results script (#8549)
  • Fix floordiv warning (#8648)

Documents

  • Fix typo in HTC link (#8487)
  • Fix docstring of BboxOverlaps2D (#8512)
  • Added missing Chinese tutorial link (#8564)
  • Fix mistakes in gaussian radius formula (#8607)
  • Update config documentation about how to Add WandB Hook (#8663)
  • Add mmengine link in readme (#8799)
  • Update issue template (#8802)

Ongoing changes

  • Support training YOLOv3 on IPU (#8552)
  • Support RF-NeXt (#8191)

Contributors

A total of 16 developers contributed to this release.
Thanks @daquexian, @lyq10085, @ZwwWayne, @fbagci, @BubblyYi, @fathomson, @ShunchiZhang, @ceasona, @Happylkx, @normster, @chhluo, @Lehsuby, @JiayuXu0, @Nourollah, @hewanru-bit, @RangiLyu

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.25.1...v2.25.2

MMDetection v3.0.0rc0 Release

Published by ZwwWayne about 2 years ago

We are excited to announce the release of MMDetection 3.0.0rc0. MMDet 3.0.0rc0 is the first version of MMDetection 3.x, a part of the OpenMMLab 2.0 projects. Built upon the new training engine, MMDet 3.x unifies the interfaces of the dataset, models, evaluation, and visualization with faster training and testing speed. It also provides a general semi-supervised object detection framework and strong baselines.

Highlights

  1. New engine. MMDet 3.x is based on MMEngine, which provides a universal and powerful runner that allows more flexible customizations and significantly simplifies the entry points of high-level interfaces.

  2. Unified interfaces. As a part of the OpenMMLab 2.0 projects, MMDet 3.x unifies and refactors the interfaces and internal logic of training, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in those interfaces and logic to allow the emergence of multi-task/modality algorithms.

  3. Faster speed. We optimize the training and inference speed of common models and configurations, achieving faster or similar speed compared with Detectron2. Model details of the benchmark will be updated in this note.

  4. General semi-supervised object detection. Benefitting from the unified interfaces, we support a general semi-supervised learning framework that works with all the object detectors supported in MMDet 3.x. Please refer to semi-supervised object detection for details.

  5. Strong baselines. We release strong baselines of many popular models to enable fair comparisons among state-of-the-art models.

  6. New features and algorithms: see the New Features section below.

  7. More documentation and tutorials. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it here.

Breaking Changes

MMDet 3.x has gone through big changes to have better design, higher efficiency, more flexibility, and more unified interfaces.
Besides the changes in API, we briefly list the major breaking changes in this section.
We will update the migration guide to provide complete details and migration instructions.
Users can also refer to the API doc for more details.

Dependencies

  • MMDet 3.x runs on PyTorch>=1.6. We have deprecated the support of PyTorch 1.5 to embrace mixed precision training and other new features since PyTorch 1.6. Some models can still run on PyTorch 1.5, but the full functionality of MMDet 3.x is not guaranteed.
  • MMDet 3.x relies on MMEngine to run. MMEngine is a new foundational library for training deep learning models of OpenMMLab and is the core dependency of OpenMMLab 2.0 projects. The dependencies of file IO and training are migrated from MMCV 1.x to MMEngine.
  • MMDet 3.x relies on MMCV>=2.0.0rc0. Although MMCV no longer maintains the training functionalities since 2.0.0rc0, MMDet 3.x relies on the data transforms, CUDA operators, and image processing interfaces in MMCV. Note that the package mmcv is the version that provides pre-built CUDA operators and mmcv-lite does not since MMCV 2.0.0rc0, while mmcv-full has been deprecated since 2.0.0rc0.

Training and testing

  • MMDet 3.x uses Runner in MMEngine rather than that in MMCV. The new Runner implements and unifies the building logic of the dataset, model, evaluation, and visualizer. Therefore, MMDet 3.x no longer maintains the building logic of those modules in mmdet.train.apis and tools/train.py. Those codes have been migrated into MMEngine. Please refer to the migration guide of Runner in MMEngine for more details.
  • The Runner in MMEngine also supports testing and validation. The testing scripts are also simplified, which has similar logic to that in training scripts to build the runner.
  • The execution points of hooks in the new Runner have been enriched to allow more flexible customization. Please refer to the migration guide of Hook in MMEngine for more details.
  • Learning rate and momentum schedules have been migrated from Hook to Parameter Scheduler in MMEngine. Please refer to the migration guide of Parameter Scheduler in MMEngine for more details.

Configs

  • The Runner in MMEngine uses a different config structure to ease the understanding of the components in the runner; a sketch follows this list. Users can read the config example of MMDet 3.x or refer to the migration guide in MMEngine for migration details.
  • The file names of configs and models are also refactored to follow the new rules unified across OpenMMLab 2.0 projects. The names of checkpoints are not updated for now as there is no BC-breaking of model weights between MMDet 3.x and 2.x. We will progressively replace all the model weights with those trained in MMDet 3.x. Please refer to the user guides of config for more details.
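
A brief, illustrative sketch of that structure (component names follow MMDet 3.x conventions; the values are placeholders, not a recommended recipe):

# Top-level components that the MMEngine Runner consumes directly.
train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    dataset=dict(
        type='CocoDataset',
        data_root='data/coco/',
        ann_file='annotations/instances_train2017.json'))

train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=12, val_interval=1)
val_cfg = dict(type='ValLoop')

optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001))

default_hooks = dict(checkpoint=dict(type='CheckpointHook', interval=1))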

Dataset

The Dataset classes implemented in MMDet 3.x all inherit from the BaseDetDataset, which inherits from the BaseDataset in MMEngine. In addition to the changes in interfaces, there are several changes in Dataset in MMDet 3.x.

  • All the datasets support serializing the internal data list to reduce the memory when multiple workers are built for data loading.
  • The internal data structure in the dataset is changed to be self-contained (without losing information like class names in MMDet 2.x) while keeping simplicity.
  • The evaluation functionality of each dataset has been removed from the dataset so that some specific evaluation metrics like COCO AP can be used to evaluate the prediction on other datasets.

Data Transforms

The data transforms in MMDet 3.x all inherit from BaseTransform in MMCV>=2.0.0rc0, which defines a new convention in OpenMMLab 2.0 projects.
Besides the interface changes, there are several changes listed below, followed by a sketch of a typical pipeline:

  • The functionality of some data transforms (e.g., Resize) are decomposed into several transforms to simplify and clarify the usages.
  • The format of data dict processed by each data transform is changed according to the new data structure of dataset.
  • Some inefficient data transforms (e.g., normalization and padding) are moved into the data preprocessor of the model to improve data loading and training speed.
  • The same data transforms in different OpenMMLab 2.0 libraries have the same augmentation implementation and the logic given the same arguments, i.e., Resize in MMDet 3.x and MMSeg 1.x will resize the image in the exact same manner given the same arguments.
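
A typical MMDet 3.x training pipeline built from these decomposed transforms looks roughly like this (a sketch with illustrative values):

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', scale=(1333, 800), keep_ratio=True),  # resize only; normalization lives in the data preprocessor
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackDetInputs'),  # packs inputs and data_samples for the model
]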

Model

The models in MMDet 3.x all inherit from BaseModel in MMEngine, which defines a new convention of models in OpenMMLab 2.0 projects.
Users can refer to the tutorial of the model in MMengine for more details.
Accordingly, there are several changes as the following:

  • The model interfaces, including the input and output formats, are significantly simplified and unified following the new convention in MMDet 3.x.
    Specifically, all the input data in training and testing are packed into inputs and data_samples, where inputs contains model inputs like a list of image tensors, and data_samples contains other information of the current data sample such as ground truths, region proposals, and model predictions. In this way, different tasks in MMDet 3.x can share the same input arguments, which makes the models more general and suitable for multi-task learning and some flexible training paradigms like semi-supervised learning.
  • The model has a data preprocessor module, which is used to pre-process the input data of the model. In MMDet 3.x, the data preprocessor usually does the necessary steps to form the input images into a batch, such as padding. It can also serve as a place for some special data augmentations or more efficient data transformations like normalization.
  • The internal logic of the model has been changed. In MMDet 2.x, the model uses forward_train, forward_test, simple_test, and aug_test to deal with different forward logics. In MMDet 3.x and OpenMMLab 2.0, the forward function has three modes: 'loss', 'predict', and 'tensor' for training, inference, and tracing or other purposes, respectively.
    The forward function calls self.loss, self.predict, and self._forward given the modes 'loss', 'predict', and 'tensor', respectively.
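
A toy module sketching this dispatch (simplified signatures; not a real MMDet detector, which would inherit from MMEngine's BaseModel):

import torch
from torch import nn

class ToyDetector(nn.Module):
    """Illustrates the three-mode forward convention only."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 8, 3, padding=1)

    def _forward(self, inputs):
        return self.backbone(inputs)

    def loss(self, inputs, data_samples):
        feats = self._forward(inputs)
        return {'loss_dummy': feats.mean()}  # real models compute task losses

    def predict(self, inputs, data_samples):
        return data_samples  # real models attach pred_instances here

    def forward(self, inputs, data_samples=None, mode='tensor'):
        if mode == 'loss':
            return self.loss(inputs, data_samples)
        if mode == 'predict':
            return self.predict(inputs, data_samples)
        if mode == 'tensor':
            return self._forward(inputs)
        raise RuntimeError(f'Invalid mode "{mode}"')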

Evaluation

The evaluation in MMDet 2.x strictly binds with the dataset. In contrast, MMDet 3.x decomposes the evaluation from the dataset so that all the detection datasets can evaluate with COCO AP and other metrics implemented in MMDet 3.x.
MMDet 3.x mainly implements corresponding metrics for each dataset, which are manipulated by Evaluator to complete the evaluation.
Users can build an evaluator in MMDet 3.x to conduct offline evaluation, i.e., evaluate predictions that were not necessarily produced by MMDet 3.x, as long as the dataset and the predictions follow the dataset conventions. More details can be found in the tutorial in MMEngine.
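
A minimal sketch of offline evaluation, assuming predictions have already been gathered as data samples following MMDet's conventions; the annotation path and class names are placeholders:

from mmdet.evaluation import CocoMetric
from mmengine.evaluator import Evaluator

metric = CocoMetric(
    ann_file='data/coco/annotations/instances_val2017.json',  # placeholder path
    metric='bbox')
evaluator = Evaluator(metrics=[metric])
evaluator.dataset_meta = dict(classes=('person', 'car'))  # placeholder classes

# `predictions` is a list of prediction data samples collected beforehand:
# results = evaluator.offline_evaluate(predictions)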

Visualization

The functions of visualization in MMDet 2.x are removed. Instead, in OpenMMLab 2.0 projects, we use Visualizer to visualize data. MMDet 3.x implements DetLocalVisualizer to allow visualization of ground truths, model predictions, feature maps, etc., at any place. It also supports sending the visualization data to any external visualization backends such as Tensorboard.
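
A minimal sketch of DetLocalVisualizer usage; the image and prediction below are synthetic placeholders:

import numpy as np
import torch
from mmdet.structures import DetDataSample
from mmdet.visualization import DetLocalVisualizer
from mmengine.structures import InstanceData

visualizer = DetLocalVisualizer()
visualizer.dataset_meta = dict(classes=('person',), palette=[(255, 0, 0)])

image = np.zeros((224, 224, 3), dtype=np.uint8)  # stand-in for a real image

pred = InstanceData()
pred.bboxes = torch.tensor([[20., 30., 120., 180.]])
pred.scores = torch.tensor([0.9])
pred.labels = torch.tensor([0])
data_sample = DetDataSample()
data_sample.pred_instances = pred

visualizer.add_datasample(
    'demo', image, data_sample, draw_gt=False, out_file='vis.jpg')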

Improvements

  • Optimized training and testing speed of FCOS, RetinaNet, Faster R-CNN, Mask R-CNN, and Cascade R-CNN. The training speed of those models with some common training strategies is also optimized, including those with synchronized batch normalization and mixed precision training.
  • Support mixed precision training of all the models. However, some models may get NaN results due to numerical issues. We will update the documentation and list the results (accuracy or failure) of mixed precision training.
  • Release strong baselines of some popular object detectors. Their accuracy and pre-trained checkpoints will be released.

Bug Fixes

  • DeepFashion dataset: the config and results have been updated.

New Features

  1. Support a general semi-supervised learning framework that works with all the object detectors supported in MMDet 3.x. Please refer to semi-supervised object detection for details.
  2. Enable all the single-stage detectors to serve as region proposal networks. We give an example of using FCOS as RPN.
  3. Support a semi-supervised object detection algorithm: SoftTeacher.
  4. Support the updated CenterNet.
  5. Support data structures HorizontalBoxes and BaseBoxes to encapsulate different kinds of bounding boxes. We are migrating to use data structures of boxes to replace the use of pure tensor boxes. This will unify the usages of different kinds of bounding boxes in MMDet 3.x and MMRotate 1.x to simplify the implementation and reduce redundant codes.
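
A small sketch of the box data structure in use (values illustrative):

import torch
from mmdet.structures.bbox import HorizontalBoxes

boxes = HorizontalBoxes(torch.tensor([[10., 10., 50., 80.]]))  # (x1, y1, x2, y2)
print(boxes.centers)  # tensor([[30., 45.]])
print(boxes.areas)    # tensor([2800.])

boxes.flip_((100, 100), direction='horizontal')  # flip inside a 100x100 image
print(boxes.tensor)   # tensor([[50., 10., 90., 80.]])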

Planned changes

We list several planned changes of MMDet 3.0.0rc0 so that the community could more comprehensively know the progress of MMDet 3.x. Feel free to create a PR, issue, or discussion if you are interested, have any suggestions and feedback, or want to participate.

  1. Test-time augmentation, which is supported in MMDet 2.x, is not implemented in this version due to the limited time slot. We will support it in the following releases with a new and simplified design.
  2. Inference interfaces: unified inference interfaces will be supported in the future to ease the use of released models.
  3. Interfaces of useful tools that can be used in Jupyter Notebook or Colab: more useful tools that are implemented in the tools directory will have their python interfaces so that they can be used in Jupyter Notebook, Colab, and downstream libraries.
  4. Documentation: we will add more design docs, tutorials, and migration guidance so that the community can deep dive into our new design, participate in future development, and smoothly migrate downstream libraries to MMDet 3.x.
  5. Wandb visualization: MMDet 2.x supports data visualization since v2.25.0, which has not been migrated to MMDet 3.x for now. Since WandB provides strong visualization and experiment management capabilities, a DetWandbVisualizer and maybe a hook are planned to fully migrate those functionalities from MMDet 2.x.
  6. Full support of WiderFace dataset (#8508) and Fast R-CNN: we are verifying their functionalities and will fix related issues soon.
  7. Migrate DETR-series algorithms (#8655, #8533) and YOLOv3 on IPU (#8552) from MMDet 2.x.

Contributors

A total of 11 developers contributed to this release.
Thanks @shuxp, @wanghonglie, @Czm369, @BIGWangYuDong, @zytx121, @jbwang1997, @chhluo, @jshilong, @RangiLyu, @hhaAndroid, @ZwwWayne

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.25.0...v3.0.0rc0

MMDetection v2.25.1 Release

Published by ZwwWayne about 2 years ago

Bug Fixes

  • Fix single GPU distributed training of cuda device specifying (#8176)
  • Fix PolygonMask bug in FilterAnnotations (#8136)
  • Fix mdformat version to support python3.6 (#8195)
  • Fix GPG key error in Dockerfile (#8215)
  • Fix WandbLoggerHook error (#8273)
  • Fix Pytorch 1.10 incompatibility issues (#8439)

Improvements

  • Add mim to extras_require in setup.py (#8194)
  • Support get image shape on macOS (#8434)
  • Add test commands of mim in CI (#8230 & #8240)
  • Update maskformer to be compatible when cfg is a dictionary (#8263)
  • Clean Pillow version check in CI (#8229)

Documents

  • Change example hook name in tutorials (#8118)
  • Update projects (#8120)
  • Update metafile and release new models (#8294)
  • Add download link in tutorials (#8391)

Contributors

A total of 15 developers contributed to this release.
Thanks @ZwwWayne, @ayulockin, @Mxbonn, @p-mishra1, @Youth-Got, @MiXaiLL76, @chhluo, @jbwang1997, @atinfinity, @shinya7y, @duanzhihua, @STLAND-admin, @BIGWangYuDong, @grimoire, @xiaoyuan0203

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.25.0...v2.25.1

MMDetection v2.25.0 Release

Published by ZwwWayne over 2 years ago

Backwards incompatible changes

  • Rename config files of Mask2Former (#7571)

    • Before: mask2former_xxx_coco.py represented config files for panoptic segmentation.
    • After: mask2former_xxx_coco.py represents config files for instance segmentation, and mask2former_xxx_coco-panoptic.py represents config files for panoptic segmentation.

Bug Fixes

  • Enable YOLOX training on different devices (#7912)
  • Fix the log plot error when evaluation with interval != 1 (#7784)
  • Fix RuntimeError of HTC (#8083)

Improvements

  • Support dedicated WandbLogger hook (#7459)

    Users can set

    cfg.log_config.hooks = [
      dict(type='MMDetWandbHook',
           init_kwargs={'project': 'MMDetection-tutorial'},
           interval=10,
           log_checkpoint=True,
           log_checkpoint_metadata=True,
           num_eval_images=10)]
    

    in the config to use MMDetWandbHook. An example can be found in this Colab tutorial.

  • Add AvoidOOM to avoid OOM (#7434, #8091)

    Try to use AvoidCUDAOOM to avoid GPU out-of-memory errors. It will first retry after calling torch.cuda.empty_cache(). If that still fails, it will retry by converting the inputs to FP16. If that still fails, it will try to copy the inputs from GPU to CPU to continue computing. Use AvoidCUDAOOM in your code to keep it running when GPU memory runs out:

    from mmdet.utils import AvoidCUDAOOM
    
    output = AvoidCUDAOOM.retry_if_cuda_oom(some_function)(input1, input2)
    

    Users can also try AvoidCUDAOOM as a decorator to make the code continue to run when GPU memory runs out:

    from mmdet.utils import AvoidCUDAOOM
    
    @AvoidCUDAOOM.retry_if_cuda_oom
    def function(*args, **kwargs):
        ...
        return xxx
    
  • Support reading gpu_collect from cfg.evaluation.gpu_collect (#7672)

  • Speedup the Video Inference by Accelerating data-loading Stage (#7832)

  • Support replacing the ${key} with the value of cfg.key (#7492)

  • Accelerate result analysis in analyze_result.py. The evaluation is sped up by 10 ~ 15 times and now takes only 10 ~ 15 minutes. (#7891)

  • Support to set block_dilations in DilatedEncoder (#7812)

  • Support panoptic segmentation result analysis (#7922)

  • Release DyHead with Swin-Large backbone (#7733)

  • Documentations updating and adding

    • Fix wrong default type of act_cfg in SwinTransformer (#7794)
    • Fix text errors in the tutorials (#7959)
    • Rewrite the installation guide (#7897)
    • Useful hooks (#7810)
    • Fix heading anchor in documentation (#8006)
    • Replace markdownlint with mdformat for avoiding installing ruby (#8009)

Contributors

A total of 20 developers contributed to this release.

Thanks @ZwwWayne, @DarthThomas, @solyaH, @LutingWang, @chenxinfeng4, @Czm369, @Chenastron, @chhluo, @austinmw, @Shanyaliux, @hellock, @Y-M-Y, @jbwang1997, @hhaAndroid, @Irvingao, @zhanggefan, @BIGWangYuDong, @Keiku, @PeterVennerstrom, @ayulockin

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.24.1...v2.25.0

Package Rankings

  • Top 0.67% on Pypi.org
  • Top 3.33% on Proxy.golang.org
  • Top 17.99% on Conda-forge.org