Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming

The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving. Here we provide an easy-to-use benchmark to assess how object detection models perform when image quality degrades. The three resulting benchmark datasets, termed Pascal-C, Coco-C and Cityscapes-C, contain a large variety of image corruptions. We show that a range of standard object detection models suffer a severe performance loss on corrupted images (down to 30–60% of the original performance). However, a simple data augmentation trick—stylizing the training images—leads to a substantial increase in robustness across corruption types, severities and datasets. We envision our comprehensive benchmark to track future progress towards building robust object detection models. Benchmark, code and data are publicly available.


Results from the Paper


Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Robust Object Detection | Cityscapes | Stylized Training Data | mPC [AP] | 17.2 | #4
Robust Object Detection | Cityscapes test | Faster R-CNN with Stylized Training Data | mPC [AP] | 17.2 | #1
Robust Object Detection | Cityscapes test | Faster R-CNN with Stylized Training Data | rPC [%] | 47.4 | #1
Robust Object Detection | Cityscapes test | Faster R-CNN | mPC [AP] | 12.2 | #2
Robust Object Detection | Cityscapes test | Faster R-CNN | rPC [%] | 33.4 | #2
Robust Object Detection | MS COCO | Faster R-CNN | mPC [AP] | 18.2 | #2
Robust Object Detection | MS COCO | Faster R-CNN | rPC [%] | 50.2 | #2
Robust Object Detection | MS COCO | Faster R-CNN with Stylized Training Data | mPC [AP] | 20.4 | #1
Robust Object Detection | MS COCO | Faster R-CNN with Stylized Training Data | rPC [%] | 58.9 | #1
Robust Object Detection | PASCAL VOC 2007 | Faster R-CNN | mPC [AP50] | 48.6 | #2
Robust Object Detection | PASCAL VOC 2007 | Faster R-CNN | rPC [%] | 60.4 | #2
Robust Object Detection | PASCAL VOC 2007 | Faster R-CNN with Stylized Training Data | mPC [AP50] | 56.2 | #1
Robust Object Detection | PASCAL VOC 2007 | Faster R-CNN with Stylized Training Data | rPC [%] | 69.9 | #1
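The two metrics in the table can be computed from per-corruption scores: mPC (mean performance under corruption) averages detection AP over all corruption types and severity levels, and rPC (relative performance under corruption) expresses mPC as a percentage of performance on clean images. A minimal sketch, assuming a mapping from corruption name to a list of AP scores (one per severity level); the function names and example values are illustrative, not from the benchmark code:

```python
def mpc(scores):
    """Mean performance under corruption: average AP over every
    (corruption type, severity) cell in the benchmark grid."""
    cells = [ap for severities in scores.values() for ap in severities]
    return sum(cells) / len(cells)

def rpc(mpc_value, clean_ap):
    """Relative performance under corruption, as a percentage of clean AP."""
    return 100.0 * mpc_value / clean_ap

# Hypothetical AP scores for two corruption types at five severities.
scores = {
    "gaussian_noise": [30.1, 24.0, 15.2, 8.8, 4.1],
    "fog": [33.0, 30.5, 27.1, 22.4, 16.9],
}
clean_ap = 36.4  # AP on undistorted images (illustrative value)

m = mpc(scores)
r = rpc(m, clean_ap)
```

Note how the table's numbers relate through this formula: for example, a Cityscapes Faster R-CNN with mPC 12.2 and rPC 33.4% implies a clean AP of roughly 36.5.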
