Method Details


Details for method 'Panoptic-DeepLab [Mapillary Vistas]'

 

Method overview

name Panoptic-DeepLab [Mapillary Vistas]
challenge instance-level semantic labeling
details Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, dedicated to semantic segmentation and instance segmentation, respectively. The semantic segmentation branch follows the typical design of a semantic segmentation model (e.g., DeepLab), while the instance segmentation branch performs a simple instance center regression: the model learns to predict instance centers as well as the offset from each pixel to its corresponding center (see the sketch after this overview). Compared to the previous submission, this entry fixes a minor inference bug in instance segmentation; the trained model is the same.
publication Panoptic-DeepLab
Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen
https://arxiv.org/abs/1910.04751
project page / code
used Cityscapes data fine annotations
used external data ImageNet, Mapillary Vistas Research Edition
runtime n/a
subsampling no
submission date October 2019
previous submissions 1
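
The grouping step mentioned in the method details can be illustrated with a short sketch. The NumPy code below is a minimal, illustrative example of assigning each "thing" pixel to its nearest predicted instance center using the per-pixel offsets; it is not the authors' implementation, and the function name group_pixels, the tensor layouts, and the nearest-center assignment rule are assumptions made only for this example.

    import numpy as np

    def group_pixels(centers, offsets, thing_mask):
        # centers: (K, 2) float array of predicted instance centers as (y, x),
        #          e.g. obtained by keypoint-style NMS on the center heatmap.
        # offsets: (2, H, W) float array, per-pixel (dy, dx) offset to its center.
        # thing_mask: (H, W) bool array marking pixels of "thing" classes.
        # Returns an (H, W) int32 instance-id map (0 = background / "stuff").
        H, W = thing_mask.shape
        ys, xs = np.nonzero(thing_mask)
        if len(centers) == 0 or len(ys) == 0:
            return np.zeros((H, W), dtype=np.int32)
        # Each pixel votes for a center location: its own coordinate plus offset.
        voted = np.stack([ys + offsets[0, ys, xs],
                          xs + offsets[1, ys, xs]], axis=1)                   # (N, 2)
        # Assign every pixel to the nearest detected center.
        dist = np.linalg.norm(voted[:, None, :] - centers[None, :, :], axis=2)  # (N, K)
        ids = dist.argmin(axis=1) + 1                                         # 1-based ids
        instance_map = np.zeros((H, W), dtype=np.int32)
        instance_map[ys, xs] = ids
        return instance_map

    # Toy usage: two centers, zero offsets, every pixel treated as a "thing" pixel.
    centers = np.array([[1.0, 1.0], [2.0, 6.0]])
    offsets = np.zeros((2, 4, 8), dtype=np.float32)
    mask = np.ones((4, 8), dtype=bool)
    print(group_pixels(centers, offsets, mask))

In the full pipeline described in the paper, the resulting instance map is fused with the semantic segmentation prediction (e.g., by majority vote of semantic labels within each instance) to produce the final panoptic output.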

 

Average results

AP is averaged over multiple overlap thresholds; AP50% uses a single 50% overlap threshold; AP100m and AP50m restrict evaluation to objects within 100 m and 50 m, respectively.

Metric Value
AP 38.9999
AP50% 64.0165
AP100m 54.7208
AP50m 56.7982

 

Class results

Class AP AP50% AP100m AP50m
person 36.049 67.0847 55.2242 55.2578
rider 30.2488 64.8331 45.4199 45.9427
car 56.7412 80.1095 77.7311 80.1654
truck 41.512 52.2542 57.1205 61.8166
bus 50.8102 66.3465 71.9532 79.9224
train 42.5046 64.2813 53.5174 53.7167
motorcycle 30.4396 61.3349 40.6422 41.4127
bicycle 23.6938 55.8877 36.1577 36.1513

 
