Method Details


Details for method 'Panoptic-DeepLab [Cityscapes-fine]'

 

Method overview

name Panoptic-DeepLab [Cityscapes-fine]
challenge pixel-level semantic labeling
details Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, dedicated to semantic segmentation and instance segmentation respectively. The semantic segmentation branch follows the typical design of a semantic segmentation model (e.g., DeepLab), while the instance segmentation branch performs a simple instance center regression: the model learns to predict instance centers as well as the offset from each pixel to its corresponding center (a simplified sketch of the resulting grouping step is given after this overview). This submission uses only the Cityscapes fine annotations.
publication Panoptic-DeepLab
Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen
https://arxiv.org/abs/1910.04751
project page / code
used Cityscapes data fine annotations
used external data ImageNet
runtime n/a
subsampling no
submission date September, 2019
previous submissions
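
As a rough illustration of the center-regression post-processing described in the details above, here is a minimal NumPy sketch, not the authors' implementation. It assumes a center heatmap, a 2-channel offset map, and a "thing" mask produced by the semantic branch, and it replaces the keypoint-style non-maximum suppression used in the paper with a simple confidence threshold; the function name group_instances and all parameter names are illustrative.

```python
# Illustrative sketch only (assumed interfaces); not the authors' post-processing code.
import numpy as np

def group_instances(center_heatmap, offsets, thing_mask, threshold=0.1):
    """Assign each 'thing' pixel to its nearest predicted instance center.

    center_heatmap: (H, W) array of center confidences.
    offsets:        (2, H, W) array of predicted (dy, dx) offsets from each
                    pixel to its instance center.
    thing_mask:     (H, W) boolean mask of pixels labeled as 'thing' classes
                    by the semantic branch.
    Returns an (H, W) instance-id map (0 = no instance).
    """
    h, w = center_heatmap.shape

    # Candidate centers: pixels whose confidence exceeds a threshold.
    # (The paper instead keeps local maxima via keypoint-style NMS.)
    ys, xs = np.where(center_heatmap > threshold)
    if len(ys) == 0:
        return np.zeros((h, w), dtype=np.int32)
    centers = np.stack([ys, xs], axis=1).astype(np.float32)  # (K, 2)

    # Each pixel votes for the location (pixel coordinate + predicted offset).
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    voted = np.stack([yy + offsets[0], xx + offsets[1]], axis=-1)  # (H, W, 2)

    # Brute-force nearest-center assignment for 'thing' pixels only.
    dists = np.linalg.norm(voted[..., None, :] - centers[None, None], axis=-1)  # (H, W, K)
    instance_ids = np.zeros((h, w), dtype=np.int32)
    instance_ids[thing_mask] = np.argmin(dists[thing_mask], axis=-1) + 1
    return instance_ids
```

Combining this instance-id map with the semantic prediction yields the panoptic result: 'stuff' pixels keep their semantic label, while 'thing' pixels additionally carry the instance id assigned above.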

 

Average results

Metric Value
IoU Classes 79.3965
iIoU Classes 58.4543
IoU Categories 91.7422
iIoU Categories 78.3524

 

Class results

Class IoU iIoU
road 98.7201 -
sidewalk 87.2394 -
building 93.5564 -
wall 57.6703 -
fence 60.7853 -
pole 70.8418 -
traffic light 78.0404 -
traffic sign 81.2138 -
vegetation 93.777 -
terrain 74.1333 -
sky 95.7122 -
person 88.2444 70.5832
rider 76.3553 55.1291
car 96.0355 86.7311
truck 55.3402 34.5246
bus 75.1293 55.1544
train 79.5916 55.7355
motorcycle 72.1225 52.0138
bicycle 74.0253 57.7626

 

Category results

Category IoU iIoU
flat 98.7184 -
nature 93.4831 -
object 76.4848 -
sky 95.7122 -
construction 93.8735 -
human 88.3858 71.8021
vehicle 95.5376 84.9026

 
