Benchmark Suite


We offer a benchmark suite together with an evaluation server, so that authors can upload their results and get a ranking for the different tasks (pixel-level and instance-level semantic labeling). Our evaluation concept is designed such that a single algorithm can contribute to multiple challenges. If you would like to submit your results, please register, log in, and follow the instructions on our submission page.

 

Pixel-Level Semantic Labeling Task

The first Cityscapes task involves predicting a per-pixel semantic labeling of the image without considering higher-level object instance or boundary information.

Metrics

To assess performance, we rely on the standard Jaccard Index, commonly known as the PASCAL VOC intersection-over-union metric \text{IoU} = \frac{\text{TP}}{\text{TP}+\text{FP}+\text{FN}} [1], where \text{TP}, \text{FP}, and \text{FN} are the numbers of true positive, false positive, and false negative pixels, respectively, determined over the whole test set. Owing to the two semantic granularities, i.e. classes and categories, we report two separate mean performance scores: \text{IoU}_{\text{category}} and \text{IoU}_{\text{class}}. In either case, pixels labeled as void do not contribute to the score.
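
For concreteness, the bookkeeping behind these scores can be sketched in a few lines of NumPy. This is a simplified illustration (the function name, the void label value, and the per-image label-map inputs are assumptions), not the official evaluation code:

import numpy as np

def mean_iou(preds, gts, num_classes, void_label=255):
    # Accumulate TP, FP, FN per class over the whole test set, then average.
    tp = np.zeros(num_classes, dtype=np.int64)
    fp = np.zeros(num_classes, dtype=np.int64)
    fn = np.zeros(num_classes, dtype=np.int64)
    for pred, gt in zip(preds, gts):        # one pair of integer label maps per image
        valid = gt != void_label            # pixels labeled void do not contribute
        p, g = pred[valid], gt[valid]
        for c in range(num_classes):
            tp[c] += np.sum((p == c) & (g == c))
            fp[c] += np.sum((p == c) & (g != c))
            fn[c] += np.sum((p != c) & (g == c))
    iou = tp / np.maximum(tp + fp + fn, 1)  # IoU = TP / (TP + FP + FN), per class
    return float(iou.mean())                # mean over classes (or categories)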

It is well known that the global \text{IoU} measure is biased toward object instances that cover a large image area. In street scenes, with their strong scale variation, this can be problematic. Specifically for traffic participants, which are the key classes in our scenario, we aim to evaluate how well the individual instances in the scene are represented in the labeling. To address this, we additionally evaluate the semantic labeling using an instance-level intersection-over-union metric \text{iIoU} = \frac{\text{iTP}}{\text{iTP}+\text{FP}+\text{iFN}}. Again, \text{iTP}, \text{FP}, and \text{iFN} denote the numbers of true positive, false positive, and false negative pixels, respectively. However, in contrast to the standard \text{IoU} measure, \text{iTP} and \text{iFN} are computed by weighting the contribution of each pixel by the ratio of the class' average instance size to the size of the respective ground truth instance. It is important to note here that, unlike in the instance-level task below, we assume that the methods yield only a standard per-pixel semantic class labeling as output. Therefore, the false positive pixels are not associated with any instance and thus do not require normalization. The final scores, \text{iIoU}_{\text{category}} and \text{iIoU}_{\text{class}}, are obtained as the means for the two semantic granularities.
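
The instance-size weighting can be sketched analogously. The snippet below additionally assumes a ground-truth instance-id map per image and an array avg_size holding each class' average instance size, and note that on the benchmark the score is only meaningful for classes that actually have instances (the traffic participants). Again an illustrative sketch, not the official implementation:

import numpy as np

def mean_iiou(preds, gts, insts, avg_size, num_classes, void_label=255):
    # iIoU = iTP / (iTP + FP + iFN): true positive and false negative pixels are
    # weighted by avg_size[class] / (size of their ground-truth instance), while
    # false positive pixels belong to no instance and keep weight 1.
    itp = np.zeros(num_classes)
    fp = np.zeros(num_classes)
    ifn = np.zeros(num_classes)
    for pred, gt, inst in zip(preds, gts, insts):
        valid = gt != void_label
        p, g, i = pred[valid], gt[valid], inst[valid]
        _, inv, counts = np.unique(i, return_inverse=True, return_counts=True)
        w = avg_size[g] / counts[inv]       # per-pixel instance-size weight
        for c in range(num_classes):
            itp[c] += np.sum(w[(p == c) & (g == c)])
            fp[c] += np.sum((p == c) & (g != c))
            ifn[c] += np.sum(w[(p != c) & (g == c)])
    return float(np.mean(itp / np.maximum(itp + fp + ifn, 1e-9)))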

Results

Detailed results

Detailed results, including performance for individual classes and categories, can be found here.


Each entry below gives the method name, with the data used in parentheses (fine and/or coarse annotations, 16-bit images, depth, video; "sub n" indicates results computed on images subsampled by a factor of n); the four scores in the order IoU class / iIoU class / IoU category / iIoU category; the runtime in seconds; whether code is available; and, where provided, the title, authors, venue, and description of the corresponding publication.

FCN 8s (fine): 65.3 / 41.7 / 85.7 / 70.1; runtime 0.5 s; code available.
Fully Convolutional Networks for Semantic Segmentation. J. Long, E. Shelhamer, and T. Darrell. CVPR 2015.
Trained by Marius Cordts on a pre-release version of the dataset.

RRR-ResNet152-MultiScale (fine, coarse): 75.8 / 48.5 / 89.3 / 74.0; runtime n/a; no code. Anonymous.
Update: this submission actually used the coarse labels, which was previously not marked accordingly.

Dilation10 (fine): 67.1 / 42.0 / 86.5 / 71.1; runtime 4.0 s; code available.
Multi-Scale Context Aggregation by Dilated Convolutions. Fisher Yu and Vladlen Koltun. ICLR 2016.
Dilation10 is a convolutional network that consists of a front-end prediction module and a context aggregation module. Both are described in the paper. The combined network was trained jointly. The context module consists of 10 layers, each of which has C=19 feature maps. The larger number of layers in the context module (10 for Cityscapes versus 8 for Pascal VOC) is due to the high input resolution. The Dilation10 model is a pure convolutional network: there is no CRF and no structured prediction. Dilation10 can therefore be used as the baseline input for structured prediction models. Note that the reported results were produced by training on the training set only; the network was not retrained on train+val.

Adelaide (fine): 66.4 / 46.7 / 82.8 / 67.4; runtime 35.0 s; no code.
Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation. G. Lin, C. Shen, I. Reid, and A. van den Hengel. arXiv preprint 2015.
Trained on a pre-release version of the dataset.

DeepLab LargeFOV StrongWeak (fine, coarse, sub 2): 64.8 / 34.9 / 81.3 / 58.7; runtime 4.0 s; code available.
Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation. G. Papandreou, L.-C. Chen, K. Murphy, and A. L. Yuille. ICCV 2015.
Trained on a pre-release version of the dataset.

DeepLab LargeFOV Strong (fine, sub 2): 63.1 / 34.5 / 81.2 / 58.7; runtime 4.0 s; code available.
Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. ICLR 2015.
Trained on a pre-release version of the dataset.

DPN (fine, coarse, sub 3): 59.1 / 28.1 / 79.5 / 57.9; runtime n/a; no code.
Semantic Image Segmentation via Deep Parsing Network. Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang. ICCV 2015.
Trained on a pre-release version of the dataset.

Segnet basic (fine, sub 4): 57.0 / 32.0 / 79.1 / 61.9; runtime 0.06 s; code available.
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. V. Badrinarayanan, A. Kendall, and R. Cipolla. arXiv preprint 2015.
Trained on a pre-release version of the dataset.

Segnet extended (fine, sub 4): 56.1 / 34.2 / 79.8 / 66.4; runtime 0.06 s; code available.
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. V. Badrinarayanan, A. Kendall, and R. Cipolla. arXiv preprint 2015.
Trained on a pre-release version of the dataset.

CRFasRNN (fine, sub 2): 62.5 / 34.4 / 82.7 / 66.0; runtime 0.7 s; code available.
Conditional Random Fields as Recurrent Neural Networks. S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr. ICCV 2015.
Trained on a pre-release version of the dataset.

Scale invariant CNN + CRF (fine, depth): 66.3 / 44.9 / 85.0 / 71.2; runtime n/a; code available.
Convolutional Scale Invariance for Semantic Segmentation. I. Kreso, D. Causevic, J. Krapac, and S. Segvic. GCPR 2016.
We propose an effective technique to address large scale variation in images taken from a moving car by cross-breeding deep learning with stereo reconstruction. Our main contribution is a novel scale selection layer which extracts convolutional features at the scale which matches the corresponding reconstructed depth. The recovered scale-invariant representation disentangles appearance from scale and frees the pixel-level classifier from the need to learn the laws of perspective. This results in improved segmentation results due to more efficient exploitation of representation capacity and training data. We perform experiments on two challenging stereoscopic datasets (KITTI and Cityscapes) and report competitive class-level IoU performance.

DPN (fine): 66.8 / 39.1 / 86.0 / 69.1; runtime n/a; no code.
Semantic Image Segmentation via Deep Parsing Network. Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang. ICCV 2015.
DPN trained on full resolution images.

Pixel-level Encoding for Instance Segmentation (fine, depth): 64.3 / 41.6 / 85.9 / 73.9; runtime n/a; no code.
Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling. J. Uhrig, M. Cordts, U. Franke, and T. Brox. GCPR 2016.
We predict three encoding channels from a single image using an FCN: semantic labels, depth classes, and an instance-aware representation based on directions towards instance centers. Using low-level computer vision techniques, we obtain pixel-level and instance-level semantic labeling paired with a depth estimate of the instances.

Adelaide_context (fine): 71.6 / 51.7 / 87.3 / 74.1; runtime n/a; no code.
Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation. Guosheng Lin, Chunhua Shen, Anton van den Hengel, Ian Reid. CVPR 2016.
We explore contextual information to improve semantic image segmentation. Details are described in the paper. We trained contextual networks for coarse level prediction and a refinement network for refining the coarse prediction. Our models are trained on the training set only (2975 images) without adding the validation set.

NVSegNet (fine): 67.4 / 41.4 / 87.2 / 68.1; runtime 0.4 s; no code. Anonymous.
Inference uses the image at 2 different scales; the same holds for training.

ENet (fine, sub 2): 58.3 / 34.4 / 80.4 / 64.0; runtime 0.013 s; code available.
ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation. Adam Paszke, Abhishek Chaurasia, Sangpil Kim, Eugenio Culurciello.

DeepLabv2-CRF (fine): 70.4 / 42.6 / 86.4 / 67.7; runtime n/a; code available.
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, Alan L. Yuille. arXiv preprint.
DeepLabv2-CRF is based on three main methods. First, we employ convolution with upsampled filters, or 'atrous convolution', as a powerful tool to repurpose ResNet-101 (trained on the image classification task) for dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within DCNNs. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and fully connected Conditional Random Fields (CRFs). The model is trained on the train set only.

m-TCFs (fine, coarse): 71.8 / 43.6 / 87.6 / 70.6; runtime 1.0 s; no code. Anonymous.
Convolutional Neural Network.

DeepLab+DynamicCRF (fine): 64.5 / 38.3 / 83.7 / 62.4; runtime n/a; no code. ru.nl.

LRR-4x (fine): 69.7 / 48.0 / 88.2 / 74.7; runtime n/a; code available.
Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation. Golnaz Ghiasi, Charless C. Fowlkes. ECCV 2016.
We introduce a CNN architecture that reconstructs high-resolution class label predictions from low-resolution feature maps using class-specific basis functions. Our multi-resolution architecture also uses skip connections from higher resolution feature maps to successively refine segment boundaries reconstructed from lower resolution maps. The model used for this submission is based on VGG-16 and it was trained on the training set (2975 images). The segmentation predictions were not post-processed using CRF. (This is a revision of a previous submission in which we didn't use the correct basis functions; the method name changed from 'LLR-4x' to 'LRR-4x'.)

LRR-4x (fine, coarse): 71.8 / 47.9 / 88.4 / 73.9; runtime n/a; code available.
Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation. Golnaz Ghiasi, Charless C. Fowlkes. ECCV 2016.
We introduce a CNN architecture that reconstructs high-resolution class label predictions from low-resolution feature maps using class-specific basis functions. Our multi-resolution architecture also uses skip connections from higher resolution feature maps to successively refine segment boundaries reconstructed from lower resolution maps. The model used for this submission is based on VGG-16 and it was trained using both coarse and fine annotations. The segmentation predictions were not post-processed using CRF.

Le_Selfdriving_VGG (fine): 65.9 / 35.6 / 84.4 / 64.3; runtime n/a; no code. Anonymous.

SQ (fine): 59.8 / 32.3 / 84.3 / 66.0; runtime 0.06 s; no code.
Speeding up Semantic Segmentation for Autonomous Driving. Michael Treml, José Arjona-Medina, Thomas Unterthiner, Rupesh Durgesh, Felix Friedmann, Peter Schuberth, Andreas Mayr, Martin Heusel, Markus Hofmarcher, Michael Widrich, Bernhard Nessler, Sepp Hochreiter. NIPS 2016 Workshop on Machine Learning for Intelligent Transportation Systems (MLITS), Barcelona, Spain.

SAIT (fine, coarse): 76.9 / 51.8 / 89.6 / 75.5; runtime 4.0 s; no code. Anonymous.

FoveaNet (fine): 74.1 / 52.4 / 89.3 / 77.6; runtime n/a; no code.
FoveaNet. Xin Li, Jiashi Feng.
caffe-master; ResNet-101; single-scale testing. Previously listed as "LXFCRN".

RefineNet (fine): 73.6 / 47.2 / 87.9 / 70.6; runtime n/a; code available.
RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation. Guosheng Lin, Anton Milan, Chunhua Shen, Ian Reid.
Please refer to our technical report for details: "RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation" (https://arxiv.org/abs/1611.06612). Our source code is available at https://github.com/guosheng/refinenet. 2975 images (training set with fine labels) are used for training.

SegModel (fine): 78.5 / 56.1 / 89.8 / 75.9; runtime 0.8 s; no code. Anonymous.
Both the train set (2975 images) and the val set (500 images) were used to train the model for this submission.

TuSimple (fine): 77.6 / 53.6 / 90.1 / 75.2; runtime n/a; no code. Anonymous.

Global-Local-Refinement (fine): 77.3 / 53.4 / 90.0 / 76.8; runtime n/a; no code. Anonymous.
Global-residual and local-boundary refinement. The method was previously listed as "RefineNet"; to avoid confusion with a recently appeared and similarly named approach, the submission name was updated.

XPARSE (fine): 73.4 / 49.2 / 88.7 / 74.2; runtime n/a; no code. Anonymous.

ResNet-38 (fine): 78.4 / 59.1 / 90.9 / 81.1; runtime n/a; code available.
Wider or Deeper: Revisiting the ResNet Model for Visual Recognition. Zifeng Wu, Chunhua Shen, Anton van den Hengel. arXiv.
Single model, single-scale testing, no post-processing with CRFs. Model A2, 2 conv., fine only. The submission was previously listed as "Model A2, 2 conv."; the name was changed for consistency with the other submission of the same work.

SegModel (fine, coarse): 79.2 / 56.4 / 90.4 / 77.0; runtime n/a; no code. Anonymous.

Deep Layer Cascade (LC) (fine): 71.1 / 47.0 / 88.1 / 74.1; runtime n/a; no code.
Not All Pixels Are Equal: Difficulty-aware Semantic Segmentation via Deep Layer Cascade. Xiaoxiao Li, Ziwei Liu, Ping Luo, Chen Change Loy, Xiaoou Tang. CVPR 2017.
We propose a novel deep layer cascade (LC) method to improve the accuracy and speed of semantic segmentation. Unlike the conventional model cascade (MC) that is composed of multiple independent models, LC treats a single deep model as a cascade of several sub-models. Earlier sub-models are trained to handle easy and confident regions, and they progressively feed-forward harder regions to the next sub-model for processing. Convolutions are only calculated on these regions to reduce computations. The proposed method possesses several advantages. First, LC classifies most of the easy regions in the shallow stage and makes the deeper stages focus on a few hard regions. Such adaptive and 'difficulty-aware' learning improves segmentation performance. Second, LC accelerates both training and testing of deep networks thanks to early decisions in the shallow stage. Third, in comparison to MC, LC is an end-to-end trainable framework, allowing joint learning of all sub-models. We evaluate our method on PASCAL VOC and Cityscapes.

FRRN (fine, sub 2): 71.8 / 45.5 / 88.9 / 75.1; runtime n/a; code available.
Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes. Tobias Pohlen, Alexander Hermans, Markus Mathias, Bastian Leibe. arXiv.
Full-Resolution Residual Networks (FRRN) combine multi-scale context with pixel-level accuracy by using two processing streams within one network: one stream carries information at the full image resolution, enabling precise adherence to segment boundaries; the other stream undergoes a sequence of pooling operations to obtain robust features for recognition.

MNet_MPRG (fine): 71.9 / 46.6 / 89.3 / 77.9; runtime 0.6 s; no code. Chubu University, MPRG.
Without the val set, external data (e.g. ImageNet), or post-processing.

ResNet-38 (fine, coarse): 80.6 / 57.8 / 91.0 / 79.1; runtime n/a; code available.
Wider or Deeper: Revisiting the ResNet Model for Visual Recognition. Zifeng Wu, Chunhua Shen, Anton van den Hengel. arXiv.
Single model, no post-processing with CRFs. Model A2, 2 conv., fine+coarse, multi-scale testing.

FCN8s-QunjieYu (fine): 57.4 / 34.5 / 81.8 / 68.7; runtime n/a; no code. Anonymous.

RGB-D FCN (fine, coarse, depth): 67.4 / 42.1 / 87.5 / 71.0; runtime n/a; no code. Anonymous.
GoogLeNet + depth branch, single model. No data augmentation, no training on the validation set, no graphical model. Coarse labels were used to initialize the depth branch.

MultiBoost (fine, coarse, depth, sub 2): 59.3 / 32.5 / 81.9 / 60.2; runtime 0.25 s; no code. Anonymous.
Boosting-based solution. Publication is under review.

GoogLeNet FCN (fine): 63.0 / 38.6 / 85.8 / 69.8; runtime n/a; no code.
Going Deeper with Convolutions. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich. CVPR 2015.
GoogLeNet; no data augmentation, no graphical model. Trained by Lukas Schneider, following "Fully Convolutional Networks for Semantic Segmentation", Long et al., CVPR 2015.

ERFNet (pretrained) (fine, sub 2): 69.7 / 44.1 / 87.3 / 72.7; runtime 0.02 s; code available.
Efficient ConvNet for Real-time Semantic Segmentation. Eduardo Romera, Jose M. Alvarez, Luis M. Bergasa and Roberto Arroyo. IV 2017.
ERFNet pretrained on ImageNet and trained only on the fine train annotations (2975 images).

ERFNet (from scratch) (fine, sub 2): 68.0 / 40.4 / 86.5 / 70.4; runtime 0.02 s; code available.
Efficient ConvNet for Real-time Semantic Segmentation. Eduardo Romera, Jose M. Alvarez, Luis M. Bergasa and Roberto Arroyo. IV 2017.
ERFNet trained entirely on the fine train set (2975 images), without any pretraining or coarse labels.

TuSimple_Coarse (fine, coarse): 80.1 / 56.9 / 90.7 / 77.8; runtime n/a; code available.
Understanding Convolution for Semantic Segmentation. Panqu Wang, Pengfei Chen, Ye Yuan, Ding Liu, Zehua Huang, Xiaodi Hou, Garrison Cottrell.
Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are better for practical use. First, we implement dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields of the network to aggregate global information; 2) alleviates what we call the "gridding issue" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset and achieve a new state-of-the-art result of 80.1% mIoU on the test set. We are also state-of-the-art overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Pretrained models are available at https://goo.gl/DQMeun.

SAC-multiple (fine): 78.1 / 55.2 / 90.6 / 78.3; runtime n/a; no code. Anonymous.

NetWarp (fine, coarse, video): 80.5 / 59.5 / 91.0 / 79.8; runtime n/a; no code. Anonymous.

depthAwareSeg_RNN_ff (fine): 78.2 / 56.0 / 89.7 / 76.9; runtime n/a; code available. Anonymous.
Training with fine-annotated training images only (val set not used); flip augmentation only during training; single GPU for training and testing; softmax loss; ResNet-101 as front end; multi-scale testing.

Ladder DenseNet (fine): 74.3 / 51.6 / 89.7 / 79.5; runtime 0.45 s; no code. Anonymous.
Anonymous ICCV submission 3205.

GridNet (fine): 69.5 / 44.1 / 87.9 / 71.1; runtime n/a; no code. Anonymous.
Conv-deconv grid network for semantic segmentation. Uses only the training set (2975 images), without extra coarse annotated data; no ImageNet pre-training; no post-processing (such as a CRF).

PEARL (fine, video): 75.4 / 51.6 / 89.2 / 75.1; runtime n/a; no code.
Video Scene Parsing with Predictive Feature Learning. Xiaojie Jin, Xin Li, Huaxin Xiao, Xiaohui Shen, Zhe Lin, Jimei Yang, Yunpeng Chen, Jian Dong, Luoqi Liu, Zequn Jie, Jiashi Feng, and Shuicheng Yan. ICCV 2017.
We proposed a novel Parsing with prEdictive feAtuRe Learning (PEARL) model to address the following two problems in video scene parsing: first, how to effectively learn meaningful video representations for producing temporally consistent labeling maps; second, how to overcome the problem of insufficient labeled video training data, i.e. how to effectively conduct unsupervised deep learning. To our knowledge, this is the first model to employ predictive feature learning for video scene parsing.

pruned & dilated inception-resnet-v2 (PD-IR2) (fine, coarse): 67.3 / 42.1 / 86.5 / 68.3; runtime 0.69 s; code available. Anonymous.

PSPNet (fine, coarse): 81.2 / 59.6 / 91.2 / 79.2; runtime n/a; code available.
Pyramid Scene Parsing Network. Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, Jiaya Jia. CVPR 2017.
This submission is trained on coarse+fine (train+val sets, 2975+500 images). A former submission, trained on coarse+fine (train set, 2975 images), reaches 80.2 mIoU: https://www.cityscapes-dataset.com/method-details/?submissionID=314. Previous versions of this method were listed as "SenseSeg_1026".

motovis (fine, coarse): 81.3 / 57.7 / 91.5 / 80.7; runtime n/a; no code. motovis.com.

ML-CRNN (fine): 71.2 / 47.1 / 87.7 / 72.5; runtime n/a; no code.
Multi-level Contextual RNNs with Attention Model for Scene Labeling. Heng Fan, Xue Mei, Danil Prokhorov, Haibin Ling. arXiv.
A framework based on CNNs and RNNs is proposed, in which the RNNs are used to model spatial dependencies among image units. Besides, to enrich deep features, we use different features from multiple levels, and adopt a novel attention model to fuse them.

Hybrid Model (fine): 65.8 / 41.2 / 85.2 / 68.5; runtime n/a; no code. Anonymous.

 

Instance-Level Semantic Labeling Task

In the second Cityscapes task we focus on simultaneously detecting objects and segmenting them. This is an extension to both traditional object detection, since per-instance segments must be provided, and pixel-level semantic labeling, since each instance is treated as a separate label. Therefore, algorithms are required to deliver a set of detections of traffic participants in the scene, each associated with a confidence score and a per-instance segmentation mask.
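
Conceptually, a method's output for one image is thus a list of (class, confidence, mask) triples; a minimal sketch of such a structure (purely illustrative, not the required submission format):

from dataclasses import dataclass
import numpy as np

@dataclass
class InstancePrediction:
    label_id: int         # semantic class of the detected traffic participant
    confidence: float     # detection score used to rank predictions
    mask: np.ndarray      # boolean per-pixel segmentation mask of the instance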

Metrics

To assess instance-level performance, we compute the average precision on the region level (AP [2]) for each class and average it across a range of overlap thresholds to avoid a bias towards a specific value. Specifically, we follow [3] and use 10 different overlaps ranging from 0.5 to 0.95 in steps of 0.05. The overlap is computed at the region level, making it equivalent to the \text{IoU} of a single instance. We penalize multiple predictions of the same ground truth instance as false positives. To obtain a single, easy-to-compare compound score, we report the mean average precision \text{AP}, obtained by additionally averaging over the class label set. As minor scores, we add \text{AP}^{50\%} for an overlap value of 50 %, as well as \text{AP}^{100\text{m}} and \text{AP}^{50\text{m}}, where the evaluation is restricted to objects within 100 m and 50 m distance, respectively.
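
For one class, the matching-and-averaging loop can be sketched as follows (greedy matching in order of decreasing confidence; details such as crowd regions and the distance-restricted variants are omitted, and all names are illustrative assumptions, not the official evaluation scripts):

import numpy as np

def region_iou(a, b):
    # Region-level overlap of two boolean instance masks.
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def average_precision(pred_masks, confidences, gt_masks):
    # Mean AP over the overlap thresholds 0.5, 0.55, ..., 0.95.
    order = np.argsort(confidences)[::-1]       # most confident predictions first
    aps = []
    for t in np.arange(0.50, 1.00, 0.05):
        matched, tp, fp = set(), [], []
        for k in order:
            ious = [region_iou(pred_masks[k], g) for g in gt_masks]
            j = int(np.argmax(ious)) if ious else -1
            if j >= 0 and ious[j] >= t and j not in matched:
                matched.add(j); tp.append(1); fp.append(0)
            else:                               # miss, or duplicate match of the
                tp.append(0); fp.append(1)      # same instance: false positive
        tp, fp = np.cumsum(tp), np.cumsum(fp)
        recall = tp / max(len(gt_masks), 1)
        precision = tp / np.maximum(tp + fp, 1)
        r = np.concatenate(([0.0], recall))     # area under the PR curve
        aps.append(float(np.sum((r[1:] - r[:-1]) * precision)))
    return float(np.mean(aps))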

Results

Detailed results

Detailed results, including performance for individual classes and categories, can be found here.


Each entry below gives the method name, with the data used in parentheses (as above); the four scores in the order AP / AP 50% / AP 100m / AP 50m; the runtime in seconds; whether code is available; and, where provided, the title, authors, venue, and description of the corresponding publication.

R-CNN + MCG convex hull (fine, sub 2): 4.6 / 12.9 / 7.7 / 10.3; runtime 60.0 s; no code.
The Cityscapes Dataset for Semantic Urban Scene Understanding. M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, B. Schiele. CVPR 2016.
We compute MCG object proposals [1] and use their convex hulls as instance candidates. These proposals are scored by a Fast R-CNN detector [2].
[1] P. Arbelaez, J. Pont-Tuset, J. Barron, F. Marqués, and J. Malik. Multiscale combinatorial grouping. In CVPR, 2014.
[2] R. Girshick. Fast R-CNN. In ICCV, 2015.

Pixel-level Encoding for Instance Segmentation (fine, depth): 8.9 / 21.1 / 15.3 / 16.7; runtime n/a; no code.
Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling. J. Uhrig, M. Cordts, U. Franke, and T. Brox. GCPR 2016.
We predict three encoding channels from a single image using an FCN: semantic labels, depth classes, and an instance-aware representation based on directions towards instance centers. Using low-level computer vision techniques, we obtain pixel-level and instance-level semantic labeling paired with a depth estimate of the instances.

Instance-level Segmentation of Vehicles by Deep Contours (fine, sub 2): 2.3 / 3.7 / 3.9 / 4.9; runtime 0.2 s; no code.
Instance-level Segmentation of Vehicles by Deep Contours. Jan van den Brand, Matthias Ochs and Rudolf Mester. Asian Conference on Computer Vision, Workshop on Computer Vision Technologies for Smart Vehicles.
Our method uses the fully convolutional network (FCN) for semantic labeling and for estimating the boundary of each vehicle. Even though a contour is in general a one-pixel-wide structure which cannot be directly learned by a CNN, our network addresses this by providing areas around the contours. Based on these areas, we separate the individual vehicle instances.

Boundary-aware Instance Segmentation (fine, sub 2): 17.4 / 36.7 / 29.3 / 34.0; runtime n/a; no code.
Boundary-aware Instance Segmentation. Zeeshan Hayder, Xuming He, Mathieu Salzmann. CVPR 2017.
End-to-end model for instance segmentation using the VGG16 network. Previously listed as "Shape-Aware Instance Segmentation".

RecAttend (fine, sub 4): 9.5 / 18.9 / 16.8 / 20.9; runtime n/a; no code. Anonymous.

Joint Graph Decomposition and Node Labeling (fine, sub 8): 9.8 / 23.2 / 16.8 / 20.3; runtime n/a; no code.
Joint Graph Decomposition and Node Labeling: Problem, Algorithms, Applications. Evgeny Levinkov, Jonas Uhrig, Siyu Tang, Mohamed Omran, Eldar Insafutdinov, Alexander Kirillov, Carsten Rother, Thomas Brox, Bernt Schiele, Bjoern Andres. CVPR 2017.

InstanceCut (fine, coarse): 13.0 / 27.9 / 22.1 / 26.1; runtime n/a; no code.
InstanceCut: from Edges to Instances with MultiCut. A. Kirillov, E. Levinkov, B. Andres, B. Savchynskyy, C. Rother. CVPR 2017.
InstanceCut represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance boundaries. The former is computed from a standard CNN for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation.

Pixelwise Instance Segmentation with a Dynamically Instantiated Network (fine, coarse): 20.0 / 38.8 / 32.6 / 37.6; runtime n/a; no code.
Pixelwise Instance Segmentation with a Dynamically Instantiated Network. Anurag Arnab and Philip Torr. CVPR 2017.
We propose an Instance Segmentation system that produces a segmentation map where each pixel is assigned an object class and instance identity label. Our method is based on an initial semantic segmentation module which feeds into an instance subnetwork. This subnetwork uses the initial category-level segmentation, along with cues from the output of an object detector, within an end-to-end CRF to predict instances. This part of our model is dynamically instantiated to produce a variable number of instances per image. Our end-to-end approach requires no post-processing and considers the image holistically, instead of processing independent proposals. As a result, it reasons about occlusions (unlike some related work, a single pixel cannot belong to multiple instances).

PPLoss (fine, sub 2): 17.5 / 35.9 / 27.8 / 31.0; runtime n/a; no code. Anonymous.

SGN (fine, coarse): 25.0 / 44.9 / 38.9 / 44.5; runtime n/a; no code. Anonymous.

Mask R-CNN [COCO] (fine): 32.0 / 58.1 / 45.8 / 49.5; runtime n/a; no code.
Mask R-CNN. Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick.
Mask R-CNN, ResNet-50-FPN, Cityscapes [fine-only] + COCO.

Mask R-CNN [fine-only] (fine): 26.2 / 49.9 / 37.6 / 40.1; runtime n/a; no code.
Mask R-CNN. Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick.
Mask R-CNN, ResNet-50-FPN, Cityscapes fine-only.

Deep Watershed Transformation (fine, sub 2): 19.4 / 35.3 / 31.4 / 36.8; runtime n/a; no code.
Deep Watershed Transformation for Instance Segmentation. Min Bai and Raquel Urtasun. CVPR 2017.
Instance segmentation using a watershed-transformation-inspired CNN. The input RGB image is augmented using the semantic segmentation from the recent PSPNet by H. Zhao et al. Previously named "DWT".

 

Meta Information

In addition to the previously introduced measures, we report additional meta information for each method, such as timings or the kind of information each algorithm is using, e.g. depth data or multiple video frames. Please refer to the result tables for further details.

 

References

[1] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The Pascal Visual Object Classes challenge: A retrospective," IJCV, vol. 111, iss. 1, 2014.
[2] B. Hariharan, P. Arbeláez, R. B. Girshick, and J. Malik, "Simultaneous detection and segmentation," in ECCV, 2014.
[3] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and L. C. Zitnick, "Microsoft COCO: Common objects in context," in ECCV, 2014.