
semanticSegmentationMetrics

Semantic segmentation quality metrics

Description

A semanticSegmentationMetrics object encapsulates semantic segmentation quality metrics for a set of images.

Creation

Create a semanticSegmentationMetrics object using the evaluateSemanticSegmentation function.

Properties

ConfusionMatrix

This property is read-only.

Confusion matrix, specified as a table with C rows and columns, where C is the number of classes in the semantic segmentation. Each table element (i,j) is the count of pixels known to belong to class i but predicted to belong to class j.

NormalizedConfusionMatrix

This property is read-only.

Normalized confusion matrix, specified as a table with C rows and columns, where C is the number of classes in the semantic segmentation. NormalizedConfusionMatrix is the confusion matrix normalized by the number of pixels known to belong to each class. Each table element (i,j) is the count of pixels known to belong to class i but predicted to belong to class j, divided by the total number of pixels known to belong to class i. Elements are in the range [0, 1].
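As a rough illustration (not the toolbox implementation), the normalized matrix can be reproduced from the raw counts by dividing each row of ConfusionMatrix by its row sum, assuming a semanticSegmentationMetrics object named metrics:

counts = metrics.ConfusionMatrix{:,:};   % C-by-C matrix of raw pixel counts
normalized = counts ./ sum(counts,2);    % divide each row by its ground truth pixel count
% Each row of "normalized" now sums to 1 and corresponds to
% metrics.NormalizedConfusionMatrix up to rounding.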

DataSetMetrics

This property is read-only.

Semantic segmentation metrics aggregated over the data set, specified as a table with one row. DataSetMetrics has up to five columns, corresponding to the metrics that were specified by the 'Metrics' name-value pair used with evaluateSemanticSegmentation:

  • GlobalAccuracy — Ratio of correctly classified pixels to total pixels, regardless of class.

  • MeanAccuracy — Ratio of correctly classified pixels in each class to total pixels, averaged over all classes. The value is equal to the mean of ClassMetrics.Accuracy.

  • MeanIoU — Average intersection over union (IoU) of all classes. The value is equal to the mean of ClassMetrics.IoU.

  • WeightedIoU — Average IoU of all classes, weighted by the number of pixels in the class.

  • MeanBFScore — Average boundary F1 (BF) score of all images. The value is equal to the mean of ImageMetrics.BFScore. This metric is not available when you create a semanticSegmentationMetrics object by using a confusion matrix as the input to evaluateSemanticSegmentation.

Note

A value of NaN in the data set, class, or image metrics indicates that one or more classes were missing during the computation of the metrics when using the evaluateSemanticSegmentation function. In this case, the software was unable to accurately compute the metrics.

The missing classes can be found by looking at the ClassMetrics property, which provides the metrics for each class. To more accurately evaluate your network, augment your ground truth with more data that includes the missing classes.
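The following sketch shows how the first four data set metrics relate to the confusion matrix. It assumes cm is the C-by-C count matrix from ConfusionMatrix (rows are true classes, columns are predicted classes) and is meant as an illustration, not the exact toolbox implementation:

cm         = metrics.ConfusionMatrix{:,:};   % C-by-C pixel counts
truePixels = sum(cm,2);                      % ground truth pixels per class
predPixels = sum(cm,1)';                     % predicted pixels per class
correct    = diag(cm);                       % correctly classified pixels per class

globalAccuracy = sum(correct) / sum(cm(:));                      % GlobalAccuracy
classAccuracy  = correct ./ truePixels;                          % per-class accuracy
meanAccuracy   = mean(classAccuracy);                            % MeanAccuracy
iou            = correct ./ (truePixels + predPixels - correct); % per-class IoU
meanIoU        = mean(iou);                                      % MeanIoU
weightedIoU    = sum(iou .* truePixels) / sum(truePixels);       % WeightedIoU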

ClassMetrics

This property is read-only.

Semantic segmentation metrics for each class, specified as a table with C rows, where C is the number of classes in the semantic segmentation. ClassMetrics has up to three columns, corresponding to the metrics that were specified by the 'Metrics' name-value pair used with evaluateSemanticSegmentation:

  • Accuracy — Ratio of correctly classified pixels in each class to the total number of pixels belonging to that class according to the ground truth. Accuracy can be expressed as:

    Accuracy = TP / (TP + FN)

    TP is the number of true positives and FN is the number of false negatives for the class, as summarized in this confusion matrix:

                        Predicted Positive      Predicted Negative
    Actual Positive     TP: True Positive       FN: False Negative
    Actual Negative     FP: False Positive      TN: True Negative

  • IoU — Ratio of correctly classified pixels to the total number of pixels that are assigned that class by the ground truth and the predictor. IoU can be expressed as:

    IoU = TP / (TP + FP + FN)

    TP, FP, and FN are the numbers of true positive, false positive, and false negative pixels for the class, as defined in the confusion matrix shown for Accuracy.

  • MeanBFScore — Boundary F1 score for each class, averaged over all images. This metric is not available when you create a semanticSegmentationMetrics object by using a confusion matrix as the input to evaluateSemanticSegmentation.
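To make the two formulas above concrete, this sketch computes Accuracy and IoU for a single class index i from the confusion matrix (illustrative only; the variable names are not part of the API):

cm = metrics.ConfusionMatrix{:,:};
i  = 1;                         % class index of interest
TP = cm(i,i);                   % pixels of class i predicted as class i
FN = sum(cm(i,:)) - TP;         % pixels of class i predicted as another class
FP = sum(cm(:,i)) - TP;         % pixels of other classes predicted as class i
accuracy = TP / (TP + FN);      % ClassMetrics.Accuracy for class i
iou      = TP / (TP + FP + FN); % ClassMetrics.IoU for class i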

ImageMetrics

This property is read-only.

Semantic segmentation metrics for each image in the data set, specified as a table with N rows, where N is the number of images in the data set. ImageMetrics has up to five columns, corresponding to the metrics that were specified by the 'Metrics' name-value pair used with evaluateSemanticSegmentation:

  • GlobalAccuracy — Ratio of correctly classified pixels to total pixels, regardless of class.

  • MeanAccuracy — Ratio of correctly classified pixels to total pixels, averaged over all classes in the image.

  • MeanIoU — Average IoU of all classes in the image.

  • WeightedIoU — Average IoU of all classes in the image, weighted by the number of pixels in each class.

  • MeanBFScore — Average BF score of each class in the image. This metric is not available when you create a semanticSegmentationMetrics object by using a confusion matrix as the input to evaluateSemanticSegmentation.

Each image metric returns a vector, with one element for each image in the data set. The order of the rows matches the order of the images defined by the input PixelLabelDatastore objects representing the data set.
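Because ImageMetrics follows datastore order, you can use it to locate problem images. For example, a brief sketch (assuming a semanticSegmentationMetrics object named metrics computed from datastores):

% Find the image with the lowest mean IoU in the data set.
[worstIoU,worstIdx] = min(metrics.ImageMetrics.MeanIoU);
fprintf("Lowest MeanIoU is %.4f for image %d in the datastore.\n",worstIoU,worstIdx)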

Examples


The triangleImages data set has 100 test images with ground truth labels. Define the location of the data set.

dataSetDir = fullfile(toolboxdir("vision"),"visiondata","triangleImages");

Define the location of the test images and ground truth labels.

testImagesDir = fullfile(dataSetDir,"testImages");
testLabelsDir = fullfile(dataSetDir,"testLabels");

Create an imageDatastore holding the test images.

imds = imageDatastore(testImagesDir);

Define the class names and their associated label IDs.

classNames = ["triangle" "background"];
labelIDs   = [255 0];

Create a pixelLabelDatastore holding the ground truth pixel labels for the test images.

pxdsTruth = pixelLabelDatastore(testLabelsDir,classNames,labelIDs);

Load a semantic segmentation network that has been trained on the training images of triangleImages.

net = load("triangleSegmentationNetwork");
net = net.net;

Run the network on the test images. Predicted labels are written to disk in a temporary directory and returned as a pixelLabelDatastore.

pxdsResults = semanticseg(imds,net,Classes=classNames,WriteLocation=tempdir);
Running semantic segmentation network
-------------------------------------
* Processed 100 images.

Evaluate the prediction results against the ground truth.

metrics = evaluateSemanticSegmentation(pxdsResults,pxdsTruth);
Evaluating semantic segmentation results
----------------------------------------
* Selected metrics: global accuracy, class accuracy, IoU, weighted IoU, BF score.
* Processed 100 images.
* Finalizing... Done.
* Data set metrics:

    GlobalAccuracy    MeanAccuracy    MeanIoU    WeightedIoU    MeanBFScore
    ______________    ____________    _______    ___________    ___________

       0.99074          0.99183       0.91118      0.98299        0.80563  

Display the properties of the semanticSegmentationMetrics object.

metrics
metrics = 
  semanticSegmentationMetrics with properties:

              ConfusionMatrix: [2x2 table]
    NormalizedConfusionMatrix: [2x2 table]
               DataSetMetrics: [1x5 table]
                 ClassMetrics: [2x3 table]
                 ImageMetrics: [100x5 table]

Display the classification accuracy, the intersection over union, and the boundary F1 score for each class. These values are stored in the ClassMetrics property.

metrics.ClassMetrics
ans=2×3 table
                  Accuracy      IoU      MeanBFScore
                  ________    _______    ___________

    triangle      0.99302     0.83206      0.67208  
    background    0.99063      0.9903      0.93918  

Display the confusion matrix that is stored in the ConfusionMatrix property.

metrics.ConfusionMatrix
ans=2×2 table
                  triangle    background
                  ________    __________

    triangle        4697           33   
    background       915        96755   
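The class accuracies shown earlier follow directly from the rows of this matrix: each class accuracy is the diagonal count divided by its row total. A quick check (a sketch, not a required step in the workflow):

cm = metrics.ConfusionMatrix{:,:};
classAccuracy = diag(cm) ./ sum(cm,2)
% triangle:   4697/(4697+33)    = 0.99302
% background: 96755/(915+96755) = 0.99063
% These values match metrics.ClassMetrics.Accuracy.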

Version History

Introduced in R2017b