
evaluateDetectionMissRate

Evaluate miss rate metric for object detection

Syntax

logAverageMissRate = evaluateDetectionMissRate(detectionResults,groundTruthTable)
[logAverageMissRate,fppi,missRate] = evaluateDetectionMissRate(___)
[___] = evaluateDetectionMissRate(___,threshold)

Description

logAverageMissRate = evaluateDetectionMissRate(detectionResults,groundTruthTable) returns the log-average miss rate of the detectionResults compared to groundTruthTable, which is used to measure the performance of the object detector. For a multiclass detector, the log-average miss rate is a vector of scores for each object class in the order specified by groundTruthTable.

example

[logAverageMissRate,fppi,missRate] = evaluateDetectionMissRate(___) returns data points for plotting the log miss rate–false positives per image (FPPI) curve, using input arguments from the previous syntax.

[___] = evaluateDetectionMissRate(___,threshold) specifies the overlap threshold for assigning a detection to a ground truth box.
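As a minimal sketch of the third syntax, assuming `results` is a detection results table and `groundTruth` is a ground truth table as described under Input Arguments, a stricter overlap threshold can be specified like this:

```matlab
% Hypothetical tables: results (Boxes, Scores) and groundTruth (one
% bounding-box column per class). Require at least 70% intersection
% over union for a detection to match a ground truth box.
[logAverageMissRate,fppi,missRate] = ...
    evaluateDetectionMissRate(results,groundTruth,0.7);
```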

Examples


Train an ACF-based detector using preloaded ground truth information. Run the detector on the training images. Evaluate the detector and display the log miss rate–FPPI curve.

Load the ground truth table.

load('stopSignsAndCars.mat')
stopSigns = stopSignsAndCars(:,1:2);
stopSigns.imageFilename = fullfile(toolboxdir('vision'),'visiondata', ...
    stopSigns.imageFilename);

Train an ACF-based detector.

detector = trainACFObjectDetector(stopSigns,'NumStages',3);
ACF Object Detector Training
The training will take 3 stages. The model size is 34x31.
Sample positive examples(~100% Completed)
Compute approximation coefficients...Completed.
Compute aggregated channel features...Completed.
--------------------------------------------
Stage 1:
Sample negative examples(~100% Completed)
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 210 negative examples...Completed.
The trained classifier has 19 weak learners.
--------------------------------------------
Stage 2:
Sample negative examples(~100% Completed)
Found 210 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 210 negative examples...Completed.
The trained classifier has 51 weak learners.
--------------------------------------------
Stage 3:
Sample negative examples(~100% Completed)
Found 210 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 210 negative examples...Completed.
The trained classifier has 87 weak learners.
--------------------------------------------
ACF object detector training is completed. Elapsed time is 20.0043 seconds.

Create a table to store the results.

numImages = height(stopSigns);
results(numImages) = struct('Boxes',[],'Scores',[]);

Run the detector on the training images. Store the results as a table.

for i = 1:numImages
    I = imread(stopSigns.imageFilename{i});
    [bboxes,scores] = detect(detector,I);
    results(i).Boxes = bboxes;
    results(i).Scores = scores;
end

results = struct2table(results);

Evaluate the results against the ground truth data. Get the data points for the log miss rate–FPPI curve.

[am,fppi,missRate] = evaluateDetectionMissRate(results,stopSigns(:,2));

Plot the log miss rate–FPPI curve.

figure
loglog(fppi,missRate)
grid on
title(sprintf('Log Average Miss Rate = %.1f',am))

Input Arguments


detectionResults — Object locations and scores, specified as a two-column table containing the bounding boxes and scores for each detected object. For multiclass detection, a third column contains the predicted label for each detection.

When detecting objects, you can create the detection results table by using struct2table to combine the bboxes and scores outputs:

for i = 1:numImages
    I = imread(imageFilename{i});
    [bboxes,scores] = detect(detector,I);
    results(i).Boxes = bboxes;
    results(i).Scores = scores;
end
results = struct2table(results);

Data Types: table

groundTruthTable — Labeled ground truth images, specified as a table with two or more columns. The first column must contain paths and file names of grayscale or truecolor (RGB) images. The remaining columns must contain bounding boxes related to the corresponding image. Each column represents a single object class, such as a car, dog, flower, or stop sign.

Each bounding box must be in the format [x,y,width,height]. The format specifies the upper-left corner location and the size of the object in the corresponding image. The table variable name defines the object class name. To create the ground truth table, use the Training Image Labeler app.
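As an alternative to the app, a small ground truth table can be built directly. The file names and box coordinates below are hypothetical placeholders:

```matlab
% Hypothetical ground truth table with one object class (stopSign).
% Each cell holds an M-by-4 matrix of [x y width height] boxes for
% the M objects in that image (use zeros(0,4) for no objects).
imageFilename = {'stopSignImage1.jpg'; 'stopSignImage2.jpg'};
stopSign = {[856 318 39 41]; [445 523 52 54]};
groundTruthTable = table(imageFilename,stopSign);
```

The table variable name `stopSign` becomes the object class name.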

threshold — Overlap threshold for assigning a detection to a ground truth box, specified as a numeric scalar. The default value is 0.5. The overlap ratio is computed as the intersection over union.
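The intersection-over-union ratio can be inspected directly with `bboxOverlapRatio` from Computer Vision System Toolbox, as in this sketch with made-up box coordinates:

```matlab
% Two overlapping boxes in [x y width height] format.
boxA = [10 10 20 20];
boxB = [15 15 20 20];

% bboxOverlapRatio computes intersection over union by default
% (ratioType 'Union'). A detection counts as a match only when
% this ratio meets or exceeds the threshold.
iou = bboxOverlapRatio(boxA,boxB);
```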

Output Arguments


logAverageMissRate — Log-average miss rate metric, returned as either a numeric scalar or vector. For a multiclass detector, the log-average miss rate is returned as a vector of values, one for each object class, in the order specified by groundTruthTable.

fppi — False positives per image, returned as either a vector of numeric scalars or a cell array. For a multiclass detector, the FPPI and log miss rate are cell arrays, where each cell contains the data points for one object class.

missRate — Log miss rate, returned as either a vector of numeric scalars or a cell array. For a multiclass detector, the FPPI and log miss rate are cell arrays, where each cell contains the data points for one object class.
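For the multiclass case, the cell-array outputs can be plotted one curve per class. This is a sketch; `classNames` is a hypothetical cell array of class name strings matching the order in groundTruthTable:

```matlab
% Plot one log miss rate-FPPI curve per object class. fppi and
% missRate are cell arrays, one cell per class.
figure
for k = 1:numel(fppi)
    loglog(fppi{k},missRate{k})
    hold on
end
hold off
grid on
xlabel('False positives per image')
ylabel('Miss rate')
legend(classNames)
```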

Introduced in R2017a

