
trainingOptions

Options for training neural network

Syntax

options = trainingOptions(solverName)
options = trainingOptions(solverName,Name,Value)

Description

options = trainingOptions(solverName) returns a set of training options for the solver specified by solverName.


options = trainingOptions(solverName,Name,Value) returns a set of training options with additional options specified by one or more name-value pair arguments.

Examples


Create a set of options for training a network using stochastic gradient descent with momentum. Reduce the learning rate by a factor of 0.2 every 5 epochs. Set the maximum number of epochs for training to 20, and use a mini-batch with 64 observations at each iteration. Plot the training progress during training.

options = trainingOptions('sgdm',...
    'LearnRateSchedule','piecewise',...
    'LearnRateDropFactor',0.2,...
    'LearnRateDropPeriod',5,...
    'MaxEpochs',20,...
    'MiniBatchSize',64,...
    'Plots','training-progress')
options = 

  TrainingOptionsSGDM with properties:

                     Momentum: 0.9000
             InitialLearnRate: 0.0100
    LearnRateScheduleSettings: [1x1 struct]
             L2Regularization: 1.0000e-04
                    MaxEpochs: 20
                MiniBatchSize: 64
                      Verbose: 1
             VerboseFrequency: 50
               ValidationData: []
          ValidationFrequency: 50
           ValidationPatience: 5
                      Shuffle: 'once'
               CheckpointPath: ''
         ExecutionEnvironment: 'auto'
                   WorkerLoad: []
                    OutputFcn: []
                        Plots: 'training-progress'
               SequenceLength: 'longest'
         SequencePaddingValue: 0

When you train networks for deep learning, it is often useful to monitor the training progress. By plotting various metrics during training, you can learn how the training is progressing. For example, you can determine if and how quickly the network accuracy is improving, and whether the network is starting to overfit the training data.

When you specify 'training-progress' as the 'Plots' value in trainingOptions and start network training, trainNetwork creates a figure and displays training metrics at every iteration. Each iteration is an estimation of the gradient and an update of the network parameters. If you specify validation data in trainingOptions, then the figure shows validation metrics each time trainNetwork validates the network. The figure plots the following:

  • Training accuracy — Classification accuracy on each individual mini-batch.

  • Smoothed training accuracy — Smoothed training accuracy, obtained by applying a smoothing algorithm to the training accuracy. It is less noisy than the unsmoothed accuracy, making it easier to spot trends.

  • Validation accuracy — Classification accuracy on the entire validation set (specified using trainingOptions).

  • Training loss, smoothed training loss, and validation loss — The loss on each mini-batch, its smoothed version, and the loss on the validation set, respectively. If the final layer of your network is a classificationLayer, then the loss function is the cross entropy loss. For more information about loss functions for classification and regression problems, see Output Layers.

For regression networks, the figure plots the root mean square error (RMSE) instead of the accuracy.

The figure marks each training epoch using a shaded background. An epoch is a full pass through the entire data set.

During training, you can stop training and return the current state of the network by clicking the stop button in the top-right corner. For example, you might want to stop training when the accuracy of the network reaches a plateau and it is clear that the accuracy is no longer improving. After you click the stop button, it can take a while for the training to complete. Once training is complete, trainNetwork returns the trained network.

When training finishes, view the Results showing the final validation accuracy and the reason that training finished.

On the right, view information about the training time and settings. To learn more about training options, see Set Up Parameters and Train Convolutional Neural Network.

Plot Training Progress During Training

This example shows how to train a network and plot the training progress during training.

Load the training data, which contains 5000 images of digits. Set aside 1000 of the images for network validation.

[trainImages,trainLabels] = digitTrain4DArrayData;

idx = randperm(size(trainImages,4),1000);
valImages = trainImages(:,:,:,idx);
trainImages(:,:,:,idx) = [];
valLabels = trainLabels(idx);
trainLabels(idx) = [];

Construct a network to classify the digit image data.

layers = [
    imageInputLayer([28 28 1])
    
    convolution2dLayer(3,16,'Padding',1)
    batchNormalizationLayer
    reluLayer   
    
    maxPooling2dLayer(2,'Stride',2)
    
    convolution2dLayer(3,32,'Padding',1)
    batchNormalizationLayer
    reluLayer   
    
    maxPooling2dLayer(2,'Stride',2)
    
    convolution2dLayer(3,64,'Padding',1)
    batchNormalizationLayer
    reluLayer   
    
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

Specify options for network training. To validate the network at regular intervals during training, specify validation data. Choose the 'ValidationFrequency' value so that the network is validated about once per epoch. To plot training progress during training, specify 'training-progress' as the 'Plots' value.

options = trainingOptions('sgdm',...
    'MaxEpochs',6, ...
    'ValidationData',{valImages,valLabels},...
    'ValidationFrequency',30,...
    'Verbose',false,...
    'Plots','training-progress');

Train the network.

net = trainNetwork(trainImages,trainLabels,layers,options);

Stop Training Using Output Function

By default, using the 'ValidationData' name-value pair argument stops training when the loss on the validation set stops decreasing. You can add further stopping criteria using output functions. This example shows how to create an output function that stops training when the classification accuracy on the validation data stops improving. The output function is defined at the end of the script.

Load the training data, which contains 5000 images of digits. Set aside 1000 of the images for network validation.

[trainImages,trainLabels] = digitTrain4DArrayData;

idx = randperm(size(trainImages,4),1000);
valImages = trainImages(:,:,:,idx);
trainImages(:,:,:,idx) = [];
valLabels = trainLabels(idx);
trainLabels(idx) = [];

Construct a network to classify the digit image data.

layers = [
    imageInputLayer([28 28 1])
    
    convolution2dLayer(3,16,'Padding',1)
    batchNormalizationLayer
    reluLayer   
    
    maxPooling2dLayer(2,'Stride',2)
    
    convolution2dLayer(3,32,'Padding',1)
    batchNormalizationLayer
    reluLayer   
    
    maxPooling2dLayer(2,'Stride',2)
    
    convolution2dLayer(3,64,'Padding',1)
    batchNormalizationLayer
    reluLayer   
    
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

Specify options for network training. To validate the network at regular intervals during training, specify validation data. Choose the 'ValidationFrequency' value so that the network is validated twice per epoch. Turn off the built-in validation stopping criterion (which uses the loss) by setting the 'ValidationPatience' value to Inf.

To stop training when the classification accuracy on the validation set stops improving, specify stopIfAccuracyNotImproving as an output function. The second input argument of stopIfAccuracyNotImproving is the number of times that the accuracy on the validation set can be smaller than or equal to the previously highest accuracy before network training stops. Choose any large value for the maximum number of epochs to train. Training should not reach the final epoch because training stops automatically.

miniBatchSize = 256;
numValidationsPerEpoch = 2;
validationFrequency = floor(size(trainImages,4)/miniBatchSize/numValidationsPerEpoch);
options = trainingOptions('sgdm',...
    'InitialLearnRate',0.01,...
    'MaxEpochs',100,...
    'MiniBatchSize',miniBatchSize,...
    'VerboseFrequency',validationFrequency,...
    'ValidationData',{valImages,valLabels},...
    'ValidationFrequency',validationFrequency,...
    'ValidationPatience',Inf,...
    'Plots','training-progress',...
    'OutputFcn',@(info)stopIfAccuracyNotImproving(info,3));

Train the network. Training stops when the validation accuracy stops increasing.

net = trainNetwork(trainImages,trainLabels,layers,options);
Training on single GPU.
Initializing image normalization.
|=======================================================================================================================|
|     Epoch    |   Iteration  | Time Elapsed |  Mini-batch  |  Validation  |  Mini-batch  |  Validation  | Base Learning|
|              |              |  (seconds)   |     Loss     |     Loss     |   Accuracy   |   Accuracy   |     Rate     |
|=======================================================================================================================|
|            1 |            1 |         0.49 |       2.3773 |       2.2120 |       11.33% |       18.50% |       0.0100 |
|            1 |            7 |         0.88 |       1.5561 |       1.3503 |       50.78% |       63.10% |       0.0100 |
|            1 |           14 |         1.19 |       0.9200 |       0.8031 |       67.97% |       76.10% |       0.0100 |
|            2 |           21 |         1.52 |       0.5295 |       0.4740 |       85.16% |       86.30% |       0.0100 |
|            2 |           28 |         1.84 |       0.3041 |       0.2798 |       93.75% |       92.60% |       0.0100 |
|            3 |           35 |         2.23 |       0.1823 |       0.2166 |       95.70% |       94.40% |       0.0100 |
|            3 |           42 |         2.59 |       0.1137 |       0.1338 |       98.83% |       97.80% |       0.0100 |
|            4 |           49 |         2.98 |       0.0854 |       0.0984 |       99.22% |       98.40% |       0.0100 |
|            4 |           56 |         3.32 |       0.0756 |       0.0913 |       99.61% |       98.40% |       0.0100 |
|            5 |           63 |         3.65 |       0.0564 |       0.0670 |      100.00% |       98.90% |       0.0100 |
|            5 |           70 |         3.97 |       0.0455 |       0.0614 |       99.61% |       99.30% |       0.0100 |
|            6 |           77 |         4.29 |       0.0336 |       0.0455 |      100.00% |       99.40% |       0.0100 |
|            6 |           84 |         4.61 |       0.0318 |       0.0435 |      100.00% |       99.80% |       0.0100 |
|            7 |           91 |         4.94 |       0.0294 |       0.0453 |      100.00% |       99.10% |       0.0100 |
|            7 |           98 |         5.28 |       0.0197 |       0.0333 |      100.00% |       99.90% |       0.0100 |
|            7 |          105 |         5.61 |       0.0224 |       0.0353 |      100.00% |       99.60% |       0.0100 |
|            8 |          112 |         5.98 |       0.0215 |       0.0264 |      100.00% |       99.90% |       0.0100 |
|            8 |          119 |         6.32 |       0.0204 |       0.0321 |      100.00% |       99.60% |       0.0100 |
|=======================================================================================================================|

Output Function

Define the output function stopIfAccuracyNotImproving(info,N), which stops network training if the best classification accuracy on the validation data does not improve for N network validations in a row. This criterion is similar to the built-in stopping criterion using the validation loss, except that it applies to the classification accuracy instead of the loss.

function stop = stopIfAccuracyNotImproving(info,N)

stop = false;

% Keep track of the best validation accuracy and the number of validations for which
% there has not been an improvement of the accuracy.
persistent bestValAccuracy
persistent valLag

% Clear the variables when training starts.
if info.State == "start"
    bestValAccuracy = 0;
    valLag = 0;
    
elseif ~isempty(info.ValidationLoss)
    
    % Compare the current validation accuracy to the best accuracy so far,
    % and either set the best accuracy to the current accuracy, or increase
    % the number of validations for which there has not been an improvement.
    if info.ValidationAccuracy > bestValAccuracy
        valLag = 0;
        bestValAccuracy = info.ValidationAccuracy;
    else
        valLag = valLag + 1;
    end
    
    % If the validation lag is at least N, that is, the validation accuracy
    % has not improved for at least N validations, then return true and
    % stop training.
    if valLag >= N
        stop = true;
    end
    
end

end

Input Arguments


Solver to use for training the network. You must specify 'sgdm' (Stochastic Gradient Descent with Momentum).

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'InitialLearnRate',0.03,'L2Regularization',0.0005,'LearnRateSchedule','piecewise' specifies the initial learning rate as 0.03 and the L2 regularization factor as 0.0005, and instructs the software to drop the learning rate every given number of epochs by multiplying with a certain factor.
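
As an illustrative sketch, these name-value pairs would appear in a full call like the following (the remaining options keep their defaults):

% Sketch: the example name-value pairs in a complete trainingOptions call.
options = trainingOptions('sgdm',...
    'InitialLearnRate',0.03,...
    'L2Regularization',0.0005,...
    'LearnRateSchedule','piecewise');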


Plots to display during network training, specified as the comma-separated pair consisting of 'Plots' and one of the following:

  • 'none' — Do not display plots during training.

  • 'training-progress' — Plot training progress. The plot shows mini-batch loss and accuracy, validation loss and accuracy, and additional information on the training progress. The plot has a stop button in the top-right corner. Click the button to stop training and return the current state of the network. For more information on the training progress plot, see Monitor Deep Learning Training Progress.

Example: 'Plots','training-progress'

Data Types: char

Path for saving the checkpoint networks, specified as the comma-separated pair consisting of 'CheckpointPath' and a character vector.

  • If you do not specify a path (that is, you use the default ''), then the software does not save any checkpoint networks.

  • If you specify a path, then trainNetwork saves checkpoint networks to this path after every epoch and assigns a unique name to each network. You can then load any checkpoint network and resume training from that network.

    If the directory does not already exist, then you must first create it before specifying the path for saving the checkpoint networks. If the path you specify does not exist, then trainingOptions returns an error.

Example: 'CheckpointPath','C:\Temp\checkpoint'

Data Types: char
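
As a minimal sketch, assuming you want the checkpoints under the system temporary folder (the folder name below is an assumption, not a required location), you can create the directory before passing it to trainingOptions:

% Create the checkpoint folder first; trainingOptions errors if it does not exist.
checkpointDir = fullfile(tempdir,'checkpoint');   % illustrative path
if ~exist(checkpointDir,'dir')
    mkdir(checkpointDir);
end
options = trainingOptions('sgdm','CheckpointPath',checkpointDir);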

Hardware resource for training network, specified as one of the following:

  • 'auto' — Use a GPU if one is available. Otherwise, use the CPU.

  • 'cpu' — Use the CPU.

  • 'gpu' — Use the GPU.

  • 'multi-gpu' — Use multiple GPUs on one machine, using a local parallel pool. If no pool is already open, then the software opens one with one worker per supported GPU device.

  • 'parallel' — Use a local parallel pool or compute cluster. If no pool is already open, then the software opens one using the default cluster profile. If the pool has access to GPUs, then the software uses them and leaves excess workers idle. If the pool does not have GPUs, then the training takes place on all cluster CPUs.

GPU, multi-GPU, and parallel options require Parallel Computing Toolbox™. To use a GPU, you must also have a CUDA®-enabled NVIDIA® GPU with compute capability 3.0 or higher. If you choose one of these options and Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

To see an improvement in performance when training in parallel, try increasing MiniBatchSize to offset the communication overhead.

To train directed acyclic graph (DAG) or long short-term memory (LSTM) networks, the hardware resource must be 'auto', 'cpu', or 'gpu'.

Example: 'ExecutionEnvironment','cpu'
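
For example, assuming Parallel Computing Toolbox and several supported GPUs are available, a hedged sketch that combines multi-GPU training with a larger mini-batch (as suggested above) might look like this:

% Multi-GPU training; the larger mini-batch helps offset communication overhead.
options = trainingOptions('sgdm',...
    'ExecutionEnvironment','multi-gpu',...
    'MiniBatchSize',256);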

Initial learning rate used for training, specified as the comma-separated pair consisting of 'InitialLearnRate' and a positive scalar. If the learning rate is too low, then training takes a long time. If the learning rate is too high, then training might reach a suboptimal result.

Example: 'InitialLearnRate',0.03

Data Types: single | double

Option for dropping the learning rate during training, specified as the comma-separated pair consisting of 'LearnRateSchedule' and one of the following:

  • 'none' — The learning rate remains constant throughout training.

  • 'piecewise' — The software updates the learning rate every certain number of epochs by multiplying with a certain factor. Use the LearnRateDropFactor name-value pair argument to specify the value of this factor. Use the LearnRateDropPeriod name-value pair argument to specify the number of epochs between multiplications.

Example: 'LearnRateSchedule','piecewise'

Factor for dropping the learning rate, specified as the comma-separated pair consisting of 'LearnRateDropFactor' and a scalar from 0 to 1. This option is valid only when the value of LearnRateSchedule is 'piecewise'.

LearnRateDropFactor is a multiplicative factor to apply to the learning rate every time a certain number of epochs passes. Specify the number of epochs using the LearnRateDropPeriod name-value pair argument.

Example: 'LearnRateDropFactor',0.02

Data Types: single | double

Number of epochs for dropping the learning rate, specified as the comma-separated pair consisting of 'LearnRateDropPeriod' and a positive integer. This option is valid only when the value of LearnRateSchedule is 'piecewise'.

The software multiplies the global learning rate with the drop factor every time the specified number of epochs passes. Specify the drop factor using the LearnRateDropFactor name-value pair argument.

Example: 'LearnRateDropPeriod',3
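
As an illustration only (this arithmetic is not code that trainNetwork exposes), the effective learning rate in a given epoch under a piecewise schedule can be sketched as:

% With InitialLearnRate 0.01, drop factor 0.2, and drop period 5, the rate is
% 0.01 for epochs 1-5, 0.002 for epochs 6-10, 4.0e-4 for epochs 11-15, and so on.
initialLearnRate = 0.01;
dropFactor = 0.2;
dropPeriod = 5;
epoch = 12;
learnRate = initialLearnRate * dropFactor^floor((epoch-1)/dropPeriod)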

Factor for L2 regularizer (weight decay), specified as the comma-separated pair consisting of 'L2Regularization' and a nonnegative scalar.

You can specify a multiplier for the L2 regularizer for network layers with learnable parameters.

Example: 'L2Regularization',0.0005

Data Types: single | double

Maximum number of epochs to use for training, specified as the comma-separated pair consisting of 'MaxEpochs' and a positive integer.

An iteration is one step taken in the gradient descent algorithm towards minimizing the loss function using a mini-batch. An epoch is the full pass of the training algorithm over the entire training set.

Example: 'MaxEpochs',20

Size of the mini-batch to use for each training iteration, specified as the comma-separated pair consisting of 'MiniBatchSize' and a positive integer. A mini-batch is a subset of the training set that is used to evaluate the gradient of the loss function and update the weights. See Stochastic Gradient Descent with Momentum.

Example: 'MiniBatchSize',256
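
As a rough sketch of how the mini-batch size relates to iterations and epochs (the observation count below is an assumed example value):

% Each epoch processes roughly numObservations/miniBatchSize mini-batches.
numObservations = 4000;                      % assumed size of the training set
miniBatchSize = 256;
iterationsPerEpoch = floor(numObservations/miniBatchSize);
totalIterations = iterationsPerEpoch * 20;   % for 'MaxEpochs',20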

Contribution of the gradient step from the previous iteration to the current iteration of the training, specified as the comma-separated pair consisting of 'Momentum' and a scalar from 0 to 1. A value of 0 means no contribution from the previous step, whereas a value of 1 means maximal contribution from the previous step.

Example: 'Momentum',0.8

Data Types: single | double

Option to pad, truncate, or split input sequences, specified as one of the following:

  • 'longest' — Pad sequences in each mini-batch to have the same length as the longest sequence.

  • 'shortest' — Truncate sequences in each mini-batch to have the same length as the shortest sequence.

  • Positive integer — Pad sequences in each mini-batch to have the same length as the longest sequence, then split into smaller sequences of the specified length. If splitting occurs, then the function creates extra mini-batches.

Example: 'SequenceLength','shortest'

Value by which to pad input sequences, specified as a scalar. The option is valid only when SequenceLength is 'longest' or a positive integer. Do not pad sequences with NaN, because doing so can propagate errors throughout the network.

Example: 'SequencePaddingValue',-1
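
A minimal sketch combining the two sequence options, assuming a sequence (LSTM) workflow:

% Pad each mini-batch to its longest sequence and use -1 as the padding value.
options = trainingOptions('sgdm',...
    'SequenceLength','longest',...
    'SequencePaddingValue',-1);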

Option for data shuffling, specified as the comma-separated pair consisting of 'Shuffle' and one of the following:

  • 'once' — Shuffle the training and validation data once before training.

  • 'never' — Do not shuffle the data.

  • 'every-epoch' — Shuffle the training data before each training epoch, and shuffle the validation data before each network validation. If the mini-batch size does not evenly divide the number of training samples, then trainNetwork discards the training data that does not fit into the final complete mini-batch of each epoch. To avoid discarding the same data every epoch, set the 'Shuffle' value to 'every-epoch'.

Example: 'Shuffle','every-epoch'

Indicator to display training progress information in the command window, specified as the comma-separated pair consisting of 'Verbose' and either 1 (true) or 0 (false).

The displayed information includes the epoch number, iteration number, time elapsed, mini-batch loss, mini-batch accuracy, and base learning rate. When you train a regression network, root mean square error (RMSE) is shown instead of accuracy. If you validate the network during training, then the displayed information also includes the validation loss and validation accuracy (or RMSE). Use the 'ValidationData' name-value pair to specify validation data.

Example: 'Verbose',0

Data Types: logical

Frequency of verbose printing, which is the number of iterations between printing to the command window, specified as the comma-separated pair consisting of 'VerboseFrequency' and a positive integer. This option only has an effect when the 'Verbose' value equals true.

If you validate the network during training, then trainNetwork also prints to the command window every time validation occurs.

Worker load division for GPUs or CPUs, specified as the comma-separated pair consisting of 'WorkerLoad' and a numeric vector. This option has an effect only when the 'ExecutionEnvironment' value equals 'multi-gpu' or 'parallel'. The specified vector must contain one value per worker in the parallel pool. For a vector w, each worker gets a fraction w(i)/sum(w) of the work. Use this option to balance the workload between unevenly performing hardware.
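
For instance, a sketch for a parallel pool with three workers in which the first worker has roughly twice the capacity of the others (the values are assumptions):

% The first worker receives 2/4 of each mini-batch, the others 1/4 each.
options = trainingOptions('sgdm',...
    'ExecutionEnvironment','parallel',...
    'WorkerLoad',[2 1 1]);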

Output functions to call during training, specified as the comma-separated pair consisting of 'OutputFcn' and a function handle or cell array of function handles. trainNetwork calls the specified functions once before the start of training, after each iteration, and once after training has finished. trainNetwork passes a structure containing information in the following fields:

  • Epoch — Current epoch number

  • Iteration — Current iteration number

  • TimeSinceStart — Time in seconds since the start of training

  • TrainingLoss — Current mini-batch loss

  • ValidationLoss — Loss on the validation data

  • BaseLearnRate — Current base learning rate

  • TrainingAccuracy — Accuracy on the current mini-batch (classification networks)

  • TrainingRMSE — RMSE on the current mini-batch (regression networks)

  • ValidationAccuracy — Accuracy on the validation data (classification networks)

  • ValidationRMSE — RMSE on the validation data (regression networks)

  • State — Current training state, with a possible value of "start", "iteration", or "done"

If a field is not calculated or relevant for a certain call to the output functions, then that field contains an empty array.

You can use output functions to display or plot progress information, or to stop training. To stop training early, make your output function return true. If any output function returns true, then training finishes and trainNetwork returns the latest network. For an example showing how to stop training early, see Stop Training Using Output Function.

Data Types: function_handle | cell

Data to use for validation during training, specified as the comma-separated pair consisting of 'ValidationData' and one of the following:

  • ImageDatastore with categorical labels for image classification problems.

  • table, where the first column contains either image paths or images, and the subsequent columns contain the responses. For an image classification problem, the response must be a categorical variable in the second table column. For a regression problem, the responses can be either in multiple columns as scalars, or in a single column as numeric vectors or cell arrays containing numeric 3-D arrays.

  • cell array {X,Y}, where X is a numeric array of images and Y contains the responses. The first three dimensions of X are the height, width, and channels, and the last dimension is the image index. For an image classification problem, Y must be a categorical vector. For a regression problem, Y must be a numeric array. For more information on the allowed shape of Y, see details on the trainNetwork page.

During training, trainNetwork predicts the labels of the validation data and calculates the validation accuracy and validation loss. To specify the validation frequency, use the 'ValidationFrequency' name-value pair argument. By default, if the validation loss is larger than or equal to the previously smallest loss five times in a row, then network training stops. To change the number of times that the validation loss is allowed to not decrease before training stops, use the 'ValidationPatience' name-value pair argument.

If your network has layers that behave differently during prediction than during training (for example, dropout layers), then the validation accuracy can be higher than the training (mini-batch) accuracy.

The validation data is shuffled according to the 'Shuffle' value. If the 'Shuffle' value equals 'every-epoch', then the validation data is shuffled before each network validation.

You cannot specify validation data when training long short-term memory (LSTM) networks.

Note

You cannot specify validation data using an augmentedImageSource, denoisingImageSource, or pixelLabelImageSource.

Example: 'ValidationData',imds

Frequency of network validation in number of iterations, specified as the comma-separated pair consisting of 'ValidationFrequency' and a positive integer.

The 'ValidationFrequency' value is the number of iterations between evaluations of validation metrics. To specify validation data, use the 'ValidationData' name-value pair argument.

Example: 'ValidationFrequency',20

Patience of validation stopping of network training, specified as the comma-separated pair consisting of 'ValidationPatience' and a positive integer or Inf.

The 'ValidationPatience' value is the number of times that the loss on the validation set can be larger than or equal to the previously smallest loss before network training stops. To turn off automatic validation stopping, specify Inf as the 'ValidationPatience' value. To specify validation data, use the 'ValidationData' name-value pair argument.

Example: 'ValidationPatience',4

Output Arguments


Training options, returned as an object.

For the sgdm training solver, options is a TrainingOptionsSGDM object.

Algorithms


Initial Weights and Biases

The default for the initial weights is a Gaussian distribution with a mean of 0 and a standard deviation of 0.01. The default for the initial bias value is 0. You can manually change the initialization for the weights and biases. See Specify Initial Weights and Biases in Convolutional Layer and Specify Initial Weights and Biases in Fully Connected Layer.
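
As an illustrative sketch (the filter size, channel count, and filter count are assumptions matching a grayscale input), you can override the defaults by setting the Weights and Bias properties of a layer before training:

% Replace the default initialization of a convolutional layer.
layer = convolution2dLayer(5,20,'Name','conv1');
layer.Weights = 0.01 * randn([5 5 1 20]);   % 5-by-5 filters, 1 input channel, 20 filters
layer.Bias = zeros([1 1 20]);               % one bias per filter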

Stochastic Gradient Descent with Momentum

The gradient descent algorithm updates the parameters (weights and biases) to minimize the error function by taking small steps in the direction of the negative gradient of the loss function [1]

θ_(ℓ+1) = θ_ℓ − α∇E(θ_ℓ),

where ℓ stands for the iteration number, α > 0 is the learning rate, θ is the parameter vector, and E(θ) is the loss function. The gradient of the loss function, ∇E(θ), is evaluated using the entire training set, and the standard gradient descent algorithm uses the entire data set at once. The stochastic gradient descent algorithm evaluates the gradient and updates the parameters using a subset of the training set. This subset is called a mini-batch.

Each evaluation of the gradient using the mini-batch is an iteration. At each iteration, the algorithm takes one step towards minimizing the loss function. The full pass of the training algorithm over the entire training set using mini-batches is an epoch. You can specify the mini-batch size and the maximum number of epochs using the MiniBatchSize and MaxEpochs name-value pair arguments, respectively.

The gradient descent algorithm might oscillate along the steepest descent path to the optimum. Adding a momentum term to the parameter update is one way to prevent this oscillation [2]. The stochastic gradient descent update with momentum is

θ_(ℓ+1) = θ_ℓ − α∇E(θ_ℓ) + γ(θ_ℓ − θ_(ℓ−1)),

where γ determines the contribution of the previous gradient step to the current iteration. You can specify this value using the Momentum name-value pair argument.
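
As an illustrative sketch (not the internal implementation of trainNetwork), a single update of this form could be written as:

% One stochastic gradient descent with momentum step.
% theta and prevTheta are the current and previous parameter vectors, gradE is
% the mini-batch gradient of the loss at theta, alpha is the learning rate,
% and gamma is the momentum.
function [thetaNew,prevThetaNew] = sgdmStep(theta,prevTheta,gradE,alpha,gamma)
    thetaNew = theta - alpha*gradE + gamma*(theta - prevTheta);
    prevThetaNew = theta;
end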

By default, the software shuffles the data once before training. You can change this setting using the Shuffle name-value pair argument.

L2 Regularization

Adding a regularization term for the weights to the loss function E(θ) is one way to reduce overfitting [1], [2]. The regularization term is also called weight decay. The loss function with the regularization term takes the form

E_R(θ) = E(θ) + λΩ(w),

where w is the weight vector, λ is the regularization factor (coefficient), and the regularization function Ω(w) is

Ω(w) = (1/2)wᵀw.

Note that the biases are not regularized [2]. You can specify the regularization factor, λ, using the L2Regularization name-value pair argument.
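
As a sketch only, the regularized loss for a given weight vector can be written as:

% E is the unregularized loss, w the weight vector, lambda the L2Regularization factor.
regularizedLoss = @(E,w,lambda) E + lambda * 0.5 * (w(:)' * w(:));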

Save Checkpoint Networks and Resume Training

trainNetwork enables you to save checkpoint networks as .mat files during training. You can then resume training from any checkpoint network. If you want trainNetwork to save checkpoint networks, then you must specify the name of the path by using the CheckpointPath name-value pair argument of trainingOptions. If the path you specify is incorrect, then trainingOptions returns an error.

trainNetwork automatically assigns a unique name to each checkpoint network file, for example, convnet_checkpoint__351__2016_11_09__12_04_23.mat. In this example, 351 is the iteration number, 2016_11_09 is the date, and 12_04_23 is the time at which trainNetwork saves the network. You can load a checkpoint network file by double-clicking it or entering the load command at the command line. For example:

load convnet_checkpoint__351__2016_11_09__12_04_23.mat

You can then resume training by using the layers of this network in the call to trainNetwork. For example:

trainNetwork(Xtrain,Ytrain,net.Layers,options)

You must manually specify the training options and the input data because the checkpoint network does not contain this information.

References

[1] Bishop, C. M. Pattern Recognition and Machine Learning. Springer, New York, NY, 2006.

[2] Murphy, K. P. Machine Learning: A Probabilistic Perspective. The MIT Press, Cambridge, Massachusetts, 2012.

Introduced in R2016a

