Train a Feedforward NN with Error Weights for the performance function

19 views (last 30 days)
Hello, my name is Jansen Acosta, and I am currently developing a feedforward NN to predict 6 different responses.
At the moment I am using the mean squared error performance function, mse. However, for mse(net,targets,outputs,errorWeights,...parameters...), the default error weight is {1}, which weights all targets equally. Is there any way to set different errorWeights values for my feedforward NN training process? My code looks like this:
nnsize = [10 6] ; nn_Hls = length(nnsize) ; % Hidden Layers
nnTrainFun = 'traingdx' ; % TRAINING
NN = feedforwardnet(nnsize, nnTrainFun) ; % 8 -10,6- 6
nnActFunc = 'tansig' ; % ACTIVATION
for h=1:nn_Hls ; NN.layers{h}.transferFcn = nnActFunc ; end % | HIDDEN Layers
NN.layers{end}.transferFcn = 'purelin'; % Linear transfer function | OUTPUT Layer
NN.performFcn = 'mse' ; % Mean Squared Error performance metric <-------- HELP NEEDED
NN.performParam.normalization = 'standard' ; % OUTPUTS ERROR
NN.performParam.regularization = 0.2 ; %
Thanks in advance
  1 comment
Juan Miguel Serrano Rodríguez
Edited: Juan Miguel Serrano Rodríguez, 2024-3-14
I tried the other answer given here and it was a waste of time, since it was a made-up response from ChatGPT: errorWeights is not a parameter of the train function.
Checking the train docs, I think it should be possible by setting the error weights EW:
trainedNet = train(net,X,T,Xi,Ai,EW)
The problem I am facing is that I am not able to set Xi and Ai to their defaults so that I can just specify EW. This is my (unsuccessful) attempt:
inputs_net = rand(10,5); % 5 inputs
outputs_net = rand(10,2); % 2 outputs
errorWeights = {1; 0.5}; % Half the weight for the second output as in https://www.mathworks.com/help/deeplearning/ref/network.perform.html
net = feedforwardnet();
net = configure(net, inputs_net', outputs_net')
net =
    Neural Network
    name: 'Feed-Forward Neural Network'
    dimensions: numInputs: 1, numLayers: 2, numOutputs: 1, numWeightElements: 82
    functions: divideFcn: 'dividerand', performFcn: 'mse', trainFcn: 'trainlm'
    performParam: .regularization, .normalization
    (remaining property display omitted)
% train(net, inputs_net', outputs_net', zeros, zeros, errorWeights)
train(net, inputs_net', outputs_net', zeros(net.numInputs, net.numInputDelays), zeros(net.numLayers, net.numLayerDelays), errorWeights)
Error using network/train
Inputs and input states have different numbers of samples.
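A possible fix, sketched under an assumption from the network/train documentation (not verified here): a static feedforward network has no input or layer delay states, so Xi and Ai can be passed as empty cell arrays, letting train fall back to its defaults and leaving only EW to specify.

```matlab
% Sketch: pass empty cell arrays {} as placeholders for the delay states
% Xi and Ai, so EW can be supplied positionally. Assumes {} is accepted
% as a "use defaults" placeholder, per the train docs.
inputs_net   = rand(10,5);   % transposed below: 5 features x 10 samples
outputs_net  = rand(10,2);   % transposed below: 2 outputs  x 10 samples
errorWeights = {1; 0.5};     % half the weight for the second output

net = feedforwardnet();
net = configure(net, inputs_net', outputs_net');
net = train(net, inputs_net', outputs_net', {}, {}, errorWeights);
```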


Answers (1)

Mrutyunjaya Hiremath
  • Yes, you can set different error weights for your Feedforward Neural Network training process. The error weight values can be specified for each target separately to weigh their importance differently. To achieve this, you need to adjust the "errorWeights" parameter when training the network.
  • In MATLAB's Neural Network Toolbox, you can use the "train" function to train the network and set the error weights. The train function allows you to pass additional training parameters, including the error weights.
  • Here's how you can modify your code to set different error weights for the targets:
nnsize = [10 6]; % Hidden Layers
nnTrainFun = 'traingdx'; % TRAINING
NN = feedforwardnet(nnsize, nnTrainFun); % 8 -10,6- 6
nnActFunc = 'tansig'; % ACTIVATION
for h = 1:numel(nnsize) % nnsize defines the hidden layers
    NN.layers{h}.transferFcn = nnActFunc;
end % | HIDDEN Layers
NN.layers{end}.transferFcn = 'purelin'; % Linear transfer function | OUTPUT Layer
NN.performFcn = 'mse'; % Mean Squared Error performance metric
% Set different error weights for each target
errorWeights = [w1, w2, w3, w4, w5, w6]; % Replace w1, w2, ..., w6 with your desired weights
NN.performParam.normalization = 'standard'; % OUTPUTS ERROR
NN.performParam.regularization = 0.2;
% Set the error weights for the network
NN.divideFcn = 'divideind'; % Divide data into train/validation/test sets by explicit indices
NN.divideParam.trainInd = your_train_indices; % Replace with indices of your training data
NN.divideParam.valInd = your_validation_indices; % Replace with indices of your validation data
NN.divideParam.testInd = your_test_indices; % Replace with indices of your test data
NN.performParam.normalization = 'standard';
% Train the network with specified error weights
[NN, tr] = train(NN, inputs, targets, 'useGPU', 'yes', 'errorWeights', errorWeights);
% You can then use the trained network 'NN' for prediction or evaluation.
  • In the code above, you need to replace w1, w2, ..., w6 with the desired error weight values for each target. Additionally, replace "your_train_indices", "your_validation_indices", and "your_test_indices" with the appropriate indices for your training, validation, and test datasets, respectively.
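The per-target weighting itself can be checked offline with perform, which accepts the same cell-array EW format as train; a minimal sketch, assuming the EW conventions from the network.perform documentation linked in the comment above:

```matlab
% Sketch: evaluate weighted MSE directly with perform.
% EW given as an Nt-by-1 cell weights each of the Nt output elements.
net = feedforwardnet(10);
x = rand(5, 20);              % 5 features, 20 samples
t = rand(2, 20);              % 2 targets,  20 samples
net = configure(net, x, t);
y = net(x);                   % untrained outputs, just to exercise perform
ew = {1; 0.5};                % second target counts half
weightedPerf = perform(net, t, y, ew);
```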

Release

R2022b
