Nonlinear Data-Fitting

This example shows how to fit a nonlinear function to data using several Optimization Toolbox™ algorithms.

Problem Setup

Consider the following data:

Data = ...
  [0.0000    5.8955
   0.1000    3.5639
   0.2000    2.5173
   0.3000    1.9790
   0.4000    1.8990
   0.5000    1.3938
   0.6000    1.1359
   0.7000    1.0096
   0.8000    1.0343
   0.9000    0.8435
   1.0000    0.6856
   1.1000    0.6100
   1.2000    0.5392
   1.3000    0.3946
   1.4000    0.3903
   1.5000    0.5474
   1.6000    0.3459
   1.7000    0.1370
   1.8000    0.2211
   1.9000    0.1704
   2.0000    0.2636];

Let's plot the data points.

t = Data(:,1);
y = Data(:,2);
plot(t,y,'ro')
title('Data points')

We would like to fit the function

y = c(1)*exp(-lam(1)*t) + c(2)*exp(-lam(2)*t)

to the data.

Solution Approach Using lsqcurvefit

The lsqcurvefit function solves this type of problem easily.

To begin, define the parameters in terms of one variable x:

x(1) = c(1)

x(2) = lam(1)

x(3) = c(2)

x(4) = lam(2)

Then define the curve as a function of the parameters x and the data t:

F = @(x,xdata)x(1)*exp(-x(2)*xdata) + x(3)*exp(-x(4)*xdata);

Arbitrarily set the initial point x0 as follows: c(1) = 1, lam(1) = 1, c(2) = 1, lam(2) = 0:

x0 = [1 1 1 0];
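
As an optional check (not part of the original example), you can compute the sum-of-squares residual at this starting point before calling the solver; the variable name initialResnorm is introduced here for illustration.

initialResnorm = sum((F(x0,t) - y).^2)   % residual sum of squares at the starting point x0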

Run the solver and plot the resulting fit.

[x,resnorm,~,exitflag,output] = lsqcurvefit(F,x0,t,y)
Local minimum possible.

lsqcurvefit stopped because the final change in the sum of squares relative to 
its initial value is less than the value of the function tolerance.
x = 1×4

    3.0068   10.5869    2.8891    1.4003

resnorm = 0.1477
exitflag = 3
output = struct with fields:
      firstorderopt: 7.8830e-06
         iterations: 6
          funcCount: 35
       cgiterations: 0
          algorithm: 'trust-region-reflective'
           stepsize: 0.0096
            message: 'Local minimum possible....'
       bestfeasible: []
    constrviolation: []

hold on
plot(t,F(x,t))
hold off

Solution Approach Using fminunc

To solve the problem using fminunc, we set the objective function as the sum of squares of the residuals.

Fsumsquares = @(x)sum((F(x,t) - y).^2);
opts = optimoptions('fminunc','Algorithm','quasi-newton');
[xunc,ressquared,eflag,outputu] = ...
    fminunc(Fsumsquares,x0,opts)
Local minimum found.

Optimization completed because the size of the gradient is less than
the value of the optimality tolerance.
xunc = 1×4

    2.8890    1.4003    3.0069   10.5862

ressquared = 0.1477
eflag = 1
outputu = struct with fields:
       iterations: 30
        funcCount: 185
         stepsize: 0.0017
     lssteplength: 1
    firstorderopt: 2.9662e-05
        algorithm: 'quasi-newton'
          message: 'Local minimum found....'

Notice that fminunc found the same solution as lsqcurvefit, but took many more function evaluations to do so. The parameters for fminunc are in the opposite order as those for lsqcurvefit; the larger lam is lam(2), not lam(1). This is not surprising, because the ordering of the variables is arbitrary.
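
As an illustrative check (not part of the original example; xuncReordered is a name introduced here), you can permute the fminunc solution into the lsqcurvefit ordering and confirm that the two parameter sets describe essentially the same fitted curve.

xuncReordered = xunc([3 4 1 2]);         % swap the two (c,lam) pairs
max(abs(F(x,t) - F(xuncReordered,t)))    % pointwise difference between the fitted curves; expect a small value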

fprintf(['There were %d iterations using fminunc,' ...
    ' and %d using lsqcurvefit.\n'], ...
    outputu.iterations,output.iterations)
There were 30 iterations using fminunc, and 6 using lsqcurvefit.
fprintf(['There were %d function evaluations using fminunc,' ...
    ' and %d using lsqcurvefit.'], ...
    outputu.funcCount,output.funcCount)
There were 185 function evaluations using fminunc, and 35 using lsqcurvefit.

Splitting the Linear and Nonlinear Problems

Notice that the fitting problem is linear in the parameters c(1) and c(2). This means that for any values of lam(1) and lam(2), we can use the backslash operator to find the values of c(1) and c(2) that solve the least-squares problem.

Rework the problem as a two-dimensional problem, searching for the best values of lam(1) and lam(2). At each step, use the backslash operator as described above to calculate the values of c(1) and c(2), as sketched below.
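
In isolation, the idea looks like this (a sketch; lamTrial, A, and cTrial are illustrative names, not part of the example):

lamTrial = [1 2];                               % any fixed values of lam
A = [exp(-lamTrial(1)*t), exp(-lamTrial(2)*t)]; % one column per exponential term
cTrial = A\y                                    % linear least-squares solution for c(1), c(2)

The function fitvector packages this computation for an arbitrary number of exponential terms.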

type fitvector
function yEst = fitvector(lam,xdata,ydata)
%FITVECTOR  Used by DATDEMO to return value of fitting function.
%   yEst = FITVECTOR(lam,xdata) returns the value of the fitting function, y
%   (defined below), at the data points xdata with parameters set to lam.
%   yEst is returned as a N-by-1 column vector, where N is the number of
%   data points.
%
%   FITVECTOR assumes the fitting function, y, takes the form
%
%     y =  c(1)*exp(-lam(1)*t) + ... + c(n)*exp(-lam(n)*t)
%
%   with n linear parameters c, and n nonlinear parameters lam.
%
%   To solve for the linear parameters c, we build a matrix A
%   where the j-th column of A is exp(-lam(j)*xdata) (xdata is a vector).
%   Then we solve A*c = ydata for the linear least-squares solution c,
%   where ydata is the observed values of y.

A = zeros(length(xdata),length(lam));  % build A matrix
for j = 1:length(lam)
   A(:,j) = exp(-lam(j)*xdata);
end
c = A\ydata; % solve A*c = y for linear parameters c
yEst = A*c; % return the estimated response based on c

Solve the problem using lsqcurvefit, starting from a two-dimensional initial point lam(1), lam(2):

x02 = [1 0];
F2 = @(x,t) fitvector(x,t,y);

[x2,resnorm2,~,exitflag2,output2] = lsqcurvefit(F2,x02,t,y)
Local minimum possible.

lsqcurvefit stopped because the final change in the sum of squares relative to 
its initial value is less than the value of the function tolerance.
x2 = 1×2

   10.5861    1.4003

resnorm2 = 0.1477
exitflag2 = 3
output2 = struct with fields:
      firstorderopt: 4.4087e-06
         iterations: 10
          funcCount: 33
       cgiterations: 0
          algorithm: 'trust-region-reflective'
           stepsize: 0.0080
            message: 'Local minimum possible....'
       bestfeasible: []
    constrviolation: []

The efficiency of the two-dimensional solution is similar to that of the four-dimensional solution:

fprintf(['There were %d function evaluations using the 2-d ' ...
    'formulation, and %d using the 4-d formulation.'], ...
    output2.funcCount,output.funcCount)
There were 33 function evaluations using the 2-d formulation, and 35 using the 4-d formulation.
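
If you also want the linear coefficients that go with the two-parameter solution, they can be recovered with one more backslash solve (a sketch; A2 and c2 are names introduced here for illustration).

A2 = [exp(-x2(1)*t), exp(-x2(2)*t)];   % design matrix at the optimal lam values
c2 = A2\y                              % corresponding linear coefficients c(1), c(2)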

Split Problem is More Robust to Initial Guess

Choosing a bad starting point for the original four-parameter problem leads to a local solution that is not the global solution. Choosing a starting point with the same bad lam(1) and lam(2) values for the split two-parameter problem leads to the global solution. To show this, rerun the original problem with a start point that leads to a relatively bad local solution, and compare the resulting fit with the global solution.

x0bad = [5 1 1 0];
[xbad,resnormbad,~,exitflagbad,outputbad] = ...
    lsqcurvefit(F,x0bad,t,y)
Local minimum possible.

lsqcurvefit stopped because the final change in the sum of squares relative to 
its initial value is less than the value of the function tolerance.
xbad = 1×4

  -22.9036    2.4792   28.0273    2.4791

resnormbad = 2.2173
exitflagbad = 3
outputbad = struct with fields:
      firstorderopt: 0.0056
         iterations: 32
          funcCount: 165
       cgiterations: 0
          algorithm: 'trust-region-reflective'
           stepsize: 0.0021
            message: 'Local minimum possible....'
       bestfeasible: []
    constrviolation: []

hold on
plot(t,F(xbad,t),'g')
legend('Data','Global fit','Bad local fit','Location','NE')
hold off

fprintf(['The residual norm at the good ending point is %f,' ...
   ' and the residual norm at the bad ending point is %f.'], ...
   resnorm,resnormbad)
The residual norm at the good ending point is 0.147723, and the residual norm at the bad ending point is 2.217300.
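
As an illustrative comparison using the quantities computed above, the two-parameter formulation was started from the same lam values, x02 = [1 0], that appear in the bad four-parameter start x0bad, yet its residual norm resnorm2 (0.1477) matches the global value rather than the bad local one.

fprintf(['The residual norm from the 2-d formulation started at lam = [1 0]' ...
    ' is %f.\n'],resnorm2)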
