# LinearModel.fit

Class: LinearModel

Create linear regression model

`LinearModel.fit` will be removed in a future release. Use `fitlm` instead.

## Syntax

```
mdl = LinearModel.fit(tbl)
mdl = LinearModel.fit(X,y)
mdl = LinearModel.fit(...,modelspec)
mdl = LinearModel.fit(...,Name,Value)
mdl = LinearModel.fit(...,modelspec,Name,Value)
```

## Description

`mdl = LinearModel.fit(tbl)` creates a linear model of a table or dataset array `tbl`.

`mdl = LinearModel.fit(X,y)` creates a linear model of the responses `y` to a data matrix `X`.

`mdl = LinearModel.fit(...,modelspec)` creates a linear model of the specified type.

`mdl = LinearModel.fit(...,Name,Value)` or ```mdl = LinearModel.fit(...,modelspec,Name,Value)``` creates a linear model with additional options specified by one or more `Name,Value` pair arguments.
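Because `LinearModel.fit` will be removed, the calls above map one-to-one onto `fitlm`. A minimal sketch using the `hald` sample data that ships with the toolbox:

```matlab
% LinearModel.fit and fitlm accept the same arguments;
% prefer fitlm in new code.
load hald                                    % provides ingredients (13-by-4) and heat (13-by-1)
mdlOld = LinearModel.fit(ingredients,heat);  % deprecated form
mdlNew = fitlm(ingredients,heat);            % recommended replacement
```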

## Tips

• Use robust fitting (`RobustOpts` name-value pair) to reduce the effect of outliers automatically.

• Do not use robust fitting when you want to subsequently adjust a model using `step`.

• For other methods or properties of the `LinearModel` object, see `LinearModel`.

## Input Arguments


### `tbl` — Input data
table | dataset array

Input data, specified as a table or dataset array. When `modelspec` is a `formula`, it specifies the variables to be used as the predictors and response. Otherwise, if you do not specify the predictor and response variables, the last variable is the response variable and the others are the predictor variables by default.

Predictor variables can be numeric, or any grouping variable type, such as logical or categorical (see Grouping Variables). The response must be numeric or logical.

To set a different column as the response variable, use the `ResponseVar` name-value pair argument. To use a subset of the columns as predictors, use the `PredictorVars` name-value pair argument.

Data Types: `single` | `double` | `logical`

### `X` — Predictor variables
matrix

Predictor variables, specified as an n-by-p matrix, where n is the number of observations and p is the number of predictor variables. Each column of `X` represents one variable, and each row represents one observation.

By default, there is a constant term in the model, unless you explicitly remove it, so do not include a column of 1s in `X`.

Data Types: `single` | `double` | `logical`

### `y` — Response variable
vector

Response variable, specified as an n-by-1 vector, where n is the number of observations. Each entry in `y` is the response for the corresponding row of `X`.

Data Types: `single` | `double`

### `modelspec` — Model specification
string naming the model | t-by-(p + 1) terms matrix | string of the form `'Y ~ terms'`

Model specification, specified as one of the following. The choice is the starting model for `stepwiselm`.

• A string naming the model.

| String | Model Type |
| --- | --- |
| `'constant'` | Model contains only a constant (intercept) term. |
| `'linear'` | Model contains an intercept and linear terms for each predictor. |
| `'interactions'` | Model contains an intercept, linear terms, and all products of pairs of distinct predictors (no squared terms). |
| `'purequadratic'` | Model contains an intercept, linear terms, and squared terms. |
| `'quadratic'` | Model contains an intercept, linear terms, interactions, and squared terms. |
| `'polyijk'` | Model is a polynomial with all terms up to degree `i` in the first predictor, degree `j` in the second predictor, etc. Use numerals `0` through `9`. For example, `'poly2111'` has a constant plus all linear and product terms, and also contains terms with predictor 1 squared. |

• A t-by-(p + 1) terms matrix specifying the terms to include in the model, where t is the number of terms and p is the number of predictor variables; the additional column is for the response variable.

• A string representing a formula in the form

```'Y ~ terms'```,

where the `terms` are in Wilkinson Notation.

Example: `'quadratic'`

Example: `'y ~ X1 + X2^2 + X1:X2'`
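The three forms of `modelspec` can describe the same model. A sketch, assuming two predictors taken from the `hald` sample data, where the terms matrix lists the constant, `x1`, `x2`, and `x1:x2` terms:

```matlab
load hald
X = ingredients(:,1:2);
y = heat;

mdl1 = fitlm(X,y,'interactions');         % named model type
mdl2 = fitlm(X,y,'y ~ x1 + x2 + x1:x2');  % Wilkinson formula
T = [0 0 0; 1 0 0; 0 1 0; 1 1 0];         % terms matrix (last column: response)
mdl3 = fitlm(X,y,T);
% All three fit y ~ 1 + x1 + x2 + x1:x2.
```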

### Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside single quotes (`' '`). You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

### `'CategoricalVars'` — Categorical variables
cell array of strings | logical or numeric index vector

Categorical variables in the fit, specified as the comma-separated pair consisting of `'CategoricalVars'` and either a cell array of strings of the names of the categorical variables in the table or dataset array `tbl`, or a logical or numeric index vector indicating which columns are categorical.

• If data is in a table or dataset array `tbl`, then the default is to treat all categorical or logical variables, character arrays, or cell arrays of strings as categorical variables.

• If data is in matrix `X`, then the default value of this name-value pair argument is an empty matrix `[]`. That is, no variable is categorical unless you specify it.

For example, you can specify the second and third variables out of six as categorical using either of the following examples.

Example: `'CategoricalVars',[2,3]`

Example: `'CategoricalVars',logical([0 1 1 0 0 0])`

Data Types: `single` | `double` | `logical`

### `'Exclude'` — Observations to exclude
logical or numeric index vector

Observations to exclude from the fit, specified as the comma-separated pair consisting of `'Exclude'` and a logical or numeric index vector indicating which observations to exclude from the fit.

For example, you can exclude observations 2 and 3 out of 6 using either of the following examples.

Example: `'Exclude',[2,3]`

Example: `'Exclude',logical([0 1 1 0 0 0])`

Data Types: `single` | `double` | `logical`

### `'Intercept'` — Indicator for constant term
`true` (default) | `false`

Indicator for the constant term (intercept) in the fit, specified as the comma-separated pair consisting of `'Intercept'` and either `true` to include the constant term in the model or `false` to remove it.

Use `'Intercept'` only when specifying the model using a string, not a formula or matrix.

Example: `'Intercept',false`
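For example, the following sketch removes the constant term from a model specified with the string `'linear'`; with a formula, you would write `-1` in the formula instead:

```matlab
load hald
mdl = fitlm(ingredients,heat,'linear','Intercept',false);
% Formula equivalent (no 'Intercept' pair needed):
% mdl = fitlm(ingredients,heat,'y ~ x1 + x2 + x3 + x4 - 1');
```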

### `'PredictorVars'` — Predictor variables
cell array of strings | logical or numeric index vector

Predictor variables to use in the fit, specified as the comma-separated pair consisting of `'PredictorVars'` and either a cell array of strings of the variable names in the table or dataset array `tbl`, or a logical or numeric index vector indicating which columns are predictor variables.

The strings should be among the names in `tbl`, or the names you specify using the `'VarNames'` name-value pair argument.

The default is all variables in `X`, or all variables in `tbl` except for `ResponseVar`.

For example, you can specify the second and third variables as the predictor variables using either of the following examples.

Example: `'PredictorVars',[2,3]`

Example: `'PredictorVars',logical([0 1 1 0 0 0])`

Data Types: `single` | `double` | `logical` | `cell`

### `'ResponseVar'` — Response variable
last column in `tbl` (default) | string for variable name | logical or numeric index vector

Response variable to use in the fit, specified as the comma-separated pair consisting of `'ResponseVar'` and either a string of the variable name in the table or dataset array `tbl`, or a logical or numeric index vector indicating which column is the response variable. You typically need to use `'ResponseVar'` when fitting a table or dataset array `tbl`.

For example, you can specify the fourth variable, say `yield`, as the response out of six variables, in one of the following ways.

Example: `'ResponseVar','yield'`

Example: `'ResponseVar',[4]`

Example: `'ResponseVar',logical([0 0 0 1 0 0])`

Data Types: `single` | `double` | `logical` | `char`

### `'RobustOpts'` — Indicator of robust fitting type
`'off'` (default) | `'on'` | string | structure with string or function handle

Indicator of the robust fitting type to use, specified as the comma-separated pair consisting of `'RobustOpts'` and one of the following.

• `'off'` — No robust fitting. `fitlm` uses ordinary least squares.

• `'on'` — Robust fitting. When you use robust fitting, `'bisquare'` weight function is the default.

• String — Name of the robust fitting weight function from the following table. `fitlm` uses the corresponding default tuning constant in the table.

• Structure with the string `RobustWgtFun` containing the name of the robust fitting weight function from the following table and optional scalar `Tune` fields — `fitlm` uses the `RobustWgtFun` weight function and `Tune` tuning constant from the structure. You can choose the name of the robust fitting weight function from this table. If you do not supply a `Tune` field, the fitting function uses the corresponding default tuning constant.

| Weight Function | Equation | Default Tuning Constant |
| --- | --- | --- |
| `'andrews'` | `w = (abs(r)<pi) .* sin(r) ./ r` | 1.339 |
| `'bisquare'` (default) | `w = (abs(r)<1) .* (1 - r.^2).^2` | 4.685 |
| `'cauchy'` | `w = 1 ./ (1 + r.^2)` | 2.385 |
| `'fair'` | `w = 1 ./ (1 + abs(r))` | 1.400 |
| `'huber'` | `w = 1 ./ max(1, abs(r))` | 1.345 |
| `'logistic'` | `w = tanh(r) ./ r` | 1.205 |
| `'ols'` | Ordinary least squares (no weighting function) | None |
| `'talwar'` | `w = 1 * (abs(r)<1)` | 2.795 |
| `'welsch'` | `w = exp(-(r.^2))` | 2.985 |

The value r in the weight functions is

`r = resid/(tune*s*sqrt(1-h))`,

where `resid` is the vector of residuals from the previous iteration, `h` is the vector of leverage values from a least-squares fit, and `s` is an estimate of the standard deviation of the error term given by

`s = MAD/0.6745`.

MAD is the median absolute deviation of the residuals from their median. The constant 0.6745 makes the estimate unbiased for the normal distribution. If there are p columns in `X`, the smallest p absolute deviations are excluded when computing the median.

Default tuning constants give coefficient estimates that are approximately 95% as statistically efficient as the ordinary least-squares estimates, provided the response has a normal distribution with no outliers. Decreasing the tuning constant increases the downweight assigned to large residuals; increasing the tuning constant decreases the downweight assigned to large residuals.

• Structure with the function handle `RobustWgtFun` and optional scalar `Tune` fields — You can specify a custom weight function. `fitlm` uses the `RobustWgtFun` weight function and `Tune` tuning constant from the structure. Specify `RobustWgtFun` as a function handle that accepts a vector of residuals, and returns a vector of weights the same size. The fitting function scales the residuals, dividing by the tuning constant (default `1`) and by an estimate of the error standard deviation before it calls the weight function.

Example: `'RobustOpts','andrews'`
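A sketch of the two structure forms, using the `hald` sample data; the function handle here is illustrative (it reproduces the `'cauchy'` weights):

```matlab
load hald

% Named weight function with a nondefault tuning constant.
opts = struct('RobustWgtFun','huber','Tune',2);   % default Tune for 'huber' is 1.345
mdl1 = fitlm(ingredients,heat,'RobustOpts',opts);

% Custom weight function handle; residuals arrive already scaled.
cauchyWgt = @(r) 1 ./ (1 + r.^2);
opts2 = struct('RobustWgtFun',cauchyWgt,'Tune',2.385);
mdl2 = fitlm(ingredients,heat,'RobustOpts',opts2);
```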

### `'VarNames'` — Names of variables in fit
`{'x1','x2',...,'xn','y'}` (default) | cell array of strings

Names of variables in fit, specified as the comma-separated pair consisting of `'VarNames'` and a cell array of strings including the names for the columns of `X` first, and the name for the response variable `y` last.

`'VarNames'` is not applicable to variables in a table or dataset array, because those variables already have names.

For example, if in your data, horsepower, acceleration, and model year of the cars are the predictor variables, and miles per gallon (MPG) is the response variable, then you can name the variables as follows.

Example: `'VarNames',{'Horsepower','Acceleration','Model_Year','MPG'}`

Data Types: `cell`
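Because the names apply to the matrix columns, you can then refer to them in a formula. A sketch using the `carsmall` sample data:

```matlab
load carsmall
X = [Horsepower,Acceleration,Model_Year];
names = {'Horsepower','Acceleration','Model_Year','MPG'};
% Fit only two of the named predictors via a formula.
mdl = fitlm(X,MPG,'MPG ~ Horsepower + Model_Year','VarNames',names);
```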

### `'Weights'` — Observation weights
`ones(n,1)` (default) | n-by-1 vector of nonnegative scalar values

Observation weights, specified as the comma-separated pair consisting of `'Weights'` and an n-by-1 vector of nonnegative scalar values, where n is the number of observations.

Data Types: `single` | `double`
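For example, the following sketch downweights the last observation relative to the rest:

```matlab
load hald
w = ones(size(heat));    % default: all observations weighted equally
w(end) = 0.5;            % downweight observation 13
mdl = fitlm(ingredients,heat,'Weights',w);
```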

## Output Arguments


### `mdl` — Linear model
`LinearModel` object

Linear model representing a least-squares fit of the response to the data, returned as a `LinearModel` object.

If the value of the `'RobustOpts'` name-value pair is not `[]` or `'ols'`, the model is not a least-squares fit, but uses the robust fitting function.

For properties and methods of the linear model object, `mdl`, see the `LinearModel` class page.

## Definitions

### Terms Matrix

A terms matrix is a t-by-(p + 1) matrix specifying terms in a model, where t is the number of terms and p is the number of predictor variables; the additional column is for the response variable.

The value of `T(i,j)` is the exponent of variable `j` in term `i`. Suppose there are three predictor variables `A`, `B`, and `C`:

```
[0 0 0 0] % Constant term or intercept
[0 1 0 0] % B; equivalently, A^0 * B^1 * C^0
[1 0 1 0] % A*C
[2 0 0 0] % A^2
[0 1 2 0] % B*(C^2)
```
The `0` at the end of each term represents the response variable. In general,

• If you have the variables in a table or dataset array, then the column of 0s that represents the response variable must appear at the position of the response variable in the array. The following example illustrates this.

Load the sample data and define the dataset array.

```
load hospital
ds = dataset(hospital.Sex,hospital.BloodPressure(:,1),hospital.Age,...
    hospital.Smoker,'VarNames',{'Sex','BloodPressure','Age','Smoker'});
```

Represent the linear model ```'BloodPressure ~ 1 + Sex + Age + Smoker'``` in a terms matrix. The response variable is in the second column of the dataset array, so there must be a column of 0s for the response variable in the second column of the terms matrix.

```
T = [0 0 0 0;1 0 0 0;0 0 1 0;0 0 0 1]
```
```
T =

     0     0     0     0
     1     0     0     0
     0     0     1     0
     0     0     0     1
```

Redefine the dataset array.

```
ds = dataset(hospital.BloodPressure(:,1),hospital.Sex,hospital.Age,...
    hospital.Smoker,'VarNames',{'BloodPressure','Sex','Age','Smoker'});
```

Now the response variable is the first variable in the dataset array. Specify the same linear model, ```'BloodPressure ~ 1 + Sex + Age + Smoker'```, using a terms matrix.

`T = [0 0 0 0;0 1 0 0;0 0 1 0;0 0 0 1]`
```
T =

     0     0     0     0
     0     1     0     0
     0     0     1     0
     0     0     0     1
```
• If you have the predictor and response variables in a matrix and column vector, then you must include `0` for the response variable at the end of each term. The following example illustrates this.

Load the sample data and define the matrix of predictors.

```
load carsmall
X = [Acceleration,Weight];
```

Specify the model ```'MPG ~ Acceleration + Weight + Acceleration:Weight + Weight^2'``` using a terms matrix, and fit the model to the data. This model includes the main effect and two-way interaction terms for the variables `Acceleration` and `Weight`, and a second-order term for the variable `Weight`.

```
T = [0 0 0;1 0 0;0 1 0;1 1 0;0 2 0]
```
```
T =

     0     0     0
     1     0     0
     0     1     0
     1     1     0
     0     2     0
```

Fit a linear model.

`mdl = fitlm(X,MPG,T)`
```
mdl = 

Linear regression model:
    y ~ 1 + x1*x2 + x2^2

Estimated Coefficients:
                      Estimate         SE         tStat       pValue  
    (Intercept)          48.906        12.589      3.8847    0.00019665
    x1                  0.54418       0.57125     0.95261       0.34337
    x2                -0.012781     0.0060312     -2.1192      0.036857
    x1:x2           -0.00010892    0.00017925     -0.6076         0.545
    x2^2             9.7518e-07    7.5389e-07      1.2935       0.19917

Number of observations: 94, Error degrees of freedom: 89
Root Mean Squared Error: 4.1
R-squared: 0.751,  Adjusted R-Squared 0.739
F-statistic vs. constant model: 67, p-value = 4.99e-26
```

Only the intercept and `x2` term, which correspond to the `Weight` variable, are significant at the 5% significance level.

Now, perform a stepwise regression with a constant model as the starting model and a linear model with interactions as the upper model.

```
T = [0 0 0;1 0 0;0 1 0;1 1 0];
mdl = stepwiselm(X,MPG,[0 0 0],'upper',T)
```
```
1. Adding x2, FStat = 259.3087, pValue = 1.643351e-28

mdl = 

Linear regression model:
    y ~ 1 + x2

Estimated Coefficients:
                     Estimate        SE        tStat       pValue  
    (Intercept)        49.238       1.6411     30.002    2.7015e-49
    x2             -0.0086119    0.0005348    -16.103    1.6434e-28

Number of observations: 94, Error degrees of freedom: 92
Root Mean Squared Error: 4.13
R-squared: 0.738,  Adjusted R-Squared 0.735
F-statistic vs. constant model: 259, p-value = 1.64e-28
```

The results of the stepwise regression are consistent with the results of `fitlm` in the previous step.

### Formula

A formula for model specification is a string of the form `'Y ~ terms'`

where

• `Y` is the response name.

• `terms` contains

• Variable names

• `+` means include the next variable

• `-` means do not include the next variable

• `:` defines an interaction, a product of terms

• `*` defines an interaction and all lower-order terms

• `^` raises the predictor to a power, exactly as in `*` repeated, so `^` includes lower order terms as well

• `()` groups terms

Note: Formulas include a constant (intercept) term by default. To exclude a constant term from the model, include `-1` in the formula.

For example,

`'Y ~ A + B + C'` means a three-variable linear model with intercept.
```'Y ~ A + B + C - 1'``` is a three-variable linear model without intercept.
`'Y ~ A + B + C + B^2'` is a three-variable model with intercept and a `B^2` term.
```'Y ~ A + B^2 + C'``` is the same as the previous example because `B^2` includes a `B` term.
```'Y ~ A + B + C + A:B'``` includes an `A*B` term.
```'Y ~ A*B + C'``` is the same as the previous example because ```A*B = A + B + A:B```.
`'Y ~ A*B*C - A:B:C'` has all interactions among `A`, `B`, and `C`, except the three-way interaction.
```'Y ~ A*(B + C + D)'``` has all linear terms, plus products of `A` with each of the other variables.
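To see the operators in action, the following sketch fits two formulas that expand to the same model, using illustrative variable names built from the `hald` sample data:

```matlab
load hald
tbl = array2table([ingredients(:,1:2) heat],'VariableNames',{'A','B','Y'});

mdl1 = fitlm(tbl,'Y ~ A*B');          % A*B expands to A + B + A:B
mdl2 = fitlm(tbl,'Y ~ A + B + A:B');  % identical model
```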

### Wilkinson Notation

Wilkinson notation describes the factors present in models; it does not describe the multipliers (coefficients) of those factors.

| Wilkinson Notation | Factors in Standard Notation |
| --- | --- |
| `1` | Constant (intercept) term |
| `A^k`, where `k` is a positive integer | `A`, `A^2`, ..., `A^k` |
| `A + B` | `A`, `B` |
| `A*B` | `A`, `B`, `A*B` |
| `A:B` | `A*B` only |
| `-B` | Do not include `B` |
| `A*B + C` | `A`, `B`, `C`, `A*B` |
| `A + B + C + A:B` | `A`, `B`, `C`, `A*B` |
| `A*B*C - A:B:C` | `A`, `B`, `C`, `A*B`, `A*C`, `B*C` |
| `A*(B + C)` | `A`, `B`, `C`, `A*B`, `A*C` |

Statistics and Machine Learning Toolbox™ notation always includes a constant term unless you explicitly remove the term using `-1`.

## Examples


### Linear Regression Model of Matrix Data

Fit a linear model of the Hald data.

```
load hald
X = ingredients; % Predictor variables
y = heat;        % Response
```

Fit a default linear model to the data.

```mdl = fitlm(X,y) ```
```
mdl = 

Linear regression model:
    y ~ 1 + x1 + x2 + x3 + x4

Estimated Coefficients:
                   Estimate      SE        tStat       pValue 
                   ________    _______    ________    ________

    (Intercept)      62.405     70.071      0.8906     0.39913
    x1               1.5511    0.74477      2.0827    0.070822
    x2              0.51017    0.72379     0.70486      0.5009
    x3              0.10191    0.75471     0.13503     0.89592
    x4             -0.14406    0.70905    -0.20317     0.84407

Number of observations: 13, Error degrees of freedom: 8
Root Mean Squared Error: 2.45
R-squared: 0.982,  Adjusted R-Squared 0.974
F-statistic vs. constant model: 111, p-value = 4.76e-07
```

### Linear Regression with Categorical Predictor and Nonlinear Model

Fit a model of a table that contains a categorical predictor. Use a nonlinear response formula.

Load the `carsmall` data.

`load carsmall`

Construct a table containing continuous predictor variable `Weight`, nominal predictor variable `Year`, and response variable `MPG`.

```
tbl = table(MPG,Weight);
tbl.Year = nominal(Model_Year);
```

Create a fitted model of `MPG` as a function of `Year`, `Weight`, and `Weight^2`. (You do not have to include `Weight` explicitly in your formula because it is a lower-order term of `Weight^2`.)

`mdl = fitlm(tbl,'MPG ~ Year + Weight^2')`
```
mdl = 

Linear regression model:
    MPG ~ 1 + Weight + Year + Weight^2

Estimated Coefficients:
                    Estimate          SE         tStat       pValue  
    (Intercept)         54.206        4.7117     11.505    2.6648e-19
    Weight           -0.016404     0.0031249    -5.2493    1.0283e-06
    Year_76             2.0887       0.71491     2.9215     0.0044137
    Year_82             8.1864       0.81531     10.041    2.6364e-16
    Weight^2        1.5573e-06    4.9454e-07      3.149     0.0022303

Number of observations: 94, Error degrees of freedom: 89
Root Mean Squared Error: 2.78
R-squared: 0.885,  Adjusted R-Squared 0.88
F-statistic vs. constant model: 172, p-value = 5.52e-41
```

`fitlm` creates two dummy (indicator) variables for the nominal variable `Year`. The dummy variable `Year_76` takes the value 1 if the model year is 1976 and 0 otherwise. The dummy variable `Year_82` takes the value 1 if the model year is 1982 and 0 otherwise. The year 1970 is the reference year. The corresponding model is

$\widehat{MPG} = 54.206 - 0.0164(\text{Weight}) + 2.0887(\text{Year\_76}) + 8.1864(\text{Year\_82}) + (1.557\times 10^{-6})(\text{Weight})^2$

### Simultaneously Specify the Variables and Use Formula

Simultaneously identify the response and predictor variables, and specify the model using a formula, in linear regression.

```load hospital ```

Fit a linear model with interaction terms to the data.

```
mdl = fitlm(hospital,'Weight~1+Age*Sex*Smoker-Age:Sex:Smoker',...
    'ResponseVar','Weight','PredictorVars',{'Sex','Age','Smoker'},...
    'CategoricalVar',{'Sex','Smoker'})
```
```
mdl = 

Linear regression model:
    Weight ~ 1 + Sex*Age + Sex*Smoker + Age*Smoker

Estimated Coefficients:
                         Estimate      SE        tStat        pValue  
                         ________    _______    ________    __________

    (Intercept)             118.7     7.0718      16.785     6.821e-30
    Sex_Male               68.336     9.7153      7.0339    3.3386e-10
    Age                   0.31068    0.18531      1.6765      0.096991
    Smoker_1               3.0425     10.446     0.29127       0.77149
    Sex_Male:Age         -0.49094    0.24764     -1.9825      0.050377
    Sex_Male:Smoker_1      0.9509     3.8031     0.25003       0.80312
    Age:Smoker_1         -0.07288    0.26275    -0.27737       0.78211

Number of observations: 100, Error degrees of freedom: 93
Root Mean Squared Error: 8.75
R-squared: 0.898,  Adjusted R-Squared 0.892
F-statistic vs. constant model: 137, p-value = 6.91e-44
```

At the 5% significance level, patient weight does not seem to differ significantly with age, smoking status, or the interactions of these factors with gender.

### Robust Linear Regression Model

Fit a linear regression model of the Hald data using robust fitting.

```
load hald
X = ingredients; % predictor variables
y = heat;        % response
```

Fit a robust linear model to the data.

`mdl = fitlm(X,y,'linear','RobustOpts','on')`
```
mdl = 

Linear regression model (robust fit):
    y ~ 1 + x1 + x2 + x3 + x4

Estimated Coefficients:
                   Estimate      SE        tStat       pValue  
    (Intercept)       60.09     75.818    0.79256      0.4509
    x1               1.5753    0.80585     1.9548    0.086346
    x2               0.5322    0.78315    0.67957     0.51596
    x3              0.13346     0.8166    0.16343     0.87424
    x4             -0.12052     0.7672   -0.15709     0.87906

Number of observations: 13, Error degrees of freedom: 8
Root Mean Squared Error: 2.65
R-squared: 0.979,  Adjusted R-Squared 0.969
F-statistic vs. constant model: 94.6, p-value = 9.03e-07
```

## Algorithms

The main fitting algorithm is QR decomposition. For robust fitting, the algorithm is `robustfit`.

## Alternatives

You can also construct a linear model using `fitlm`.

You can construct a model in a range of possible models using `stepwiselm`. However, you cannot use robust regression and stepwise regression together.