lm Function in R: A Comprehensive Guide

Introduction to lm Function in R

Many generic functions are available for working with a fitted regression model, for example, testing the coefficients, extracting the residuals, and computing predicted values. Therefore, a good grasp of the lm() function is necessary. It is assumed that you already know how to perform a regression analysis using the lm() function.

mod <- lm(mpg ~ hp, data = mtcars)

To learn about performing linear regression analysis using the lm() function, you can visit the article "Performing Linear Regression in R".

Objects of “lm” Class

The object returned by the lm() function has class "lm". Objects of the "lm" class have mode "list".

class(mod)
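Since an "lm" object is stored as a list, this can be confirmed directly (a quick check, not part of the original code):

```r
# fit the model used throughout this guide
mod <- lm(mpg ~ hp, data = mtcars)

mode(mod)     # "list"
is.list(mod)  # TRUE
```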

The names of the components of an "lm" object can be queried via

names(mod)

All the components of an "lm" object can be accessed directly. For example,

mod$rank

mod$coef   # or mod$coefficients

Generic Functions of “lm” model

The following is a list of some generic functions for a fitted "lm" model.

Generic Function             Short Description
print()                      prints or displays the results in the R console
summary()                    displays regression coefficients, their standard errors, t-ratios, p-values, and significance
coef()                       extracts the regression coefficients
residuals() or resid()       extracts the residuals of the fitted model
fitted() or fitted.values()  extracts the fitted values
anova()                      performs comparisons of nested models
predict()                    computes predicted values for new data
plot()                       draws diagnostic plots of the regression model
confint()                    computes confidence intervals for the regression coefficients
deviance()                   computes the residual sum of squares
vcov()                       computes the estimated variance-covariance matrix
logLik()                     computes the log-likelihood
AIC(), BIC()                 compute information criteria
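As a quick illustration, a few of these generics applied to the mod object fitted earlier (a sketch; the exact numbers depend on the mtcars data):

```r
mod <- lm(mpg ~ hp, data = mtcars)

coef(mod)          # intercept and slope
head(resid(mod))   # first few residuals
head(fitted(mod))  # first few fitted values
deviance(mod)      # residual sum of squares
vcov(mod)          # variance-covariance matrix of the coefficients
AIC(mod)           # Akaike information criterion
```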

It is better to save the object returned by the summary() function.

The summary() function returns an object of class "summary.lm", and its components can be queried via

sum_mod <- summary(mod)

names(sum_mod)
names( summary(mod) )

The components of the saved summary() object can be accessed as

sum_mod$residuals
sum_mod$r.squared
sum_mod$adj.r.squared
sum_mod$df
sum_mod$sigma
sum_mod$fstatistic
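One point worth noting: the summary object stores the F statistic and its degrees of freedom in the fstatistic component, but not the overall p-value. The p-value can be recovered with pf(); a sketch:

```r
mod <- lm(mpg ~ hp, data = mtcars)
sum_mod <- summary(mod)

f <- sum_mod$fstatistic                   # c(value, numdf, dendf)
pf(f[1], f[2], f[3], lower.tail = FALSE)  # overall F-test p-value
```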

Computation and Visualization of Prediction and Confidence Interval

The confidence interval for estimated coefficients can be computed as

confint(mod, level = 0.95)

Note that the level argument is optional when the confidence level is 95% (significance level 5%), since 0.95 is the default.

The confidence interval for the mean response and the prediction interval for an individual response, at hp (regressor) values of 200 and 160, can be computed as

predict(mod, newdata=data.frame(hp = c(200, 160)), interval = "confidence" )
predict(mod, newdata=data.frame(hp = c(200, 160)), interval = "prediction" )

The prediction intervals can be used for computing and visualizing confidence bands. For example,

x <- seq(50, 350, length.out = 32)
pred <- predict(mod, newdata = data.frame(hp = x), interval = "prediction")

plot(mtcars$hp, mtcars$mpg, xlab = "hp", ylab = "mpg")
lines(pred[, 1] ~ x, col = 1) # fitted values
lines(pred[, 2] ~ x, col = 2) # lower limit
lines(pred[, 3] ~ x, col = 2) # upper limit

Note that the column in newdata must be named after the regressor (hp) so that predict() can match it to the model formula.
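As an alternative to drawing the limits as separate lines, the band can be shaded with polygon(). This is a base-R sketch, not part of the original example:

```r
mod <- lm(mpg ~ hp, data = mtcars)
x <- seq(50, 350, length.out = 100)
pred <- predict(mod, newdata = data.frame(hp = x), interval = "prediction")

plot(mtcars$hp, mtcars$mpg, xlab = "hp", ylab = "mpg")
# shade the region between the lower and upper prediction limits
polygon(c(x, rev(x)), c(pred[, "lwr"], rev(pred[, "upr"])),
        col = adjustcolor("grey", alpha.f = 0.5), border = NA)
lines(x, pred[, "fit"])
```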

Regression Diagnostics

For diagnostic plots, the plot() function can be used; it produces four graphs:

  • residuals vs fitted values
  • QQ plot of standardized residuals
  • scale-location plot of fitted values against the square root of standardized residuals
  • standardized residuals vs leverage

To draw, say, only the QQ plot, use

plot(mod, which = 2)

The which argument selects which of the four graphs is produced.
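All four diagnostic plots can also be shown on a single page by setting a 2-by-2 plotting layout first, for example:

```r
mod <- lm(mpg ~ hp, data = mtcars)

op <- par(mfrow = c(2, 2))  # 2 x 2 grid of plots
plot(mod)                   # draws all four diagnostic plots
par(op)                     # restore the previous layout
```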

FAQs about the lm() Function in R

  1. What is the use of the lm() function in R?
  2. What is the class of the object returned by lm(), and what are its component names?
  3. Describe the generic functions for an object of class "lm".
  4. What are the important components of a "summary.lm" object?
  5. How can the components of a "summary.lm" object be accessed?
  6. How can confidence and prediction intervals for linear models be visualized in R?
  7. How are regression diagnostics performed in the R language?
  8. What is the use of the confint(), fitted(), coef(), anova(), vcov(), deviance(), and residuals() generic functions?


Simple Linear Regression Model

Introduction to Simple Linear Regression Model

The linear regression model is typically estimated by the ordinary least squares (OLS) technique. The model in general form is

$$Y_i = x_i'\beta + \varepsilon_i, \quad\quad i=1,2,\cdots,n$$

In matrix notation

$$y=X\beta + \varepsilon,$$

where $y$ is an $n\times 1$ vector containing the values of the dependent variable, $X=(x_1,x_2,\cdots,x_n)'$ is the regressor matrix containing the $n$ observations, also called the model matrix (whose columns represent the regressors), $\beta$ is a $p\times 1$ vector of regression coefficients, and $\varepsilon$ is an $n\times 1$ vector of error terms. To learn more about simple linear models, visit the link: Simple Linear Regression Models.

Estimating Regression Coefficients

The regression coefficients $\beta$ can be estimated as

$$\hat{\beta}=(X'X)^{-1}X'y$$

The fitted values can be computed as

$$\hat{y}=X\hat{\beta}$$

The residuals are

$$\hat{\varepsilon} = y - \hat{y}$$

The residual sum of squares is

$$\hat{\varepsilon}'\hat{\varepsilon}$$
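As a sketch, the matrix formulas above can be verified numerically against the lm() function, using the mpg ~ hp model from the mtcars data:

```r
y <- mtcars$mpg
X <- cbind(1, mtcars$hp)  # model matrix with an intercept column

beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y  # (X'X)^{-1} X'y
y_hat <- X %*% beta_hat                       # fitted values
e_hat <- y - y_hat                            # residuals
rss <- t(e_hat) %*% e_hat                     # residual sum of squares

mod <- lm(mpg ~ hp, data = mtcars)
all.equal(as.vector(beta_hat), unname(coef(mod)))  # TRUE
all.equal(as.numeric(rss), deviance(mod))          # TRUE
```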

The R language has excellent facilities for fitting linear models. The basic function for fitting linear models by the least squares method is the lm() function. The model is specified using formula notation.

We will consider the mtcars dataset. Let $Y=mpg$ and $X=hp$; the simple linear regression model is

$$Y_i = \beta_1 + \beta_2 hp_i + \varepsilon_i$$

where $\beta_1$ is the intercept and $\beta_2$ is the slope coefficient.

Fitting Simple Linear Regression Model in R

To fit this simple linear regression model in R, one can follow:

attach(mtcars)

mod <- lm(mpg ~ hp)
mod

The lm() function uses the formula mpg ~ hp, with the response variable on the left of the tilde (~) and the predictor on the right. It is better to supply the data argument to the lm() function. That is,

mod <- lm(mpg ~ hp, data = mtcars)

The lm() function returns an object of class "lm", saved here in the variable mod (the name can be different). Printing the object produces a brief report.

Hypothesis Testing of Regression Coefficients

For hypothesis testing of the regression coefficients, the summary() function should be used. It provides more information about the fitted model, such as standard errors, t-values, and p-values for each coefficient. For example,

summary(mod)

One can fit a regression model without an intercept term if required.

lm(mpg ~ hp - 1, data = mtcars)

Graphical Representation of the Model

For the graphical representation of the model, one can use the plot() function to draw scatter points and the abline() function to draw the regression line.

plot(hp, mpg)
abline(mod)

Note the order of the variables in the plot() function: the first argument is the predictor variable and the second is the response variable.

The abline() function draws a line on the graph using the intercept and slope taken from mod, or supplied manually.
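For instance, supplying the intercept and slope manually (the a and b arguments of abline()) draws the same line as passing mod directly:

```r
mod <- lm(mpg ~ hp, data = mtcars)

plot(mtcars$hp, mtcars$mpg, xlab = "hp", ylab = "mpg")
abline(a = coef(mod)[1], b = coef(mod)[2])  # same line as abline(mod)
```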

One can change the style of the regression line using the lty argument. Similarly, the color of the regression line can be changed from black using the col argument. That is,

plot(hp, mpg)
abline(mod, lty = 2, col = "blue")

Note that one can identify individual observations on a graph using the identify() function. For example,

identify(hp, mpg)

Note: to identify a point, place the mouse pointer near the point and press the left mouse button; to exit the identify procedure, press the right mouse button or the ESC key.

FAQs about Simple Linear Regression in R

  1. What is a simple linear regression model? How can it be performed in the R language?
  2. How is the lm() function used to fit a simple linear regression model?
  3. How can estimation and testing of the regression coefficients be performed in R?
  4. What is the use of the summary() function in R? Explain.
  5. How can regression models be visualized in R?

Read more on Statistical models in R


Mean Comparison Tests in R

Comparing means between groups is fundamental in statistical analysis and data science. Whether you are testing drug efficacy, evaluating marketing campaigns, or analyzing experimental data, the R language provides powerful tools for mean comparison tests.

Mean Comparison Tests in R Language

Here, we learn the basics of performing mean comparison tests in the R language: hypothesis testing for a one-sample test, a two-sample independent test, and a dependent-samples test. We will also learn how to find p-values for a given distribution, such as the t-distribution, and critical values, and how to perform one-tailed and two-tailed hypothesis tests.

How to Perform One-Sample t-Test in R

A recent article in The Wall Street Journal reported that the 30-year mortgage rate is now less than 6%. A sample of eight small banks in the Midwest revealed the following 30-year rates (in percent):

4.8, 5.3, 6.5, 4.8, 6.1, 5.8, 6.2, 5.6

At the 0.01 significance level (probability of type-I error), can we conclude that the 30-year mortgage rate for small banks is less than 6%?

Manual Calculations for One-Sample t-Test and Confidence Interval

The one-sample mean comparison test can be performed manually as follows.

# Manual way
X <- c(4.8, 5.3, 6.5, 4.8, 6.1, 5.8, 6.2, 5.6)
xbar <- mean(X)
s <- sd(X)
mu <- 6
n <- length(X)
df <- n - 1
tcal <- (xbar - mu) / (s / sqrt(n))
tcal
# 99% confidence interval
c(xbar - qt(0.995, df = df) * s / sqrt(n), xbar + qt(0.995, df = df) * s / sqrt(n))

Critical Values from t-Table

# Critical Value for Left Tail
qt(0.01, df = df, lower.tail = T)
# Critical Value for Right Tail
qt(0.99, df = df, lower.tail = T)
# Critical Value for Both Tails
qt(0.995, df = df)
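For the left-tailed test in this example, the decision rule compares tcal with the left-tail critical value. A sketch using the mortgage-rate data above:

```r
X <- c(4.8, 5.3, 6.5, 4.8, 6.1, 5.8, 6.2, 5.6)
n <- length(X)
df <- n - 1
tcal <- (mean(X) - 6) / (sd(X) / sqrt(n))
tcrit <- qt(0.01, df = df)  # left-tail critical value at alpha = 0.01

tcal < tcrit  # reject H0 only if TRUE
```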

Finding p-Values

# p-value (alternative is less)
pt(tcal, df = df)
# p-value (alternative is greater)
1 - pt(tcal, df = df)
# p-value (alternative is two-tailed or not equal to; tcal is negative here,
# so in general use 2 * pt(-abs(tcal), df = df))
2 * pt(-abs(tcal), df = df)

Performing One-Sample Confidence Interval and t-test Using Built-in Function

One can perform a one-sample mean comparison test using the built-in t.test() function in the R language.

# Left Tail test
t.test(x = X, mu = 6, alternative = c("less"), conf.level = 0.99)
# Right Tail test
t.test(x = X, mu = 6, alternative = c("greater"), conf.level = 0.99)
# Two Tail test
t.test(x = X, mu = 6, alternative = c("two.sided"), conf.level = 0.99)

How to Perform a Two-Sample t-Test in R

Consider two samples stored in the vectors $X$ and $Y$, as shown in the R code below. We are interested in a mean comparison test between two groups of people regarding (say) their wages in a certain week.

X = c(70, 82, 78, 70, 74, 82, 90)
Y = c(60, 80, 91, 89, 77, 69, 88, 82)

Manual Calculations for Two-Sample t-Test and Confidence Interval

The manual calculations for the two-sample t-test as a mean comparison test are as follows.

nx = length(X)
ny = length(Y)
xbar = mean(X)
sx = sd(X)
ybar = mean(Y)
sy = sd(Y)
df = nx + ny - 2
# Pooled Standard Deviation/ Variance 
SP = sqrt( ( (nx-1) * sx^2 + (ny-1) * sy^2) / df )
tcal = (( xbar - ybar ) - 0) / (SP *sqrt(1/nx + 1/ny))
tcal
# Confidence Interval
LL <- (xbar - ybar) - qt(0.975, df)* sqrt((SP^2 *(1/nx + 1/ny) ))
UL <- (xbar - ybar) + qt(0.975, df)* sqrt((SP^2 *(1/nx + 1/ny) ))
c(LL, UL)

Finding p-values

# The p-value for the left-tailed alternative
pt(tcal, df)
# The p-value for the right-tailed alternative
1 - pt(tcal, df)
# The p-value for the two-tailed alternative (tcal is negative here;
# in general use 2 * pt(-abs(tcal), df))
2 * pt(-abs(tcal), df)

Finding Critical Values from the t-Table

# Left Tail
qt(0.025, df = df, lower.tail = T)
# Right Tail
qt(0.975, df = df, lower.tail = T)
# Both tails
qt(0.05, df = df)

Performing Two-Sample Confidence Interval and T-test using Built-in Function

One can perform two sample mean comparison tests using built-in functions in R Language.

# Left Tail test
t.test(X, Y, alternative = c("less"), var.equal = T)
# Right Tail test
t.test(X, Y, alternative = c("greater"), var.equal = T)
# Two Tail test
t.test(X, Y, alternative = c("two.sided"), var.equal = T)

Note that if the $X$ and $Y$ values are in a data frame, the two-sample t-test can be performed using the formula notation (~). Let's first make a data frame from the vectors $X$ and $Y$.

data <- data.frame(values = c(X, Y), group = c(rep("A", nx), rep("B", ny)))
t.test(values ~ group, data = data, alternative = "less", var.equal = T)
t.test(values ~ group, data = data, alternative = "greater", var.equal = T)
t.test(values ~ group, data = data, alternative = "two.side", var.equal = T)

To understand probability distribution functions in R, visit the link: Probability Distributions in R
