Poisson Regression in R

The Poisson regression model should be used when the dependent (response) variable is in the form of counts, that is, when the values of the response variable follow a Poisson distribution. In R, the glm() function can be used to perform Poisson regression analysis.

Note that the lm() function fits simple and multiple linear regression models, which assume that the dependent variable is continuous.
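
For a continuous response, glm() with the default gaussian family reproduces the lm() fit. A minimal sketch using the built-in cars data (the choice of dataset here is only for illustration):

lm(dist ~ speed, data = cars)                       # ordinary linear regression
glm(dist ~ speed, data = cars, family = gaussian)   # same fit via glm() with identity link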

Poisson Regression Models in R Language

Statistical models such as linear or Poisson regression models can be fitted easily in the R language.

The Poisson regression is used to analyze count data.

For the Poisson model, let us consider another built-in data set warpbreaks. This data set describes the effect of wool type (A or B) and tension (Low, Medium, and High) on the number of warp breaks per loom, where a loom corresponds to a fixed length of yarn.

head(warpbreaks)

The $breaks$ variable is the response variable since it contains the number of breaks (a count). The $wool$ and $tension$ variables are taken as predictor variables.
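
Before fitting the model, one may inspect the structure of the data and the number of looms in each factor combination (a quick check, not required for the analysis):

str(warpbreaks)                              # 54 observations: breaks, wool, tension
table(warpbreaks$wool, warpbreaks$tension)   # looms per wool-tension combination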

pois_mod <- glm(breaks ~ wool + tension, data = warpbreaks, family = poisson)

The output from the pois_mod object is

Poisson Regression using glm()
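
The same output can be reproduced at the console by printing the fitted object:

print(pois_mod)   # coefficients, degrees of freedom, and deviance of the fitted model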

The glm() function provides eight choices of family, with the following default link functions:

| Family | Default Link Function |
|---|---|
| binomial | link = "logit" |
| gaussian | link = "identity" |
| Gamma | link = "inverse" |
| inverse.gaussian | link = "1/mu^2" |
| poisson | link = "log" |
| quasi | link = "identity", variance = "constant" |
| quasibinomial | link = "logit" |
| quasipoisson | link = "log" |
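
If the count data show overdispersion (the residual deviance is much larger than the residual degrees of freedom), one of the quasi families can be used instead. A minimal sketch refitting the warpbreaks model with the quasipoisson family (the object name quasi_mod is introduced here only for illustration):

quasi_mod <- glm(breaks ~ wool + tension, data = warpbreaks, family = quasipoisson)
summary(quasi_mod)   # same coefficients as the Poisson fit, with an estimated dispersion parameter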

The detailed output (estimation and testing of parameters) can be obtained as

summary(pois_mod)
Summary Output Poisson Regression
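
Because the Poisson model uses a log link, exponentiating the coefficients gives multiplicative effects (rate ratios) on the expected number of breaks. A short sketch using standard R functions:

exp(coef(pois_mod))      # rate ratios for wool type and tension levels
exp(confint(pois_mod))   # corresponding 95% confidence intervals on the rate-ratio scale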

Poisson Regression Examples

  • The number of cargo ships damaged by waves (McCullagh & Nelder, 1989).
  • The number of deaths due to AIDS in Australia per quarter (3-month periods) from January 1983 to June 1986.
  • The number of violent incidents exhibited over a 6-month period by patients who had been treated in the ER of a psychiatric hospital (Gardner, Mulvey, & Shaw, 1995).
  • Daily homicide counts in California (Grogger, 1990).
  • The founding of daycare centers in Toronto (Baum & Oliver, 1992).
  • Political party-switching among members of the US House of Representatives (King, 1988).
  • The number of presidential appointments to the Supreme Court (King, 1987).
  • The number of children in a classroom that a child lists as friends (unlimited nomination procedure, sociometric data).
  • The number of hard disk failures during a year.
  • The number of deaths due to SARS (Yu, Chan & Fung, 2006).
  • The number of arrests resulting from 911 calls.
  • The number of orders of protection issued.

FAQs about Poisson Regression in R

  1. What function is used in R to perform Poisson regression?
  2. Write about the important arguments of the glm() function for fitting a Poisson regression model.
  3. Give real-life examples of data sets for which Poisson regression may be performed.
  4. List the default link function of each family in glm().
  5. How is the Poisson regression model different from linear regression models?

Non-Linear Regression Model: A Comprehensive Guide

This article is about using and applying non-linear regression models in the R language. In the least squares method, the regression model is established in such a way that

"The sum of the squares of the vertical distances of different points (residuals) from the regression line is minimized"

When the relationship between the variables is not linear (one has a non-linear regression model), one may

  1. try to transform the data to linearize the relationship (see the sketch after this list),
  2. fit polynomial or complex spline model to the data, or
  3. fit a non-linear regression to the data.
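
As an illustration of the first option, a power relationship of the form $y = ax^b$ becomes linear after taking logarithms of both variables, so it can be fitted with lm(). A minimal sketch, assuming the vectors x and y hold the data:

lin_fit <- lm(log(y) ~ log(x))   # fit log(y) = log(a) + b*log(x) by ordinary least squares
summary(lin_fit)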

Non-Linear Regression Model

In a non-linear regression model, a function specified by a set of parameters is fitted to the data. The non-linear least squares approach is used to estimate these parameters. In R, the nls() function approximates the non-linear function by repeated linearization and iteratively tries to find the best parameter values.

Some frequently used non-linear regression models are listed in the Table below.

| Sr. No. | Name | Model |
|---|---|---|
| 1 | Michaelis-Menten | $y=\frac{ax}{1+bx}$ |
| 2 | Two-parameter asymptotic exponential | $y=a(1-e^{-bx})$ |
| 3 | Three-parameter asymptotic exponential | $y=a-be^{-cx}$ |
| 4 | Two-parameter logistic | $y=\frac{e^{a+bx}}{1+e^{a+bx}}$ |
| 5 | Three-parameter logistic | $y=\frac{a}{1+be^{-cx}}$ |
| 6 | Weibull | $y=a-be^{-cx^d}$ |
| 7 | Gompertz | $y=e^{-be^{-cx}}$ |
| 8 | Ricker curve | $y=axe^{-bx}$ |
| 9 | Bell-shaped | $y=a\,e^{-|bx|^2}$ |

Let’s fit the Michaelis-Menten non-linear function to the data given below.

x <- seq(1, 10, 1)
y <- c(3.7, 7.1, 11.9, 19, 27, 38.5, 51, 67.7, 85, 102)

nls_model <- nls(y ~ a * x/(1 + b * x), start = list(a = 1, b = 1))

summary(nls_model)

The output of the above code for the Michaelis-Menten non-linear function is

#### Output
Formula: y ~ a * x/(1 + b * x)

Parameters:
   Estimate Std. Error t value Pr(>|t|)    
a  4.107257   0.226711   18.12 8.85e-08 ***
b -0.060900   0.002708  -22.49 1.62e-08 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.805 on 8 degrees of freedom

Number of iterations to convergence: 11 
Achieved convergence tolerance: 6.354e-06

Let us plot the predicted values from the fitted non-linear model over 10 newly generated x-values

new.data <- data.frame(x = seq(min(x), max(x), len = 10))

plot(x, y)
lines(new.data$x, predict(nls_model, newdata = new.data))
Non-Linear Regression Models
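
With only 10 prediction points the fitted line looks segmented; a finer grid (say 100 points, an arbitrary choice) produces a smoother curve:

fine.data <- data.frame(x = seq(min(x), max(x), length.out = 100))
plot(x, y)
lines(fine.data$x, predict(nls_model, newdata = fine.data), col = "blue")   # smooth fitted curve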

The sum of squared residuals and the confidence intervals of the estimated coefficients can be obtained by issuing the commands,

sum(resid(nls_model)^2) 
# or 
print(sum(resid(nls_model)^2))
confint(nls_model) 
# or 
print(confint(nls_model))
Non-Linear Regression Models in R Output

Note that the formula in nls() does not use the special coding for linear terms, factors, and interactions that lm() formulas use. The right-hand side of the nls() formula is an expression that computes the expected value of the left-hand side. The start argument contains a list of starting values for the parameters used in the expression; these values are then varied by the algorithm.
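
Choosing good starting values can be tedious; R also ships self-starting models that compute their own starting values. A minimal sketch using the built-in SSmicmen() self-starting Michaelis-Menten model (parameterized as $y=\frac{V_m x}{K+x}$) with the built-in Puromycin data:

PurTrt <- Puromycin[Puromycin$state == "treated", ]     # subset of the built-in Puromycin data
mm_fit <- nls(rate ~ SSmicmen(conc, Vm, K), data = PurTrt)   # no start argument needed
summary(mm_fit)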


Logistic Regression Models in R

This article is about the use and application of logistic regression models in the R language. In logistic regression models, the response variable ($y$) takes categorical (binary, dichotomous) values such as 1 or 0 (TRUE/FALSE). The model estimates the probability of a binary response based on a mathematical equation relating the response variable to the predictor(s). The built-in glm() function in R can be used to perform logistic regression analysis.

Probability and Odds Ratio

Odds are used in logistic regression. If $p$ is the probability of success and $q = 1 - p$ is the probability of failure, the odds in favour of success are $\frac{p}{q}=\frac{p}{1-p}$.

Note that a probability can be converted to odds, and odds can be converted back to a probability. However, unlike a probability, odds can exceed 1. For example, if the probability of an event is 0.25, the odds in favour of that event are $\frac{0.25}{0.75}=0.33$, and the odds against the same event are $\frac{0.75}{0.25}=3$.
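
The conversion between probability and odds can be verified directly in R:

p <- 0.25              # probability of the event
odds <- p / (1 - p)    # odds in favour of the event (about 0.33)
1 / odds               # odds against the event (3)
odds / (1 + odds)      # converting the odds back to the probability (0.25)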

Logistic Regression Models in R (Example)

In the built-in dataset ("mtcars"), the column "am" describes the transmission mode (automatic or manual), which is a binary value (0 or 1). Let us fit logistic regression models between the response variable "am" and the regressors "vs", "hp", "wt", and "cyl", as given below:

Logistic Regression with one Dichotomous Predictor

logmodel1 <- glm(am ~ vs, family = "binomial", data = mtcars)
summary(logmodel1)

Logistic Regression with one Continuous Predictor

If the predictor variable is continuous, then the logistic regression formula in R would be as given below:

logmodel2 <- glm(am ~ wt, family = "binomial", data = mtcars)
summary(logmodel2)

Multiple Predictors in Logistic Regression

The following is an example of a logistic regression model with more than one predictor. Diagnostic plots are also drawn for the model.

logmodel3 <- glm(am ~ cyl + hp + wt, family = "binomial", data = mtcars)
summary(logmodel3)
plot(logmodel3)
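
Fitted probabilities of a manual transmission can be obtained from the model with predict() using type = "response", which returns probabilities rather than logits:

fitted_prob <- predict(logmodel3, type = "response")   # predicted probability of am = 1 for each car
head(fitted_prob)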

Note: In the logistic regression model, both dichotomous and continuous variables can be used as predictors.

Logistic Regression Models in R
Logistic Regression Models in R and Diagnostic Plots

In the R language, the coefficients returned by logistic regression are logits, that is, the log of the odds. To convert a logit to an odds ratio, exponentiate it; to convert a logit to a probability, use $\frac{e^\beta}{1+e^\beta}$. For example,

logmodel1 <- glm(am ~ vs, family = "binomial", data = mtcars)
logit_coef <- logmodel1$coef               # coefficients on the logit (log-odds) scale
exp(logit_coef)                            # odds ratios
exp(logit_coef) / (1 + exp(logit_coef))    # probabilities
Logistic Regression in R