Use of Important Functions in R

Looking for the most important functions in R? This blog post answers key questions like creating frequency tables (table()), redirecting output (sink()), transposing data, calculating standard deviation, performing t-tests, ANOVA, and more. Perfect for R beginners and data analysts!

  • Important functions in R
  • R programming cheat sheet
  • Frequency table in R (table())
  • How to use sink() in R
  • Transpose data in R (t())
  • Standard deviation in R (sd())
  • T-test, ANOVA, and Shapiro-Wilk test in R
  • Correlation and covariance in R
  • Scatterplot matrices (pairs())
  • Diagnostic plots in R

This Q&A-style guide on important functions in R covers essential R functions with clear examples, helping you master data manipulation, statistical tests, and visualization in R. Whether you are a beginner or an intermediate user, this post will strengthen your R programming skills!

Which function is used to create a frequency table in R?

In R, a frequency table can be created by using the table() function.
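
For example, using the built-in mtcars data set:

table(mtcars$cyl)                 # frequency of cars by number of cylinders
table(mtcars$cyl, mtcars$gear)    # two-way (contingency) table of cylinders by gears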

What is the use of sink() function?

The sink() function in R is used to redirect R output (such as the results of computations, printed messages, or console output) to a file instead of displaying it in the console. This is particularly useful for saving logs, results of analyses, or any other text output generated by R scripts.
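
A minimal sketch (output.txt is an illustrative file name):

sink("output.txt")   # start redirecting output to the file (illustrative name)
summary(mtcars)      # this result is written to output.txt instead of the console
sink()               # stop redirecting; output goes back to the console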

Explain what transpose is and how it is performed.

Transposing reshapes the data by interchanging rows and columns, which is often required before analysis. A transpose is performed with the t() function.
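
For example:

m <- matrix(1:6, nrow = 2)   # a 2 x 3 matrix
t(m)                         # its transpose, a 3 x 2 matrix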

What is the length function in R?

The length() function in R gets or sets the length of a vector (or list) or other objects. The length() function can be used with all R objects. For an environment, it returns the number of objects in it; for NULL, it returns 0.
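
For example:

x <- c(10, 20, 30)
length(x)         # 3
length(NULL)      # 0
length(mtcars)    # 11, the number of columns (a data frame is a list)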

What is the difference between seq(4) and seq_along(4)?

seq(4) produces the vector from 1 to 4, i.e., c(1, 2, 3, 4), whereas seq_along(4) produces a sequence along its argument; since the argument has length 1, the result is c(1).
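
For example:

seq(4)                     # 1 2 3 4
seq_along(4)               # 1, because the argument has length 1
seq_along(c(10, 20, 30))   # 1 2 3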

Vector $v$ is c(1,2,3,4) and list $x$ is list(5:8). What is the output of v*x[[1]]?

[1] 5 12 21 32


How do you get the standard deviation for a vector $x$?

sd(x, na.rm=TRUE)

$x$ is the vector c(5, 9.2, 3, 8.51, NA). What is the output of mean(x)?

The output will be NA, because the vector contains a missing value.
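
For example:

x <- c(5, 9.2, 3, 8.51, NA)
mean(x)                # NA, because of the missing value
mean(x, na.rm = TRUE)  # 6.4275, the mean after removing the NA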


How can one compute correlation and covariance in R?

Correlation is computed with the cor() function and covariance with the cov() function.
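
For example, using the mtcars data set:

cor(mtcars$mpg, mtcars$wt)            # correlation between mpg and weight
cov(mtcars$mpg, mtcars$wt)            # covariance between mpg and weight
cor(mtcars[, c("mpg", "hp", "wt")])   # correlation matrix of several variables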

How to create scatterplot matrices?

The pairs() function (in Base R) or the splom() function (in the lattice package) is used to create scatterplot matrices.
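
For example (splom() requires the lattice package):

pairs(mtcars[, c("mpg", "hp", "wt")])            # Base R scatterplot matrix
lattice::splom(mtcars[, c("mpg", "hp", "wt")])   # lattice scatterplot matrix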

What is the use of diagnostic plots?

Diagnostic plots are used to check normality, heteroscedasticity, and influential observations for a fitted model.
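
For example, the four default diagnostic plots of a fitted linear model:

model <- lm(mpg ~ wt, data = mtcars)
par(mfrow = c(2, 2))   # arrange the four plots in a 2 x 2 grid
plot(model)            # residuals vs fitted, Q-Q, scale-location, residuals vs leverage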

What is principal() function?

The principal() function is defined in the psych package; it is used to extract and rotate principal components.
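
A minimal sketch, assuming the psych package is installed:

library(psych)
principal(mtcars[, c("mpg", "disp", "hp", "wt")], nfactors = 2, rotate = "varimax")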

Define mshapiro.test()?

The mshapiro.test() function is defined in the mvnormtest package. It performs the Shapiro-Wilk test for multivariate normality.
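
A minimal sketch, assuming the mvnormtest package is installed; the function expects a numeric matrix with the variables in rows, hence the transpose:

library(mvnormtest)
mshapiro.test(t(as.matrix(mtcars[, c("mpg", "hp", "wt")])))   # variables in rows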

Define bartlett.test().

The bartlett.test() function provides a parametric k-sample test of the equality of variances.
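
For example, testing whether the variance of mpg is the same across cylinder groups in mtcars:

bartlett.test(mpg ~ factor(cyl), data = mtcars)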

Define anova() function.

The anova() function is used to compare nested models. Read more: One-Way ANOVA
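
For example, comparing two nested linear models:

m1 <- lm(mpg ~ wt, data = mtcars)        # reduced model
m2 <- lm(mpg ~ wt + hp, data = mtcars)   # full model with an extra predictor
anova(m1, m2)                            # F-test comparing the nested models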

Define plotmeans().

The plotmeans() function is defined in the gplots package; it produces a mean plot for single factors that includes confidence intervals.
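
A minimal sketch, assuming the gplots package is installed:

library(gplots)
plotmeans(mpg ~ factor(cyl), data = mtcars)   # mean mpg per cylinder group with confidence intervals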

Define loglm() function.

The loglm() function (from the MASS package) is used to fit log-linear models.
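
A minimal sketch, assuming the MASS package (the model is fitted to a contingency table):

library(MASS)
tab <- xtabs(~ gear + cyl, data = mtcars)   # contingency table of gears by cylinders
loglm(~ gear + cyl, data = tab)             # log-linear model of independence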

What is t.test() in R?

The t.test() function is used to determine whether the means of two groups are equal or not.

Statistics and Data Analysis

Summarizing Data in R Base Package

Introduction to Summarizing Data in R

Data summarization (obtaining different summary statistics) is a fundamental step in exploratory data analysis (EDA). Summarizing data in the R Language helps analysts understand patterns, detect anomalies, and derive insights. While modern R packages like dplyr and data.table offer streamlined approaches, Base R remains a powerful and efficient tool for quick data summarization without additional dependencies (packages).

This guide explores essential Base R functions for summarizing data, from basic statistics to advanced grouped operations, ensuring you can efficiently analyze datasets right out of the box.

For learning purposes, we will use the mtcars data set.

Key Functions for Basic Summary Statistics

There are several Base R functions for computing summary statistics. The summary() function offers a quick overview of a dataset, displaying the minimum, maximum, mean, median, and quartiles for numerical variables; categorical variables are summarized with frequency counts. For more specific metrics, functions like mean(), median(), sd(), and var() calculate central tendency and dispersion, while the min() and max() functions can be used to identify the data range. These functions are particularly useful when combined with na.rm = TRUE to handle missing values. For example, applying summary(mtcars) gives an immediate snapshot of the dataset, while mean(mtcars$mpg, na.rm = TRUE) computes the average miles per gallon.
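
For example:

summary(mtcars)                   # overview of every variable in the data set
mean(mtcars$mpg, na.rm = TRUE)    # average miles per gallon
sd(mtcars$mpg)                    # standard deviation of mpg
range(mtcars$mpg)                 # minimum and maximum mpg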

Frequency Counts and Cross-Tabulations

When working with categorical data, the table() function is indispensable for generating frequency distributions. It counts occurrences of unique values, making it ideal for summarizing factors or discrete variables. For more complex relationships, xtabs() or ftable() can create cross-tabulations, revealing interactions between multiple categorical variables. For instance, table(mtcars$cyl) shows how many cars have 4, 6, or 8 cylinders, while xtabs(~ gear + cyl, data = mtcars) presents a contingency table between gears and cylinders.

attach(mtcars)

# Frequency of cylinders
table(cyl)

# contingency table of gears and cylinders
xtabs(~ gear + cyl, data = mtcars)
Summarizing Data in R Language

Group-Wise Summarization Using aggregate() and by()

To compute summary statistics by groups, Base R offers aggregate() and by(). The aggregate() function splits data into subsets and applies a summary function, such as mean or sum, to each group. For example, aggregate(mpg ~ cyl, data = mtcars, FUN = mean) calculates the average MPG per cylinder group. Meanwhile, by() provides more flexibility, allowing custom functions to be applied across groups. While tapply() is another alternative for vector-based grouping, aggregate() is often preferred for its formula interface and cleaner output.

# Average for each cylinder of the vehicle
aggregate(mpg ~ cyl, data = mtcars, FUN = mean)

## Output
  cyl      mpg
1   4 26.66364
2   6 19.74286
3   8 15.10000
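
The same group-wise summary can also be obtained with tapply() or, for arbitrary functions, with by():

# Average mpg per cylinder group using tapply()
tapply(mtcars$mpg, mtcars$cyl, mean)

# A full summary of mpg for each cylinder group using by()
by(mtcars$mpg, mtcars$cyl, summary)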

Advanced Techniques: Quantiles and Custom Summaries

Beyond basic summaries, Base R supports advanced techniques like percentile analysis using quantile(), which helps assess data distribution by returning specified percentiles (e.g., quantile(mtcars$mpg, probs = c(0.25, 0.5, 0.75))). For customized summaries, users can define their own functions and apply them using sapply() or lapply(). This approach is useful when needing tailored metrics, such as trimmed means or confidence intervals. Additionally, combining these functions with plotting tools like boxplot() or hist() can further enhance data interpretation.

# percentiles
quantile(mtcars$mpg, probs = c(0.25, 0.5, 0.75))

## Output
   25%    50%    75% 
15.425 19.200 22.800 

boxplot(quantile(mtcars$mpg, probs = c(0.25, 0.5, 0.75)))
Data Visualization Summarizing Data in R Base Package
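
As mentioned above, custom summaries can be applied with sapply(); a small sketch (my_summary is an illustrative helper, not from the original post):

# my_summary is an illustrative helper: mean, standard deviation, and 10% trimmed mean
my_summary <- function(v) c(mean = mean(v), sd = sd(v), trimmed = mean(v, trim = 0.1))
sapply(mtcars[, c("mpg", "hp", "wt")], my_summary)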

When to Use Base R vs. Tidyverse for Summarization

While Base R is efficient and lightweight, the Tidyverse (particularly dplyr) offers a more readable syntax for complex operations. Functions like summarize() and group_by() simplify chained operations, making them preferable for large-scale data wrangling. However, Base R remains advantageous for quick analyses, legacy code, or environments where installing additional packages is restricted. Understanding both approaches ensures flexibility in different analytical scenarios.
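
For comparison, the grouped average computed earlier with aggregate() looks like this in dplyr (assuming the package is installed):

library(dplyr)
mtcars %>%
  group_by(cyl) %>%
  summarize(avg_mpg = mean(mpg))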

Best Practices for Summarizing Data in R

To maximize efficiency, always handle missing values explicitly using na.rm = TRUE in statistical functions. For large datasets, consider optimizing performance by pre-filtering data or using vectorized operations. Visualizing summaries with basic plots (e.g., hist(), boxplot()) can provide immediate insights. Finally, documenting summary steps ensures reproducibility, whether in scripts, R Markdown, or Shiny applications.

In summary, Base R provides a robust toolkit for data summarization, from simple descriptive statistics to advanced grouped analyses. By mastering functions like summary(), table(), aggregate(), and quantile(), analysts can efficiently explore datasets without relying on external packages. While modern alternatives like dplyr enhance readability for complex tasks, Base R’s simplicity and universality make it an essential skill for every R programmer. Practicing these techniques on real-world datasets will solidify your understanding and improve your data analysis workflow.


Comparing Two Sample Means in R

One can easily compare two sample means in R, as all the classical tests are available in the stats package of the R language. There are different comparison tests, such as (i) the one-sample mean test, (ii) the two independent-sample means test, and (iii) the dependent (paired) sample test. When the population standard deviation is known, or the sample size (number of observations in the sample) is large enough ($n \ge 30$), tests related to the normal distribution are performed.
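
The three situations map onto the t.test() function as follows (x, y, and mu = 12 are illustrative values only):

x <- c(12.1, 11.8, 12.4, 12.0, 12.3)   # illustrative sample
y <- c(11.5, 11.9, 11.7, 12.0, 11.6)   # illustrative second sample
t.test(x, mu = 12)                     # (i) one sample mean test against mu = 12
t.test(x, y)                           # (ii) two independent samples test
t.test(x, y, paired = TRUE)            # (iii) dependent (paired) samples test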

Data for Two Sample Means

Consider the following data set on the “latent heat of the fusion of ice (cal/gm)” from Rice, 1995.

Method A: 79.98, 80.04, 80.02, 80.04, 80.03, 80.03, 80.04, 79.97, 80.05, 80.03, 80.02, 80.00, 80.02
Method B: 80.02, 79.94, 79.98, 79.97, 79.97, 80.03, 79.95, 79.97

Let us draw boxplots to make a comparison between these two methods. The comparison will help in checking the assumptions of the independent two-sample test.

Note that one can read the data using the scan() function, create vectors, or even read the above data from data files such as *.txt and *.csv. In this tutorial, we assume vectors $A$ and $B$ for method A and method B.

A = c(79.98, 80.04, 80.02, 80.04, 80.03, 80.03, 80.04, 79.97, 80.05, 80.03, 80.02, 80.00, 80.02)
B = c(80.02, 79.94, 79.98, 79.97, 79.97, 80.03, 79.95, 79.97)

Draw a Boxplot of Samples

Let us draw boxplots for each method; they indicate that the first group tends to give higher results than the second one.

boxplot(A, B)
Comparing Two Sample Means in R

Comparing Two Sample Means in R using t.test() Function

The unpaired t-test (independent two-sample test) for the equality of the means can be done using the function t.test() in R Language.

t.test(A, B)
t.test in R Language

From the results above, one can see that the p-value = 0.006939 is less than 0.05 (the level of significance), which means that, on average, both methods are statistically different from each other with respect to the latent heat of fusion of ice.

Testing the Equality of Variances of Samples

Note that the R language does not assume the equality of variances in the two samples. However, the F-test can be used to check/test the equality of the variances, provided that the two samples come from normal populations.

var.test(A, B)
Testing the equality of variances in R

From the above results, there is no evidence that the variances of the two samples are significantly different, as the p-value is greater than the 0.05 level of significance. This means that one can use the classical t-test, which assumes the equality of the variances.

t.test(A, B, var.equal = TRUE)

## Output
        Welch Two Sample t-test

data:  A and B
t = 3.2499, df = 12.027, p-value = 0.006939
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 0.01385526 0.07018320
sample estimates:
mean of x mean of y 
 80.02077  79.97875 
