Bootstrapping, Bagging, Boosting and Random Forest

To understand the bootstrap, suppose it were possible to draw repeated samples (of the same size) from the population of interest a large number of times. Then one would get a fairly good idea of the sampling distribution of a particular statistic from the collection of its values arising from these repeated samples. But that is not practical: it would be too expensive and would defeat the purpose of a sample study, which is to gather information cheaply and in a timely fashion. The idea behind the bootstrap is to use the sample data at hand as a “surrogate population” for the purpose of approximating the sampling distribution of a statistic; i.e., to resample (with replacement) from the sample data and create a large number of “phantom samples”, known as bootstrap samples. The sample summary is then computed on each of the bootstrap samples (usually a few thousand of them). A histogram of the set of these computed values is referred to as the bootstrap distribution of the statistic.

In other words, we randomly sample with replacement from the n known observations and call the result a bootstrap sample. Since we allow replacement, this bootstrap sample is most likely not identical to our initial sample: some data points may be duplicated, and other data points from the initial sample may be omitted.

An Example

The following numerical example demonstrates how the process works. If we begin with the sample 2, 4, 5, 6, 6, then all of the following are possible bootstrap samples (a short code sketch for generating such samples follows the list):

  • 2, 5, 5, 6, 6
  • 4, 5, 6, 6, 6
  • 2, 2, 4, 5, 5
  • 2, 2, 2, 4, 6
  • 2, 2, 2, 2, 2
  • 4, 6, 6, 6, 6
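
As a quick sketch of the mechanics (the original notes contain no code, so Python and its standard random / statistics modules are my own choice here), the following generates bootstrap samples like the ones above and collects the bootstrap distribution of the sample mean:

```python
import random
import statistics

data = [2, 4, 5, 6, 6]     # the original sample
n_boot = 5000              # number of bootstrap samples (usually a few thousand)

boot_means = []
for _ in range(n_boot):
    # resample with replacement, keeping the sample size the same
    boot_sample = random.choices(data, k=len(data))
    boot_means.append(statistics.mean(boot_sample))

# boot_means is the bootstrap distribution of the sample mean;
# its histogram approximates the sampling distribution of the mean
print(statistics.mean(boot_means), statistics.stdev(boot_means))
```

Each call to random.choices reproduces lists like the ones above: some values duplicated, others omitted.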

Boosting and Bagging

Bootstrap aggregating (bagging) is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. It also reduces variance and helps to avoid overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method.

Both boosting and bagging are ensemble techniques: instead of learning a single classifier, several are trained and their predictions combined. While bagging uses an ensemble of independently trained classifiers, boosting is an iterative process in which later models are trained to correct the prediction errors of earlier ones.
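
To make the contrast concrete, here is a minimal sketch, assuming scikit-learn and a synthetic data set (neither appears in the original notes): bagging fits each tree independently on its own bootstrap sample, while AdaBoost-style boosting fits trees sequentially, each one focusing on the examples earlier trees got wrong.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

# synthetic data set, used only so the sketch runs end to end
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# bagging: 100 classifiers (decision trees by default), each trained
# independently on its own bootstrap sample; predictions are combined by voting
bagging = BaggingClassifier(n_estimators=100, random_state=0)

# boosting: classifiers are trained one after another, each new one
# giving more weight to the examples the previous ones misclassified
boosting = AdaBoostClassifier(n_estimators=100, random_state=0)

print("bagging :", cross_val_score(bagging, X, y, cv=5).mean())
print("boosting:", cross_val_score(boosting, X, y, cv=5).mean())
```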

Example: Below is an analysis of the relationship between ozone and temperature (data from Rousseeuw and Leroy (1986), available among the classic data sets; analysis done in R).

The relationship between temperature and ozone in this data set is apparently non-linear, based on the scatter plot. To describe this relationship mathematically, LOESS smoothers (with span 0.5) were used. Instead of building a single smoother from the complete data set, 100 bootstrap samples of the data were drawn. Each sample is different from the original data set, yet resembles it in distribution and variability. For each bootstrap sample, a LOESS smoother was fit, and predictions from these 100 smoothers were then made across the range of the data. The first 10 predicted smooth fits appear as grey lines in the figure below. The lines are clearly very wiggly and they overfit the data, a result of the span being too low.

By taking the average of those 100 smoothers, each fitted to a bootstrap sample of the original data set, we arrive at one bagged predictor (red line). Clearly, the mean is more stable and there is less overfitting.

[Figure: grey lines show individual bootstrap LOESS fits; the red line is the bagged (averaged) predictor.]
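
The original analysis was done in R and is not reproduced here; the following is a rough Python analogue, assuming statsmodels' lowess and that temperature and ozone are NumPy arrays holding the two columns of the data set (the library choice and the interpolation onto a common grid are my own):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
grid = np.linspace(temperature.min(), temperature.max(), 100)    # common prediction grid

fits = []
for _ in range(100):                                             # 100 bootstrap samples
    idx = rng.integers(0, len(temperature), len(temperature))    # resample rows with replacement
    smoothed = lowess(ozone[idx], temperature[idx], frac=0.5)    # LOESS fit with span 0.5
    # lowess returns (x, fitted) pairs sorted by x; interpolate each fit
    # onto the shared grid so the 100 smoothers can be averaged point by point
    fits.append(np.interp(grid, smoothed[:, 0], smoothed[:, 1]))

bagged = np.mean(fits, axis=0)   # the bagged predictor (the red line in the figure)
```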

Random Forest

A random forest (see figure below) combines decision trees with the notion of an ensemble: each tree is grown on a bootstrap sample of the training data, as in bagging, and the trees' predictions are combined. Thus, in ensemble terms, the individual trees are weak learners and the random forest is a strong learner.

[Figure: a random forest as an ensemble of decision trees.]

Difference between Bagging and Random Forest

Bagging has a single parameter: the number of trees (ntree). All trees are fully grown, unpruned binary trees, and at each node in a tree one searches over all features to find the feature that best splits the data at that node.

Random forests have two parameters. The first is the same as in bagging (the number of trees, ntree). The second, unique to random forests, is mtry: how many features to search over to find the best split. This parameter is usually D/3 for regression and sqrt(D) for classification, where D is the total number of features. Thus, during tree creation, mtry features are chosen at random from all available features and the best splitting feature among them is used. This random selection is repeated at every node.

Bagging uses only ntree; random forests use both ntree and mtry. If mtry is set to all D features, then bagging and random forests coincide.
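
As a rough mapping into scikit-learn terms (my own choice of library, not from the original notes), ntree corresponds to n_estimators and mtry to max_features; letting max_features use all D features reduces a random forest to bagged trees:

```python
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# ntree -> n_estimators, mtry -> max_features
rf_regression = RandomForestRegressor(n_estimators=500, max_features=1 / 3)        # mtry ~ D/3
rf_classification = RandomForestClassifier(n_estimators=500, max_features="sqrt")  # mtry ~ sqrt(D)

# with max_features=None every feature is searched at every split,
# so the forest degenerates to plain bagging of fully grown trees
bagged_trees = RandomForestRegressor(n_estimators=500, max_features=None)
```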

Disclaimer: These are my study notes, kept online instead of on paper so that others can benefit. In the process I have used some pictures / content from other original authors. All sources / original content publishers are listed below and they deserve credit for their work. No copyright violation intended.

References for these notes:

http://statistics.about.com/od/Applications/a/What-Is-Bootstrapping.htm

bootstrap.pdf

http://stats.stackexchange.com/questions/27049/regarding-boosting-bagging-and-bootstrapping

http://en.wikipedia.org/wiki/Bootstrap_aggregating

http://citizennet.com/blog/2012/11/10/random-forests-ensembles-and-performance-metrics/
