Exercises

  1. Here we explore the maximal margin classifier on a toy data set.

See solution for similar textbook problem at https://blog.princehonest.com/stat-learning/ch9/3.html

  1. We are given \(n = 7\) observations in \(p = 2\) dimensions. For each observation, there is an associated class label. Sketch the observations.
  2. Sketch the optimal separating hyperplane, and provide the equation for this hyperplane in the form of textbook equation 9.1.
  3. Write the classification rule for the maximal margin classifier.
  4. On your sketch, indicate the margin for the maximal margin hyperplane.
  5. Indicate the support vectors for the maximal margin classifier.
  6. Argue that a slight movement of observation 4 would not affect the maximal margin hyperplane.
  7. Sketch a separating hyperplane that is not the optimal separating hyperplane, and provide the equation for this hyperplane.
  8. How would the separating hyperplane change if an 8th observation (8, 2, 2.5, 1) was added to the data? List the support vectors, and write the equation of the plane.
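
A quick way to check a hand sketch for the plotting parts of this exercise is to draw the labelled points and overlay a candidate line in R. The coordinates and line below are placeholders only, not the data given in the exercise; substitute the seven observations and the hyperplane you derive. ggplot2 is an assumption here, not required by the question.

```r
# A minimal sketch for checking the plotting parts of the exercise.
# The values below are placeholders, NOT the observations given in the exercise.
library(ggplot2)

toy <- data.frame(
  x1 = c(3, 2, 4, 1, 2, 4, 4),   # placeholder coordinates
  x2 = c(4, 2, 4, 4, 1, 3, 1),   # placeholder coordinates
  y  = factor(c("Red", "Red", "Red", "Red", "Blue", "Blue", "Blue"))
)

ggplot(toy, aes(x = x1, y = x2, colour = y)) +
  geom_point(size = 3) +
  # a candidate hyperplane beta0 + beta1*x1 + beta2*x2 = 0, drawn as a line;
  # the intercept and slope here are illustrative only
  geom_abline(intercept = -0.5, slope = 1) +
  coord_equal()
```
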
  1. Bagging and boosting on the ISLR caravan data set. Questions 1-4 from lab 8.

This exercise is based on the lab material in chapter 8 of the textbook, and exercise 11. Solutions to the textbook exercise can be found at https://blog.princehonest.com/stat-learning/ch8/11.html.

  1. Use the Caravan data from the ISLR package. Read the data description.
  1. Compute the proportion of caravans purchased to not purchased. Is this a balanced class data set? What problem might be encountered in assessing the accuracy of the model as a consequence?

It's not a balanced data set, because there are few caravan purchasers. This means that in assessing the model we need to look at the error for each class separately, because the overall error will be dominated by the error for non-purchasers.
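
A minimal sketch of this check, assuming only that the ISLR package is installed:

```r
# Class counts and proportions for the response
library(ISLR)
data(Caravan)
table(Caravan$Purchase)
prop.table(table(Caravan$Purchase))
```
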

  1. Convert the response variable from a factor to an integer variable, where 1 indicates that the person purchased a caravan.

  2. Break the data into 2/3 training and test set, ensuring that the same ratio of the response variable is achieved in both sets. Check that your sampling has produced this.

It does produce the same proportions in each group.
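
One possible sketch of the conversion and the stratified split, using base R to sample 2/3 of each class separately. The seed and object names are arbitrary choices, not from the exercise, and the exact counts will depend on them.

```r
# A sketch: convert the response to 0/1, then sample 2/3 of each class
# separately so training and test sets keep the same purchase rate.
library(ISLR)
set.seed(2021)                       # arbitrary seed

Caravan$Purchase <- ifelse(Caravan$Purchase == "Yes", 1, 0)

purchasers     <- which(Caravan$Purchase == 1)
non_purchasers <- which(Caravan$Purchase == 0)

train_id <- c(sample(purchasers,     round(2/3 * length(purchasers))),
              sample(non_purchasers, round(2/3 * length(non_purchasers))))

Caravan_train <- Caravan[train_id, ]
Caravan_test  <- Caravan[-train_id, ]

# check the proportions match
mean(Caravan_train$Purchase)
mean(Caravan_test$Purchase)
```
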

  1. The solution code on the unofficial solution web site:
library(ISLR)
train = 1:1000
Caravan$Purchase = ifelse(Caravan$Purchase == "Yes", 1, 0)
Caravan.train = Caravan[train, ]
Caravan.test = Caravan[-train, ]

would use just the first 1000 cases for the training set. What is wrong about doing this?

It may be that the first 1000 cases contain all the purchasers, or that these were the early customers. It's generally a bad idea to take the first X cases for the training set, because doing so might introduce a systematic difference between the training and test sets. The test set should be similar to the training set.

  1. Here we will fit a boosted tree model, using the gbm package.
  1. Use 1000 trees, and a shrinkage value of 0.01.

  2. Make a plot of the oob improvement against iteration number. What does this suggest about the number of iterations needed? Why do you think the oob improvement value varies so much, and can also be negative?

Around 300 iterations is probably sufficient, because the OOB improvement plateaus at 0 around that number. The variation in the improvement means that some iterations produce worse results: re-weighting the observations will sometimes worsen the model.
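
A sketch of the fit and the OOB improvement plot, assuming the training set `Caravan_train` created in the split sketch above. Other gbm settings (such as interaction depth) are left at their defaults, which may differ from the original lab code.

```r
library(gbm)
set.seed(2021)

# boosted trees: 1000 iterations, shrinkage 0.01, Bernoulli loss for a 0/1 response
caravan_gbm <- gbm(Purchase ~ ., data = Caravan_train,
                   distribution = "bernoulli",
                   n.trees = 1000, shrinkage = 0.01)

# OOB improvement at each iteration; it can be negative when an iteration
# makes the out-of-bag fit worse
plot(caravan_gbm$oobag.improve, type = "l",
     xlab = "iteration", ylab = "OOB improvement")
abline(h = 0, lty = 2)

# gbm's own OOB-based estimate of a sensible stopping point
gbm.perf(caravan_gbm, method = "OOB")
```
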

  1. Compute the error for the test set, and for each class. Consider a proportion 0.2 or greater to indicate that the customer will purchase a caravan.

Overall=(62+95)/1940=0.08092784, Non-purchasers=62/1824=0.03399123, Purchasers=95/116=0.8189655
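
A sketch of this calculation with the gbm fit above; exact counts will depend on the seed and split used.

```r
# predicted purchase probabilities on the test set, thresholded at 0.2
p_hat_gbm <- predict(caravan_gbm, newdata = Caravan_test,
                     n.trees = 1000, type = "response")
pred_gbm  <- ifelse(p_hat_gbm >= 0.2, 1, 0)

conf_gbm <- table(observed = Caravan_test$Purchase, predicted = pred_gbm)
conf_gbm

1 - sum(diag(conf_gbm)) / sum(conf_gbm)      # overall error
conf_gbm[1, 2] / sum(conf_gbm[1, ])          # error for non-purchasers
conf_gbm[2, 1] / sum(conf_gbm[2, ])          # error for purchasers
```
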

  1. What are the 6 most important variables? Make a plot of each to examine the relationship between these variables and the response. Explain what you learn from these plots.

My recommendation for making these plots is to use a side-by-side dotplot. We learn immediately that the predictors are primarily categorical, and using jitter helps compare the purchasers against the non-purchasers. It's messy! It doesn't really look like the classification could be good. Another approach is to focus on the proportions in each category of the predictors, using a stacked bar chart. This loses the count information, so it's still important to use the dotplots too. From the bar charts, though, it can be seen why these variables were chosen as important: some categories of the predictors have higher purchase rates. Still, it's messy, and doesn't give much confidence about the ability to predict whether a customer will purchase a caravan.
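
A sketch of the dotplot approach described above, using the gbm fit and training set from the earlier sketches. The tidyverse packages used for reshaping and plotting are an assumption, not required by the exercise.

```r
library(dplyr)
library(tidyr)
library(ggplot2)

# relative influence from gbm, then the six most important variables
gbm_imp  <- summary(caravan_gbm, plotit = FALSE)
gbm_top6 <- as.character(gbm_imp$var[1:6])

# jittered side-by-side dotplots of the top six variables against the response
Caravan_train %>%
  select(all_of(gbm_top6), Purchase) %>%
  pivot_longer(cols = all_of(gbm_top6),
               names_to = "variable", values_to = "value") %>%
  ggplot(aes(x = factor(Purchase), y = value)) +
  geom_jitter(width = 0.2, height = 0.2, alpha = 0.3) +
  facet_wrap(~ variable, scales = "free_y") +
  xlab("Purchase")
```
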

  1. Here we will fit a random forest model, using the randomForest package.
  1. Use 1000 trees, using a numeric response so that predictions will be a number between 0 and 1, and set importance=TRUE. (Ignore the warning about not having enough distinct values to use regression.)

  2. Compute the error for the test set, and for each class. Consider a proportion 0.2 or greater to indicate that the customer will purchase a caravan.

Overall=(89+167)/1940=0.1319588, Non-purchasers=167/1824=0.09155702, Purchasers=89/116=0.7672414
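
A sketch of the random forest fit and the corresponding test errors, following the same thresholding rule as for gbm; mtry and other settings are left at their defaults.

```r
library(randomForest)
set.seed(2021)

# numeric response, so randomForest fits a regression and predictions are in [0, 1]
caravan_rf <- randomForest(Purchase ~ ., data = Caravan_train,
                           ntree = 1000, importance = TRUE)

p_hat_rf <- predict(caravan_rf, newdata = Caravan_test)
pred_rf  <- ifelse(p_hat_rf >= 0.2, 1, 0)

conf_rf <- table(observed = Caravan_test$Purchase, predicted = pred_rf)
conf_rf

1 - sum(diag(conf_rf)) / sum(conf_rf)        # overall error
conf_rf[1, 2] / sum(conf_rf[1, ])            # error for non-purchasers
conf_rf[2, 1] / sum(conf_rf[2, ])            # error for purchasers
```
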

  1. What are the 6 most important variables? Make a plot of any that are different from those chosen by gbm. How does the set of variables compare with those chosen by gbm?

Some are the same, but there are some new ones. None of them look any better when plotted than those found by gbm.
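
A sketch of the comparison, using the %IncMSE measure from randomForest; `gbm_top6` comes from the gbm importance sketch above.

```r
# six most important variables by permutation importance (%IncMSE)
rf_imp  <- importance(caravan_rf)
rf_top6 <- rownames(rf_imp)[order(rf_imp[, "%IncMSE"], decreasing = TRUE)][1:6]
rf_top6

setdiff(rf_top6, gbm_top6)   # variables chosen by randomForest but not by gbm
```
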

  1. Here we will fit a gradient boosted model, using the xgboost package.
  1. Read the description of the XGBoost technique at https://www.hackerearth.com/practice/machine-learning/machine-learning-algorithms/beginners-tutorial-on-xgboost-parameter-tuning-r/tutorial/, or other sources. Explain how this algorithm might differ from earlier boosted tree algorithms.

My understanding is that it is primarily a more careful optimisation of the weights at each iteration. The algorithm also seems to have better safeguards against over-fitting, and regularisation that reduces the number of variables used.

  1. Tune the model fit to determine how many iterations to make. Then fit the model, using the parameter set provided.

This would suggest 10 iterations.
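
The parameter set provided in the exercise is not reproduced here, so the values below are placeholders; the structure of the tuning and fitting steps would look something like this, assuming the matrices are built from the split created earlier.

```r
library(xgboost)
set.seed(2021)

# predictors as a numeric matrix, response as a 0/1 label
x_train <- as.matrix(Caravan_train[, setdiff(names(Caravan_train), "Purchase")])
dtrain  <- xgb.DMatrix(data = x_train, label = Caravan_train$Purchase)

# placeholder parameters -- substitute the parameter set provided in the exercise
params <- list(objective = "binary:logistic", eta = 0.3, max_depth = 6)

# cross-validation to choose the number of boosting iterations
cv <- xgb.cv(params = params, data = dtrain, nrounds = 50, nfold = 5,
             early_stopping_rounds = 5, verbose = 0)
cv$best_iteration

# final fit with the chosen number of iterations
caravan_xgb <- xgb.train(params = params, data = dtrain,
                         nrounds = cv$best_iteration)
```
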

  1. Compute the error for the test set, and for each class. Consider a proportion 0.2 or greater to indicate that the customer will purchase a caravan.

Overall=(114+83)/1940=0.1015464, Non-purchasers=114/1824=0.0625, Purchasers=83/116=0.7155172
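
A sketch of the test error for the xgboost fit, using the same 0.2 threshold and the fit from the sketch above.

```r
x_test    <- as.matrix(Caravan_test[, setdiff(names(Caravan_test), "Purchase")])
p_hat_xgb <- predict(caravan_xgb, newdata = x_test)
pred_xgb  <- ifelse(p_hat_xgb >= 0.2, 1, 0)

conf_xgb <- table(observed = Caravan_test$Purchase, predicted = pred_xgb)
conf_xgb

1 - sum(diag(conf_xgb)) / sum(conf_xgb)      # overall error
conf_xgb[1, 2] / sum(conf_xgb[1, ])          # error for non-purchasers
conf_xgb[2, 1] / sum(conf_xgb[2, ])          # error for purchasers
```
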

  1. Compute the variable importance. What are the 6 most important variables? Make a plot of any that are different from those chosen by gbm or randomForest. How does the set of variables compare with the other two methods?

Some are the same, but there are some new ones. None of them look any better when plotted than those found by gbm.
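
A sketch of the importance comparison across the three fits; `gbm_top6` and `rf_top6` come from the earlier sketches.

```r
# gain-based importance from xgboost
xgb_imp  <- xgb.importance(model = caravan_xgb)
head(xgb_imp, 6)

xgb_top6 <- xgb_imp$Feature[1:6]
setdiff(xgb_top6, union(gbm_top6, rf_top6))   # variables not chosen by gbm or randomForest
```
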

  1. Compare and summarise the results of the three model fits.

The xgboost model has a more balanced overall performance. Ideally one would also make a ROC curve for each model, to compare them across the full range of thresholds; a sketch is given below.
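
A sketch of the ROC comparison, assuming the pROC package (an assumption, not part of the exercise) and the predicted probabilities from the sketches above.

```r
library(pROC)

roc_gbm <- roc(Caravan_test$Purchase, p_hat_gbm)
roc_rf  <- roc(Caravan_test$Purchase, p_hat_rf)
roc_xgb <- roc(Caravan_test$Purchase, p_hat_xgb)

plot(roc_gbm)
plot(roc_rf,  add = TRUE, col = "blue")
plot(roc_xgb, add = TRUE, col = "red")

# area under each curve
c(gbm = auc(roc_gbm), rf = auc(roc_rf), xgb = auc(roc_xgb))
```
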

  1. Now scramble the response variable (Purchase) using permutation. The resulting data has no true relationship between the response and predictors. Re-do Q2 with this data set. Write a paragraph explaining what you learn about the true data from analysing this permuted data.

Even though the models for the true data look quite weak, they still provide some predictive ability when compared against purely random data. That is, the data does have some weak signal.
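
A sketch of the permutation step; the train/test split and the three model fits are then repeated on `Caravan_perm`.

```r
set.seed(2021)

# scramble the response so there is no true relationship with the predictors
Caravan_perm <- Caravan
Caravan_perm$Purchase <- sample(Caravan_perm$Purchase)
```
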