Supplementary Material 1: Models with Tuning Parameters and Description

Model Name / R Package (caret method) / Tuning Parameters Used / Description (in classification). An illustrative caret::train() sketch for several of these tuning grids follows the table.
1. Classification Tree / “rpart” / 100 complexity-parameter values / Searches all predictors for the cut-off that best separates the outcomes. Subsequent splits are made in a branching structure until no further split can separate the outcomes.
2. Bootstrap Aggregated Trees / “treebag” / 300 bootstrap samples / Several classification trees are built on bootstrapped samples of the data. Each tree votes for the outcome, and the majority vote becomes the final prediction.
3. Bayesian Generalized Linear Model / “bayesglm” / N/A / Bayesian inference on a generalized linear model.
4. Partial Least Squares / “pls” / 1-100 components / Regression based on projecting the data onto a new, lower-dimensional component space.
5. k-Nearest Neighbours / “knn” / k = 1-200 neighbours / A new data point is assigned the majority outcome of its k nearest neighbours in predictor space.
6. Boosted Logistic Regression / “LogitBoost” / 100 boosting iterations / Logistic regression with subsequent boosting iterations that give greater weight to data points that were hard to predict.
7. Boosted Generalized Additive Model / “gamboost” / mstop 50 to 1000 in steps of 50; prune and no prune / Generalized additive model with subsequent boosting iterations that give greater weight to data points that were hard to predict.
8. High-Dimensional Discriminant Analysis / “hdda” / 50 threshold values / Discriminant analysis that models each outcome class in its own reduced-dimension subspace.
9. Random Forest / “rf” / 1500 trees; random predictor samples (mtry) from 1-200 in steps of 8 / Classification trees are built from different resamples and random predictor subsets. The majority vote across trees gives the final outcome.
10. C5.0 / “C5.0” / 1-20 boosting trials; rule-based and tree-based models; winnowing and no winnowing / Classification tree models that allow class weights.
11. Conditional Inference Tree / “ctree” / 100 values of mincriterion / Classification trees in which splits are chosen by significance tests, reducing bias towards predictors with many possible splits and accommodating missing values.
12. Logistic Model Trees / “LMT” / N/A / Classification tree in which the terminal leaves are logistic regression models.
13. Stochastic Gradient Boosting / “gbm” / Shrinkage 1, 0.01, 0.001; 1500 boosting iterations; interaction depth 1-10 / Tree-based boosting algorithm in which hard-to-predict data points are given greater weight in subsequent iterations.
14. Quadratic Discriminant Analysis / “stepQDA” / N/A / Discrimination of outcomes by a quadratic function, with stepwise predictor selection.
15. Linear Discriminant Analysis / “stepLDA” / N/A / Discrimination of outcomes by a linear function, with stepwise predictor selection.
16. Bagged Flexible Discriminant Analysis / “bagFDA” / Degree 1-10; nprune 2-10 / Discrimination of outcomes by flexible (spline-based) functions, aggregated over bootstrapped samples.
17. Bagged Multivariate Adaptive Regression Splines / “bagEarth” / Degree 1-10; nprune 2-10 / Discrimination from multiple adaptive splines fitted within each predictor, aggregated over bootstrapped samples.
18. Nearest Shrunken Centroids / “pam” / 50 threshold values / Constructs a shrunken centroid for each class from the predictors and assigns a new data point to the class of its nearest centroid.
19. Support Vector Machines with Radial Weights / “svmRadialWeights” / 8 tuning steps over sigma and C / A support vector machine chooses the boundary that separates the outcomes with the greatest possible margin; kernels may be linear, quadratic, or radial. Here a radial basis kernel with class weights is used.
20. Neural Network / “nnet” / 1-10 hidden units; 1500 maximum iterations; decay 0.001, 0.01, 1, 5, 10, 20, 40 / The predictors form a visible input layer from which a hidden layer of units is constructed; each hidden unit acts, in a sense, as a separate logistic regression that models the inputs with its own weights.
21. Neural Network with Feature Extraction / “pcaNNet” / Size 1-10; decay 0.001, 0.01, 1, 5, 10, 20, 40 / A neural network model with built-in feature extraction (the predictors are first reduced by principal component analysis).
22. eXtreme Gradient Boosting / “xgbTree” / 20 tuning steps over nrounds, max_depth, eta, and gamma, selected by the caret training package / Tree-based boosting algorithm in which successive trees are fitted to correct outcomes that were hard to predict in previous iterations.
23. Conditional Inference Random Forest / “cforest” / Random predictor samples (mtry) from 1-200 in steps of 8 / A random forest ensemble of conditional inference trees, intended to minimize variable-selection bias.
24. Adaptive Boosting / “AdaBoost.M1” / 20 tuning steps over mfinal, maxdepth, and coeflearn / Tree-based ensemble algorithm using adaptive boosting.
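
Illustrative caret::train() sketch. The code below is a minimal, hypothetical illustration (not the study's actual code) of how a few of the tuning grids listed above could be specified with the caret package. The simulated data set (x, y), the 10-fold cross-validation control object, the step size of 100 for the gbm iteration grid, and n.minobsinnode = 10 are assumptions made only for this example.

# Required packages for these examples: caret, rpart, gbm, nnet
library(caret)

# Hypothetical example data: a binary outcome y and ten numeric predictors in x
set.seed(1)
x <- data.frame(matrix(rnorm(500 * 10), ncol = 10))
y <- factor(sample(c("case", "control"), 500, replace = TRUE))

# Assumed resampling scheme; the study's own scheme may differ
ctrl <- trainControl(method = "cv", number = 10)

# Row 1 - classification tree: 100 complexity-parameter (cp) values,
# generated automatically by caret via tuneLength
fit_rpart <- train(x, y, method = "rpart", tuneLength = 100, trControl = ctrl)

# Row 5 - k-nearest neighbours: k = 1-200
fit_knn <- train(x, y, method = "knn",
                 tuneGrid = data.frame(k = 1:200), trControl = ctrl)

# Row 13 - stochastic gradient boosting: shrinkage 1, 0.01, 0.001;
# up to 1500 iterations; interaction depth 1-10
gbm_grid <- expand.grid(shrinkage         = c(1, 0.01, 0.001),
                        n.trees           = seq(100, 1500, by = 100),  # assumed step
                        interaction.depth = 1:10,
                        n.minobsinnode    = 10)                        # assumed default
fit_gbm <- train(x, y, method = "gbm", tuneGrid = gbm_grid,
                 trControl = ctrl, verbose = FALSE)

# Row 20 - neural network: 1-10 hidden units, listed decay values;
# maxit = 1500 is passed through to nnet()
nnet_grid <- expand.grid(size  = 1:10,
                         decay = c(0.001, 0.01, 1, 5, 10, 20, 40))
fit_nnet <- train(x, y, method = "nnet", tuneGrid = nnet_grid,
                  trControl = ctrl, maxit = 1500, trace = FALSE)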
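Two interfaces appear in this sketch: tuneLength lets caret generate a grid of the requested size automatically (useful where the table specifies only a count of values, such as 100 complexity parameters or 50 threshold values), whereas an explicit grid built with expand.grid() or data.frame() reproduces the exact values listed (such as the decay sequence for the neural network). In both cases train() evaluates every candidate combination by resampling and retains the best-performing one.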