Tree-based methods for statistical learning in R

Author(s)

    • Greenwell, Brandon M.

Bibliographic Information

Tree-based methods for statistical learning in R

Brandon M. Greenwell

(Chapman & Hall/CRC data science series)(A Chapman & Hall book)

CRC Press, 2022

1st ed

  • hbk

Available at 2 libraries


Note

Includes bibliographical references (p. 359-380) and index

Description and Table of Contents

Description

  • Thorough coverage, from the ground up, of tree-based methods (e.g., CART, conditional inference trees, bagging, boosting, and random forests).
  • A companion website containing additional supplementary material and the code to reproduce every example and figure in the book.
  • A companion R package, called treemisc, which contains several data sets and functions used throughout the book (e.g., an implementation of gradient tree boosting with LAD loss that shows how to perform the line search step by updating the terminal node estimates of a fitted rpart tree).
  • Interesting examples of practical use; for example, how to construct partial dependence plots from a fitted model in Spark MLlib (using only Spark operations), or post-processing tree ensembles via the LASSO to reduce the number of trees while maintaining, or even improving, performance.
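The terminal-node update mentioned in the description (the line search step under LAD loss) can be sketched in a few lines of R. This is an illustrative single-tree sketch under assumed toy data, not code from the book or from treemisc: for absolute-error loss, the optimal constant in each leaf is the median of the responses falling in that leaf, so we overwrite the mean-based `yval` entries of a fitted rpart tree with per-leaf medians.

```r
library(rpart)

set.seed(101)
# Toy data (illustrative assumption): response with heavy-tailed noise,
# the setting where LAD loss is preferable to squared error
df <- data.frame(x = runif(200))
df$y <- 2 * df$x + rexp(200) - rexp(200)

# Fit an ordinary regression tree; its leaf values are node means
fit <- rpart(y ~ x, data = df, method = "anova")

# LAD "line search": replace each terminal node's mean with the median
# of the responses routed to that node. fit$where maps each training
# observation to its row in fit$frame (the tree's node table).
med <- tapply(df$y, fit$where, median)
fit$frame$yval[as.integer(names(med))] <- med

# Predictions now return the updated (median) leaf values
p <- predict(fit, newdata = df)
```

In a full gradient boosting loop the same trick would be applied to a tree grown on the current residuals at each iteration; here it is shown once, on a single tree, to keep the sketch self-contained.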

Table of Contents

1 Introduction
2 Binary recursive partitioning with CART
3 Conditional inference trees
4 "The hitchhiker's GUIDE to modern decision trees"
5 Ensemble algorithms
6 Peeking inside the "black box": post-hoc interpretability
7 Random forests
8 Gradient boosting machines

by "Nielsen BookData"


