Measuring the Stability of Results from Supervised Statistical Learning

Philipp, Michel and Rusch, Thomas and Hornik, Kurt ORCID: https://orcid.org/0000-0003-4198-9911 and Strobl, Carolin (2017) Measuring the Stability of Results from Supervised Statistical Learning. Research Report Series / Department of Statistics and Mathematics, 131. WU Vienna University of Economics and Business, Vienna.

Full text: Report131.pdf (607 kB)

Abstract

Stability is a major requirement for drawing reliable conclusions when interpreting results from supervised statistical learning. In this paper, we present a general framework for assessing and comparing the stability of results that can be used in real-world statistical learning applications or in benchmark studies. We use the framework to show that stability is a property of both the algorithm and the data-generating process. In particular, we demonstrate that unstable algorithms (such as recursive partitioning) can produce stable results when the functional form of the relationship between the predictors and the response matches the algorithm. Typical uses of the framework in practice would be to compare the stability of results generated by different candidate algorithms for a data set at hand, or to assess the stability of algorithms in a benchmark study. Code to perform the stability analyses is provided in the form of an R package.

Item Type: Paper
Keywords: Resampling, Recursive Partitioning, R-package stablelearner
Divisions: Departments > Finance, Accounting and Statistics > Statistics and Mathematics
Depositing User: Josef Leydold
Date Deposited: 25 Jan 2017 13:57
Last Modified: 24 Oct 2019 13:41
FIDES Link: https://bach.wu.ac.at/d/research/results/80038/
URI: https://epub.wu.ac.at/id/eprint/5398

