Hyper-parameter Tuning of a Decision Tree Induction Algorithm

Rafael G. Mantovani, Tomás Horváth, Ricardo Cerri, Joaquin Vanschoren, André C. P. L. F. de Carvalho

Abstract

Supervised classification is the most studied task in Machine Learning. Among the many algorithms used for this task, Decision Tree algorithms are a popular choice, since they are robust and efficient to construct. Moreover, they have the advantage of producing comprehensible models and satisfactory accuracy levels in several application domains. Like most Machine Learning methods, these algorithms have hyper-parameters whose values directly affect the performance of the induced models. Due to the high number of possible hyper-parameter values, several studies use optimization techniques to find a good set of solutions and thereby produce classifiers with good predictive performance. This study investigates how sensitive decision trees are to a hyper-parameter optimization process. Four different tuning techniques were explored to adjust the hyper-parameters of the J48 Decision Tree algorithm. In total, experiments using 102 heterogeneous datasets analyzed the effect of tuning on the induced models. The experimental results show that, although the average improvement over all datasets is low, in most cases the improvement is statistically significant.
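The tuning process the abstract describes can be sketched in code. A minimal illustrative example follows, with assumptions: the paper tunes Weka's J48 (a C4.5 implementation) with four tuning techniques, whereas this sketch uses scikit-learn's DecisionTreeClassifier as a stand-in and a single technique (random search), over a hyper-parameter space loosely analogous to J48's options; it is not the paper's actual experimental setup.

```python
# Illustrative sketch only: scikit-learn's DecisionTreeClassifier stands in
# for Weka's J48, and random search stands in for the paper's four tuning
# techniques. The search space below is an assumption, loosely analogous to
# J48 options such as minimum instances per leaf.
from scipy.stats import randint, uniform
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

param_dist = {
    "min_samples_leaf": randint(1, 50),          # akin to J48's -M option
    "min_impurity_decrease": uniform(0.0, 0.01), # crude pruning analogue
    "criterion": ["gini", "entropy"],
    "max_depth": randint(2, 30),
}

search = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_distributions=param_dist,
    n_iter=50,   # budget of sampled hyper-parameter configurations
    cv=5,        # cross-validated estimate of predictive performance
    random_state=0,
)
search.fit(X, y)

print("best CV accuracy after tuning:", round(search.best_score_, 3))
print("best hyper-parameters:", search.best_params_)
```

Comparing `search.best_score_` against the cross-validated score of a tree with default hyper-parameters mirrors, in miniature, the paper's question of how much tuning actually improves the induced model.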

Article Details

How to Cite
MANTOVANI, Rafael G. et al. Hyper-parameter Tuning of a Decision Tree Induction Algorithm. BRACIS, [S.l.], July 2017. Available at: <http://143.54.25.88/index.php/bracis/article/view/85>. Date accessed: 19 Sep. 2024. doi: https://doi.org/10.1235/bracis.vi.85.
Section
Articles