Theme: "Simplicity Matters? The Case of Non-parametric Models"
Date & Time: Thursday, 7 Jan., 1:30-3:30 p.m.
Venue: Meeting Room, Institute of Philosophy of Mind and Cognition, NYMU
About the theme
In this presentation, I claim that influential arguments concerning the importance of parametric simplicity for model selection have been biased by their focus on parametric models (Forster & Sober 1994; Forster 2001; Hitchcock & Sober 2004). Such a focus leads us to believe that there is a fundamental trade-off between parametric simplicity and goodness of fit. But no such trade-off arises when we select non-parametric models, such as KNN (k-nearest-neighbour) regression models. We can increase the fit of a KNN model while keeping its number of adjustable parameters at 1.
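The point about KNN can be illustrated with a minimal sketch (the data and function names below are hypothetical, not taken from the talk): a KNN regressor has a single adjustable parameter, k, and setting k = 1 already reproduces every training point exactly, so fit improves without any increase in parametric complexity.

```python
def knn_predict(x, xs, ys, k):
    """Predict y at x by averaging the y-values of the k nearest training points."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x))[:k]
    return sum(ys[i] for i in nearest) / k

# toy training data (hypothetical)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.2, 2.8, 4.1]

# with k = 1 the model reproduces every training point exactly...
fit_k1 = [knn_predict(x, xs, ys, 1) for x in xs]
# ...while the number of adjustable parameters (just k) remains 1
fit_k3 = [knn_predict(x, xs, ys, 3) for x in xs]
```

Here `fit_k1` equals the training targets exactly, whereas `fit_k3` smooths them out; in neither case does the count of adjustable parameters change.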
This leads me to point out that the important trade-off made when we select any kind of model is between the bias and the variance of an estimator for a dependent variable. Consequently, I explain why a selection criterion favoured by the proponents of parametric simplicity, the AIC (Akaike Information Criterion), is not optimal in every scenario for telling whether we have made a reasonable bias/variance trade-off. A selection criterion based on cross-validation is more appropriate.
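How cross-validation manages the bias/variance trade-off for a KNN model can be sketched as follows (a minimal illustration with hypothetical data, not the speaker's own code): leave-one-out cross-validation scores each candidate k by how well the model predicts each point when that point is held out of the training set, and the k with the lowest held-out error is chosen.

```python
def knn_predict(x, xs, ys, k):
    """Predict y at x by averaging the y-values of the k nearest training points."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x))[:k]
    return sum(ys[i] for i in nearest) / k

def loo_cv_error(xs, ys, k):
    """Leave-one-out cross-validation mean squared error for a given k."""
    total = 0.0
    for j in range(len(xs)):
        rest_x = xs[:j] + xs[j + 1:]   # training set with point j held out
        rest_y = ys[:j] + ys[j + 1:]
        total += (knn_predict(xs[j], rest_x, rest_y, k) - ys[j]) ** 2
    return total / len(xs)

# hypothetical noisy observations around y = x
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [0.2, 0.4, 1.1, 1.4, 2.3, 2.4, 3.2, 3.4, 4.1]

scores = {k: loo_cv_error(xs, ys, k) for k in (1, 2, 3, 4)}
best_k = min(scores, key=scores.get)
```

Small k gives low bias but high variance, large k the reverse; the held-out error penalizes both, which is why cross-validation directly targets the trade-off the talk emphasizes.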
Forster, M. (2001), "The New Science of Simplicity", in A. Zellner, H. Keuzenkamp, and M. McAleer (eds.), Simplicity, Inference and Modelling, pp. 83-119. Cambridge: Cambridge University Press.
Forster, M. and Sober, E. (1994), "How to Tell When Simpler, More Unified, or Less Ad Hoc Theories will Provide More Accurate Predictions", The British Journal for the Philosophy of Science, 45: 1-35.
Hitchcock, C. and Sober, E. (2004), "Prediction Versus Accommodation and the Risk of Overfitting", The British Journal for the Philosophy of Science, 55: 1-34.
About the speaker
Guillaume Rochefort-Maranda, Laval University, Québec, Canada