Event Date:
Event Location:
- HSSB 1173
Event Price:
FREE
Event Contact:
Professor Siddhartha Chib
Professor of Econometrics and Statistics
Washington University in St. Louis
Related Link:
With advances in statistical and machine learning methodology, and the availability of vast stores of data, models of enormous complexity can be fit to data. Instead of just fitting one sparse or dense model, it is desirable to fit many models, perhaps thousands or millions, and to ask which of these (likely misspecified) models is the best. In this talk, we review the possibilities for conducting model inference from the Bayesian viewpoint, based on model marginal likelihoods, which offer a principled framework for comparing (and ranking) misspecified parametric and nonparametric models while penalizing complexity. The marginal likelihood identity introduced in Chib (1995) forms the basis of the discussion. It has proved invaluable in establishing the large-sample consistency of model marginal likelihoods and in computing marginal likelihoods from the MCMC output of models with high-dimensional parameters. We also touch on the question of formulating an orbit around the best model, consisting of models that are close to it, a sort of credibility interval on model space. Open avenues for research include the development of theory for comparing models whose number of parameters grows with the sample size, and computational frameworks (such as massively parallel or quantum computing) for dealing with exponentially large model spaces. Some non-standard examples are presented in which a multitude of competing models are scanned using carefully crafted model-specific priors on parameters.
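For readers unfamiliar with the reference, a brief sketch of the identity (written in our own notation, not necessarily the form used in the talk): Chib (1995) rearranges Bayes' theorem so that, for any parameter value $\theta^{*}$,

$$
m(y) \;=\; \frac{f(y \mid \theta^{*})\,\pi(\theta^{*})}{\pi(\theta^{*} \mid y)},
\qquad
\log m(y) \;=\; \log f(y \mid \theta^{*}) + \log \pi(\theta^{*}) - \log \pi(\theta^{*} \mid y),
$$

where $m(y)$ is the marginal likelihood, $f(y \mid \theta)$ the sampling density, $\pi(\theta)$ the prior, and $\pi(\theta \mid y)$ the posterior density. In practice $\theta^{*}$ is taken to be a high-posterior-density point, and the posterior ordinate $\pi(\theta^{*} \mid y)$ is estimated from the MCMC output, which is what makes the identity usable for models with high-dimensional parameters.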