
Douglas Bates, Department of Statistics, University of Wisconsin - Madison

Presentation (slides version):

Presentation (notes version):

The use of Markov chain Monte Carlo methods for Bayesian inference has increased awareness of the need to examine the entire posterior distribution of a parameter (in the Bayesian sense), or the distribution of the parameter estimator for those who prefer non-Bayesian techniques. I will concentrate on non-Bayesian inference, although the techniques can also be applied to the posterior density in Bayesian methods. For many statistical models, including linear and generalized linear mixed-effects models, parameter estimates are defined as the optimizer of an objective function; for example, the MLEs maximize the log-likelihood. Inference is then based on the location of the optimizer and a local approximation at the optimizer, without assessing the validity of that approximation. This made sense when fitting a single model could mean waiting days for answers from a shared computer system; it does not make sense when models can be fit in a few seconds. By repeatedly fitting a model with a particular parameter held fixed, we can build up a profile of the objective function with respect to that parameter and use this information to produce profile-based confidence intervals. Perhaps the most important aspect of the technique, however, is the graphical presentation of the results, which forces us to consider the behavior of the estimator beyond the point estimate, and which can cast doubt on many of the principles of inference and simulation that we hold dear.
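The refitting idea can be sketched in Python for the simplest possible case, the mean of a normal sample, where the profile has a closed form to check against. This is a minimal illustration of the technique, not code from the talk; the function names, grid choices, and data below are all assumptions made for the sketch.

```python
import numpy as np

def profile_loglik(x, mu):
    """Log-likelihood at fixed mu, maximized over sigma^2.

    With mu held fixed, the MLE of sigma^2 for a normal sample is the
    mean squared deviation about mu, so the 'refit' is available in
    closed form here; for a mixed model it would be a full optimization.
    """
    n = len(x)
    s2 = np.mean((x - mu) ** 2)
    return -0.5 * n * (np.log(2.0 * np.pi * s2) + 1.0)

def profile_ci(x, z=1.96):
    """Profile-based confidence interval for mu.

    Builds the signed square root of the likelihood-ratio statistic
    (the 'zeta' scale used in profile plots) over a grid of fixed mu
    values, then inverts it at -z and +z by linear interpolation.
    """
    n = len(x)
    mu_hat = x.mean()
    se = x.std() / np.sqrt(n)  # MLE-scale standard error (ddof=0)
    l_max = profile_loglik(x, mu_hat)
    # "Refit" the model over a grid of fixed mu values around the estimate.
    mus = np.linspace(mu_hat - 4.0 * se, mu_hat + 4.0 * se, 801)
    dev = 2.0 * (l_max - np.array([profile_loglik(x, m) for m in mus]))
    zeta = np.sign(mus - mu_hat) * np.sqrt(np.maximum(dev, 0.0))
    # zeta is monotone increasing in mu, so np.interp can invert it.
    return np.interp(-z, zeta, mus), np.interp(z, zeta, mus)

rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, size=50)  # simulated data for the sketch
lo, hi = profile_ci(x)
```

Plotting zeta against the parameter is the graphical check the abstract refers to: a straight line means the usual local (Wald) approximation is adequate, while curvature or asymmetry shows where it fails.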