## Tutorial: Bayesian computing with INLA: An introduction

**Håvard Rue**, Department of Mathematical Sciences,
Norwegian University of Science and Technology, Trondheim, Norway

### Abstract

In this short course, I will discuss approximate Bayesian
inference for a class of models named 'latent Gaussian models'
(LGMs). LGMs are perhaps the most commonly used class of models in
statistical applications. The class includes, among others, most
(generalised) linear models, (generalised) additive models,
smoothing spline models, state space models, semiparametric
regression, spatial and spatiotemporal models, log-Gaussian Cox
processes, and geostatistical and geoadditive models.
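What these models have in common is a three-stage hierarchical structure; the following is the standard formulation from the LGM literature, with $\eta_i$ denoting the linear predictor (a component of the latent field $x$):

```latex
% Standard three-stage formulation of a latent Gaussian model:
% observations y, latent Gaussian field x, hyperparameters \theta.
\begin{align*}
  y \mid x, \theta &\sim \prod_i \pi(y_i \mid \eta_i, \theta)
    && \text{(conditionally independent likelihood)} \\
  x \mid \theta &\sim \mathcal{N}\!\big(0,\, Q(\theta)^{-1}\big)
    && \text{(latent Gaussian field, precision } Q(\theta)\text{)} \\
  \theta &\sim \pi(\theta)
    && \text{(hyperparameter prior)}
\end{align*}
```

The model classes listed above differ only in how the linear predictor and likelihood are built, which is why a single inference scheme can cover them all.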

The concept of an LGM is intended for the modelling stage, but it
turns out to be extremely useful when doing inference, as we can
treat all the models listed above in a unified way, using the
*same* algorithm and software tool. Our approach to (approximate)
Bayesian inference is to use integrated nested Laplace
approximations (INLA). Using this tool, we can directly compute
very accurate approximations to the posterior marginals. The main
benefit of these approximations is computational: where Markov
chain Monte Carlo algorithms need hours or days to run, our
approximations provide more precise estimates in seconds or
minutes. Another advantage of our approach is its generality,
which makes it possible to perform Bayesian analysis in an
automatic, streamlined way, and to compute model comparison
criteria and various predictive measures, so that models can be
compared and the model under study can be challenged.
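The 'nested' structure behind these approximations can be sketched as follows: the posterior marginal of each latent component is obtained by numerically integrating Laplace-type approximations over the (low-dimensional) hyperparameter space:

```latex
% Nested approximation scheme underlying INLA: the marginal of each
% latent component x_i is built by integrating over \theta.
\begin{equation*}
  \tilde{\pi}(x_i \mid y)
    = \int \tilde{\pi}(x_i \mid \theta, y)\,
           \tilde{\pi}(\theta \mid y)\, d\theta
    \approx \sum_k \tilde{\pi}(x_i \mid \theta_k, y)\,
           \tilde{\pi}(\theta_k \mid y)\, \Delta_k ,
\end{equation*}
```

where $\tilde{\pi}(\theta \mid y)$ is itself a Laplace approximation, obtained by replacing $\pi(x \mid \theta, y)$ with a Gaussian approximation evaluated at its conditional mode. Because $\theta$ is typically low-dimensional, the sum over the integration points $\theta_k$ is cheap, which is the source of the speed advantage over MCMC.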

In this short course I will introduce the background needed to
understand LGMs and INLA: why it works and why it is fast. I will
end the lectures by illustrating INLA on some examples in
`R`. Please visit www.r-inla.org to download the
package and for further documentation.
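As a taste of the examples to come, a minimal R-INLA call has the following shape; the data frame `df` and its columns are hypothetical placeholders, not data from the lectures:

```r
# Minimal sketch of fitting an LGM with R-INLA. The data frame 'df'
# and its columns 'y' and 'x' are hypothetical placeholders.
library(INLA)  # install from www.r-inla.org

df <- data.frame(x = 1:100)
df$y <- 0.5 * df$x + rnorm(100)

# A simple Gaussian linear model; changing 'family', or adding f()
# terms to the formula, gives the other model classes listed above.
result <- inla(y ~ x, family = "gaussian", data = df)

# Posterior marginals and summary statistics are returned directly,
# with no sampling involved.
summary(result)
```

The same `inla()` call, with a different formula and family, covers everything from generalised additive models to spatial models, which is the "unified way" the abstract refers to.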

### Intended Audience

Researchers, students and professionals interested in Bayesian data analysis.

### Related Links

www.r-inla.org