I passed my examination on June 17th, 2003. My examiners were Peter Sollich and David MacKay.
The whole* thesis can be downloaded:
- Beal, M.J. (2003)
Variational Algorithms for Approximate Bayesian Inference
Ph.D. thesis, Gatsby Computational Neuroscience Unit, University College London.
[pdf 3.9M] [ps.gz 3.7M] (281 pages)
Or you might want to download individual chapters*:
Section | Pages | Downloads
Front material & Contents | 12 | [pdf 164k] [ps.gz 141k]
Ch 1: Introduction | 31 | [pdf 410k] [ps.gz 335k]
Ch 2: Variational Bayesian Theory | 38 | [pdf 491k] [ps.gz 379k]
Ch 3: Variational Bayesian Hidden Markov Models | 24 | [pdf 529k] [ps.gz 652k]
Ch 4: Variational Bayesian Mixture of Factor Analysers | 53 | [pdf 980k] [ps.gz 906k]
Ch 5: Variational Bayesian Linear Dynamical Systems | 47 | [pdf 1.1M] [ps.gz 1.3M]
Ch 6: Learning the structure of discrete-variable graphical models with hidden variables | 44 | [pdf 749k] [ps.gz 689k]
Ch 7: Conclusion | 9 | [pdf 164k] [ps.gz 153k]
Appendices | 11 | [pdf 215k] [ps.gz 198k]
Bibliography | 12 | [pdf 137k] [ps.gz 78k]
*Hyperlinks across sections, equations, and citations (which back-reference to the text) are functional only in the whole-thesis pdf, when viewed in a hyperlink-enabled viewer such as Adobe Acrobat Reader.
Abstract
The Bayesian framework for machine learning allows for the incorporation of prior knowledge in a coherent way, avoids overfitting problems, and provides a principled basis for selecting between alternative models. Unfortunately the computations required are usually intractable. This thesis presents a unified variational Bayesian (VB) framework which approximates these computations in models with latent variables using a lower bound on the marginal likelihood.
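For readers skimming this page, the bound referred to above can be sketched as follows, in notation loosely following the thesis (y the observed data, x the hidden variables, θ the parameters, m the model); the bound holds for any distribution q, and the factorised form of q is the VB approximation:

```latex
\ln p(\mathbf{y}\mid m)
  \;\geq\; \mathcal{F}(q)
  \;=\; \int q(\mathbf{x},\boldsymbol{\theta})\,
        \ln\frac{p(\mathbf{y},\mathbf{x},\boldsymbol{\theta}\mid m)}
                {q(\mathbf{x},\boldsymbol{\theta})}\,
        d\mathbf{x}\,d\boldsymbol{\theta},
\qquad
q(\mathbf{x},\boldsymbol{\theta}) \approx q_{\mathbf{x}}(\mathbf{x})\,q_{\boldsymbol{\theta}}(\boldsymbol{\theta}).
```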
Chapter 1 presents background material on Bayesian inference, graphical models, and propagation algorithms. Chapter 2 forms the theoretical core of the thesis, generalising the expectation-maximisation (EM) algorithm for learning maximum likelihood parameters to the VB EM algorithm which integrates over model parameters. The algorithm is then specialised to the large family of conjugate-exponential (CE) graphical models, and several theorems are presented to pave the road for automated VB derivation procedures in both directed and undirected graphs (Bayesian and Markov networks, respectively).
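As a quick sketch of the generic form of this coordinate ascent on the lower bound F (before the conjugate-exponential simplifications, which roughly reduce these to modified E and M steps of ordinary EM), the two alternating updates are:

```latex
\begin{aligned}
\text{VBE step:}\quad &
q_{\mathbf{x}}^{(t+1)}(\mathbf{x}) \;\propto\;
  \exp\!\left[\int q_{\boldsymbol{\theta}}^{(t)}(\boldsymbol{\theta})\,
  \ln p(\mathbf{x},\mathbf{y}\mid\boldsymbol{\theta},m)\, d\boldsymbol{\theta}\right] \\
\text{VBM step:}\quad &
q_{\boldsymbol{\theta}}^{(t+1)}(\boldsymbol{\theta}) \;\propto\;
  p(\boldsymbol{\theta}\mid m)\,
  \exp\!\left[\int q_{\mathbf{x}}^{(t+1)}(\mathbf{x})\,
  \ln p(\mathbf{x},\mathbf{y}\mid\boldsymbol{\theta},m)\, d\mathbf{x}\right]
\end{aligned}
```

Each pair of updates increases F, so the bound itself can be monitored for convergence.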
Chapters 3-5 derive and apply the VB EM algorithm to three commonly-used and important models: hidden Markov models, mixtures of factor analysers, and linear dynamical systems. It is shown how model selection tasks such as determining the dimensionality, cardinality, or number of variables are possible using VB approximations. Also explored are methods for combining sampling procedures with variational approximations, to estimate the tightness of VB bounds and to obtain more effective sampling algorithms. Chapter 6 applies VB learning to a long-standing problem of scoring discrete-variable directed acyclic graphs, and compares the performance to annealed importance sampling amongst other methods. Throughout, the VB approximation is compared to other methods including sampling, Cheeseman-Stutz, and asymptotic approximations such as BIC. The thesis concludes with a discussion of evolving directions for model selection including infinite models and alternative approximations to the marginal likelihood.
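As a rough illustration of how the bound is used for model selection, the sketch below compares converged lower bounds across candidate model structures and keeps the best. This is hypothetical Python, not code from the thesis; `run_vb_em` stands in for a model-specific VB EM routine that returns the converged bound F.

```python
# Hypothetical sketch: choose among candidate model structures (e.g. numbers
# of mixture components or state-space dimensionalities) by running VB EM on
# each and comparing the converged lower bounds F on the log marginal
# likelihood. `run_vb_em` is a stand-in, not code from the thesis.

def select_model(data, candidates, run_vb_em, max_iters=200, tol=1e-6):
    """Return the candidate with the highest VB lower bound, and that bound."""
    best, best_F = None, float("-inf")
    for model in candidates:
        F = run_vb_em(data, model, max_iters=max_iters, tol=tol)  # converged F
        if F > best_F:
            best, best_F = model, F
    return best, best_F
```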