I design machine learning algorithms that try to solve some of today's most challenging problems in computer science and statistics.
I adapt ideas from physics and the statistical sciences and use them in algorithms that can be applied to areas such as bioinformatics, artificial intelligence, pattern recognition, document information retrieval, and human-computer interaction.
Click on the following topics to see research descriptions and some papers:
Nonparametric Bayes - powerful nonparametric text/document modelling
Variational Bayesian Methods - approximate Bayesian learning and inference
Bioinformatics - microarray analysis using variational Bayes
Embedded Hidden Markov Models - a novel tool for time series inference
Probabilistic Sensor Fusion - combining modalities using Bayesian graphical models
Collaborators - people I have worked with
Inference using Embedded Hidden Markov Models
Embedded HMMs, invented by Radford Neal, are a new type of inference tool for sequential data that allows many filtering, prediction, and control problems to be tackled in a novel way. They are an elegant generalisation of the particle filtering and smoothing procedures currently used for non-linear systems. Embedded HMMs perform efficient inference in non-linear systems by temporarily embedding a tractable (finite-state) HMM in the non-linear hidden state-space of the model: at each time step a finite pool of candidate states is generated, and exact forward-backward computations over these pools are used to sample a new state trajectory.
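As a rough illustration of the embedding idea, the sketch below runs a simplified embedded-HMM sweep on a toy one-dimensional non-linear state-space model. Everything concrete here is an assumption for illustration: the sine-transition model, the Gaussian pool density centred on each observation, a flat prior on the initial state, and all parameter values. Neal's full construction has additional requirements on how pool states are generated (so that the sweep is a valid MCMC update); this sketch only shows the core mechanic of filtering and backward-sampling over finite pools of embedded states.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-linear model (an illustrative assumption, not from the page):
#   x_t ~ N(sin(x_{t-1}), q),   y_t ~ N(x_t, r)
q, r = 0.1, 0.1

def log_trans(x_prev, x):   # log p(x_t | x_{t-1}), up to a constant
    return -0.5 * (x - np.sin(x_prev)) ** 2 / q

def log_emit(y, x):         # log p(y_t | x_t), up to a constant
    return -0.5 * (y - x) ** 2 / r

def embedded_hmm_sweep(y, x_cur, K=10, pool_sd=2.0):
    """One simplified sweep: embed a K-state HMM at each time step,
    then forward-filter / backward-sample a new trajectory."""
    T = len(y)
    # Pool density kappa_t = N(y_t, pool_sd^2) -- an arbitrary choice.
    pools = rng.normal(y[:, None], pool_sd, size=(T, K))
    # Keep the current state in each pool (needed in the full scheme).
    keep = rng.integers(K, size=T)
    pools[np.arange(T), keep] = x_cur
    log_kappa = -0.5 * (pools - y[:, None]) ** 2 / pool_sd**2

    # Forward pass over the embedded finite-state HMM; the 1/kappa
    # factor corrects for the pool states being drawn from kappa.
    # (Flat prior on x_0 assumed.)
    log_alpha = np.empty((T, K))
    log_alpha[0] = log_emit(y[0], pools[0]) - log_kappa[0]
    for t in range(1, T):
        lt = log_trans(pools[t - 1][:, None], pools[t][None, :])
        m = (log_alpha[t - 1][:, None] + lt).max(axis=0)
        log_alpha[t] = (m + np.log(np.exp(log_alpha[t - 1][:, None]
                                          + lt - m).sum(axis=0))
                        + log_emit(y[t], pools[t]) - log_kappa[t])

    def sample(logp):  # sample a pool index from unnormalised log-probs
        p = np.exp(logp - logp.max())
        return rng.choice(len(p), p=p / p.sum())

    # Backward sampling of a new trajectory through the pool states.
    x_new = np.empty(T)
    idx = sample(log_alpha[-1])
    x_new[-1] = pools[-1, idx]
    for t in range(T - 2, -1, -1):
        idx = sample(log_alpha[t] + log_trans(pools[t], x_new[t + 1]))
        x_new[t] = pools[t, idx]
    return x_new

# Simulate data and run a few sweeps, starting from the observations.
T = 30
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = np.sin(x_true[t - 1]) + rng.normal(0, np.sqrt(q))
y = x_true + rng.normal(0, np.sqrt(r), T)

x = y.copy()
for _ in range(20):
    x = embedded_hmm_sweep(y, x)
```

The key contrast with a particle filter is that each sweep conditions on the whole observation sequence and performs exact dynamic programming over the pool states, rather than propagating weighted samples forward only.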
So far the embedded HMM has been applied to robot localisation, speech analysis, and recovering three-dimensional structure from human motion sequences (ongoing work). In the future we hope to apply it to more general inference tasks in the graphical models framework.