A few weeks ago, Max Welling and I published a preprint on arXiv on Auto-Encoding Variational Bayes.
It’s an efficient, general algorithm for training directed graphical models with continuous latent variables. It also works for very complicated models where the usual learning algorithms are intractable, and it scales easily to huge datasets.
One interesting application is (deep) generative neural networks. In this case, a deep encoder (recognition model) is learned jointly with the deep generative model.
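The key trick that makes the gradients tractable is reparameterization: instead of sampling the latent variable z directly from its posterior, z is written as a deterministic, differentiable function of the variational parameters and an auxiliary noise variable. A minimal NumPy sketch (illustrative only, not the paper's code; the objective f(z) = z² and the parameter values are hypothetical choices for the demo):

```python
import numpy as np

# Reparameterization trick, sketched: instead of sampling z ~ N(mu, sigma^2)
# directly, write z = mu + sigma * eps with eps ~ N(0, 1). The sample is then
# a differentiable function of (mu, sigma), so Monte Carlo estimates of
# gradients w.r.t. the variational parameters can flow through the sampling step.

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.8
eps = rng.standard_normal(100_000)

z = mu + sigma * eps                 # reparameterized samples of z

# Toy objective: f(z) = z^2, whose expectation is mu^2 + sigma^2, so the
# analytic gradient w.r.t. mu is 2*mu. Estimate it by differentiating through
# the reparameterized samples: d f(z)/d mu = 2*z * dz/dmu = 2*z.
grad_mu_estimate = np.mean(2.0 * z)

print(grad_mu_estimate)              # close to the analytic value 2*mu = 3.0
```

With 100,000 samples the Monte Carlo estimate lands very close to the analytic gradient, which is what makes stochastic gradient ascent on the variational lower bound practical.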
arXiv paper: Auto-Encoding Variational Bayes
Researchers at DeepMind have independently developed the same algorithm. They posted their paper a few weeks later: Stochastic Back-propagation and Variational Inference in Deep Latent Gaussian Models