Deep generative models have had great success in modeling distributions over observed variables, with applications including classification, machine translation, speech recognition, and image synthesis. However, learning to construct interpretable and compact latent representations of the world without ground-truth data remains challenging. This lecture introduces the main approaches for learning latent variables with deep models and shows how deep latent variable models have been applied in practice. Finally, it discusses how latent variables could be used to decouple learning in large deep learning tasks.