Variational Autoencoder

In this chapter, we introduce generative models. We focus specifically on the Variational Autoencoder (VAE) family, which uses the same set of tools introduced in Chap. 3, but with a markedly different objective in mind: here, we are interested in modeling the process that generates the observed data. This enables us to simulate new data, build world models, understand the underlying generative factors, and learn with little to no supervision. Throughout the chapter we progressively build the rationale behind the vanilla VAE, laying the foundation for understanding the shortcomings that later extensions, such as the Conditional VAE, the β-VAE, and the Categorical VAE, try to overcome. Moreover, we provide training and sample-generation experiments with VAEs on two image data sets and finish with an illustrative example of semi-supervised learning with VAEs.
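As a brief pointer to the objective the chapter builds toward, the vanilla VAE is trained by maximizing the evidence lower bound (ELBO) on the data log-likelihood; the notation below (encoder $q_\phi(z \mid x)$, decoder $p_\theta(x \mid z)$, prior $p(z)$) is the standard one and is assumed here rather than taken from the chapter itself:

\[
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;-\; D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right).
\]

The first term rewards faithful reconstruction of the observed data, while the KL term keeps the approximate posterior close to the prior, which is what allows new samples to be drawn by decoding $z \sim p(z)$.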
In: Variational Methods for Machine Learning with Applications to Deep Networks, Chapter 5, pp. 111-149. Springer, 2021.