Chapter 12 explained that learning models can be divided into discriminative and generative models. The Variational Autoencoder (VAE), introduced in this chapter, is a generative model. Variational inference is a technique that finds a lower bound on the log-likelihood of the data and maximizes this lower bound rather than the log-likelihood itself, as is done in Maximum Likelihood Estimation (MLE) (see Chap. 12). This lower bound is usually referred to as the Evidence Lower Bound (ELBO). Learning the parameters of the latent space can be done using Expectation Maximization (EM), as in factor analysis (see Chap. 12). The VAE implements variational inference in an autoencoder neural network setup, where the encoder and decoder model the E-step (expectation step) and M-step (maximization step) of EM, respectively. In practice, however, the VAE is usually trained using backpropagation. Variational inference and the VAE appear in many Bayesian analysis applications. For example, variational inference has been used in 3D human motion analysis, and the VAE has been used in forecasting.
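As an illustration of the training setup described above (not the chapter's own reference code), the following minimal PyTorch sketch assumes a Gaussian encoder q_phi(z|x), a Bernoulli decoder p_theta(x|z), and training by backpropagation on the negative ELBO; all layer sizes and names are hypothetical.

# Minimal VAE sketch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        # Encoder: maps x to the mean and log-variance of q_phi(z|x)
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder: maps a latent sample z back to parameters of p_theta(x|z)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def negative_elbo(x_hat, x, mu, logvar):
    # Reconstruction term (expected log-likelihood) plus KL(q_phi(z|x) || N(0, I))
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage: one gradient step on a batch of flattened inputs in [0, 1]
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)  # placeholder batch; real data would go here
x_hat, mu, logvar = model(x)
loss = negative_elbo(x_hat, x, mu, logvar)
opt.zero_grad()
loss.backward()
opt.step()

Minimizing this negative ELBO by gradient descent is equivalent to maximizing the lower bound on the log-likelihood mentioned in the abstract.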
Variational Autoencoders
Elements of Dimensionality Reduction and Manifold Learning ; Chapter: 20 ; pp. 563-576
08.08.2022
14 pages
Article/Chapter (Book)
Electronic resource
English