KL Divergence Between Gaussians: A Step-by-Step Derivation for the Variational Autoencoder Objective
Andrés Muñoz, Rodrigo Ramele
Abstract: Kullback-Leibler (KL) divergence is a fundamental concept in information theory that quantifies the discrepancy between two probability distributions. In the context of Variational Autoencoders (VAEs), it serves as a central regularization term, imposing structure on the latent space and thereby endowing the model with generative capabilities. In this work, we present a detailed derivation of the closed-form expression for the KL divergence between Gaussian distributions, a case of particular importance in practical VAE implementations. Starting from the general definition for continuous random variables, we derive the expression for the univariate case and extend it to the multivariate setting under the assumption of diagonal covariance. Finally, we discuss the interpretation of each term in the resulting expression and its impact on the training dynamics of the model.
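As a point of reference for the derivation that follows, the sketch below shows how the closed-form expression for the multivariate diagonal-covariance case is typically evaluated in practice as the VAE regularization term. It assumes a PyTorch setting with the common log-variance parameterization of the encoder output; the function name and tensor shapes are illustrative choices, not prescribed by this paper.

```python
import torch

def kl_diag_gaussian_to_standard_normal(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ).

    mu, logvar: tensors of shape (batch, latent_dim), the encoder's
    predicted mean and log-variance for each latent dimension.
    Returns the KL divergence summed over latent dimensions and
    averaged over the batch.
    """
    # Per-sample closed form: 0.5 * sum_j (sigma_j^2 + mu_j^2 - 1 - log sigma_j^2)
    kl_per_sample = 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar, dim=-1)
    return kl_per_sample.mean()
```

Note that the expression vanishes exactly when mu = 0 and logvar = 0 (i.e., unit variance), consistent with the KL divergence being zero when the approximate posterior matches the standard normal prior.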