Basic GAN

  • Generative Adversarial Networks
  • Learn a probability distribution directly from samples generated by that distribution
  • No Markov chains or unrolled approximate inference networks are needed during either training or sample generation
  • Adversarial Learning
  • Minimax game over a shared value function, where
  • G : minimizes it (Gradient Descent)
  • D : maximizes it (Gradient Ascent)
  • Discriminator Loss (Given Generator)
  • Generator Loss (Given Discriminator)
    • This is low if the Discriminator is fooled by the Generator
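Both losses come from the single value function of the original GAN formulation (Goodfellow et al., 2014):

```latex
\min_G \max_D \; V(D, G) =
    \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

Given G, the discriminator loss is $-V(D, G)$; given D, the generator loss is $\mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$, which is low exactly when D assigns generated samples a high probability of being real.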

Training

  • Pick a mini-batch of real samples and a mini-batch of generated samples
  • Update the discriminator with gradient descent on the discriminator loss, using the generator from the previous update
  • Update the generator with gradient descent on the generator loss, using the discriminator from the previous step
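The alternating updates above can be sketched on a toy problem. This is an illustrative sketch, not any reference implementation: it assumes 1-D real data from N(4, 1), an affine generator, a logistic-regression discriminator, and hand-derived gradients. It also swaps in the common non-saturating generator loss, -E[log D(G(z))], because the original minimax loss log(1 - D(G(z))) gives vanishing gradients early in training when D confidently rejects fakes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy setup (assumption, not from the notes):
# real data ~ N(4, 1); generator G(z) = a*z + b with z ~ N(0, 1);
# discriminator D(x) = sigmoid(w*x + c), a logistic regression.
a, b = 0.1, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch, steps = 0.05, 128, 3000

for _ in range(steps):
    # --- discriminator step: descend the BCE loss
    #     -E[log D(x_real)] - E[log(1 - D(x_fake))],
    #     i.e. ascend V, with the generator held fixed ---
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = -np.mean((1 - d_real) * x_real) + np.mean(d_fake * x_fake)
    grad_c = -np.mean(1 - d_real) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- generator step: descend the non-saturating loss
    #     -E[log D(G(z))], with the discriminator held fixed ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_a = -np.mean((1 - d_fake) * w * z)
    grad_b = -np.mean((1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

print(f"generator offset b = {b:.2f} (real data mean is 4.0)")
```

Because the discriminator here is linear in x, it can only push the generator's mean toward the data mean (b toward 4); matching the spread (a) needs a more expressive discriminator. The step order mirrors the notes: each D update uses the G from the previous update, and each G update uses the freshly updated D.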

Issues