Generative Adversarial Network (GAN) Training Algorithm


A Generative Adversarial Network (GAN) Training Algorithm is a neural network training algorithm in which a generator network learns to produce samples from the target data space while a discriminator network learns to predict whether a given sample is generated or real.
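
Concretely, the original formulation (Goodfellow et al., 2014, cited below) frames training as a two-player minimax game in which the discriminator D maximizes, and the generator G minimizes, the value function

    \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

where p_data is the distribution of the training data and p_z is the noise distribution fed to the generator.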



References

2023

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Generative_adversarial_network Retrieved:2023-5-28.
    • A generative adversarial network (GAN) is a class of machine learning frameworks and a prominent framework for approaching generative AI. The concept was initially developed by Ian Goodfellow and his colleagues in June 2014.[1] In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss.

      Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proved useful for semi-supervised learning,[2] fully supervised learning, and reinforcement learning.

      The core idea of a GAN is based on the "indirect" training through the discriminator, another neural network that can tell how "realistic" the input seems, which itself is also being updated dynamically. This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner.

      GANs are similar to mimicry in evolutionary biology, with an evolutionary arms race between both networks.

  1. Goodfellow, Ian; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). “Generative Adversarial Nets”. Proceedings of the International Conference on Neural Information Processing Systems (NIPS 2014). pp. 2672–2680.
  2. Salimans, Tim; Goodfellow, Ian; Zaremba, Wojciech; Cheung, Vicki; Radford, Alec; Chen, Xi (2016). “Improved Techniques for Training GANs”. arXiv:1606.03498
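
The passage above notes that the generator is trained to fool the discriminator rather than to minimize the distance to a specific image. A minimal sketch of that contrast, assuming PyTorch; the networks, shapes, and tensors here are illustrative stand-ins, not part of the quoted source:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Illustrative stand-ins: a tiny generator G and discriminator D.
    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784))
    D = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
    z = torch.randn(32, 16)  # noise batch
    x = torch.rand(32, 784)  # batch of "real" flattened images

    fake = G(z)
    pred = D(fake)

    # Not the GAN objective: pixel distance to a specific target image.
    reconstruction_loss = F.mse_loss(fake, x)

    # The GAN generator objective: be scored as "real" (label 1) by the discriminator.
    adversarial_loss = F.binary_cross_entropy(pred, torch.ones_like(pred))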

2016

  • (Quora, 2016) ⇒ https://www.quora.com/What-are-Generative-Adversarial-Networks
    • QUOTE: Generative Adversarial Networks (GANs) are neural networks that are trained in an adversarial manner to generate data mimicking some distribution. To understand this deeply, first you'll have to understand what a generative model is. In machine learning, the two main classes of models are generative and discriminative. A discriminative model is one that discriminates between two (or more) different classes of data - for example, a convolutional neural network that is trained to output 1 given an image of a human face and 0 otherwise. A generative model, on the other hand, doesn't know anything about classes of data. Instead, its purpose is to generate new data which fits the distribution of the training data - for example, a Gaussian Mixture Model is a generative model which, after being trained on a set of points, is able to generate new random points which more-or-less fit the distribution of the training data (assuming a GMM is able to mimic the data well). More specifically, a generative model g trained on training data X sampled from some true distribution D is one which, given some standard random distribution Z, produces a distribution D′ which is close to D according to some closeness metric (a sample z∼Z maps to a sample g(z)∼D′).

      The 'standard' way to determine a generative model g given training data X is maximum-likelihood, which requires all sorts of calculations of marginal probabilities, partition functions, most-likely estimates, etc. This may be feasible when your generative model is a GMM, but if you want to try to make a generative model out of a deep neural network, this quickly becomes intractable.

      Adversarial training allows you to train a generative model without all of these intractable calculations. Let's assume our training data X ⊂ R^d. The basic idea is that you will have two adversarial models - a generator g: R^n → R^d and a discriminator d: R^d → {0,1}. The generator will be tasked with taking in a given sample from a standard random distribution (e.g. a sample from an n-dimensional Gaussian) and producing a point that looks sort of like it could come from the same distribution as X. The discriminator, on the other hand, will be tasked with discriminating between samples from the true data X and the artificial data generated by g. Each model is trying to best the other - the generator's objective is to fool the discriminator and the discriminator's objective is to not be fooled by the generator.

      In our case, both g and d are neural nets. And what happens is that we train them both in an alternating manner. Each of their objectives can be expressed as a loss function that we can optimize via gradient descent. So we train g for a couple of steps, then train d for a couple of steps, then give g the chance to improve itself, and so on. The result is that the generator and the discriminator each get better at their objectives in tandem, so that at the end, the generator is able to fool, or is close to being able to fool, the most sophisticated discriminator. In practice, this method ends up with generative neural nets that are incredibly good at producing new data (e.g. random pictures of human faces).
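
The quote's first paragraph uses a Gaussian Mixture Model as its example of a generative model that, once fitted, can emit new points resembling the training data. A minimal sketch of that idea, assuming scikit-learn and illustrative toy two-dimensional data:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Toy training data X drawn from a two-component "true" distribution.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2.0, 0.5, (500, 2)),
                   rng.normal(3.0, 1.0, (500, 2))])

    # Fit the GMM as a generative model of X ...
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

    # ... then draw new random points that roughly follow the same distribution.
    new_points, _ = gmm.sample(100)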
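
The alternating procedure described in the last paragraph can be sketched end to end. This is a minimal toy version, assuming PyTorch, with a two-dimensional Gaussian standing in for the true distribution; all dimensions, learning rates, and step counts here are illustrative, not from the quoted source:

    import torch
    import torch.nn as nn

    n, d = 8, 2  # noise and data dimensions, as in the quote's g: R^n -> R^d

    # Generator g: R^n -> R^d and discriminator D: R^d -> (0, 1), each a small MLP.
    G = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, d))
    D = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    def sample_real(batch):
        # Toy "true" distribution: a Gaussian centered at (2, 2).
        return torch.randn(batch, d) + 2.0

    for step in range(1000):
        batch = 64
        ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

        # Discriminator step: push D toward 1 on real samples and 0 on generated ones.
        fake = G(torch.randn(batch, n)).detach()
        loss_d = bce(D(sample_real(batch)), ones) + bce(D(fake), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: update g so that D scores its samples as "real" (1).
        fake = G(torch.randn(batch, n))
        loss_g = bce(D(fake), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()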

2004

  • (Lysyanskaya et al., 2004) ⇒ Anna Lysyanskaya, Roberto Tamassia, and Nikos Triandopoulos. “Multicast authentication in fully adversarial networks.” In: Proceedings of the 2004 IEEE Symposium on Security and Privacy, pp. 241-253. IEEE, 2004.