Generative adversarial network

What is a generative adversarial network?

A generative adversarial network (GAN) is a subclass of deep learning models that uses two competing components to generate completely new images from training data. In other words, the network takes existing data, learns its structure, and uses that knowledge to produce entirely new data of the same kind. Training in this type of network is indirect: the generator is never updated against the real data directly, but through the judgements of its discriminator, which change dynamically as training proceeds. GANs are generally more challenging and complex to train than most other neural network models. The core aim of the model is to generate new data from scratch, although what that looks like differs from domain to domain. It can also produce something quite different from the data originally fed into it; for example, it could generate a picture of a zebra from images of a horse. Combined with reinforcement learning, the same adversarial idea can be used to train a robot to learn new things more accurately.


Basically, a generative adversarial network consists of two neural networks that compete with each other: one processes data to produce new data, and the other judges the result. The two networks are called the generator and the discriminator, as sketched below.
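
To make the two roles concrete, here is a minimal PyTorch sketch of a generator and a discriminator as small fully connected networks. The 100-dimensional noise input, the 784-pixel (28x28) image size and the layer widths are assumptions chosen for the example, not details given in this article.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a noise vector z to a flattened image (illustrative sizes)."""
    def __init__(self, z_dim=100, img_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),   # outputs scaled to [-1, 1], like normalized images
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores an image with the probability that it is real."""
    def __init__(self, img_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # probability D(x) that the input is a real image
        )

    def forward(self, x):
        return self.net(x)
```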

A real-world analogy for how a GAN works:

Think of the generator as a counterfeiter and the discriminator as a cop. The counterfeiter is constantly producing new forged paintings, and the cop is constantly trying to catch him. Each time he is caught, the counterfeiter keeps improving his forgeries, until the day the cop can no longer distinguish an original painting from a counterfeit and the counterfeiter has outsmarted the cop.

In a GAN, we have a pair of neural networks, and the data seen during training is divided into real images and fake images, with the fake images produced by the generator. The discriminator has to tell the real images apart from the fake ones and learn what distinguishes the two kinds of pictures, while the generator tries to get its fake images classified as real in order to fool the discriminator.


How a Generative Adversarial Network (GAN) works:

The basic composition of a GAN consists of two parts, a generator and a discriminator. The generator produces images, and the discriminator judges them. A detailed description of each follows:

  • Generator: This first part of the GAN is the network that generates new images resembling the training data it was initially fed. Now, the question arises: how are these images generated?

Working: The generator's input is a noise vector z, sampled from a uniform or a normal distribution. The generator G uses this noise to create an image x, i.e. x = G(z). Conceptually, z encodes the most prominent features of the generated image, such as its shape and colour. Just as we do not hand-design the features a deep network learns, we do not control which semantic meaning each dimension of z carries; the network is left to learn this on its own. In other words, we do not decide which component of z stores, say, the colour of the image, but altering one particular dimension of z will produce a new set of images. The best-known architecture of this kind is the DCGAN, which applies a stack of transposed convolutions to up-sample z into the picture x; you can think of it as a deep learning classifier run in reverse, as sketched below. On its own, the generator only turns random noise into images; it is the rest of the GAN that tells the generator which images to produce. Now, how does the GAN tell this to the generator?
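
The following PyTorch sketch shows a DCGAN-style generator that up-samples a 100-dimensional noise vector z into a 32x32 image through transposed convolutions. The channel counts, layer sizes and output resolution are assumptions chosen for the example, not details given in this article.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Up-samples a noise vector z into a single-channel 32x32 image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # z is treated as a 1x1 "image" with z_dim channels
            nn.ConvTranspose2d(z_dim, 256, kernel_size=4, stride=1, padding=0),  # -> 4x4
            nn.BatchNorm2d(256),
            nn.ReLU(),
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),    # -> 8x8
            nn.BatchNorm2d(128),
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),     # -> 16x16
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),       # -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):
        # reshape the flat noise vector into (batch, z_dim, 1, 1) before up-sampling
        return self.net(z.view(z.size(0), -1, 1, 1))

# Sampling: draw z from a normal distribution and map it to images x = G(z).
G = DCGANGenerator()
z = torch.randn(16, 100)   # 16 noise vectors
x = G(z)                   # 16 generated 1x32x32 images
```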

  • Discriminator: Real images are fed into the discriminator as training data, alongside the generator's fake images, and the discriminator has to distinguish between the two. Its output D(x) denotes the probability that the input image x is real, i.e. D(x) = P(class of input = real image).

The discriminator works in the same way as any deep neural network classifier. If the input is real, we want D(x) = 1; if it is generated, D(x) should be 0. Through this process the discriminator identifies the features that make images look real. Conversely, we want the generator to create images with D(x) = 1, i.e. images that match the real ones. We can therefore train the generator by backpropagating this target value all the way back into the generator; in other words, we train the generator to produce images that move towards what the discriminator thinks is real, as sketched below.
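
Here is a minimal sketch of that idea in PyTorch, with tiny placeholder networks standing in for a real generator and discriminator: the discriminator's verdict on fake images is compared against the "real" label, and the resulting gradient flows back through the discriminator into the generator. The shapes and layer sizes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 784), nn.Tanh())    # toy generator
D = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())   # toy discriminator
criterion = nn.BCELoss()

z = torch.randn(32, 100)
d_on_fake = D(G(z))                          # D(G(z)): chance the fakes look real
target_real = torch.ones(32, 1)              # we *want* the discriminator to output 1
g_loss = criterion(d_on_fake, target_real)   # small only when D is fooled
g_loss.backward()                            # gradients reach G through D
```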

We train the two networks in alternating steps and lock them into a competition in which each improves the other. Over time, the discriminator learns to spot ever smaller differences between the real and the generated images, while the generator learns to make pictures the discriminator cannot tell apart from real ones. Eventually the GAN converges and delivers natural-looking images.

The discriminator idea can be applied to many existing deep learning applications as well. The discriminator in a GAN acts as a critic, and we can plug such a critic into an existing deep learning solution to provide feedback that improves it.

  • Backpropagation:

Now let us put this in basic terms. The discriminator outputs a value D(x) indicating the chance that x is a real image. Our objective is to maximize the chance of recognizing real images as real and generated images as fake, i.e. the maximum likelihood of the observed data. To measure the loss, we use cross-entropy, as in most deep learning: p log(q). For real images, p (the true label) is equal to 1. For generated images, we flip the label (i.e. one minus the label). A worked example follows.
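
The sketch below writes this loss out by hand: for real images the label is 1, giving a -log D(x) term, and for generated images the flipped label gives a -log(1 - D(x)) term. The probabilities are made-up numbers used only to illustrate the arithmetic.

```python
import torch

d_real = torch.tensor([0.9, 0.8, 0.7])   # D(x) on three real images (invented values)
d_fake = torch.tensor([0.2, 0.3, 0.1])   # D(G(z)) on three generated images (invented values)

loss_real = -torch.log(d_real).mean()        # label p = 1 term: -log D(x)
loss_fake = -torch.log(1.0 - d_fake).mean()  # flipped label term: -log(1 - D(x))
d_loss = loss_real + loss_fake
print(d_loss)  # small when D scores real images high and fakes low
```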

We then fix the generator's parameters and perform a single iteration of gradient descent on the discriminator using both the real and the generated images. Then we switch sides: we fix the discriminator and train the generator for another single iteration. The two networks are trained in alternating steps until the generator produces good-quality images. This summarizes the data flow and the gradients used for backpropagation; a condensed sketch of the loop follows.
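
Below is a condensed sketch of that alternating loop in PyTorch. The toy networks, the random stand-in data, the batch size and the learning rates are all assumptions made for illustration; in practice the real batches would come from a dataset and the networks would be a proper generator and discriminator such as those sketched earlier.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator and discriminator
G = nn.Sequential(nn.Linear(100, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.rand(64, 784)              # placeholder for a batch of real images
    ones = torch.ones(64, 1)
    zeros = torch.zeros(64, 1)

    # Step 1: generator held fixed (detach), one gradient-descent step on D
    fake = G(torch.randn(64, 100)).detach()
    d_loss = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Step 2: discriminator not updated (only opt_g steps),
    # one gradient-descent step on G pushing D(G(z)) towards 1
    g_loss = bce(D(G(torch.randn(64, 100))), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```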
