OpenAI's diffusion models beat GANs at what they do best
May 16, 2021 04:22 EDT

Generative Adversarial Networks (GANs) are a class of deep learning models that learn to produce new, realistic-looking data. Since their introduction in 2014 and subsequent refinement, they have dominated the image generation domain and laid the foundations of a new paradigm: deepfakes. Their ability to mimic training data and produce new samples resembling it has gone more or less unmatched, and they hold the state of the art (SOTA) in most image generation tasks today. Despite these strengths, GANs are notoriously hard to train and are prone to issues such as mode collapse and unstable training dynamics. Moreover, researchers have found that GANs tend to prioritize sample fidelity over capturing the full diversity of the training data's distribution. Consequently, researchers have been looking into improving GANs on this front or exploring other architectures that might cover the data distribution better.
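For context, the adversarial setup that gives GANs their name pits a generator $G$ against a discriminator $D$ in a two-player minimax game; in the standard formulation from the original 2014 GAN paper, the objective is:

```latex
\min_{G} \max_{D} \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here $D(x)$ is the discriminator's estimate that sample $x$ is real, and $G(z)$ maps noise $z$ to a synthetic sample. Because the two networks are optimized against each other, training can oscillate or let $G$ collapse onto a few modes of the data, which is the instability the paragraph above alludes to.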