Generative Adversarial Networks (GANs) are, according to Turing Award winner Yann LeCun, "the most interesting idea in the last 10 years in ML". However, training GANs is also a notoriously challenging task: instability and mode collapse are commonly encountered by practitioners. When done right, GANs can generate mind-boggling, photo-realistic images and facilitate many other applications in computer vision and beyond. In this talk, I will introduce frontiers of GAN research, such as spectral normalization, self-modulation, self-supervision, large batch sizes (BigGAN), and more. These techniques fuel the state-of-the-art results and could help you train your own GANs!