Machine Learning Center Seminar Series | Consistency Models

Featuring Dr. Yang Song, OpenAI

Abstract: I will introduce consistency models, a new family of generative models that mitigate the slow sampling of diffusion models. They rapidly generate high-quality samples from noise in a single step, but also allow multistep sampling to trade compute for sample quality. Like diffusion models, they enable zero-shot data editing tasks, such as image inpainting, colorization, and super-resolution, without task-specific training. We can train consistency models by distilling pre-existing diffusion models or build them from scratch as standalone generative models. Experiments showcase their superiority over existing diffusion distillation techniques for one- and few-step sampling. When trained in isolation, they excel on various image datasets, emerging as a strong contender against diffusion models and traditional one-step generative models, including normalizing flows, VAEs, and GANs.
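To make the one-step vs. multistep trade-off concrete, here is a minimal sketch of consistency-model sampling. It assumes a variance-exploding noise schedule with times in [EPS, T]; in place of a trained network, the "consistency function" below is the exact one for a toy data distribution (a point mass at MU), so the names consistency_fn, one_step_sample, multistep_sample, MU, EPS, T, and taus are all illustrative, not from the talk.

```python
import numpy as np

# Illustrative constants: smallest time EPS, largest time T, and a toy data
# distribution that is a point mass at MU.
EPS, T, MU = 0.002, 80.0, 3.0
rng = np.random.default_rng(0)

def consistency_fn(x, t):
    # f(x, t) maps a noisy sample at time t back to time EPS along the
    # probability-flow ODE. For a point mass at MU, trajectories are
    # x_t = MU + t*z, so the exact map is a contraction toward MU.
    # Note the boundary condition f(x, EPS) = x holds.
    return MU + (EPS / t) * (x - MU)

def one_step_sample(n):
    # Single evaluation: denoise pure noise drawn at the largest time T.
    z = rng.standard_normal(n)
    return consistency_fn(T * z, T)

def multistep_sample(n, taus=(40.0, 10.0, 2.0)):
    # Trade extra compute for quality: alternate re-noising to a decreasing
    # sequence of times taus with denoising back to time EPS.
    x = one_step_sample(n)
    for tau in taus:
        x_noisy = x + np.sqrt(tau**2 - EPS**2) * rng.standard_normal(n)
        x = consistency_fn(x_noisy, tau)
    return x
```

With the exact toy consistency function, both samplers concentrate sharply around MU; with a learned, imperfect network, the multistep loop is what recovers quality lost in the single step.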

Bio: Yang Song is a research scientist at OpenAI and an incoming Assistant Professor at Caltech. His research interests include deep generative models, inverse problem solving, and AI safety. His research has been recognized with an Outstanding Paper Award at ICLR, an Apple PhD Fellowship in AI/ML, a J.P. Morgan PhD Fellowship, and a WAIC YunFan Award.
