MIT Researchers Introduce Restart Sampling For Improving Generative Processes

Differential equation-based deep generative models have recently emerged as powerful tools for modeling high-dimensional data in fields ranging from image synthesis to biology. These models generate samples by iteratively solving a differential equation in reverse time, gradually transforming a simple base distribution (such as a Gaussian in diffusion models) into a complex data distribution.

Prior studies have categorized the samplers that simulate these reverse processes into two types (a minimal sketch contrasting the two appears below):

  1. ODE-samplers, whose evolution is deterministic after the initial randomization
  2. SDE-samplers, whose generation trajectories are stochastic
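
To make the distinction concrete, here is a minimal sketch (not the paper's code) of one Euler step for each sampler family, assuming a variance-exploding diffusion model with noise schedule σ(t) = t; `score_fn` stands in for a hypothetical pre-trained score network s(x, t) ≈ ∇ log p_t(x).

```python
import numpy as np

def ode_step(x, t, dt, score_fn):
    # Probability-flow ODE for sigma(t) = t: dx/dt = -t * score(x, t).
    # One Euler step from t down to t - dt is fully deterministic.
    return x + t * score_fn(x, t) * dt

def sde_step(x, t, dt, score_fn, rng):
    # Reverse-time SDE: twice the ODE drift plus a small burst of fresh
    # Gaussian noise injected at every step (Euler-Maruyama).
    z = rng.standard_normal(x.shape)
    return x + 2.0 * t * score_fn(x, t) * dt + np.sqrt(2.0 * t * dt) * z
```

The only structural difference is the per-step noise term: the ODE trajectory is reproducible given its starting point, while the SDE trajectory re-randomizes slightly at every step.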

Several publications provide evidence that each sampler family has advantages in different settings. ODE solvers produce smaller discretization errors, allowing usable sample quality even at larger step sizes; their sample quality, however, quickly plateaus. SDE samplers, on the other hand, achieve better quality in the large NFE (number of function evaluations) regime, but at the cost of longer sampling time.

Inspired by this, MIT researchers developed a new sampling technique called Restart that combines the benefits of ODEs and SDEs. The Restart sampling algorithm alternates between two subroutines for K iterations within a pre-defined time interval: a Restart forward process that adds a large amount of noise, effectively “restarting” the original backward process, and a Restart backward process that runs the backward ODE.

The Restart algorithm decouples randomness from the drift: the noise added in Restart’s forward process is much larger than the small single-step noise interleaved with the drift in earlier SDEs, which strengthens the contraction effect on accumulated errors. Cycling forward and backward K times further amplifies the contraction introduced at each Restart iteration. Thanks to its deterministic backward process, Restart also reduces discretization errors and permits ODE-like step sizes. In practice, the Restart interval is typically placed near the end of the simulation, where the accumulated error is larger, to make the most of the contraction effect. For more difficult tasks, multiple Restart intervals are used to also reduce errors made early in the trajectory.
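
This description maps directly onto a short loop. Below is a minimal sketch, assuming the same variance-exploding schedule σ(t) = t as above; `ode_solve` is a hypothetical deterministic backward integrator (for example, a sequence of Heun steps), not an API from the paper’s codebase.

```python
import numpy as np

def restart_interval(x, t_min, t_max, K, score_fn, ode_solve, rng):
    # x: a sample the main backward ODE has already carried down to t_min.
    for _ in range(K):
        # Restart forward process: jump back up to t_max in one shot by
        # adding a large burst of noise (variance t_max**2 - t_min**2),
        # far more than a single SDE step would inject.
        x = x + np.sqrt(t_max**2 - t_min**2) * rng.standard_normal(x.shape)
        # Restart backward process: deterministic ODE from t_max to t_min,
        # retaining the small discretization error of an ODE solver.
        x = ode_solve(x, t_max, t_min, score_fn)
    return x
```

Because the interval typically sits near the end of the trajectory, each of the K cycles re-contracts whatever error the earlier steps have accumulated.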

Experimental results show that across various NFEs, datasets, and pre-trained models, Restart outperforms state-of-the-art ODE and SDE solvers in quality and speed. In particular, on CIFAR-10 with VP, Restart achieves a 10x speedup compared to the previous best-performing SDEs, and on ImageNet 64×64 with EDM, a 2x speedup while also outperforming ODE solvers in the small NFE regime.

The researchers also apply Restart to a Stable Diffusion model pre-trained on LAION 512×512 images for text-to-image generation. Across varying classifier-free guidance strengths, Restart improves upon prior samplers by striking a better balance between text-image alignment/visual quality (as measured by CLIP/Aesthetic scores) and diversity (as measured by FID score).
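
For readers unfamiliar with the knob being varied here, classifier-free guidance is a standard technique (not specific to Restart) that blends the model’s conditional and unconditional predictions; the sketch below reuses the hypothetical `score_fn` from above, now taking an optional prompt.

```python
def guided_score(x, t, prompt, w, score_fn):
    # Classifier-free guidance: extrapolate from the unconditional output
    # toward the text-conditioned one with guidance weight w. Larger w
    # improves text-image alignment but reduces sample diversity.
    uncond = score_fn(x, t, prompt=None)
    cond = score_fn(x, t, prompt=prompt)
    return uncond + w * (cond - uncond)
```

Restart’s contribution is that, for any given w, it lands on a better point of this alignment-versus-diversity curve than prior samplers.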

To fully realize the Restart framework’s potential, the team plans to develop a more principled method for automatically selecting suitable Restart hyperparameters based on an error analysis of the models.


Check Out The Paper and GitHub Link. Don’t forget to join our 25k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com



Dhanshree Shenwai is a Computer Science Engineer with solid experience in FinTech companies covering the Financial, Cards & Payments, and Banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements that make everyone’s life easier in today’s evolving world.

