Diffusion models are trained to reverse a stochastic process through score matching. However, many diffusion models rely on a small but critical implementation detail called thresholding. Thresholding projects each sample back onto the data support after every discretized diffusion step, stabilizing generation at the cost of breaking the theoretical framework. Interestingly, as the number of discretization steps tends to infinity, thresholding converges to a reflected stochastic differential equation. In this blog post, we discuss our recent work on Reflected Diffusion Models, which explores this connection to develop a new class of diffusion models that correctly train for thresholded sampling and respect general boundary constraints.
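To make the thresholding detail concrete, here is a minimal sketch (not the paper's code) of reverse-time Euler–Maruyama sampling for a VP-style diffusion model with thresholding applied after each step. The `score_fn` argument, the linear `beta` schedule, and the `[-1, 1]` pixel range are illustrative assumptions, not the authors' exact setup.

```python
import torch

def beta(t, beta_min=0.1, beta_max=20.0):
    """Linear noise schedule beta(t) for t in [0, 1] (a common VP-SDE choice)."""
    return beta_min + t * (beta_max - beta_min)

@torch.no_grad()
def sample(score_fn, shape, n_steps=1000, threshold=True, device="cpu"):
    """Reverse-time Euler-Maruyama sampling with optional thresholding."""
    x = torch.randn(shape, device=device)           # start from the Gaussian prior
    dt = 1.0 / n_steps
    for i in range(n_steps, 0, -1):
        t = torch.full((shape[0],), i * dt, device=device)
        b = beta(t).view(-1, *([1] * (x.dim() - 1)))
        # Reverse-SDE drift: -0.5 * beta * x - beta * score
        drift = -0.5 * b * x - b * score_fn(x, t)
        x = x - drift * dt + torch.sqrt(b * dt) * torch.randn_like(x)
        if threshold:
            # The "small but critical" detail: project the iterate back onto
            # the data support (here, pixel values in [-1, 1]) after each step.
            x = torch.clamp(x, -1.0, 1.0)
    return x
```

As the step count grows, the repeated clamp-after-step projection behaves like the boundary reflection of a reflected SDE, which is the limiting object the post builds on.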