Forecasting When to Forecast: Accelerating Diffusion Models with Confidence-Gated Taylor

1School of Computer Science, Wuhan University
2PaddlePaddle Team, Baidu Inc
3International Joint Innovation Center, The Electromagnetics Academy at Zhejiang University
4Yunnan Key Laboratory of Media Convergence

*Corresponding author

Comparison of our method and baselines under different speedup ratios.

Abstract

Diffusion Transformers (DiTs) have demonstrated remarkable performance in visual generation tasks, but their slow inference limits deployment in resource-constrained settings. Recent training-free approaches exploit the redundancy of features across timesteps, caching and reusing past representations to accelerate inference. Building on this idea, TaylorSeer instead uses cached features to predict future ones via Taylor expansion. However, its module-level prediction across all transformer blocks (e.g., attention or feed-forward modules) requires storing fine-grained intermediate features, incurring notable memory and computation overhead. Moreover, it adopts a fixed caching schedule that ignores how prediction accuracy varies across timesteps, which can degrade outputs when the prediction fails. To address these limitations, we propose a new approach that better leverages Taylor-based acceleration. First, we shift the Taylor prediction target from the module level to the last-block level, substantially reducing the number of cached features. Second, observing strong sequential dependencies among transformer blocks, we use the error between the Taylor-estimated and actual outputs of the first block as an indicator of prediction reliability: if the error is small, we trust the Taylor prediction for the last block; otherwise, we fall back to full computation, yielding a dynamic caching mechanism. Empirically, our method strikes a better balance between speed and quality, delivering a 3.17x speedup on FLUX, 2.36x on DiT, and 4.14x on Wan Video with negligible quality degradation.
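
To make the gating idea concrete, below is a minimal PyTorch-style sketch, not the released implementation: the helper names (`taylor_forecast`, `confidence_gated_step`), the relative-error threshold `tau`, the finite-difference derivative estimates, and the `cache` layout are all illustrative assumptions.

```python
import torch


def taylor_forecast(history, order=2):
    """Forecast the next-step feature from cached past features via a
    finite-difference Taylor expansion (hypothetical helper, unit step size).

    history: list of past feature tensors, oldest first.
    """
    # Finite-difference estimates of the 0th..order-th derivatives at the latest step.
    derivatives = [history[-1]]
    level = list(history)
    for _ in range(order):
        if len(level) < 2:
            break
        level = [b - a for a, b in zip(level[:-1], level[1:])]
        derivatives.append(level[-1])
    # f(t+1) ≈ f(t) + f'(t)/1! + f''(t)/2! + ...
    pred = torch.zeros_like(history[-1])
    factorial = 1.0
    for k, d in enumerate(derivatives):
        factorial *= max(k, 1)
        pred = pred + d / factorial
    return pred


def confidence_gated_step(x, t, blocks, cache, tau=0.05, order=2):
    """One denoising step with confidence-gated Taylor forecasting (sketch).

    blocks: list of transformer blocks (callables); cache: dict of feature
    histories for the first and last blocks, e.g. {"first": [], "last": []}.
    In practice the histories would be truncated to the last order+1 entries.
    """
    # The first block is always computed; its Taylor forecast error acts as the gate.
    h1 = blocks[0](x, t)
    if len(cache["first"]) > order and len(cache["last"]) > order:
        pred1 = taylor_forecast(cache["first"], order)
        rel_err = (pred1 - h1).norm() / (h1.norm() + 1e-8)
    else:
        rel_err = float("inf")  # not enough history yet: force full computation
    cache["first"].append(h1)

    if rel_err < tau:
        # Forecast deemed reliable: predict the last-block output instead of computing it.
        out = taylor_forecast(cache["last"], order)
    else:
        # Forecast unreliable: fall back to full computation through the remaining blocks.
        out = h1
        for blk in blocks[1:]:
            out = blk(out, t)
    cache["last"].append(out)
    return out
```

In this sketch, full computation is also forced until enough history has been cached; once the gate is active, the per-step overhead of the reliability check is a single first-block evaluation plus a lightweight extrapolation.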

FLUX Visualization

DiT Visualization

Wan Visualization

BibTeX
