The real world is dynamic, yet most image fusion methods process static frames independently, ignoring temporal correlations in videos and leading to flickering and temporal inconsistency. To address this, we propose Unified Video Fusion (UniVF), a novel framework for temporally coherent video fusion that leverages multi-frame learning and optical flow-based feature warping to produce informative, temporally consistent results. To support its development, we also introduce the Video Fusion Benchmark (VF-Bench), the first comprehensive benchmark covering four video fusion tasks: multi-exposure, multi-focus, infrared-visible, and medical fusion. VF-Bench provides high-quality, well-aligned video pairs obtained through synthetic data generation and rigorous curation of existing datasets, together with a unified evaluation protocol that jointly assesses the spatial quality and temporal consistency of video fusion. Extensive experiments show that UniVF achieves state-of-the-art results across all tasks on VF-Bench.
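For intuition, the snippet below is a minimal sketch of optical flow-based feature warping, the mechanism used to align features from neighboring frames before fusion. The tensor names (`features`, `flow`), the pixel-unit flow convention, and the PyTorch `grid_sample` implementation are illustrative assumptions, not the official UniVF code.

```python
# Minimal sketch of optical flow-based feature warping (not the official UniVF code).
# `features` is (B, C, H, W); `flow` is (B, 2, H, W) in pixel units, assumed to map
# positions in the current frame to matching positions in the neighboring frame.
import torch
import torch.nn.functional as F

def warp_features(features: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a feature map with a dense flow field via bilinear sampling."""
    b, _, h, w = features.shape
    # Base sampling grid of pixel coordinates (x, y).
    ys, xs = torch.meshgrid(
        torch.arange(h, device=features.device, dtype=features.dtype),
        torch.arange(w, device=features.device, dtype=features.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(b, -1, -1, -1)  # (B, 2, H, W)
    # Shift the grid by the flow, then normalize coordinates to [-1, 1] for grid_sample.
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(features, sample_grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```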
Detailed illustration of our UniVF architecture.
The proposed data generation paradigms for (a) multi-exposure and (b) multi-focus video pairs in our VF-Bench.
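As an illustration of the synthetic data generation mentioned above, the sketch below simulates an under-/over-exposed pair from a single well-exposed frame using a simple gamma/EV model. The function names, EV offset, and gamma value are hypothetical and not the exact settings used to construct VF-Bench.

```python
# Minimal sketch of synthetic multi-exposure pair generation, assuming a
# well-exposed sRGB ground-truth frame with values in [0, 1]. The EV offset
# and gamma are illustrative defaults only.
import numpy as np

def simulate_exposure(frame: np.ndarray, ev: float, gamma: float = 2.2) -> np.ndarray:
    """Re-expose an sRGB frame by `ev` stops via a simple gamma/linear model."""
    linear = np.power(np.clip(frame, 0.0, 1.0), gamma)   # undo display gamma
    linear = linear * (2.0 ** ev)                         # scale radiance by 2^EV
    return np.power(np.clip(linear, 0.0, 1.0), 1.0 / gamma)

def make_exposure_pair(frame: np.ndarray, ev: float = 2.0):
    """Return an (under-exposed, over-exposed) pair from one reference frame."""
    return simulate_exposure(frame, -ev), simulate_exposure(frame, +ev)
```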
Multi-Exposure Video Fusion Branch
Quantitative evaluation results for the Multi-Exposure Fusion and Multi-Focus Fusion tasks. Red and blue highlights indicate the highest and second-highest scores, respectively.
Quantitative evaluation results for the Infrared-Visible Fusion and Medical Video Fusion tasks. Red and blue highlights indicate the highest and second-highest scores, respectively.
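The unified evaluation protocol described in the abstract jointly scores spatial quality and temporal consistency. As a rough illustration of the temporal side, the sketch below computes a common flow-based warping error between consecutive fused frames (lower is better); the Farneback flow estimator and this exact formulation are assumptions for illustration, not the VF-Bench protocol itself.

```python
# Minimal sketch of a flow-based temporal-consistency score (warping error).
# NOT the exact VF-Bench metric; `fused_prev` / `fused_curr` are consecutive
# fused frames as (H, W, 3) uint8 arrays, and flow is estimated with OpenCV's
# Farneback method for simplicity.
import cv2
import numpy as np

def warping_error(fused_prev: np.ndarray, fused_curr: np.ndarray) -> float:
    prev_gray = cv2.cvtColor(fused_prev, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(fused_curr, cv2.COLOR_BGR2GRAY)
    # Backward flow: maps pixels of the current frame to the previous frame.
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Warp the previous fused frame into the current frame's coordinates.
    warped_prev = cv2.remap(fused_prev, map_x, map_y, cv2.INTER_LINEAR)
    diff = np.abs(fused_curr.astype(np.float32) - warped_prev.astype(np.float32))
    return float(diff.mean())  # lower means better temporal consistency
```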
Please refer to the PDF paper linked above for detailed qualitative results, quantitative comparisons, and ablation studies.
TBD