Multi-focus image fusion (MFIF) aims to synthesize a single image that is clear and detailed throughout the scene by fusing images captured at different focal settings. To address issues such as spherical and chromatic aberrations, we introduce an MFIF algorithm that integrates spatial and channel attention mechanisms, emphasizing critical regions and channels to extract the most informative features. A depth-of-field estimation network predicts a depth map that guides the fusion. Residual connections stabilize feature extraction, while dilated convolutions enlarge the receptive field. A multiscale pyramid enables layer-by-layer fusion so that features at every scale are exploited. Evaluation yields an SSIM of 0.9772 and a PSNR of 34.8711 dB, outperforming the benchmark methods. Ablation studies confirm that depth estimation, attention, and pyramid fusion each contribute to preserving detail and improving the completeness of the fused result. This research offers a novel approach to image fusion that leverages deep learning and attention to integrate information precisely and efficiently by intelligently recognizing the key elements of an image.
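To make the components named in the abstract concrete, the following PyTorch sketch wires together channel and spatial attention, a dilated residual block, a depth/focus-map prediction head, and a pyramid-style weighted fusion of two source images. It is a minimal illustration under our own assumptions about module names, layer widths, pyramid depth, and the fusion rule; it is not the authors' implementation.

# Minimal sketch of the described pipeline; all names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class SpatialAttention(nn.Module):
    """Spatial attention over pooled channel statistics (assumed design)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask

class DilatedResidualBlock(nn.Module):
    """Residual block with a dilated convolution to widen the receptive field."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return F.relu(x + self.body(x))

class DepthGuidedPyramidFusion(nn.Module):
    """Fuses two source images with a predicted focus/depth weight map,
    applied level by level over a simple image pyramid (assumed scheme)."""
    def __init__(self, channels=32, levels=3):
        super().__init__()
        self.levels = levels
        self.encode = nn.Sequential(
            nn.Conv2d(2 * 3, channels, 3, padding=1),
            DilatedResidualBlock(channels),
            ChannelAttention(channels),
            SpatialAttention(),
        )
        # Depth/focus head: per-pixel weight favouring source A over source B.
        self.depth_head = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, img_a, img_b):
        feats = self.encode(torch.cat([img_a, img_b], dim=1))
        weight = torch.sigmoid(self.depth_head(feats))  # 1.0 -> take img_a
        fused = None
        for lvl in range(self.levels):
            s = 2 ** lvl
            a = F.avg_pool2d(img_a, s) if s > 1 else img_a
            b = F.avg_pool2d(img_b, s) if s > 1 else img_b
            w = F.interpolate(weight, size=a.shape[-2:], mode="bilinear",
                              align_corners=False)
            level_fused = w * a + (1 - w) * b
            up = F.interpolate(level_fused, size=img_a.shape[-2:],
                               mode="bilinear", align_corners=False)
            fused = up if fused is None else fused + up
        return fused / self.levels

if __name__ == "__main__":
    model = DepthGuidedPyramidFusion()
    a, b = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
    print(model(a, b).shape)  # torch.Size([1, 3, 128, 128])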
Research on Multi-focus Image Fusion Guided by Depth Mapping Prediction Network
23.10.2024
1,085,560 bytes
Conference paper
Electronic resource
English