Infrared-visible image fusion integrates information from images captured in separate spectra, namely infrared (IR) and visible light, to produce a single composite image that exploits the strengths of the source images while minimizing their inherent limitations. This study introduces a non-end-to-end framework based on Neural Style Transfer for seamlessly merging infrared and visible images. The proposed approach is an optimization process in which fused features interact with an initial composite image. First, salient features are extracted from the input images using the first four layers of the ResNet50 network and are then combined through an appropriate fusion rule. The source images are blended with the average rule to form the initial composite image. Finally, the initial composite image is refined through backpropagation so that it absorbs the fused features, yielding the final synthesized image. The efficacy of the proposed fusion framework is validated by experiments on the TNO Image Fusion dataset, whose outcomes demonstrate that our approach outperforms current approaches in both subjective and objective assessments.
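The abstract outlines an optimization-style pipeline; the following is a minimal PyTorch sketch of that pipeline under stated assumptions. Interpreting "the first four layers of ResNet50" as conv1/bn1/relu/maxpool, using an element-wise maximum as the fusion rule, an MSE feature loss, and Adam optimizer settings are all illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

def build_extractor():
    # Frozen feature extractor: the first four modules of ResNet50
    # (conv1, bn1, relu, maxpool) -- an assumed reading of "first four layers".
    net = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
    layers = torch.nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
    for p in layers.parameters():
        p.requires_grad_(False)
    return layers.eval()

def fuse(ir, vis, steps=300, lr=0.01):
    # ir, vis: (1, 3, H, W) tensors in [0, 1]; grayscale inputs can be
    # replicated across the channel dimension beforehand.
    extractor = build_extractor()
    with torch.no_grad():
        f_ir = extractor(ir)
        f_vis = extractor(vis)
        f_fused = torch.maximum(f_ir, f_vis)  # assumed fusion rule: element-wise max

    # Average rule for the initial composite image, then fine-tune it by
    # backpropagation so its features approach the fused features.
    fused = ((ir + vis) / 2).clone().requires_grad_(True)
    opt = torch.optim.Adam([fused], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(extractor(fused), f_fused)
        loss.backward()
        opt.step()
    return fused.detach().clamp(0, 1)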
Neural Style Transfer Based Infrared-Visible Fusion
22.11.2023
1479458 bytes
Conference paper
Electronic resource
English
Image Fusion Processing Method Based on Infrared and Visible Light
British Library Conference Proceedings | 2019
Visible-infrared fusion schemes for road obstacle classification
Online Contents | 2013
A Visible and Infrared Fusion Based Visual Odometry for Autonomous Vehicles
SAE Technical Papers | 2020
A Visible and Infrared Fusion Based Visual Odometry for Autonomous Vehicles
British Library Conference Proceedings | 2020