1. School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan 430081, Hubei, China
[ "李青松(1999—),男,四川广安人,硕士研究生,2020年于武汉科技大学获得学士学位,主要从事红外与可见光图像融合方面的研究。E-mail:1849916541@ qq.com" ]
[ "杨莘(1977—),女,湖南娄底人,博士,副教授,2006年于武汉大学获得博士学位,主要从事多媒体通信与信号处理方面的研究。E-mail:yangshen@ wust.edu.cn" ]
LI Qing-song, YANG Shen, WU Jin, et al. Infrared and visible image fusion algorithm based on structure-texture decomposition[J]. Chinese Journal of Liquid Crystals and Displays, 2023,38(10):1389-1398. DOI: 10.37188/CJLCD.2022-0398.
In order to solve the problems of thermal target information loss, edge structure blur and detail loss in infrared and visible image fusion, an infrared and visible image fusion algorithm based on structure-texture decomposition is proposed. Firstly, the source images are decomposed into a detail layer and a structure layer by structure-texture decomposition, and the detail layers are fused and enhanced by a fusion rule based on structural similarity and the L2 norm. Then, a structure-mean method is proposed to decompose the structure layer into a luminance layer and a base layer. The luminance layers are fused by the absolute-value-maximum rule, and a fusion rule based on multiple indicators is designed for the base layers. Finally, the fused sub-images are reconstructed to obtain the final fused image. To verify the effectiveness of the algorithm, it is compared with nine infrared and visible image fusion algorithms using seven objective evaluation indicators: spatial frequency, average gradient, edge intensity, variance, visual information fidelity, a metric based on human visual perception, and information entropy. On the first five indicators, the proposed algorithm achieves improvements of 27.4%, 36.5%, 38.2%, 8.5% and 23.5%, respectively. The experimental results show that the proposed algorithm not only effectively retains infrared thermal targets but also preserves edge structures and texture details, and achieves better results on the objective evaluation indicators.
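The following is a minimal sketch of the pipeline described above, assuming grayscale inputs normalized to [0, 1]. The helpers are illustrative stand-ins, not the paper's actual operators: a Gaussian smoother replaces the interval-gradient structure-texture decomposition, a global-mean split replaces the structure-mean method, and the detail and base fusion rules are placeholders for the SSIM/L2-based and multi-indicator rules. Only the absolute-value-maximum fusion of the luminance layers and the final reconstruction by summing the fused sub-images follow the abstract directly.

import numpy as np
from scipy.ndimage import gaussian_filter

def structure_texture_decompose(img, sigma=2.0):
    # Stand-in decomposition: smoothed structure layer plus residual detail layer.
    # The paper uses an interval-gradient structure-texture decomposition instead.
    structure = gaussian_filter(img, sigma)
    detail = img - structure
    return structure, detail

def structure_mean_split(structure):
    # Stand-in for the structure-mean step (assumption): the luminance layer is the
    # part of the structure layer above its global mean (bright thermal content),
    # and the base layer is the remainder, so luminance + base == structure.
    luminance = np.maximum(structure - structure.mean(), 0.0)
    base = structure - luminance
    return luminance, base

def fuse(ir, vis):
    s_ir, d_ir = structure_texture_decompose(ir)
    s_vi, d_vi = structure_texture_decompose(vis)

    # Detail layers: placeholder per-pixel absolute-maximum selection; the paper
    # uses an SSIM / L2-norm based rule with enhancement here.
    d_f = np.where(np.abs(d_ir) >= np.abs(d_vi), d_ir, d_vi)

    l_ir, b_ir = structure_mean_split(s_ir)
    l_vi, b_vi = structure_mean_split(s_vi)

    # Luminance layers: absolute-value-maximum, as stated in the abstract.
    l_f = np.where(np.abs(l_ir) >= np.abs(l_vi), l_ir, l_vi)

    # Base layers: equal-weight average as a placeholder for the paper's
    # multi-indicator weighting rule.
    b_f = 0.5 * (b_ir + b_vi)

    # Reconstruct the fused image from the fused sub-images.
    return np.clip(l_f + b_f + d_f, 0.0, 1.0)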
image processing; image fusion; structure-texture decomposition; infrared image; visible image