1. Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2. University of Chinese Academy of Sciences, Beijing 100049, China
[ "王贤涛(1998—),男,湖北随州人,硕士研究生,2019年于湖北文理学院获得学士学位,主要从事图像融合方面的研究。E-mail:18827553120@163.com" ]
[ "赵金宇(1977—),男,内蒙古赤峰人,博士,研究员,2006年于中国科学院长春光学精密机械与物理研究所获得博士学位,主要从事数字图像信号处理软硬件技术、图像跟踪与目标识别、图像恢复等方面的研究。E-mail:zhaojy@ ciomp.ac.cn" ]
WANG Xiantao, ZHAO Jinyu. Fusion of NSCT infrared and visible images based on multi-judgment and WLS optimization [J]. Chinese Journal of Liquid Crystals and Displays, 2023, 38(2): 204-215. DOI: 10.37188/CJLCD.2022-0179.
To overcome certain shortcomings of traditional methods and the limitation of extracting information with a single feature, and to further improve infrared and visible image fusion while remaining adaptable to images with different characteristics, an infrared-visible image fusion method based on the non-subsampled contourlet transform (NSCT) with multi-judgment fusion rules and weighted least squares (WLS) optimization is proposed. First, NSCT decomposes each source image at multiple scales into low-frequency and high-frequency subbands. Second, the low-frequency subbands are fused by combining local square entropy with the sum-modified Laplacian (SML), the two measures complementing each other so that a moderate amount of detail is extracted while good contrast is preserved. For the high-frequency subbands, the importance of low-level features is fully exploited: phase congruency (PC), the local weighted sum-modified Laplacian (WSML) and the local weighted energy (WLE) complement one another to fuse the detail layers, and the result is refined by WLS optimization so that the fused details look more natural and better match human visual perception. Finally, the fused low-frequency and high-frequency subbands are inverse-transformed to obtain the fused image. Experiments on images with different characteristics show that, compared with other fusion methods, the proposed method subjectively yields salient targets, clear backgrounds and good visual quality. Among the four objective metrics of average gradient (AG), information entropy (IE), spatial frequency (SF) and mutual information (MI), the proposed method performs best on the first three while keeping MI competitive; in particular, for the uniformly illuminated camp image, AG and SF improve by 6.9% and 4.8% over the best competing values, which verifies the effectiveness of the proposed method.
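Since the quantitative comparison rests on these four metrics, the sketch below shows how AG, IE, SF and MI are commonly computed for 8-bit greyscale images. It assumes the standard definitions used in the image-fusion literature and only NumPy; the paper's exact implementations (window sizes, normalisation) may differ.

```python
# Minimal sketch of the four objective metrics named in the abstract,
# using the definitions commonly found in the image-fusion literature.
import numpy as np

def average_gradient(img):
    """AG: mean local gradient magnitude, a sharpness/detail measure."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def information_entropy(img, bins=256):
    """IE: Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """SF: combines row and column frequencies of grey-level changes."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def mutual_information(a, b, bins=256):
    """MI between two images from their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def fusion_mi(fused, ir, vis):
    """Fusion MI is usually reported as MI(F, IR) + MI(F, VIS)."""
    return mutual_information(fused, ir) + mutual_information(fused, vis)
```

With these definitions, higher AG and SF indicate sharper, more detailed fused results, while the reported MI is the sum of the mutual information between the fused image and each source image.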
image fusion; multi-judgment; non-subsampled contourlet transform; weighted least squares optimization; human visual perception