1. School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, Shaanxi, China
[ "李颀(1973—),女,陕西西安人,博士,教授,2013年于西北工业大学获得博士学位,主要从事机器视觉、自动驾驶、信息融合等方面的研究。E-mail:liqidq@sust.edu.cn" ]
[ "张冉(1998—),女,河南信阳人,硕士研究生,2020年于陕西科技大学获得学士学位,主要从事智能识别、图像处理、自动驾驶等方面的研究。E-mail:Zhangran0709@163.com" ]
LI Qi, ZHANG Ran. Laboratory flame image segmentation and recognition by fusing infrared and visible light[J]. Chinese Journal of Liquid Crystals and Displays, 2023,38(9):1262-1271. DOI: 10.37188/CJLCD.2022-0357.
To achieve laboratory fire recognition and address the problems that a small early-stage flame is not salient in the images captured by the camera and that the smoke accompanying the flame occludes it and degrades segmentation and recognition accuracy, an improved semantic-aware real-time thermal infrared and visible image fusion segmentation network is proposed. Fusing thermal infrared and visible images supplies thermal radiation information that compensates for the spectral information lost to smoke occlusion in the visible image and enhances the saliency of the flame in the early stage of combustion, enabling segmentation of flames occluded by smoke in the laboratory as well as the small flames of early combustion. An intermediate feature transmission block (IFTB) and a weight block are added to the gradient residual dense block (GRDB) of the fusion network to reduce the information loss of the flame image during fusion, enhancing the saliency of the flame image while restoring the structural information of the visible image with minimal content loss. An edge extraction module (EEM) based on gradient transformation is added to the Deeplabv3+ semantic segmentation network to strengthen the edge information of flame and smoke regions with pronounced light-dark transitions in the fused image, reducing the influence of smoke occlusion on flame segmentation and improving segmentation and recognition accuracy. Experimental results show that fusing visible and thermal infrared images improves flame detection, segmentation, and recognition accuracy for early-stage flames. The improved flame segmentation network achieves a mean intersection over union of 91.27% on a self-collected dataset at a segmentation speed of 11.96 FPS, indicating that the improved fusion segmentation network markedly improves laboratory flame and smoke segmentation and recognition and has practical value for laboratory flame and smoke detection.
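Implementation details of the edge extraction module are not given on this page, so the following is only a minimal PyTorch sketch of the general idea the abstract describes: computing Sobel gradient magnitudes of the fused image and injecting them into a Deeplabv3+-style decoder head. The class names, channel sizes, and class count are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a gradient-based edge extraction module (EEM).
# This is NOT the authors' implementation; it only illustrates feeding
# Sobel gradient maps of the fused image into a segmentation head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdgeExtractor(nn.Module):
    """Compute a single-channel gradient-magnitude map from an input image."""
    def __init__(self):
        super().__init__()
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        ky = kx.t()
        # Fixed (non-trainable) Sobel kernels, one per direction.
        self.register_buffer("kx", kx.reshape(1, 1, 3, 3))
        self.register_buffer("ky", ky.reshape(1, 1, 3, 3))

    def forward(self, x):
        # Reduce to a luminance-like single channel before filtering.
        gray = x.mean(dim=1, keepdim=True)
        gx = F.conv2d(gray, self.kx, padding=1)
        gy = F.conv2d(gray, self.ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

class EdgeAugmentedHead(nn.Module):
    """Concatenate the edge map with decoder features and classify pixels."""
    def __init__(self, feat_channels=256, num_classes=3):
        super().__init__()
        self.edge = SobelEdgeExtractor()
        self.fuse = nn.Conv2d(feat_channels + 1, feat_channels, 3, padding=1)
        self.cls = nn.Conv2d(feat_channels, num_classes, 1)

    def forward(self, fused_image, decoder_feats):
        edge = self.edge(fused_image)
        edge = F.interpolate(edge, size=decoder_feats.shape[-2:],
                             mode="bilinear", align_corners=False)
        x = torch.cat([decoder_feats, edge], dim=1)
        return self.cls(F.relu(self.fuse(x)))
```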
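Likewise, the two reported metrics can be illustrated with a small, self-contained sketch (not the authors' evaluation code): mean intersection over union (mIoU) from an accumulated confusion matrix, and segmentation speed in frames per second from timed inference calls. The helper names are placeholders.

```python
# Minimal sketch for the two reported metrics: mIoU and FPS.
import time
import numpy as np

def update_confusion(conf, pred, gt, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix (rows = ground truth)."""
    mask = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    return conf

def mean_iou(conf):
    """Per-class IoU = TP / (TP + FP + FN); mIoU is the mean over classes present."""
    tp = np.diag(conf)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - tp
    valid = denom > 0
    return (tp[valid] / denom[valid]).mean()

def measure_fps(infer_fn, images, warmup=5):
    """Time repeated inference calls; infer_fn maps one image to one mask."""
    for img in images[:warmup]:
        infer_fn(img)
    start = time.perf_counter()
    for img in images:
        infer_fn(img)
    return len(images) / (time.perf_counter() - start)
```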
Keywords: flame and smoke detection; image fusion; semantic segmentation; IFTB; edge extraction