{"defaultlang":"zh","titlegroup":{"articletitle":[{"lang":"zh","data":[{"name":"text","data":"改进的多尺度火焰检测方法"}]},{"lang":"en","data":[{"name":"text","data":"Improved multi-scale flame detection method"}]}]},"contribgroup":{"author":[{"name":[{"lang":"zh","surname":"侯","givenname":"易呈","namestyle":"eastern","prefix":""},{"lang":"en","surname":"HOU","givenname":"Yi-cheng","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":["first-author"],"bio":[{"lang":"zh","text":["侯易呈(1996-), 男, 山西孝义人, 硕士研究生, 2018年于河南城建学院获得学士学位, 主要从事机器视觉及自动控制方面的研究。E-mail:819182588@qq.com"],"graphic":[],"data":[[{"name":"bold","data":[{"name":"text","data":"侯易呈"}]},{"name":"text","data":"(1996-), 男, 山西孝义人, 硕士研究生, 2018年于河南城建学院获得学士学位, 主要从事机器视觉及自动控制方面的研究。E-mail:"},{"name":"text","data":"819182588@qq.com"}]]}],"email":"819182588@qq.com","deceased":false},{"name":[{"lang":"zh","surname":"王","givenname":"慧琴","namestyle":"eastern","prefix":""},{"lang":"en","surname":"WANG","givenname":"Hui-qin","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":["corresp"],"corresp":[{"rid":"cor1","lang":"zh","text":"王慧琴, E-mail: hqwang@xauat.edu.cn","data":[{"name":"text","data":"王慧琴, E-mail: hqwang@xauat.edu.cn"}]}],"bio":[{"lang":"zh","text":["王慧琴(1970-), 女, 山西长治人, 博士, 教授, 2002年于西安交通大学获得博士学位, 主要从事智能信息处理、信息理论与应用、信息技术与管理、数字建筑等研究。E-mail: hqwang@xauat.edu.cn"],"graphic":[],"data":[[{"name":"bold","data":[{"name":"text","data":"王慧琴"}]},{"name":"text","data":"(1970-), 女, 山西长治人, 博士, 教授, 2002年于西安交通大学获得博士学位, 主要从事智能信息处理、信息理论与应用、信息技术与管理、数字建筑等研究。E-mail: 
"},{"name":"text","data":"hqwang@xauat.edu.cn"}]]}],"email":"hqwang@xauat.edu.cn","deceased":false},{"name":[{"lang":"zh","surname":"王","givenname":"可","namestyle":"eastern","prefix":""},{"lang":"en","surname":"WANG","givenname":"Ke","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":[],"deceased":false}],"aff":[{"id":"aff1","intro":[{"lang":"zh","label":"","text":"西安建筑科技大学 信息与控制工程学院, 陕西 西安 710055","data":[{"name":"text","data":"西安建筑科技大学 信息与控制工程学院, 陕西 西安 710055"}]},{"lang":"en","label":"","text":"College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China","data":[{"name":"text","data":"College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China"}]}]}]},"abstracts":[{"lang":"zh","data":[{"name":"p","data":[{"name":"text","data":"网络层数的加深会造成对火焰目标深层特征细节信息表征能力减弱,同时提取了低相关度的冗余特征,导致火焰识别精度不高。针对该问题,提出了一种基于改进Faster R-CNN的火焰检测方法,以提高在深层网络下的火焰识别精度。首先利用ResNet50网络提取火焰特征,并添加SENet模块降低火焰目标冗余特征;然后将深层特征和浅层特征进行多尺度特征融合,增强深层特征的细节信息;最后训练网络,实现对火焰目标的识别定位。实验通过构建VOC火焰数据集进行网络训练,使用测试集进行检测,并进行特征图可视化对比,相比于改进前模型,本文模型平均精度提高了7.78%,召回率提高了9.05%,精确率提高了12.54%。本文提出的火焰目标检测模型,通过结合注意力机制模块和多尺度特征融合机制,能够有效进行火焰目标特征提取,火焰目标的检测结果更加准确。"}]}]},{"lang":"en","data":[{"name":"p","data":[{"name":"text","data":"The deepening of the number of network layers can weaken the ability to characterize the detailed information of the deep features of the flame target, and at the same time extract redundant features with low correlation, resulting in low flame recognition accuracy. Aiming at this problem, a flame detection method based on improved Faster R-CNN is proposed to improve the accuracy of flame recognition in deep networks. Firstly, the ResNet50 network is used to extract flame features, and the SENet module is added to reduce the redundant features of flame targets. 
Then, the deep and shallow features are fused at multiple scales to enhance the detail information of the deep features. Finally, the network is trained to recognize and localize flame targets. In the experiment, a VOC-format flame data set is constructed for network training, the test set is used for detection, and feature map visualizations are compared. Compared with the model before the improvement, the AP increases by 7.78%, the recall increases by 9.05%, and the precision increases by 12.54%. By combining the attention mechanism module and the multi-scale feature fusion mechanism, the flame target detection model proposed in this paper can effectively extract flame target features, and its flame detection results are more accurate."}]}]}],"keyword":[{"lang":"zh","data":[[{"name":"text","data":"目标检测"}],[{"name":"text","data":"卷积网络"}],[{"name":"text","data":"多尺度特征融合"}],[{"name":"text","data":"Faster R-CNN"}],[{"name":"text","data":"SENet"}]]},{"lang":"en","data":[[{"name":"text","data":"target detection"}],[{"name":"text","data":"convolutional network"}],[{"name":"text","data":"multi-scale feature fusion"}],[{"name":"text","data":"Faster 
R-CNN"}],[{"name":"text","data":"SENet"}]]}],"highlights":[],"body":[{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"1"}],"title":[{"name":"text","data":"引言"}],"level":"1","id":"s1"}},{"name":"p","data":[{"name":"text","data":"传统火灾检测方法使用感温、感烟传感器采集火焰数据的静态或动态特征进行火焰检测,对手动提取目标特征的依赖性较高。现阶段,通过图像处理进行目标识别的技术"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"1","type":"bibr","rid":"b1","data":[{"name":"text","data":"1"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"和基于机器视觉的目标检测技术不断发展"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"2","type":"bibr","rid":"b2","data":[{"name":"text","data":"2"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",并且已开始应用到火灾检测领域。"}]},{"name":"p","data":[{"name":"text","data":"传统方法在火焰识别任务中,多注重人工对火焰目标进行特征提取以及分类器的设计"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"blockXref","data":{"data":[{"name":"xref","data":{"text":"3","type":"bibr","rid":"b3","data":[{"name":"text","data":"3"}]}},{"name":"text","data":"-"},{"name":"xref","data":{"text":"5","type":"bibr","rid":"b5","data":[{"name":"text","data":"5"}]}}],"rid":["b3","b4","b5"],"text":"3-5","type":"bibr"}},{"name":"text","data":"]"}]},{"name":"text","data":"。蔡敏"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"6","type":"bibr","rid":"b6","data":[{"name":"text","data":"6"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"将运动分割分为运用目标检测和图像分割两个方面,并提出对VIBE算法的两点改进,用于森林火灾视频检测。刘宇欣"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"7","type":"bibr","rid":"b7","data":[{"name":"text","data":"7"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"将人工蜂群算法图像处理分割方法、边缘检测技术以及特征融合等技术应用于矿用带式输送机的火灾检测中。苗续芝"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"和王中林"},{"name":"sup",
"data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"9","type":"bibr","rid":"b9","data":[{"name":"text","data":"9"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"分别改进了果蝇优化SVM模型和增量支持向量机算法,提高了火灾检测识别的准确率。相比较于传统检测方法,深度学习方法能够自动提取目标特征,在图像识别领域得到了广泛的应用"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"blockXref","data":{"data":[{"name":"xref","data":{"text":"10","type":"bibr","rid":"b10","data":[{"name":"text","data":"10"}]}},{"name":"text","data":"-"},{"name":"xref","data":{"text":"12","type":"bibr","rid":"b12","data":[{"name":"text","data":"12"}]}}],"rid":["b10","b11","b12"],"text":"10-12","type":"bibr"}},{"name":"text","data":"]"}]},{"name":"text","data":"。同时,也有学者将深度学习方法应用到了火焰识别检测任务中。任嘉峰"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"b13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出一种引入K-means聚类算法的YOLOV3算法进行火焰识别检测,通过迁移学习实现了对小样本的识别分类。徐登"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"b14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"使用改进的双流卷积神经网络,提高了火焰目标识别率。段锁林"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"15","type":"bibr","rid":"b15","data":[{"name":"text","data":"15"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"对火焰区域的RGB通道做灰度处理和二值化处理,形成9通道的三维数据输入,并且对Relu激活函数进行了修改,以平衡特征数量。回天"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"b16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出基于Faster R-CNN的火焰识别检测方法,使用AlexNet作为基础特征提取网络,通过迁移学习完成对不同类别火焰样本的识别与分类。"}]},{"name":"p","data":[{"name":"text","data":"利用卷积神经网络进行火焰识别的过程中,随着网络卷积层数的加深,火焰特征的语义信息变抽象,分辨率降低,火焰深层特征中的火焰目标细节信息减少,同时提取到了与火焰目标低相关性的冗余特征,导致火焰识别率不高。本文提出了一种基于Faster 
R-CNN"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"17","type":"bibr","rid":"b17","data":[{"name":"text","data":"17"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"网络的火焰识别检测方法,使用深度残差网络(Residual Network, ResNet)"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"18","type":"bibr","rid":"b18","data":[{"name":"text","data":"18"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"进行特征提取,添加压缩和激励网络(Squeeze-and-Excitation Networks, SENet)模块"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"19","type":"bibr","rid":"b19","data":[{"name":"text","data":"19"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"减少低相关度冗余特征,然后使用多尺度特征融合结构,通过通道叠加补充深层特征中特征信息的不足,为具有抽象语义信息的深层特征添加具有丰富细节信息的浅层特征。该方法可以提升火焰目标检测的精度。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2"}],"title":[{"name":"text","data":"Faster R-CNN算法"}],"level":"1","id":"s2"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.1"}],"title":[{"name":"text","data":"Faster R-CNN简介"}],"level":"2","id":"s2-1"}},{"name":"p","data":[{"name":"text","data":"Faster R-CNN目标检测模型是通过优化R-CNN和Fast R-CNN模型而来的高性能模型,"},{"name":"xref","data":{"text":"图 1","type":"fig","rid":"Figure1","data":[{"name":"text","data":"图 1"}]}},{"name":"text","data":"为Faster R-CNN网络结构,主要由4部分组成:特征提取网络、区域建议网络(Region Proposal Network, RPN)、兴趣池化网络(Region of interest pooling Network, RoI)和分类回归网络,其图像处理步骤为:"}]},{"name":"fig","data":{"id":"Figure1","caption":[{"lang":"zh","label":[{"name":"text","data":"图1"}],"title":[{"name":"text","data":"Faster R-CNN网络结构图"}]},{"lang":"en","label":[{"name":"text","data":"Fig 1"}],"title":[{"name":"text","data":"Faster R-CNN network structure 
diagram"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576055&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576055&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576055&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"Step 1.将原始图片处理为224×224大小输入网络,经过卷积神经网络(Convolutional Neural Networks, CNN)"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"20","type":"bibr","rid":"b20","data":[{"name":"text","data":"20"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提取图像特征,将图片信息编码到深层维度,提取到的特征图作为区域建议网络和兴趣池化网络的共享特征层。"}]},{"name":"p","data":[{"name":"text","data":"Step 2.区域建议网络"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"21","type":"bibr","rid":"b21","data":[{"name":"text","data":"21"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"通过在共享特征图上添加滑动窗口将空间窗口映射到低维向量,在每个滑动的位置都会预测出"},{"name":"italic","data":[{"name":"text","data":"k"}]},{"name":"text","data":"个区域建议框。然后连接两个并行的全连接层,分别得到2"},{"name":"italic","data":[{"name":"text","data":"k"}]},{"name":"text","data":"个对应的分类层输出和4"},{"name":"italic","data":[{"name":"text","data":"k"}]},{"name":"text","data":"个对应的回归层输出。接着采用非极大抑制法以交并比作为筛选指标选取得分排名前300的目标建议框。"}]},{"name":"p","data":[{"name":"text","data":"Step 3.兴趣池化网络将共享特征层和目标建议框作为输入,将不同大小的感兴趣区域通过池化操作降维成7×7尺寸的特征向量,使得输入图片不要求固定尺寸。"}]},{"name":"p","data":[{"name":"text","data":"Step 4.利用Softmax损失函数完成分类任务,利用smooth"},{"name":"sub","data":[{"name":"text","data":"L1"}]},{"name":"text","data":"损失函数完成回归定位任务。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.2"}],"title":[{"name":"text","data":"RPN网络损失函数"}],"level":"2","id":"s2-2"}},{"name":"p","data":[{"name":"text","data":"区域建议网络总体损失函数的定义如下所示:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"1"}],"data":[{"name":"text","data":" 
"},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576056&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576056&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576056&type=middle"}}}],"id":"yjyxs-36-5-751-E1"}}]},{"name":"p","data":[{"name":"text","data":"分类损失函数"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"cls"}]},{"name":"text","data":"的定义如下所示:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"2"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576057&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576057&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576057&type=middle"}}}],"id":"yjyxs-36-5-751-E2"}}]},{"name":"p","data":[{"name":"text","data":"回归损失函数"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"reg"}]},{"name":"text","data":"的定义如下所示:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"3"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576058&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576058&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576058&type=middle"}}}],"id":"yjyxs-36-5-751-E3"}}]},{"name":"p","data":[{"name":"text","data":"其中,"},{"name":"italic","data":[{"name":"text","data":"R"}]},{"name":"text","data":"为smoothL1函数,其公式如下所示:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"4"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576059&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576059&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576059&type=middle"}}}],"id":"yjyxs-36-5-751-E4"}}]},{"name":"p","data":[{"name":"text","data":"式中,"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"cls"}]},{"name":"text","data":"表示边框分类层归一化,即mini-batch大小;"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"reg"}]},{"name":"text","data":"表示边框回归层归一化,即anchor数量;"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"text","data":"表示平衡因子,通常取"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"text","data":"=1;"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"表示每批次训练中候选框索引;"},{"name":"italic","data":[{"name":"text","data":"p"},{"name":"sub","data":[{"name":"text","data":"i"}]}]},{"name":"text","data":"表示候选框中存在目标的概率,"},{"name":"italic","data":[{"name":"text","data":"p"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]}]},{"name":"sup","data":[{"name":"text","data":"*"}]},{"name":"text",
"data":"表示类别标签,正样本、负样本分别为1、0;"},{"name":"italic","data":[{"name":"text","data":"t"},{"name":"sub","data":[{"name":"text","data":"i"}]}]},{"name":"text","data":"表示预测框相对于候选框的偏移坐标,"},{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]}]},{"name":"sup","data":[{"name":"text","data":"*"}]},{"name":"text","data":"表示标记框相对于候选框的偏移坐标。"}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3"}],"title":[{"name":"text","data":"基于Faster R-CNN的多尺度改进实现"}],"level":"1","id":"s3"}},{"name":"p","data":[{"name":"text","data":"本文在Faster R-CNN中使用ResNet50作为特征提取网络,并添加注意力机制模块提升火焰相关特征通道的重要性,然后通过多尺度结构在深层特征中添加浅层特征,增强火焰特征表达能力,以提高火焰识别精度。"}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.1"}],"title":[{"name":"text","data":"改进残差网络"}],"level":"2","id":"s3-1"}},{"name":"p","data":[{"name":"text","data":"深度学习中,加深网络深度会遇到梯度爆炸和梯度消失问题。对该问题的传统解决方法主要是对数据进行初始化和正则化,但随着网络深度进一步加深,训练误差和测试误差反而上升,造成网络性能退化。何恺明使用多达152层的ResNet网络在ImageNet上进行实验,结果表明深度残差网络在深度增加的情况下,有效提高了准确率,解决了因深度增加造成的梯度消失和网络性能退化的问题。本文特征提取网络部分采用ResNet50网络进行特征提取,增加SENet注意力机制模块,简称为SE-ResNet50, ResNet和SE-ResNet模块分别如"},{"name":"xref","data":{"text":"图 2","type":"fig","rid":"Figure2","data":[{"name":"text","data":"图 2"}]}},{"name":"text","data":"、"},{"name":"xref","data":{"text":"图 3","type":"fig","rid":"Figure3","data":[{"name":"text","data":"图 3"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure2","caption":[{"lang":"zh","label":[{"name":"text","data":"图2"}],"title":[{"name":"text","data":"ResNet模块"}]},{"lang":"en","label":[{"name":"text","data":"Fig 2"}],"title":[{"name":"text","data":"Block diagram of ResNet 
module"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576061&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576061&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576061&type=middle"}]}},{"name":"fig","data":{"id":"Figure3","caption":[{"lang":"zh","label":[{"name":"text","data":"图3"}],"title":[{"name":"text","data":"SE-ResNet模块"}]},{"lang":"en","label":[{"name":"text","data":"Fig 3"}],"title":[{"name":"text","data":"Block diagram of SE-ResNet module"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576063&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576063&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576063&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"在深度残差网络模块中,存在两种映射关系,一种为恒等映射,指的是把当前网络输入当作输出直接传输到下一层网络中,跳过当前网络。另一种为残差映射,最终的输出为"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")="},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")+"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":",此时当"},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")=0时,则"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")="},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"。在此基础上,改变了残差网络的学习目标,直接学习"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"("},{
"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")和"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"的差值,就是残差映射"},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")="},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")-"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"text","data":"本文在深度残差模块中添加了SENet注意力机制模块,通过学习的方式自动获取每个特征通道的重要程度,并且根据该通道的重要程度提升有用特征,抑制无用特征,以此来使提取到的特征具有更强的表征能力。压缩和激励网络结构如"},{"name":"xref","data":{"text":"图 4","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure4","caption":[{"lang":"zh","label":[{"name":"text","data":"图4"}],"title":[{"name":"text","data":"注意力机制模块"}]},{"lang":"en","label":[{"name":"text","data":"Fig 4"}],"title":[{"name":"text","data":"Block diagram of attention mechanism module"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576065&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576065&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576065&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"压缩和激励网络执行步骤如下:"}]},{"name":"p","data":[{"name":"text","data":"(1) 首先对特征U进行挤压(Squeeze)操作,该操作通过全局平均池化(Global average pooling)将二维特征压缩成一个1×1×"},{"name":"italic","data":[{"name":"text","data":"C"}]},{"name":"text","data":"的实数数列。该实数数列在一定意义上具有全局感受野,使得低层网络也可以具有全局信息。全局平均池化计算方式为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"5"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576067&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576067&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576067&type=middle"}}}],"id":"yjyxs-36-5-751-E5"}}]},{"name":"p","data":[{"name":"text","data":"(2) 接入激励(Excitation)操作,自主学习每个特征通道之间的非线性交互关系,根据重要性的不同赋予不同的权重。先通过全连接层降维以减少计算量,经ReLU函数激活后,再通过全连接层恢复原通道维度,得到与输入通道数一致的输出。激励操作计算方式为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"6"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576068&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576068&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576068&type=middle"}}}],"id":"yjyxs-36-5-751-E6"}}]},{"name":"p","data":[{"name":"text","data":"(3) 经过重新赋值(Reweight)操作,如式(7)所示,经过激励操作后的结果输出可以看作是经过特征选择后的每个特征通道的重要性,然后通过乘法将每个通道加权到之前的特征上,完成对原始特征的重新赋值,实现注意力机制。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"7"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576069&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576069&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576069&type=middle"}}}],"id":"yjyxs-36-5-751-E7"}}]},{"name":"p","data":[{"name":"bold","data":[{"name":"italic","data":[{"name":"text","data":"u"}]}]},{"name":"sub","data":[{"name":"text","data":"c"}]},{"name":"text","data":"表示"},{"name":"bold","data":[{"name":"italic","data":[{"name":"text","data":"u"}]}]},{"name":"text","data":"中第"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"text","data":"个二维矩阵,"},{"name":"italic","data":[{"name":"text","data":"s"}]},{"name":"sub","data":[{"name":"text","data":"c"}]},{"name":"text","data":"表示激励操作的输出权重。"}]},{"name":"p","data":[{"name":"text","data":"添加压缩和激励模块后,主要添加的参数量为两个全连接层增加的参数,每个全连接层的维度为"},{"name":"italic","data":[{"name":"text","data":"C"}]},{"name":"sup","data":[{"name":"text","data":"2"}]},{"name":"text","data":"/"},{"name":"italic","data":[{"name":"text","data":"r"}]},{"name":"text","data":",两个全连接层的参数量就是2"},{"name":"italic","data":[{"name":"text","data":"C"}]},{"name":"sup","data":[{"name":"text","data":"2"}]},{"name":"text","data":"/"},{"name":"italic","data":[{"name":"text","data":"r"}]},{"name":"text","data":",其中"},{"name":"italic","data":[{"name":"text","data":"r"}]},{"name":"text","data":"为缩放参数,一般取16。在ResNet50中,含有5个阶段,每个阶段包含"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"text","data":"个重复的残差模块,添加压缩和激励后,参数量为"},{"name":"inlineformula","data":[{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576077&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576077&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576077&type=middle"}}}]},{"na
me":"text","data":"。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.2"}],"title":[{"name":"text","data":"多尺度网络结构"}],"level":"2","id":"s3-2"}},{"name":"p","data":[{"name":"text","data":"由于卷积神经网络通过逐层抽象的方式来提取目标的特征,提取到的高层特征的感受野比较大,语义信息表征能力强,但是特征分辨率小,几何信息的表征能力弱,适合处理大目标;而提取到的低层特征的感受野比较小,语义信息表征能力弱,但是特征分辨率大,几何信息的表征能力强,适合处理小目标。为了节省计算量,使用Block4特征图进行火焰目标检测,Block5用作最后的分类和回归。Block4层特征图的语义信息较为丰富,能更好地反映火焰图像全局特征,但是分辨率低,细节信息表征能力弱,所以不能精确地对火焰进行检测。通过分析,本文提出一种基于Faster R-CNN的多尺度特征融合算法,通过在深层特征上添加浅层特征信息进行特征增强,提高对火焰目标的识别检测准确率。"}]},{"name":"p","data":[{"name":"text","data":"网络结构如"},{"name":"xref","data":{"text":"图 5","type":"fig","rid":"Figure5","data":[{"name":"text","data":"图 5"}]}},{"name":"text","data":"所示,分别对Block1, 2, 3的输出用1×1的卷积做通道变换,将其通道数从64,256,512变换为256,512,1 024,并引入批规范化(Batch Normalization,BN)处理,将数据规范到"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"text","data":"(0,1)的正态分布,然后将Block2,3,4的输出分别和上一层进行通道变换后的输出进行concatenate通道拼接,然后使用ReLU函数进行激活,以此作为Block2,3,4的输出。利用该结构,能够增强火焰目标的特征表达能力,并且任意下一层的输入都来自前面两层的输出,加强了对特征的重用"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"22","type":"bibr","rid":"b22","data":[{"name":"text","data":"22"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",使得网络更利于训练,具有一定的正则化效果,缓解了梯度消失和模型退化问题。同时引入批规范化处理,能够加快训练速度,并提高网络泛化能力。"}]},{"name":"fig","data":{"id":"Figure5","caption":[{"lang":"zh","label":[{"name":"text","data":"图5"}],"title":[{"name":"text","data":"多尺度结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig 5"}],"title":[{"name":"text","data":"Block diagram of multi-scale feature fusion 
mechanism"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576070&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576070&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576070&type=middle"}]}}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4"}],"title":[{"name":"text","data":"实验与结果分析"}],"level":"1","id":"s4"}},{"name":"p","data":[{"name":"text","data":"本文所使用的主要硬件参数为:操作系统为Windows10 64 bit,内存为8 GByte,GPU设备为GTX2070,使用Python 3.6语言,搭建平台为Keras,训练和测试软件为PyCharm。"}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4.1"}],"title":[{"name":"text","data":"数据集处理"}],"level":"2","id":"s4-1"}},{"name":"p","data":[{"name":"text","data":"在实际情况中,火焰图片多采集自视觉模块。通过查找有关火焰检测实验的公开数据集,并且添加从不同角度拍摄的火焰图像数据,扩充原本公开数据集以作为本文的实验数据集,使得该样本数据集更接近实际数据。由于原始图像尺寸过大,会在模型训练时增加计算量,在不影响整体细节的情况下,将图像裁剪为224×224像素大小,并将格式转换为PNG格式,按照PASCAL VOC格式构建火焰数据集。本次实验共采用10 078张火焰图像构建数据集,从扩充以后的图像数据中随机选择8 163张图像作为训练集,907张作为验证集,剩余1 008张作为测试集。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4.2"}],"title":[{"name":"text","data":"评价指标"}],"level":"2","id":"s4-2"}},{"name":"p","data":[{"name":"text","data":"深度学习中对分类器模型进行评估主要使用精确率(Precision)和召回率(Recall)。通过混淆矩阵进行计算,混淆矩阵如"},{"name":"xref","data":{"text":"表 1","type":"table","rid":"Table1","data":[{"name":"text","data":"表 1"}]}},{"name":"text","data":"所示。"}]},{"name":"table","data":{"id":"Table1","caption":[{"lang":"zh","label":[{"name":"text","data":"表1"}],"title":[{"name":"text","data":"混淆矩阵"}]},{"lang":"en","label":[{"name":"text","data":"Table 1"}],"title":[{"name":"text","data":"Confusion 
matrix"}]}],"note":[],"table":[{"head":[[{"align":"center","data":[]},{"align":"center","data":[{"name":"text","data":"Positive"}]},{"align":"center","data":[{"name":"text","data":"Negative"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"True"}]},{"align":"center","data":[{"name":"text","data":"TP"}]},{"align":"center","data":[{"name":"text","data":"FP"}]}],[{"align":"center","data":[{"name":"text","data":"False"}]},{"align":"center","data":[{"name":"text","data":"FN"}]},{"align":"center","data":[{"name":"text","data":"TN"}]}]],"foot":[]}]}},{"name":"p","data":[{"name":"text","data":"表中,TP(True Positive)表示将正类预测为正类数;TN(True Negative)表示将负类预测为负类数;FP(False Positive)表示将负类预测为正类数,也称为误报数;FN(False Negative)表示将正类预测为负类数,也称为漏报数。"}]},{"name":"p","data":[{"name":"text","data":"精确率表示预测为正的样本中正确预测所占的比例,如下所示:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"8"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576071&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576071&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576071&type=middle"}}}],"id":"yjyxs-36-5-751-E8"}}]},{"name":"p","data":[{"name":"text","data":"召回率表示正例样本中被正确预测的比例,如下所示:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"9"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576072&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576072&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576072&type=middle"}}}],"id":"yjyxs-36-5-751-E9"}}]},{"name":"p","data":[{"name":"text","data":"同时在目标检测任务中,使用平均精度均值(Mean average 
precision,mAP)作为衡量模型性能的指标。在本任务中,对火焰的识别检测为单类识别,平均精度均值指标与平均精度(Average precision,AP)等价,所以使用平均精度来作为本任务中模型评价指标,平均精度指标计算如式(10)所示:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"10"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576073&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576073&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576073&type=middle"}}}],"id":"yjyxs-36-5-751-E10"}}]},{"name":"p","data":[{"name":"text","data":"式中,"},{"name":"italic","data":[{"name":"text","data":"R"}]},{"name":"text","data":"表示测试集中所有正样本的个数;"},{"name":"italic","data":[{"name":"text","data":"M"}]},{"name":"text","data":"表示测试集总样本个数;"},{"name":"italic","data":[{"name":"text","data":"I"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]}]},{"name":"text","data":"=1表示第"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"个样本是正样本,若是负样本,则"},{"name":"italic","data":[{"name":"text","data":"I"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]}]},{"name":"text","data":"=0;"},{"name":"italic","data":[{"name":"text","data":"R"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]}]},{"name":"text","data":"表示前"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"个样本中正样本的个数。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4.3"}],"title":[{"name":"text","data":"实验结果分析"}],"level":"2","id":"s4-3"}},{"name":"p","data":[{"name":"text","data":"使用本文算法和改进前算法在本文测试集上进行测试,实验结果如"},{"name":"xref","data":{"text":"表 2","type":"table","rid":"Table2","data":[{"name":"text","data":"表 
2"}]}},{"name":"text","data":"所示。"}]},{"name":"table","data":{"id":"Table2","caption":[{"lang":"zh","label":[{"name":"text","data":"表2"}],"title":[{"name":"text","data":"改进前后对比"}]},{"lang":"en","label":[{"name":"text","data":"Table 2"}],"title":[{"name":"text","data":"Comparison before and after improvement"}]}],"note":[],"table":[{"head":[[{"align":"center","data":[]},{"align":"center","data":[{"name":"text","data":"AP/%"}]},{"align":"center","data":[{"name":"text","data":"Recall/%"}]},{"align":"center","data":[{"name":"text","data":"Precision/%"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"改进前"}]},{"align":"center","data":[{"name":"text","data":"79.51"}]},{"align":"center","data":[{"name":"text","data":"69.60"}]},{"align":"center","data":[{"name":"text","data":"56.59"}]}],[{"align":"center","data":[{"name":"text","data":"改进后"}]},{"align":"center","data":[{"name":"text","data":"87.29"}]},{"align":"center","data":[{"name":"text","data":"78.65"}]},{"align":"center","data":[{"name":"text","data":"69.13"}]}]],"foot":[]}]}},{"name":"p","data":[{"name":"text","data":"通过表中数据对比可以发现,与仅以ResNet50作为特征提取网络的改进前算法相比,本文算法的平均精度、召回率、精确率分别提高了7.78%、9.05%、12.54%,表明本文方法提取到的火焰目标特征能够有效提升火焰识别精度。原因在于:加入SENet注意力机制模块的ResNet50特征提取网络抑制了与火焰目标相关度低的无关特征通道,并提高了火焰相关通道的权重;同时,多尺度结构在提取到的深层特征上融合了浅层的细节特征,增强了火焰目标特征的表征能力,有效提高了模型对火焰目标的识别性能。"}]},{"name":"p","data":[{"name":"text","data":"为进一步对比本文算法在特征提取部分的优势,进行特征图可视化对比实验,结果如"},{"name":"xref","data":{"text":"图 6","type":"fig","rid":"Figure6","data":[{"name":"text","data":"图 6"}]}},{"name":"text","data":"所示。由图中可以看出,随着网络深度加深,火焰目标的位置、形状等细节信息随感受野的增大而逐渐稀疏,导致无法分辨火焰的轮廓、纹理等信息。与ResNet50相比,本文方法由于使用了多尺度特征融合结构,在火焰目标深层特征中保留了更为明显的纹理、颜色、轮廓等浅层细节信息;同时,SENet对提取到的特征通道进行了重新标定,抑制了与目标特征相关性低的通道,增强了颜色等相关性高的通道信息,使提取到的火焰特征表达能力更强。"}]},{"name":"fig","data":{"id":"Figure6","caption":[{"lang":"zh","label":[{"name":"text","data":"图6"}],"title":[{"name":"text","data":"特征图可视化"}]},{"lang":"en","label":[{"name":"text","data":"Fig 
6"}],"title":[{"name":"text","data":"Visualization of feature map"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576074&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576074&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576074&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"在测试集中,选取不同场景下的火焰图像对本文算法模型以及改进前算法进行性能测试对比,结果如"},{"name":"xref","data":{"text":"图 7","type":"fig","rid":"Figure7","data":[{"name":"text","data":"图 7"}]}},{"name":"text","data":"所示。可以看出,以ResNet50作为特征提取网络进行火焰目标识别时,存在识别率低的问题;在复杂背景下还存在复检问题,且对小目标的检测效果不佳。而本文方法能够适应不同背景下火焰目标的识别检测,对不同环境下的火焰均有较好的识别结果。"}]},{"name":"fig","data":{"id":"Figure7","caption":[{"lang":"zh","label":[{"name":"text","data":"图7"}],"title":[{"name":"text","data":"检测结果对比"}]},{"lang":"en","label":[{"name":"text","data":"Fig 7"}],"title":[{"name":"text","data":"Comparison of test results"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576075&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576075&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=19576075&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"对图片识别时间进行对比,其结果如"},{"name":"xref","data":{"text":"表 3","type":"table","rid":"Table3","data":[{"name":"text","data":"表 3"}]}},{"name":"text","data":"所示。与改进前相比,本文算法对每张图片的识别时间增加了0.03 s,原因是添加SENet模块并采用多尺度特征融合机制增加了模型参数量,从而增加了识别耗时。"}]},{"name":"table","data":{"id":"Table3","caption":[{"lang":"zh","label":[{"name":"text","data":"表3"}],"title":[{"name":"text","data":"改进前后参数量和时间的比较"}]},{"lang":"en","label":[{"name":"text","data":"Table 3"}],"title":[{"name":"text","data":"Comparison of parameter quantity and time before and after 
improvement"}]}],"note":[],"table":[{"head":[[{"align":"center","data":[{"name":"text","data":"Method"}]},{"align":"center","data":[{"name":"text","data":"Params"}]},{"align":"center","data":[{"name":"text","data":"Time/(s·pcs"},{"name":"sup","data":[{"name":"text","data":"-1"}]},{"name":"text","data":")"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"改进前"}]},{"align":"center","data":[{"name":"text","data":"28 235 955"}]},{"align":"center","data":[{"name":"text","data":"0.21"}]}],[{"align":"center","data":[{"name":"text","data":"改进后"}]},{"align":"center","data":[{"name":"text","data":"39 599 715"}]},{"align":"center","data":[{"name":"text","data":"0.24"}]}]],"foot":[]}]}},{"name":"p","data":[{"name":"text","data":"为进一步验证本文算法的效果,将本文算法与已有学者提出的火焰识别检测方法进行比较,实验结果如"},{"name":"xref","data":{"text":"表 4","type":"table","rid":"Table4","data":[{"name":"text","data":"表 4"}]}},{"name":"text","data":"所示。本文方法的准确率相比文献["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"b14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]和文献["},{"name":"xref","data":{"text":"23","type":"bibr","rid":"b23","data":[{"name":"text","data":"23"}]}},{"name":"text","data":"]均有所提高,其中文献["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"b14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]使用改进的双流卷积神经网络进行火焰目标提取,文献["},{"name":"xref","data":{"text":"23","type":"bibr","rid":"b23","data":[{"name":"text","data":"23"}]}},{"name":"text","data":"]通过采用多个不同尺度融合的方式进行目标检测。在特征提取方面,本文方法能够提取到信息更丰富的火焰特征,因此取得了更好的效果。"}]},{"name":"table","data":{"id":"Table4","caption":[{"lang":"zh","label":[{"name":"text","data":"表4"}],"title":[{"name":"text","data":"与现有学习算法准确率对比"}]},{"lang":"en","label":[{"name":"text","data":"Table 4"}],"title":[{"name":"text","data":"Comparison with accuracy of existing learning 
algorithms"}]}],"note":[],"table":[{"head":[[{"align":"center","data":[{"name":"text","data":"Method"}]},{"align":"center","data":[{"name":"text","data":"Acc/%"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"文献["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"b14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]},{"align":"center","data":[{"name":"text","data":"93.50"}]}],[{"align":"center","data":[{"name":"text","data":"文献["},{"name":"xref","data":{"text":"23","type":"bibr","rid":"b23","data":[{"name":"text","data":"23"}]}},{"name":"text","data":"]"}]},{"align":"center","data":[{"name":"text","data":"92.12"}]}],[{"align":"center","data":[{"name":"text","data":"Ours"}]},{"align":"center","data":[{"name":"text","data":"96.52"}]}]],"foot":[]}]}}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5"}],"title":[{"name":"text","data":"结论"}],"level":"1","id":"s5"}},{"name":"p","data":[{"name":"text","data":"针对火焰目标检测,本文提出一种基于Faster R-CNN模型的改进算法,提高深度学习网络对火焰识别的准确率。首先使用ResNet50作为基础特征提取网络进行特征提取,接着添加SENet模块通过通道注意力机制,增强目标相关性高的特征通道,然后使用多尺度结构将浅层特征进行通道变换并和深层特征进行通道相加,最后完成对火焰区域的识别检测。实验证明该方法能够克服深度网络中深层特征几何信息的缺失问题,并抑制无用特征,提取更有效的火焰特征,完成对火焰目标的识别检测。对比改进前的算法,其火焰识别检测平均精度提高了7.78%,召回率提高了9.05%,精确率提高了12.54%。有效地提高了火焰识别效果。"}]}]}],"footnote":[],"reflist":{"title":[{"name":"text","data":"参考文献"}],"data":[{"id":"b1","label":"1","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"黄 博"},{"name":"text","data":" , "},{"name":"text","data":"江 慎旺"},{"name":"text","data":" , "},{"name":"text","data":"张 增"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"自适应特征引流管故障智能识别方法"},{"name":"text","data":" . "},{"name":"text","data":"中国光学"},{"name":"text","data":" , "},{"name":"text","data":"2017"},{"name":"text","data":" . 
"},{"name":"text","data":"10"},{"name":"text","data":" ( "},{"name":"text","data":"3"},{"name":"text","data":" ): "},{"name":"text","data":"340"},{"name":"text","data":" - "},{"name":"text","data":"347"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGGA201703006.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGGA201703006.htm"}},{"name":"text","data":"."}],"title":"自适应特征引流管故障智能识别方法"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"B HUANG"},{"name":"text","data":" , "},{"name":"text","data":"S W JIANG"},{"name":"text","data":" , "},{"name":"text","data":"Z ZHANG"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Intelligent identification algorithm of adaptive feature drainage tube fault"},{"name":"text","data":" . "},{"name":"text","data":"China Optics"},{"name":"text","data":" , "},{"name":"text","data":"2017"},{"name":"text","data":" . "},{"name":"text","data":"10"},{"name":"text","data":" ( "},{"name":"text","data":"3"},{"name":"text","data":" ): "},{"name":"text","data":"340"},{"name":"text","data":" - "},{"name":"text","data":"347"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGGA201703006.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGGA201703006.htm"}},{"name":"text","data":"."}],"title":"Intelligent identification algorithm of adaptive feature drainage tube fault"}]},{"id":"b2","label":"2","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"耿 庆田"},{"name":"text","data":" , "},{"name":"text","data":"赵 浩宇"},{"name":"text","data":" , "},{"name":"text","data":"于 繁华"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . 
"},{"name":"text","data":"基于改进HOG特征提取的车型识别算法"},{"name":"text","data":" . "},{"name":"text","data":"中国光学"},{"name":"text","data":" , "},{"name":"text","data":"2018"},{"name":"text","data":" . "},{"name":"text","data":"11"},{"name":"text","data":" ( "},{"name":"text","data":"2"},{"name":"text","data":" ): "},{"name":"text","data":"174"},{"name":"text","data":" - "},{"name":"text","data":"181"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGGA201802003.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGGA201802003.htm"}},{"name":"text","data":"."}],"title":"基于改进HOG特征提取的车型识别算法"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"Q T GENG"},{"name":"text","data":" , "},{"name":"text","data":"H Y ZHAO"},{"name":"text","data":" , "},{"name":"text","data":"F H YU"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Vehicle type recognition algorithm based on improved HOG feature"},{"name":"text","data":" . "},{"name":"text","data":"China Optics"},{"name":"text","data":" , "},{"name":"text","data":"2018"},{"name":"text","data":" . "},{"name":"text","data":"11"},{"name":"text","data":" ( "},{"name":"text","data":"2"},{"name":"text","data":" ): "},{"name":"text","data":"174"},{"name":"text","data":" - "},{"name":"text","data":"181"},{"name":"text","data":" . 
"},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGGA201802003.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGGA201802003.htm"}},{"name":"text","data":"."}],"title":"Vehicle type recognition algorithm based on improved HOG feature"}]},{"id":"b3","label":"3","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"张 健"},{"name":"text","data":" , "},{"name":"text","data":"钟 中志"},{"name":"text","data":" , "},{"name":"text","data":"柯 艳国"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"基于超像素和DTCWT的火焰检测"},{"name":"text","data":" . "},{"name":"text","data":"信息技术"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"43"},{"name":"text","data":" ( "},{"name":"text","data":"6"},{"name":"text","data":" ): "},{"name":"text","data":"31"},{"name":"text","data":" - "},{"name":"text","data":"34"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-HDZJ201906008.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-HDZJ201906008.htm"}},{"name":"text","data":"."}],"title":"基于超像素和DTCWT的火焰检测"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"J ZHANG"},{"name":"text","data":" , "},{"name":"text","data":"Z Z ZHONG"},{"name":"text","data":" , "},{"name":"text","data":"Y G KE"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Flame detection based on super-pixel and DTCWT"},{"name":"text","data":" . "},{"name":"text","data":"Information Technology"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . 
"},{"name":"text","data":"43"},{"name":"text","data":" ( "},{"name":"text","data":"6"},{"name":"text","data":" ): "},{"name":"text","data":"31"},{"name":"text","data":" - "},{"name":"text","data":"34"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-HDZJ201906008.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-HDZJ201906008.htm"}},{"name":"text","data":"."}],"title":"Flame detection based on super-pixel and DTCWT"}]},{"id":"b4","label":"4","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"刘 凯"},{"name":"text","data":" , "},{"name":"text","data":"魏 艳秀"},{"name":"text","data":" , "},{"name":"text","data":"许 京港"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"基于计算机视觉的森林火灾识别算法设计"},{"name":"text","data":" . "},{"name":"text","data":"森林工程"},{"name":"text","data":" , "},{"name":"text","data":"2018"},{"name":"text","data":" . "},{"name":"text","data":"34"},{"name":"text","data":" ( "},{"name":"text","data":"4"},{"name":"text","data":" ): "},{"name":"text","data":"89"},{"name":"text","data":" - "},{"name":"text","data":"95"},{"name":"text","data":" . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3969/j.issn.1006-8023.2018.04.015"}],"href":"http://doi.org/10.3969/j.issn.1006-8023.2018.04.015"}},{"name":"text","data":"."}],"title":"基于计算机视觉的森林火灾识别算法设计"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"K LIU"},{"name":"text","data":" , "},{"name":"text","data":"Y X WEI"},{"name":"text","data":" , "},{"name":"text","data":"J G XU"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . 
"},{"name":"text","data":"Design of forest fire identification algorithm based on computer vision"},{"name":"text","data":" . "},{"name":"text","data":"Forest Engineering"},{"name":"text","data":" , "},{"name":"text","data":"2018"},{"name":"text","data":" . "},{"name":"text","data":"34"},{"name":"text","data":" ( "},{"name":"text","data":"4"},{"name":"text","data":" ): "},{"name":"text","data":"89"},{"name":"text","data":" - "},{"name":"text","data":"95"},{"name":"text","data":" . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3969/j.issn.1006-8023.2018.04.015"}],"href":"http://doi.org/10.3969/j.issn.1006-8023.2018.04.015"}},{"name":"text","data":"."}],"title":"Design of forest fire identification algorithm based on computer vision"}]},{"id":"b5","label":"5","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"张 杰"},{"name":"text","data":" , "},{"name":"text","data":"隋 阳"},{"name":"text","data":" , "},{"name":"text","data":"李 强"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"基于卷积神经网络的火灾视频图像检测"},{"name":"text","data":" . "},{"name":"text","data":"电子技术应用"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"45"},{"name":"text","data":" ( "},{"name":"text","data":"4"},{"name":"text","data":" ): "},{"name":"text","data":"34"},{"name":"text","data":" - "},{"name":"text","data":"38, 44"},{"name":"text","data":" . 
"},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-DZJY201904008.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-DZJY201904008.htm"}},{"name":"text","data":"."}],"title":"基于卷积神经网络的火灾视频图像检测"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"J ZHANG"},{"name":"text","data":" , "},{"name":"text","data":"Y SUI"},{"name":"text","data":" , "},{"name":"text","data":"Q LI"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Fire video image detection based on convolutional neural network"},{"name":"text","data":" . "},{"name":"text","data":"Application of Electronic Technique"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"45"},{"name":"text","data":" ( "},{"name":"text","data":"4"},{"name":"text","data":" ): "},{"name":"text","data":"34"},{"name":"text","data":" - "},{"name":"text","data":"38, 44"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-DZJY201904008.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-DZJY201904008.htm"}},{"name":"text","data":"."}],"title":"Fire video image detection based on convolutional neural network"}]},{"id":"b6","label":"6","citation":[{"lang":"zh","text":[{"name":"p","data":[{"name":"text","data":"蔡敏. 基于视频分析的森林烟火识别算法研究[D]. 南京: 东南大学, 2018."}]}]},{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"CAI M. Research on forest smoke detection based on video analysis[D]. Nanjing: Southeast University, 2018. (in Chinese)"}]}]}]},{"id":"b7","label":"7","citation":[{"lang":"zh","text":[{"name":"p","data":[{"name":"text","data":"刘宇欣. 基于人工蜂群算法的矿用带式输送机早期火灾检测[D]. 西安: 西安科技大学, 2018."}]}]},{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"LIU Y X. 
Research on early fire monitoring of mine belt conveyor based on artificial bee colony algorithm[D]. Xi'an: Xi'an University of Science and Technology, 2018. (in Chinese)"}]}]}]},{"id":"b8","label":"8","citation":[{"lang":"zh","text":[{"name":"p","data":[{"name":"text","data":"苗续芝. 基于视频图像的火灾检测研究与实现[D]. 徐州: 中国矿业大学, 2018."}]}]},{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"MIAO X Z. Study and realization of fire detection based on video image[D]. Xuzhou: China University of Mining and Technology, 2018. (in Chinese)"}]}]}]},{"id":"b9","label":"9","citation":[{"lang":"zh","text":[{"name":"p","data":[{"name":"text","data":"王中林. 基于双目视觉的火焰目标检测[D]. 北京: 北京交通大学, 2018."}]}]},{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"WANG Z L. Flame target detection based on binocular stereo vision[D]. Beijing: Beijing Jiaotong University, 2018. (in Chinese)"}]}]}]},{"id":"b10","label":"10","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"唐 悦"},{"name":"text","data":" , "},{"name":"text","data":"吴 戈"},{"name":"text","data":" , "},{"name":"text","data":"朴 燕"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"改进的GDT-YOLOV 3目标检测算法"},{"name":"text","data":" . "},{"name":"text","data":"液晶与显示"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . "},{"name":"text","data":"35"},{"name":"text","data":" ( "},{"name":"text","data":"8"},{"name":"text","data":" ): "},{"name":"text","data":"852"},{"name":"text","data":" - "},{"name":"text","data":"860"},{"name":"text","data":" . 
"},{"name":"uri","data":{"text":[{"name":"text","data":"http://cjlcd.lightpublishing.cn/thesisDetails#10.37188/YJYXS20203508.0852"}],"href":"http://cjlcd.lightpublishing.cn/thesisDetails#10.37188/YJYXS20203508.0852"}},{"name":"text","data":"."}],"title":"改进的GDT-YOLOV 3目标检测算法"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"Y TANG"},{"name":"text","data":" , "},{"name":"text","data":"G WU"},{"name":"text","data":" , "},{"name":"text","data":"Y PIAO"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Improved algorithm of GDT-YOLOV 3 image target detection"},{"name":"text","data":" . "},{"name":"text","data":"Chinese Journal of Liquid Crystals and Displays"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . "},{"name":"text","data":"35"},{"name":"text","data":" ( "},{"name":"text","data":"8"},{"name":"text","data":" ): "},{"name":"text","data":"852"},{"name":"text","data":" - "},{"name":"text","data":"860"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"http://cjlcd.lightpublishing.cn/thesisDetails#10.37188/YJYXS20203508.0852"}],"href":"http://cjlcd.lightpublishing.cn/thesisDetails#10.37188/YJYXS20203508.0852"}},{"name":"text","data":"."}],"title":"Improved algorithm of GDT-YOLOV 3 image target detection"}]},{"id":"b11","label":"11","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"刘 嘉政"},{"name":"text","data":" , "},{"name":"text","data":"王 雪峰"},{"name":"text","data":" , "},{"name":"text","data":"王 甜"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"基于深度学习的树种图像自动识别"},{"name":"text","data":" . "},{"name":"text","data":"南京林业大学学报(自然科学版)"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . 
"},{"name":"text","data":"44"},{"name":"text","data":" ( "},{"name":"text","data":"1"},{"name":"text","data":" ): "},{"name":"text","data":"138"},{"name":"text","data":" - "},{"name":"text","data":"144"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-NJLY202001020.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-NJLY202001020.htm"}},{"name":"text","data":"."}],"title":"基于深度学习的树种图像自动识别"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"J Z LIU"},{"name":"text","data":" , "},{"name":"text","data":"X F WANG"},{"name":"text","data":" , "},{"name":"text","data":"T WANG"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Automatic identification of tree species based on deep learning"},{"name":"text","data":" . "},{"name":"text","data":"Journal of Nanjing Forestry University (Natural Science Edition)"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . "},{"name":"text","data":"44"},{"name":"text","data":" ( "},{"name":"text","data":"1"},{"name":"text","data":" ): "},{"name":"text","data":"138"},{"name":"text","data":" - "},{"name":"text","data":"144"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-NJLY202001020.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-NJLY202001020.htm"}},{"name":"text","data":"."}],"title":"Automatic identification of tree species based on deep learning"}]},{"id":"b12","label":"12","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"丁 立德"},{"name":"text","data":" , "},{"name":"text","data":"胡 怀湘"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"基于FPGA的CNN应用加速技术"},{"name":"text","data":" . 
"},{"name":"text","data":"信息技术"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"43"},{"name":"text","data":" ( "},{"name":"text","data":"12"},{"name":"text","data":" ): "},{"name":"text","data":"110"},{"name":"text","data":" - "},{"name":"text","data":"115"},{"name":"text","data":" . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3969/j.issn.1674-2117.2019.12.037"}],"href":"http://doi.org/10.3969/j.issn.1674-2117.2019.12.037"}},{"name":"text","data":"."}],"title":"基于FPGA的CNN应用加速技术"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"L D DING"},{"name":"text","data":" , "},{"name":"text","data":"H X HU"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"An acceleration technique for CNN application based on FPGA"},{"name":"text","data":" . "},{"name":"text","data":"Information Technology"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"43"},{"name":"text","data":" ( "},{"name":"text","data":"12"},{"name":"text","data":" ): "},{"name":"text","data":"110"},{"name":"text","data":" - "},{"name":"text","data":"115"},{"name":"text","data":" . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3969/j.issn.1674-2117.2019.12.037"}],"href":"http://doi.org/10.3969/j.issn.1674-2117.2019.12.037"}},{"name":"text","data":"."}],"title":"An acceleration technique for CNN application based on FPGA"}]},{"id":"b13","label":"13","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"任 嘉锋"},{"name":"text","data":" , "},{"name":"text","data":"熊 卫华"},{"name":"text","data":" , "},{"name":"text","data":"吴 之昊"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . 
"},{"name":"text","data":"基于改进YOLOv3的火灾检测与识别"},{"name":"text","data":" . "},{"name":"text","data":"计算机系统应用"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"28"},{"name":"text","data":" ( "},{"name":"text","data":"12"},{"name":"text","data":" ): "},{"name":"text","data":"171"},{"name":"text","data":" - "},{"name":"text","data":"176"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-XTYY201912026.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-XTYY201912026.htm"}},{"name":"text","data":"."}],"title":"基于改进YOLOv3的火灾检测与识别"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"J F REN"},{"name":"text","data":" , "},{"name":"text","data":"W H XIONG"},{"name":"text","data":" , "},{"name":"text","data":"Z H WU"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Fire detection and identification based on improved YOLOv3"},{"name":"text","data":" . "},{"name":"text","data":"Computer Systems & Applications"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"28"},{"name":"text","data":" ( "},{"name":"text","data":"12"},{"name":"text","data":" ): "},{"name":"text","data":"171"},{"name":"text","data":" - "},{"name":"text","data":"176"},{"name":"text","data":" . 
"},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-XTYY201912026.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-XTYY201912026.htm"}},{"name":"text","data":"."}],"title":"Fire detection and identification based on improved YOLOv3"}]},{"id":"b14","label":"14","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"徐 登"},{"name":"text","data":" , "},{"name":"text","data":"黄 晓东"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"基于改进双流卷积网络的火灾图像特征提取方法"},{"name":"text","data":" . "},{"name":"text","data":"计算机科学"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"46"},{"name":"text","data":" ( "},{"name":"text","data":"11"},{"name":"text","data":" ): "},{"name":"text","data":"291"},{"name":"text","data":" - "},{"name":"text","data":"296"},{"name":"text","data":" . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.11896/jsjkx.180901640"}],"href":"http://doi.org/10.11896/jsjkx.180901640"}},{"name":"text","data":"."}],"title":"基于改进双流卷积网络的火灾图像特征提取方法"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"D XU"},{"name":"text","data":" , "},{"name":"text","data":"X D HUANG"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Fire images features extraction based on improved two-stream convolution network"},{"name":"text","data":" . "},{"name":"text","data":"Computer Science"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"46"},{"name":"text","data":" ( "},{"name":"text","data":"11"},{"name":"text","data":" ): "},{"name":"text","data":"291"},{"name":"text","data":" - "},{"name":"text","data":"296"},{"name":"text","data":" . 
"},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.11896/jsjkx.180901640"}],"href":"http://doi.org/10.11896/jsjkx.180901640"}},{"name":"text","data":"."}],"title":"Fire images features extraction based on improved two-stream convolution network"}]},{"id":"b15","label":"15","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"段 锁林"},{"name":"text","data":" , "},{"name":"text","data":"刘 福"},{"name":"text","data":" , "},{"name":"text","data":"高 仁洲"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"基于卷积神经网络的火焰识别"},{"name":"text","data":" . "},{"name":"text","data":"计算机工程与设计"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"40"},{"name":"text","data":" ( "},{"name":"text","data":"11"},{"name":"text","data":" ): "},{"name":"text","data":"3288"},{"name":"text","data":" - "},{"name":"text","data":"3292, 3298"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-SJSJ201911044.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-SJSJ201911044.htm"}},{"name":"text","data":"."}],"title":"基于卷积神经网络的火焰识别"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"S L DUAN"},{"name":"text","data":" , "},{"name":"text","data":"F LIU"},{"name":"text","data":" , "},{"name":"text","data":"R Z GAO"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Flame recognition based on convolution neural network"},{"name":"text","data":" . "},{"name":"text","data":"Computer Engineering and Design"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . 
"},{"name":"text","data":"40"},{"name":"text","data":" ( "},{"name":"text","data":"11"},{"name":"text","data":" ): "},{"name":"text","data":"3288"},{"name":"text","data":" - "},{"name":"text","data":"3292, 3298"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-SJSJ201911044.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-SJSJ201911044.htm"}},{"name":"text","data":"."}],"title":"Flame recognition based on convolution neural network"}]},{"id":"b16","label":"16","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"回 天"},{"name":"text","data":" , "},{"name":"text","data":"哈力旦·阿布都热依木 "},{"name":"text","data":" , "},{"name":"text","data":"杜 晗"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"结合Faster R-CNN的多类型火焰检测"},{"name":"text","data":" . "},{"name":"text","data":"中国图象图形学报"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"24"},{"name":"text","data":" ( "},{"name":"text","data":"1"},{"name":"text","data":" ): "},{"name":"text","data":"73"},{"name":"text","data":" - "},{"name":"text","data":"83"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGTB201901008.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGTB201901008.htm"}},{"name":"text","data":"."}],"title":"结合Faster R-CNN的多类型火焰检测"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"T HUI"},{"name":"text","data":" , "},{"name":"text","data":" HALIDAN·ABUDUREYIMU"},{"name":"text","data":" , "},{"name":"text","data":"H DU"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Multi-type flame detection combined with Faster R-CNN"},{"name":"text","data":" . 
"},{"name":"text","data":"Journal of Image and Graphics"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"24"},{"name":"text","data":" ( "},{"name":"text","data":"1"},{"name":"text","data":" ): "},{"name":"text","data":"73"},{"name":"text","data":" - "},{"name":"text","data":"83"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGTB201901008.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-ZGTB201901008.htm"}},{"name":"text","data":"."}],"title":"Multi-type flame detection combined with Faster R-CNN"}]},{"id":"b17","label":"17","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"S Q REN"},{"name":"text","data":" , "},{"name":"text","data":"K M HE"},{"name":"text","data":" , "},{"name":"text","data":"R GIRSHICK"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Faster R-CNN: towards real-time object detection with region proposal networks"},{"name":"text","data":" . "},{"name":"text","data":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"name":"text","data":" , "},{"name":"text","data":"2017"},{"name":"text","data":" . "},{"name":"text","data":"39"},{"name":"text","data":" ( "},{"name":"text","data":"6"},{"name":"text","data":" ): "},{"name":"text","data":"1137"},{"name":"text","data":" - "},{"name":"text","data":"1149"},{"name":"text","data":" . 
"},{"name":"uri","data":{"text":[{"name":"text","data":"http://europepmc.org/abstract/MED/27295650"}],"href":"http://europepmc.org/abstract/MED/27295650"}},{"name":"text","data":"."}],"title":"Faster R-CNN: towards real-time object detection with region proposal networks"}]},{"id":"b18","label":"18","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"HE K M, ZHANG X Y, REN S Q, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Deep residual learning for image recognition[C]//2016"},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":" ("},{"name":"italic","data":[{"name":"text","data":"CVPR"}]},{"name":"text","data":"). Las Vegas, NV, USA: IEEE, 2016: 770-778."}]}]}]},{"id":"b19","label":"19","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"J HU"},{"name":"text","data":" , "},{"name":"text","data":"L SHEN"},{"name":"text","data":" , "},{"name":"text","data":"S ALBANIE"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Squeeze-and-excitation networks"},{"name":"text","data":" . "},{"name":"text","data":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . 
"},{"name":"text","data":"42"},{"name":"text","data":" ( "},{"name":"text","data":"8"},{"name":"text","data":" ): "},{"name":"text","data":"2011"},{"name":"text","data":" - "},{"name":"text","data":"2023"},{"name":"text","data":"."}],"title":"Squeeze-and-excitation networks"}]},{"id":"b20","label":"20","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"A KRIZHEVSKY"},{"name":"text","data":" , "},{"name":"text","data":"I SUTSKEVER"},{"name":"text","data":" , "},{"name":"text","data":"G E HINTON"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"ImageNet classification with deep convolutional neural networks"},{"name":"text","data":" . "},{"name":"text","data":"Communications of the ACM"},{"name":"text","data":" , "},{"name":"text","data":"2017"},{"name":"text","data":" . "},{"name":"text","data":"60"},{"name":"text","data":" ( "},{"name":"text","data":"6"},{"name":"text","data":" ): "},{"name":"text","data":"84"},{"name":"text","data":" - "},{"name":"text","data":"90"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"http://users.ics.aalto.fi/perellm1/thesis/summaries_html/node64.html"}],"href":"http://users.ics.aalto.fi/perellm1/thesis/summaries_html/node64.html"}},{"name":"text","data":"."}],"title":"ImageNet classification with deep convolutional neural networks"}]},{"id":"b21","label":"21","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"T Y CHUANG"},{"name":"text","data":" , "},{"name":"text","data":"J Y HAN"},{"name":"text","data":" , "},{"name":"text","data":"D J JHAN"},{"name":"text","data":" , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Geometric recognition of moving objects in monocular rotating imagery using faster R-CNN"},{"name":"text","data":" . 
"},{"name":"text","data":"Remote Sensing"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . "},{"name":"text","data":"12"},{"name":"text","data":" ( "},{"name":"text","data":"12"},{"name":"text","data":" ): "},{"name":"text","data":"1908"},{"name":"text","data":" "},{"name":"uri","data":{"text":[{"name":"text","data":"http://www.researchgate.net/publication/342175133_Geometric_Recognition_of_Moving_Objects_in_Monocular_Rotating_Imagery_Using_Faster_R-CNN"}],"href":"http://www.researchgate.net/publication/342175133_Geometric_Recognition_of_Moving_Objects_in_Monocular_Rotating_Imagery_Using_Faster_R-CNN"}},{"name":"text","data":"."}],"title":"Geometric recognition of moving objects in monocular rotating imagery using faster R-CNN"}]},{"id":"b22","label":"22","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"HUANG G, LIU Z, VAN DER MAATEN L, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Densely connected convolutional networks[J]."},{"name":"italic","data":[{"name":"text","data":"arXiv"}]},{"name":"text","data":": 1608.06993, 2016."}]}]}]},{"id":"b23","label":"23","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"REDMON J, FARHADI A. 
YOLOv3: an incremental improvement[J]."},{"name":"italic","data":[{"name":"text","data":"arXiv"}]},{"name":"text","data":": 1804.02767, 2018."}]}]}]}]},"response":[],"contributions":[],"acknowledgements":[],"conflict":[],"supportedby":[],"articlemeta":{"doi":"10.37188/CJLCD.2020-0221","clc":[[{"name":"text","data":"TP394.1;TH691.9"}]],"dc":[],"publisherid":"yjyxs-36-5-751","citeme":[],"fundinggroup":[{"lang":"zh","text":[{"name":"text","data":"陕西省科技厅国际科技合作计划项目(No.2020KW-012);陕西省教育厅重点项目高端智库(No.18JT006);西安市科技局项目(No.GXYD10.1)"}]},{"lang":"en","text":[{"name":"text","data":"Supported by Shaanxi Provincial Department of Science and Technology International Science and Technology Cooperation Program Project (No.2020KW-012);Shaanxi Provincial Department of Education Key Project High-end Think Tank (No.18JT006);Xi'an Science and Technology Bureau Project (No.GXYD10.1)"}]}],"history":{"received":"2020-08-31","revised":"2020-11-11","opub":"2021-09-08"},"copyright":{"data":[{"lang":"zh","data":[{"name":"text","data":"版权所有©《液晶与显示》编辑部2021"}],"type":"copyright"},{"lang":"en","data":[{"name":"text","data":"Copyright ©2021 Chinese Journal of Liquid Crystals and Displays. All rights reserved."}],"type":"copyright"}],"year":"2021"}},"appendix":[],"type":"research-article","ethics":[],"backSec":[],"supplementary":[],"journalTitle":"液晶与显示","issue":"5","volume":"36","originalSource":[]}