1. School of Electronic and Electrical Engineering, Ningxia University, Yinchuan 750000, Ningxia, China
2. Key Laboratory of Intelligent Perception of Desert Information, Ningxia University, Yinchuan 750000, Ningxia, China
[ "刘雪纯(1997—),女,甘肃庆阳人,硕士研究生,2019年于兰州交通大学获得学士学位,主要从事计算机视觉、嵌入式、深度学习等方面的研究。E-mail:1564032673@qq.com" ]
[ "刘大铭(1969—),男,宁夏银川人,硕士,教授,2005年于西北工业大学获得硕士学位,主要从事嵌入式系统、智能仪器仪表等方面的研究。E-mail:ldm@nxu.edu.cn" ]
LIU Xue-chun, LIU Da-ming, LIU Ruo-chen. Improved lightweight helmet wear detection algorithm [J]. Chinese Journal of Liquid Crystals and Displays, 2023, 38(7): 964-974. DOI: 10.37188/CJLCD.2022-0268.
Existing safety-helmet-wearing detection algorithms tend to miss dense and small targets, and their large parameter counts and heavy computation make them unsuitable for deployment on embedded devices. To address these problems, this paper proposes an improved YOLOv5-based helmet-wearing detection algorithm, YOLOv5-Q. First, a 2× up-sampling operation is applied to the original network's 80×80 feature map to form a 160×160 feature map; the new feature map fuses three layers of feature information from the original model, yielding four-scale detection and improving the detection accuracy for dense and small targets. Second, the original YOLOv5 backbone is replaced with the lightweight GhostNet for feature extraction, which reduces the number of network parameters so that the model can be ported to embedded devices for object detection. Finally, the CA attention mechanism is added to increase the weight of important information in the feature map and suppress irrelevant information, thereby improving model accuracy. Experimental results show that YOLOv5-Q has a model size of 26.47 MB, 12 696 640 parameters, and an accuracy of 0.937. Compared with YOLOv5, YOLOv5-Q reduces the number of parameters by 39.12% and the model size by 37.2%, while the accuracy drops by only 1.2%. The YOLOv5-Q algorithm improves the detection accuracy of small targets in dense scenes and meets the requirements for deployment on embedded devices.
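To make the backbone change concrete, the following is a minimal PyTorch sketch of the Ghost module that GhostNet builds on, in which a small set of "intrinsic" feature maps from an ordinary 1×1 convolution is expanded with cheap depthwise convolutions. The class name, split ratio, and layer choices here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class GhostModule(nn.Module):
    # One Ghost block: a few intrinsic maps plus cheaply generated ones.
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        primary_ch = out_ch // ratio          # intrinsic maps from a 1x1 conv
        cheap_ch = out_ch - primary_ch        # maps produced by the cheap op
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 1, bias=False),
            nn.BatchNorm2d(primary_ch),
            nn.ReLU(inplace=True),
        )
        # Cheap operation: depthwise convolution over the intrinsic maps
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, cheap_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(cheap_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

# Shape check at the 80x80 feature-map level mentioned in the abstract
x = torch.randn(1, 128, 80, 80)
print(GhostModule(128, 256)(x).shape)  # -> torch.Size([1, 256, 80, 80])

Compared with a standard 3×3 convolution producing all 256 output channels, roughly half of the channels here come from depthwise filters, which is where GhostNet saves parameters relative to ordinary convolutions.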
YOLOv5; lightweight; safety helmet; embedded; attention mechanism