{"defaultlang":"zh","titlegroup":{"articletitle":[{"lang":"zh","data":[{"name":"text","data":"基于轻量级图卷积网络的校园暴力行为识别"}]},{"lang":"en","data":[{"name":"text","data":"Campus violence action recognition based on lightweight graph convolution network"}]}]},"contribgroup":{"author":[{"name":[{"lang":"zh","surname":"李","givenname":"颀","namestyle":"eastern","prefix":""},{"lang":"en","surname":"LI","givenname":"Qi","namestyle":"eastern","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":"1"}],"role":["first-author"],"bio":[{"lang":"zh","text":["李颀(1973—),女,陕西西安人,博士,教授,2013年于西北工业大学获得博士学位,主要从事机器视觉、信息融合等方面的研究。E-mail: liqidq@sust.edu.cn"],"graphic":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304515&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304530&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304525&type=","width":"22.00000000","height":"31.98062134","fontsize":""}],"data":[[{"name":"text","data":"李颀"},{"name":"text","data":"(1973—),女,陕西西安人,博士,教授,2013年于西北工业大学获得博士学位,主要从事机器视觉、信息融合等方面的研究。E-mail: "},{"name":"text","data":"liqidq@sust.edu.cn"}]]}],"email":"liqidq@sust.edu.cn","deceased":false},{"name":[{"lang":"zh","surname":"邓","givenname":"耀辉","namestyle":"eastern","prefix":""},{"lang":"en","surname":"DENG","givenname":"Yao-hui","namestyle":"eastern","prefix":""}],"stringName":[],"aff":[{"rid":"aff2","text":"2"}],"role":["corresp"],"corresp":[{"rid":"cor1","lang":"zh","text":"E-mail:173743077@qq.com","data":[{"name":"text","data":"E-mail:173743077@qq.com"}]}],"bio":[{"lang":"zh","text":["邓耀辉(1996—),男,陕西西安人,硕士研究生,2019年于陕西科技大学获得学士学位,主要从事信息融合和智能识别等方面的研究。E-mail: 
173743077@qq.com"],"graphic":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304522&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304553&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304543&type=","width":"22.00000191","height":"31.14770126","fontsize":""}],"data":[[{"name":"text","data":"邓耀辉"},{"name":"text","data":"(1996—),男,陕西西安人,硕士研究生,2019年于陕西科技大学获得学士学位,主要从事信息融合和智能识别等方面的研究。E-mail: "},{"name":"text","data":"173743077@qq.com"}]]}],"email":"173743077@qq.com","deceased":false},{"name":[{"lang":"zh","surname":"王","givenname":"娇","namestyle":"eastern","prefix":""},{"lang":"en","surname":"WANG","givenname":"Jiao","namestyle":"eastern","prefix":""}],"stringName":[],"aff":[{"rid":"aff2","text":"2"}],"role":[],"deceased":false}],"aff":[{"id":"aff1","intro":[{"lang":"zh","label":"1","text":"陕西科技大学 电子信息与人工智能学院,陕西 西安 710021","data":[{"name":"text","data":"陕西科技大学 电子信息与人工智能学院,陕西 西安 710021"}]},{"lang":"en","label":"1","text":"School of Electrical Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi’an 710021, China","data":[{"name":"text","data":"School of Electrical Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi’an 710021, China"}]}]},{"id":"aff2","intro":[{"lang":"zh","label":"2","text":"陕西科技大学 电气与控制工程学院,陕西 西安 710021","data":[{"name":"text","data":"陕西科技大学 电气与控制工程学院,陕西 西安 710021"}]},{"lang":"en","label":"2","text":"School of Electrical and Control Engineering, Shaanxi University of Science and Technology, Xi’an 710021, China","data":[{"name":"text","data":"School of Electrical and Control Engineering, Shaanxi University of Science and Technology, Xi’an 710021, 
China"}]}]}]},"abstracts":[{"lang":"zh","data":[{"name":"p","data":[{"name":"text","data":"针对卷积神经网络和图卷积网络的两类算法在校园暴力行为识别中识别速度和识别率不高的问题,本文提出一种结合多信息流数据融合和时空注意力机制的轻量级图卷积网络。以人体骨架为研究对象,首先融合关节点和骨架相关的多信息流数据,通过减少网络参数量来提高运算速度;其次构建基于非局部运算的时空注意力模块关注最具动作判别性的关节点,通过减少冗余信息提高识别准确率;接着构建时空特征提取模块获得关注关节点时空关联信息;最终由Softmax层实现动作识别。实验结果表明:在校园安防实景中对拳打、脚踢、倒地、推搡、打耳光和跪地6种典型动作识别准确率分别为94.5%,97.0%,98.5%,95.0%,94.5%,95.5%,识别速度最大为20.6 fps。在UCF101数据集上对比两类基准网络,识别速度和准确率均有提升,验证了方法对其他动作的通用性,可以满足对校园典型暴力行为识别的实时性和可靠性要求。"}]}]},{"lang":"en","data":[{"name":"p","data":[{"name":"text","data":"Aiming at the problems of low recognition speed and low recognition rate of convolutional neural networks and graph convolutional networks in campus violence recognition, this paper proposes a lightweight graph convolution network that combines multi-information-flow data fusion with a spatio-temporal attention mechanism. The human skeleton is taken as the research object. Firstly, the multi-information-flow data related to joint points and bones are fused, improving the operation speed by reducing the number of network parameters. Secondly, a spatio-temporal attention module based on non-local operations is constructed to focus on the most action-discriminative joint points, improving the recognition accuracy by reducing redundant information. Then, a spatio-temporal feature extraction module is constructed to obtain the spatio-temporal correlation information of the attended joint points. Finally, action recognition is realized by a Softmax layer. The experimental results show that the recognition accuracies of punching, kicking, falling, pushing, slapping and kneeling in real campus security scenes are 94.5%, 97.0%, 98.5%, 95.0%, 94.5% and 95.5%, respectively, and the maximum recognition speed is 20.6 fps. Compared with the two types of benchmark networks on the UCF101 dataset, both the recognition speed and the accuracy are improved, which verifies the generality of the method for other actions. 
Therefore, it can meet the real-time and reliability requirements of typical campus violence identification."}]}]}],"keyword":[{"lang":"zh","data":[[{"name":"text","data":"校园暴力行为识别"}],[{"name":"text","data":"图卷积网络"}],[{"name":"text","data":"数据融合"}],[{"name":"text","data":"时空注意力模块"}]]},{"lang":"en","data":[[{"name":"text","data":"campus violence action recognition"}],[{"name":"text","data":"graph convolution network"}],[{"name":"text","data":"information flow data fusion"}],[{"name":"text","data":"spatio-temporal attention module"}]]}],"highlights":[],"body":[{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"1 引言"}],"level":"1","id":"s1"}},{"name":"p","data":[{"name":"text","data":"我国校园安全在依赖人工巡查的基础上,逐步向智能化方向发展,有关人脸检测"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"1","type":"bibr","rid":"R1","data":[{"name":"text","data":"1"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"与人脸识别"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"2","type":"bibr","rid":"R2","data":[{"name":"text","data":"2"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"系统应用已经非常广泛,然而缺乏成熟的异常行为识别系统。深度学习中基于卷积神经网络的暴力行为识别方法受图像光照和颜色等因素影响较大,识别速度和准确率有待大幅提高"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"3","type":"bibr","rid":"R3","data":[{"name":"text","data":"3"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。人体骨架序列不受光照和颜色影响,可以表征人体关节点和骨架变化与人体行为的关联信息,但基于骨架数据的图卷积网络的方法识别速度和识别率未能满足实际应用,有望通过改进图卷积网络提高实时性和可靠性。"}]},{"name":"p","data":[{"name":"text","data":"早期人体行为识别通过专家手工设计特征模拟关节之间的相关性实现"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"4","type":"bibr","rid":"R4","data":[{"name":"text","data":"4"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。Yang和Tian采用朴素贝叶斯最近邻分类器(Naïve-Bayes-Nearest-Neighbor,NBNN)实现了多类动作的识别"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"5","type":"bibr","rid":"R5","da
ta":[{"name":"text","data":"5"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",但手工提取和调参表征能力有限且工作量大;Li和He等人通过深度卷积神经网络(Convolutional Neural Network,CNN)提取不同时间段的多尺度特征并得到最终识别结果,但映射过程信息丢失、网络参数量庞大"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"6","type":"bibr","rid":"R6","data":[{"name":"text","data":"6"}]}},{"name":"text","data":"]"}]},{"name":"text","data":";Zhao和Liu等人通过对原始骨架关节坐标进行尺度变换后输入残差独立循环神经网络(Recurrent Neural Network,RNN)得到识别结果,表征时间信息的能力增强,但易丢失原始关节点之间的关联信息"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"7","type":"bibr","rid":"R7","data":[{"name":"text","data":"7"}]}},{"name":"text","data":"]"}]},{"name":"text","data":";Yan和Xiong等人首次提出用图卷积网络(Graph Convolutional Network,GCN)进行行为识别,避免了手工设计遍历规则带来的缺陷"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"R8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"text","data":"基于人体骨架的行为识别受光照和背景等因素影响非常小,与基于RGB数据的方法相比具有很大优势。人体的关节骨架数据是一种拓扑图,图中每个关节点在相邻关节点数不同的情况下,传统的卷积神经网络不能直接使用同样大小的卷积核进行卷积计算去处理这种非欧式数据"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"9","type":"bibr","rid":"R9","data":[{"name":"text","data":"9"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。因此,在基于骨架的行为识别领域,基于图卷积网络的方法更为适合。从研究到应用阶段的转换,需要在保证准确率的同时实现网络的轻量化:(1)需要在多种信息流数据构成的数据集上分别多次训练,融合各训练结果得到最终结果,增加了网络参数量和计算复杂度;(2)输入的骨架序列中,存在冗余的关节点信息,导致识别速度和识别率降低。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"2 轻量级图卷积网络搭建"}],"level":"1","id":"s2"}},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"2.1 
图卷积网络"}],"level":"2","id":"s2a"}},{"name":"p","data":[{"name":"text","data":"以图像为代表的欧式空间中,将图像中每个像素点当作一个结点,则结点规则排布且邻居结点数量固定,边缘上的点可进行Padding填充操作。但在图结构这种非欧空间中,结点排布无序且邻居结点数量不固定,无法通过传统的卷积神经网络固定大小的卷积核实现特征提取,需要一种能够处理变长邻居结点的卷积核"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"10","type":"bibr","rid":"R10","data":[{"name":"text","data":"10"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。对图而言,需要输入维度为"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304558&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304551&type=","width":"10.15999985","height":"2.87866688","fontsize":""}}}]},{"name":"text","data":"的特征矩阵"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304563&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304573&type=","width":"2.70933342","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"和"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304582&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304565&type=","width":"10.58333302","height":"2.87866688","fontsize":""}}}]},{"name":"text","data":"的邻接矩阵"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304590&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304586&type=","width":"2.79399991","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"提取特征,其中"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.c
n/rc-pub/api/common/picture?pictureId=29304597&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304593&type=","width":"2.87866688","height":"2.87866688","fontsize":""}}}]},{"name":"text","data":"为图中结点数,"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304618&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304630&type=","width":"2.53999996","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"为每个结点输入特征个数。相邻隐藏层的结点特征变换公式为:"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(1)"}],"data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304624&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304621&type=","width":"26.58533096","height":"4.23333359","fontsize":""}}},{"name":"text","data":" 
,"}],"id":"DF1"}},{"name":"p","data":[{"name":"text","data":"其中"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305593&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305590&type=","width":"1.10066664","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"为层数,第一层"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304668&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304654&type=","width":"11.93799973","height":"3.04800010","fontsize":""}}}]},{"name":"text","data":";"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304721&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304718&type=","width":"4.99533367","height":"3.97933316","fontsize":""}}}]},{"name":"text","data":"为传播函数,不同的图卷积网络模型传播函数不同。每层"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304680&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304694&type=","width":"4.06400013","height":"3.04800010","fontsize":""}}}]},{"name":"text","data":"对应"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304686&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304705&type=","width":"11.00666618","height":"3.13266683","fontsize":""}}}]},{"name":"text","data":"维度特征矩阵,通过传播函数"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish
.founderss.cn/rc-pub/api/common/picture?pictureId=29304721&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304718&type=","width":"4.99533367","height":"3.97933316","fontsize":""}}}]},{"name":"text","data":"将聚合后的特征变换为下一层的特征,使得特征越来越抽象。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"2.2 轻量级图卷积网络框架"}],"level":"2","id":"s2b"}},{"name":"p","data":[{"name":"text","data":"为了使人体骨架序列中的动作特征被充分利用,且在识别准确率提高的同时实现动作识别模型的轻量化,本文提出了一种结合多信息流数据融合和时空注意力机制的轻量级自适应图卷积网络。以输入的人体骨架序列为研究对象,首先融合关节点信息流、骨长信息流、关节点偏移信息流和骨长变化信息流4种数据信息;接着构建基于非局部运算的可嵌入的时空注意力模块,关注信息流数据融合后人体骨架序列中最具动作判别性的关节点;最后通过Softmax得到对动作片段的识别结果,网络主体框架如"},{"name":"xref","data":{"text":"图1","type":"fig","rid":"F1","data":[{"name":"text","data":"图1"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"F1","caption":[{"lang":"zh","label":[{"name":"text","data":"图1"}],"title":[{"name":"text","data":"网络框架"}]},{"lang":"en","label":[{"name":"text","data":"Fig.1"}],"title":[{"name":"text","data":"Network framework"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304707&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304728&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304711&type=","width":"70.90834045","height":"9.17222214","fontsize":""}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"2.3 
多信息流数据融合"}],"level":"2","id":"s2c"}},{"name":"p","data":[{"name":"text","data":"现阶段基于图卷积的方法"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"R11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"多采用在多种不同数据集下多次训练,根据训练结果进行决策级融合,导致网络参数量大。因此,在训练之前对原始关节点坐标数据进行预处理,实现关节点信息流、骨长信息流、关节点偏移信息流和骨长变化信息流的数据级融合,减少网络参量,从而降低计算要求。"}]},{"name":"p","data":[{"name":"text","data":"人体骨架序列关节点的定义如"},{"name":"xref","data":{"text":"公式(2)","type":"disp-formula","rid":"DF2","data":[{"name":"text","data":"公式(2)"}]}},{"name":"text","data":"所示:"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(2)"}],"data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304735&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304731&type=","width":"63.41533279","height":"4.65666676","fontsize":""}}},{"name":"text","data":" 
,"}],"id":"DF2"}},{"name":"p","data":[{"name":"text","data":"其中:"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304754&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304751&type=","width":"2.53999996","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"为序列中的总帧数,"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304744&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304759&type=","width":"2.87866688","height":"2.87866688","fontsize":""}}}]},{"name":"text","data":"为总关节点数18,"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"为在"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304882&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304880&type=","width":"1.10066664","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"时刻的关节点。融合多信息流之前,需要进行骨架序列"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304774&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304771&type=","width":"1.26999998","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"的多样化预处理。关节点信息流由人体姿态估计算法OpenPose获取到的18个关节点坐标得到,相对于动作捕获设备成本大幅降低"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"blockXref","data":{"data":[{"name":"xref","data":{"text":"12","type":"bibr","rid":"R12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"-"},{"name":"xref","data":{"text":"13","type":"bibr","rid":"R13","data":[{"name":"text","data":"13"}]}}],"rid":["R12","R
13"],"text":"12-13","type":"bibr"}},{"name":"text","data":"]"}]},{"name":"text","data":"。其他信息流数据定义如下。"}]},{"name":"p","data":[{"name":"text","data":"骨长信息流(Bone Length Information Flow):将靠近人体重心的关节点定义为源关节点,坐标表示为"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304809&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304792&type=","width":"23.62199974","height":"4.91066647","fontsize":""}}}]},{"name":"text","data":";远离重心点的关节点定位为目标关节点,坐标表示为"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304815&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304813&type=","width":"23.53733444","height":"4.91066647","fontsize":""}}}]},{"name":"text","data":"。通过两关节点作差获取骨长信息流:"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(3)"}],"data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304805&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304802&type=","width":"61.29866791","height":"4.91066647","fontsize":""}}},{"name":"text","data":" ."}],"id":"DF3"}},{"name":"p","data":[{"name":"text","data":"关节点偏移信息流(Joint Difference Information 
Flow):定义第"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304882&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304880&type=","width":"1.10066664","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"帧的关节点"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305593&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305590&type=","width":"1.10066664","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"的坐标表示为"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304830&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304844&type=","width":"23.62199974","height":"4.91066647","fontsize":""}}}]},{"name":"text","data":",第"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304907&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304896&type=","width":"7.78933382","height":"2.96333337","fontsize":""}}}]},{"name":"text","data":"帧的关节点"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305593&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305590&type=","width":"1.10066664","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"的坐标表示为"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304870&type=","big":"http:/
/html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304861&type=","width":"34.29000092","height":"4.91066647","fontsize":""}}}]},{"name":"text","data":",关节点偏移信息流可通过对相邻帧同一关节点坐标位置作差获得:"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(4)"}],"data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304877&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304876&type=","width":"53.50933075","height":"10.24466610","fontsize":""}}},{"name":"text","data":" "}],"id":"DF4"}},{"name":"p","data":[{"name":"text","data":"骨长变化信息流(Change of Bone Length Information Flow):相邻两帧中,同一节骨骼由于动作变化导致所表现出的长度不同,由"},{"name":"xref","data":{"text":"公式(3)","type":"disp-formula","rid":"DF3","data":[{"name":"text","data":"公式(3)"}]}},{"name":"text","data":"定义第"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304882&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304880&type=","width":"1.10066664","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"帧的骨长信息流为"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304895&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304901&type=","width":"6.26533365","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":",则第"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304907&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304896&type=","width":"7.78933382","height":"2.96333337","fontsize":""}}}]},{"name":"text","data":"帧的骨长信息流为"},{"name":"inlineformula","data":[{"
name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304912&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304909&type=","width":"9.82133293","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":",通过对相邻帧同一骨骼长度作差获得骨长变化信息流:"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(5)"}],"data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304931&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304919&type=","width":"38.26933289","height":"3.72533321","fontsize":""}}},{"name":"text","data":" ."}],"id":"DF5"}},{"name":"p","data":[{"name":"text","data":"如"},{"name":"xref","data":{"text":"图2","type":"fig","rid":"F2","data":[{"name":"text","data":"图2"}]}},{"name":"text","data":"所示,根据对关节点信息流、骨长信息流、关节点偏移信息流和骨长变化信息流的定义,将多数据流加权融合成单一的特征向量,骨架序列维度由"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304934&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304932&type=","width":"24.21466637","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":"变为"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304940&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304937&type=","width":"26.07733345","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":":"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(6)"}],"data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304952&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture
?pictureId=29304960&type=","width":"44.78866577","height":"9.56733322","fontsize":""}}},{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304969&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304954&type=","width":"50.03799820","height":"3.55599999","fontsize":""}}},{"name":"text","data":" ,"}],"id":"DF6"}},{"name":"p","data":[{"name":"text","data":"其中:权重"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304990&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304988&type=","width":"10.92199993","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":"由关节点偏移度"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304998&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29304978&type=","width":"28.02466965","height":"4.65666676","fontsize":""}}}]},{"name":"text","data":"和骨长变化度"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305005&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305001&type=","width":"31.07266808","height":"4.48733330","fontsize":""}}}]},{"name":"text","data":"决定,"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305199&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305189&type=","width":"2.87866688","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":"为前一帧坐标点"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsDat
a":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305027&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305013&type=","width":"5.33400011","height":"3.80999994","fontsize":""}}}]},{"name":"text","data":"与后一帧坐标点"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305031&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305030&type=","width":"8.89000034","height":"3.80999994","fontsize":""}}}]},{"name":"text","data":"分别和坐标原点所构成直线的夹角,"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305167&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305166&type=","width":"2.87866688","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":"如"},{"name":"xref","data":{"text":"式(7)","type":"disp-formula","rid":"DF7","data":[{"name":"text","data":"式(7)"}]}},{"name":"text","data":"定义:"}]},{"name":"dispformula","data":{"label":[{"name":"text","data":"(7)"}],"data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305059&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305057&type=","width":"54.10200119","height":"9.05933285","fontsize":""}}},{"name":"text","data":" 
,"}],"id":"DF7"}},{"name":"p","data":[{"name":"text","data":"式中:绝对值运算代表骨骼长度,当"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305199&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305189&type=","width":"2.87866688","height":"3.72533321","fontsize":""}}},{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305203&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305200&type=","width":"3.47133350","height":"3.30200005","fontsize":""}}}]},{"name":"text","data":"30°且"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305080&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305072&type=","width":"7.02733374","height":"4.23333359","fontsize":""}}}]},{"name":"text","data":"50%时,"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305142&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305140&type=","width":"3.64066648","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":"和"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305144&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305131&type=","width":"3.64066648","height":"3.72533321","fontsize":""}}}]},{"name":"text","data":"权值为2,"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305147&type=","big":"http://h
The stream weights are assigned by comparing the joint offset angle α against a 30° threshold and the bone-length change rate against a 50% threshold: a stream whose change measure is at or below its threshold keeps weight 1, while a stream whose change measure exceeds it is given weight 2; when both measures are below their thresholds all weights are 1, and when both exceed them all weights are 2. By computing the degree of joint offset and of bone-length change, the information streams that change most strongly are given higher weights, which strengthens the representation of the action by the information flow. The fused streams are then expressed as a single feature vector, so the number of training passes drops from four to one and the overall parameter count falls, raising the inference speed of the network.

[Fig. 2 Data fusion of information flow]
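The threshold rule above can be sketched in a few lines. The stream names, the function name `assign_stream_weights`, and the exact stream-to-weight mapping are illustrative assumptions, since the section fixes only the two thresholds (30° joint offset, 50% bone-length change):

```python
# Minimal sketch of the multi-stream weighting described above.
# Assumption: a stream whose change measure exceeds its threshold
# gets weight 2, otherwise weight 1 (the text fixes only the thresholds).

ANGLE_THRESHOLD_DEG = 30.0
BONE_CHANGE_THRESHOLD = 0.50

def assign_stream_weights(offset_angle_deg, bone_change_ratio):
    """Return per-stream weights for the four information streams."""
    motion_w = 2 if offset_angle_deg > ANGLE_THRESHOLD_DEG else 1
    bone_w = 2 if bone_change_ratio > BONE_CHANGE_THRESHOLD else 1
    return {
        "joint": 1,                 # static joint stream keeps base weight
        "bone": 1,                  # static bone stream keeps base weight
        "joint_motion": motion_w,   # weighted up when joints move strongly
        "bone_motion": bone_w,      # weighted up when bone lengths change
    }

def fuse_streams(streams, weights):
    """Fuse the weighted streams into a single feature vector (concatenation)."""
    fused = []
    for name, feat in streams.items():
        w = weights[name]
        fused.extend(w * v for v in feat)
    return fused
```

Fusing the weighted streams into one vector is what allows the network to train once on a single input instead of once per stream.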
2.4 Construction of the spatio-temporal attention module

While raising the inference speed of the network, recognition accuracy must also be preserved. A human skeleton sequence contains all of the information in the temporal and spatial domains, but only the joint correlations that discriminate punching, kicking, and falling deserve attention. Most attention mechanisms merely discard irrelevant items and focus on the action region of interest, whereas the real redundancy has two sources: (1) when a punch occurs, only the shoulder, elbow, and wrist are strongly correlated with one another, and when a kick occurs, only the hip, knee, and ankle are; these key joints are weakly correlated or uncorrelated with the remaining joints. (2) After a person is knocked down by a punch or kick, the joints shift very little, the joint correlations between consecutive frames stay almost unchanged, and the skeleton of the following frame need not be processed.

A joint whose offset angle α exceeds 30° is defined as a source joint. One source joint is selected at a time, and the remaining joints act as target joints. The local operations of a neural network can only traverse the target joints and compute pairwise correlations one pair at a time, so the source joint loses its global representation. To express the correlation of all target joints with the source joint, as shown in Fig. 3, the idea of non-local operations is incorporated into the spatio-temporal attention module, and a 2×2 max-pooling layer with stride 2 is added after the feature input, compressing the data and parameters while preserving the original features as far as possible.

[Fig. 3 Spatio-temporal attention module]
The spatio-temporal attention module (STA) consists of a spatial attention module (SA), which captures the intra-frame joint correlations, and a temporal attention module (TA), which captures the inter-frame joint correlations; finally both are fused with the input feature by addition. The output of the STA module has the same dimensions as its input, so the module can be embedded between the layers of the graph convolutional network. The module works in four steps:

(1) The input feature $x$ has dimensions $T\times N\times C$, where $T$, $N$, and $C$ are the numbers of frames, joints, and channels respectively; the input feature of the spatial attention module is written as $z^{s}$.

(2) The features are embedded into Gaussian functions $\theta$ and $\phi$ (with 1×1 convolution kernels) to compute the correlation between two joints $i$ and $j$ at arbitrary positions; enumerating over $j$ gives the weighted representation of joint $i$:

$$y_i=\frac{1}{C(x)}\sum_{\forall j} f(x_i,x_j)\,g(x_j),\tag{8}$$

where $x_i$ and $x_j$ denote the features of joints $i$ and $j$; the function $g$ computes the feature representation of joint $j$, $g(x_j)=W_g x_j$, with $W_g$ a weight matrix to be learned; and the Gaussian function $f$ is defined as

$$f(x_i,x_j)=e^{\theta(x_i)^{\mathrm T}\phi(x_j)},\tag{9}$$

where $\theta(x_i)=W_\theta x_i$ and $\phi(x_j)=W_\phi x_j$, and $C(x)=\sum_{\forall j} f(x_i,x_j)$ is set as the normalization factor of the correlation. To lower the computational cost and retain the low-level features as much as possible, a 2×2 max-pooling layer with stride 2 is added after the functions $\theta$, $\phi$, and $g$.

(3) A softmax function turns the weighted representations into the spatial attention map $M^{s}$:

$$M^{s}=\mathrm{softmax}(y^{s}).\tag{10}$$
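Steps (2) and (3) can be sketched numerically. The tiny two-channel features and the identity stand-ins for the learned embeddings θ, φ, and g are illustrative assumptions (in the module they are learned 1×1 convolutions):

```python
import math

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def attention_row(xi, xs, theta, phi):
    """Normalized correlation of a source joint to every joint j:
    a softmax over the embedded-Gaussian scores theta(x_i).phi(x_j) (Eq. 9)."""
    scores = [dot(theta(xi), phi(xj)) for xj in xs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # exp of embedded-Gaussian score
    c = sum(exps)                              # normalization factor C(x)
    return [e / c for e in exps]

def nonlocal_representation(xs, theta, phi, g):
    """y_i = (1/C(x)) * sum_j f(x_i, x_j) g(x_j), i.e. Eq. (8)."""
    ys = []
    for xi in xs:
        w = attention_row(xi, xs, theta, phi)
        yi = [0.0] * len(g(xs[0]))
        for wij, xj in zip(w, xs):
            yi = [a + wij * b for a, b in zip(yi, g(xj))]
        ys.append(yi)
    return ys

# Identity embeddings keep the arithmetic transparent (an assumption).
ident = lambda v: v
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # three joints, two channels
m_s = attention_row(x[0], x, ident, ident) # one row of the attention map
y = nonlocal_representation(x, ident, ident, ident)
```

Each output row is a convex combination of the joint features, so every target joint contributes to the source joint's representation at once, which is exactly what the local pairwise traversal cannot provide.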
(4) The input feature of the temporal attention module is written as $z^{t}$; repeating steps (2) and (3) gives the temporal attention map $M^{t}$, which is fused by addition with the spatial attention map and the input feature to give the spatio-temporal attention $M$:
$$M=x+M^{s}+M^{t}.\tag{11}$$

The attention mechanism based on non-local operations thus obtains the discriminative spatio-temporal joint correlations, removes the interference of items unrelated to the action region and of redundant joints in the input, and avoids unnecessary computation, which raises the accuracy.

2.5 Construction of the spatio-temporal feature extraction module

To extract the features of the skeleton sequence in the spatial and temporal dimensions, the dynamic skeleton is first modeled with a spatio-temporal graph convolutional network and the spatial partitioning strategy; the original expression is

$$f_{out}=\sum_{k}^{K_v} W_k\, f_{in}\,(A_k\odot M_k),\tag{12}$$

where $f_{in}$ and $f_{out}$ are the input and output features of the graph convolution, $K_v$ is the kernel size in the spatial domain, $W_k$ is the weight, $A_k$ is the adjacency matrix of the joints, $\odot$ denotes element-wise multiplication, and $M_k$ is the joint mapping matrix that assigns connection weights.

A predefined skeleton structure cannot recognize all unseen actions accurately, so an adaptive adjacency matrix $A_k$ is required to make the graph convolutional model adaptive. Therefore, to let the topology of the skeleton graph change during network learning, the adjacency and mapping matrices that determine the topology in Eq. (12) are split into three parts, $A_k$, $B_k$, and $C_k$.
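A toy instance of the spatial graph convolution of Eq. (12) for a single kernel $k$; the three-joint chain, the all-ones mapping matrix, and the scalar channel weight are illustrative assumptions:

```python
def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def st_gcn_layer(f_in, A, M, W):
    """One kernel of Eq. (12): f_out = W * f_in (A elementwise-times M).
    A: joint adjacency matrix, M: learned per-edge weights, W: channel weight."""
    n = len(A)
    weighted_adj = [[A[i][j] * M[i][j] for j in range(n)] for i in range(n)]
    agg = matmul(f_in, weighted_adj)   # aggregate features of connected joints
    return [[W * v for v in row] for row in agg]

# Three joints in a chain 0-1-2 with self-loops (illustrative).
A = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
M = [[1.0] * 3 for _ in range(3)]      # mapping matrix, initialised to ones
f_in = [[1.0, 2.0, 3.0]]               # one channel over the three joints
f_out = st_gcn_layer(f_in, A, M, W=0.5)
```

Because the aggregation is driven entirely by the fixed matrix `A`, the layer can only mix joints that the predefined skeleton connects, which is the limitation the adaptive split addresses.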
As shown in the block diagram of the adaptive graph convolution module in Fig. 4, the output feature is reconstructed as

$$f_{out}=\sum_{k}^{K_v} W_k\, f_{in}\,(A_k+B_k+C_k).\tag{13}$$

[Fig. 4 Adaptive graph convolutional module]

In Fig. 4, $\theta$ and $\phi$ are the Gaussian embedding functions of Eq. (9), with 1×1 convolution kernels. The first part, $A_k$, is still the adjacency matrix of the joints. The second part, $B_k$, acts as an additive complement to the original adjacency matrix and is updated iteratively during network training. The third part, $C_k$, learns connection weights driven continuously by the data: the joint correlations obtained from Eq. (8) are multiplied through the 1×1 convolutions to give the similarity matrix $C_k$:

$$C_k=\mathrm{softmax}\!\left(f_{in}^{\mathrm T} W_{\theta k}^{\mathrm T} W_{\phi k}\, f_{in}\right).\tag{14}$$

With the above computation an adaptive graph convolution module is constructed; next, the spatio-temporal information contained in the skeleton sequence is extracted.

The spatio-temporal feature extraction module proposed in this paper is shown in Fig. 5. After each convolution operation the data are normalized by a BN (batch normalization) layer, and a ReLU layer then improves the expressive power of the model. The embeddable spatio-temporal attention module STA, built in Section 2.4, extracts the joints of the action of interest once the features enter the module. An adaptive GCN then captures, in the spatial dimension, the correlations among the joints of the same frame, and a temporal convolutional network (TCN) captures, in the temporal dimension, the relation of the same joint across adjacent frames. A dropout layer, with its parameter set to 0.5, reduces the interaction of hidden units and prevents overfitting of the graph convolutional network, and a residual connection is added to increase the stability of the model.

[Fig. 5 Spatio-temporal feature extracting module]
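The ordering of the stages in Fig. 5 can be sketched as a composition with a residual add. Every stage below is a stand-in callable, so the scaling functions are illustrative assumptions; the real stages are the STA module, the adaptive GCN, the TCN, and a dropout layer, each followed by BN and ReLU:

```python
def feature_block(x, sta, gcn, tcn, dropout):
    """One spatio-temporal feature extraction block with a residual connection."""
    residual = x
    h = sta(x)          # spatio-temporal attention (Section 2.4)
    h = gcn(h)          # adaptive graph convolution: intra-frame joint relations
    h = tcn(h)          # temporal convolution: same joint across adjacent frames
    h = dropout(h)      # dropout (rate 0.5 in the text) against overfitting
    return [r + v for r, v in zip(residual, h)]

# Stand-in stages that just scale the feature vector (illustrative only).
scale = lambda s: (lambda v: [s * e for e in v])
out = feature_block([1.0, 2.0], sta=scale(1.0), gcn=scale(2.0),
                    tcn=scale(1.0), dropout=scale(0.5))
```

The residual path preserves the input even when the transformed branch is heavily attenuated, which is the stabilizing effect the skip connection is added for.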
整体网络结构搭建"}],"level":"2","id":"s2f"}},{"name":"p","data":[{"name":"text","data":"如"},{"name":"xref","data":{"text":"图6","type":"fig","rid":"F6","data":[{"name":"text","data":"图6"}]}},{"name":"text","data":"所示,将9个时空特征提取模块"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305652&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305650&type=","width":"10.92199993","height":"3.64066648","fontsize":""}}}]},{"name":"text","data":"进行堆叠,从特征输入"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305658&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305654&type=","width":"2.62466669","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"到行为标签"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305678&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305664&type=","width":"8.63599968","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"输出方向上,BN层用于骨架图输入后进行标准化,"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305685&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305669&type=","width":"10.92199993","height":"3.64066648","fontsize":""}}}]},{"name":"text","data":"输出特征维度为"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305693&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305675&type=","width":"32.34266663","height":"2.87866688",
"fontsize":""}}}]},{"name":"text","data":","},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305691&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305695&type=","width":"10.92199993","height":"3.64066648","fontsize":""}}}]},{"name":"text","data":"输出特征维度为"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305712&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305708&type=","width":"37.50733566","height":"3.30200005","fontsize":""}}}]},{"name":"text","data":","},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305698&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305716&type=","width":"10.92199993","height":"3.64066648","fontsize":""}}}]},{"name":"text","data":"输出特征维度为"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305705&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305721&type=","width":"37.50733566","height":"3.30200005","fontsize":""}}}]},{"name":"text","data":",其中通道数分别为64,64,64,128,128,128,256,256,256。在空间和时间维度上应用全局平均池化操作(Global Average 
Pooling,GAP)将样本的特征图大小进行统一,最终使用softmax层得到0"},{"name":"inlineformula","data":[{"name":"math","data":{"math":"","graphicsData":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305738&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305724&type=","width":"3.30200005","height":"2.79399991","fontsize":""}}}]},{"name":"text","data":"1之间的概率值,完成人体行为的识别。"}]},{"name":"fig","data":{"id":"F6","caption":[{"lang":"zh","label":[{"name":"text","data":"图6"}],"title":[{"name":"text","data":"整体网络架构"}]},{"lang":"en","label":[{"name":"text","data":"Fig.6"}],"title":[{"name":"text","data":"Overall network architecture"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305739&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305742&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305728&type=","width":"56.09166336","height":"31.75000000","fontsize":""}]}}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"3 实验结果与分析"}],"level":"1","id":"s3"}},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"3.1 实验配置"}],"level":"2","id":"s3a"}},{"name":"p","data":[{"name":"text","data":"实验平台的配置为第8代i7 CPU,64 GB内存,4 TB固态硬盘存储,显卡为RTX2080Ti。深度学习框架为PyTorch 1.3,Python版本为3.6。优化策略采用随机梯度下降(Stochastic gradient descent,SGD),每批次训练样本数(Batch size)设置为64,迭代次数(Epoch)设置为60,初始学习率(Learning rate)为0.1,Epoch达到20以后学习率设置为0.01。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"3.2 行为识别实验"}],"level":"2","id":"s3b"}},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"3.2.1 
校园安防实景测试"}],"level":"3","id":"s3b1"}},{"name":"p","data":[{"name":"text","data":"本文面向实际应用,对校园马路、操场和湖边等不同场景制作了12 000个视频片段,拳打、脚踢、倒地、推搡、打耳光和跪地6种典型动作各2 000个,单个时长不大于5 s。所有人员身高、体重和身体比例等方面有所差异,以增强模型的泛化能力。根据实验配置进行训练,"},{"name":"xref","data":{"text":"图7","type":"fig","rid":"F7","data":[{"name":"text","data":"图7"}]}},{"name":"text","data":"为模型的训练损失与综合测试准确率的变化曲线。"}]},{"name":"fig","data":{"id":"F7","caption":[{"lang":"zh","label":[{"name":"text","data":"图7"}],"title":[{"name":"text","data":"模型训练损失与测试准确率变化图"}]},{"lang":"en","label":[{"name":"text","data":"Fig.7"}],"title":[{"name":"text","data":"Variation diagram of model training loss and test accuracy"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305729&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305749&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305748&type=","width":"65.26388550","height":"48.33055496","fontsize":""}]}},{"name":"p","data":[{"name":"text","data":"可以看出随着迭代次数的增长,模型的训练损失逐渐下降。当epoch在20左右时,由于学习率的下降,测试准确率开始大幅提高;当epoch超过35之后,训练损失与测试准确率几乎保持不变。使用训练好的模型分别对6类动作对应的测试集进行测试,主要识别过程如"},{"name":"xref","data":{"text":"图8","type":"fig","rid":"F8","data":[{"name":"text","data":"图8"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"F8","caption":[{"lang":"zh","label":[{"name":"text","data":"图8"}],"title":[{"name":"text","data":"6种典型动作识别过程"}]},{"lang":"en","label":[{"name":"text","data":"Fig.8"}],"title":[{"name":"text","data":"Six typical action recognition 
processes"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305732&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305736&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305734&type=","width":"150.00000000","height":"96.06742096","fontsize":""}]}},{"name":"p","data":[{"name":"xref","data":{"text":"图8","type":"fig","rid":"F8","data":[{"name":"text","data":"图8"}]}},{"name":"text","data":"中处理的5组动作片段从左至右分别为拳打和脚踢、倒地、推搡、打耳光及跪地,"},{"name":"xref","data":{"text":"图8","type":"fig","rid":"F8","data":[{"name":"text","data":"图8"}]}},{"name":"text","data":"(a)是原视频;"},{"name":"xref","data":{"text":"图8","type":"fig","rid":"F8","data":[{"name":"text","data":"图8"}]}},{"name":"text","data":"(b)是对输入的含有拳打和脚踢动作的视频片段使用OpenPose进行人体关节点提取,正确匹配各关节点后得到人体骨架;"},{"name":"xref","data":{"text":"图8","type":"fig","rid":"F8","data":[{"name":"text","data":"图8"}]}},{"name":"text","data":"(c)是将骨架序列输入本文改进的时空图卷积网络得到动作片段的识别结果。改进后模型的处理速度最大可达20.6 fps,对校园安防实景中拳打、脚踢、倒地、推搡、打耳光和跪地6种典型动作识别准确率分别为94.5%,97.0%,98.5%,95.0%,94.5%,95.5%,测试结果如"},{"name":"xref","data":{"text":"表1","type":"table","rid":"T1","data":[{"name":"text","data":"表1"}]}},{"name":"text","data":"所示。"}]},{"name":"table","data":{"id":"T1","caption":[{"lang":"zh","label":[{"name":"text","data":"表1"}],"title":[{"name":"text","data":"6种典型动作识别结果"}]},{"lang":"en","label":[{"name":"text","data":"Tab.1"}],"title":[{"name":"text","data":"Six typical action recognition 
results"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"动作类别"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"样本个数/个"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"识别准确率/%"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"最大识别速度/fps"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"拳打"}]},{"align":"center","data":[{"name":"text","data":"200"}]},{"align":"center","data":[{"name":"text","data":"94.5"}]},{"align":"center","data":[{"name":"text","data":"20.6"}]}],[{"align":"center","data":[{"name":"text","data":"脚踢"}]},{"align":"center","data":[{"name":"text","data":"200"}]},{"align":"center","data":[{"name":"text","data":"97.0"}]},{"align":"center","data":[{"name":"text","data":"20.6"}]}],[{"align":"center","data":[{"name":"text","data":"倒地"}]},{"align":"center","data":[{"name":"text","data":"200"}]},{"align":"center","data":[{"name":"text","data":"98.5"}]},{"align":"center","data":[{"name":"text","data":"20.6"}]}],[{"align":"center","data":[{"name":"text","data":"推搡"}]},{"align":"center","data":[{"name":"text","data":"200"}]},{"align":"center","data":[{"name":"text","data":"95.0"}]},{"align":"center","data":[{"name":"text","data":"20.6"}]}],[{"align":"center","data":[{"name":"text","data":"打耳光"}]},{"align":"center","data":[{"name":"text","data":"200"}]},{"align":"center","data":[{"name":"text","data":"94.5"}]},{"align":"center","data":[{"name":"text","data":"20.6"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"跪地"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"200"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"95.5"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"20.6"}]}]],"
foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305755&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305737&type=","width":"77.00000000","height":"42.65208435","fontsize":""}}},{"name":"p","data":[{"name":"text","data":"为了验证不同体型(以身高、体重和肩宽表示)的人员对识别准确率的影响,选取参与数据集制作的1~6号实验人员,每次使用由单一实验人员获取的6种典型动作片段作为训练集,将由其他5个实验人员获取的6种动作片段作为测试集,并记录对所有动作的平均识别准确率,实验参数及结果如"},{"name":"xref","data":{"text":"表2","type":"table","rid":"T2","data":[{"name":"text","data":"表2"}]}},{"name":"text","data":"所示。"}]},{"name":"table","data":{"id":"T2","caption":[{"lang":"zh","label":[{"name":"text","data":"表2"}],"title":[{"name":"text","data":"不同体型人员动作识别结果"}]},{"lang":"en","label":[{"name":"text","data":"Tab.2"}],"title":[{"name":"text","data":"Action recognition results of personnel with different body types"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"测试集人员"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"身高/cm"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"体重/kg"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"肩宽/cm"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"p","data":[{"name":"text","data":"平均识别"}]},{"name":"p","data":[{"name":"text","data":"准确率/%"}]}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"1号"}]},{"align":"center","data":[{"name":"text","data":"173"}]},{"align":"center","data":[{"name":"text","data":"76"}]},{"align":"center","data":[{"name":"text","data":"45"}]},{"align":"center","data":[{"name":"text","data":"83.5"}]}],[{"align":"center","data":[{"name":"text","data":"2号"}]},{"align":"center","data":[{"name":"text","data":"179"}]},{"align":"center","data":[{"name":"text","data":"67"}]},{"align":"center"
,"data":[{"name":"text","data":"41"}]},{"align":"center","data":[{"name":"text","data":"68.7"}]}],[{"align":"center","data":[{"name":"text","data":"3号"}]},{"align":"center","data":[{"name":"text","data":"154"}]},{"align":"center","data":[{"name":"text","data":"46"}]},{"align":"center","data":[{"name":"text","data":"35"}]},{"align":"center","data":[{"name":"text","data":"72.4"}]}],[{"align":"center","data":[{"name":"text","data":"4号"}]},{"align":"center","data":[{"name":"text","data":"168"}]},{"align":"center","data":[{"name":"text","data":"59"}]},{"align":"center","data":[{"name":"text","data":"37"}]},{"align":"center","data":[{"name":"text","data":"85.6"}]}],[{"align":"center","data":[{"name":"text","data":"5号"}]},{"align":"center","data":[{"name":"text","data":"176"}]},{"align":"center","data":[{"name":"text","data":"80"}]},{"align":"center","data":[{"name":"text","data":"48"}]},{"align":"center","data":[{"name":"text","data":"85.0"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"6号"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"163"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"103"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"47"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"64.5"}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305772&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305757&type=","width":"76.90000153","height":"41.95209503","fontsize":""}}},{"name":"p","data":[{"name":"text","data":"由"},{"name":"xref","data":{"text":"表2","type":"table","rid":"T2","data":[{"name":"text","data":"表2"}]}},{"name":"text","data":"数据可知,使用单一实验人员所拍摄的6类动作片段作为数据集进行训练,并分别对其他人员的动作片段测试,测试结果最佳仅为85.6%,而使用所有实验人员视频片段识别准确率在94.5%以上,说明了不同人员体型的差异性可以增强模型的泛化能力,即鲁棒性。"}]},{"name":"p","data":[{"name":"xref","da
ta":{"text":"表2","type":"table","rid":"T2","data":[{"name":"text","data":"表2"}]}},{"name":"text","data":"的1~6号实验人员中,2号的体型为179 cm/67 kg,身材过瘦;3号的体型为154 cm/46 kg,身材矮小,但身高体重比例正常;6号的体型为163 cm/103 kg,身材肥胖;1号、4号和5号体型基本正常。不同体型的人做同一种动作时,姿态检测算法获取的18个人体骨骼点坐标有差异,从而骨长也会产生差异,关节点信息流、骨长信息流、关节点偏移信息流和骨长变化信息流4种数据信息也有区别。因为2号过瘦,各关节点坐标较为集中,而6号过胖,各关节点坐标较为分散,导致2号和6号的平均识别准确率最低,仅为68.7%和64.5%;而3号身材比例正常,但身高过于矮小,也导致了关节坐标点分布不均匀,72.4%的准确率低于其他正常体型。"}]},{"name":"p","data":[{"name":"text","data":"综上,在数据集的制作过程中所有人员体型差异的多样性可以增强模型的泛化能力,实验结果也表明本文方法可快速有效地识别出校园暴力的典型动作。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"3.2.2 方法对比实验"}],"level":"3","id":"s3b2"}},{"name":"p","data":[{"name":"text","data":"为了验证本文方法的有效性,采用具有挑战性的UCF101数据集进行行为识别对比实验。该数据集有101类动作,13 320段视频,在人员姿态、外观、摄像机运动状态和物体大小比例等方面具有多样性。"}]},{"name":"p","data":[{"name":"text","data":"按照6∶2∶2的比例,参与训练和验证的视频数据为10 656个,测试视频2 664个,使用"},{"name":"xref","data":{"text":"表3","type":"table","rid":"T3","data":[{"name":"text","data":"表3"}]}},{"name":"text","data":"中5种方法进行对比实验,在当前配置下对视频片段处理速度由9.2~15.5 fps最大提高至19.3 fps,对数据集中101类动作平均识别准确率以及参数量变化对比结果如"},{"name":"xref","data":{"text":"表3","type":"table","rid":"T3","data":[{"name":"text","data":"表3"}]}},{"name":"text","data":"所示,并在"},{"name":"xref","data":{"text":"表4","type":"table","rid":"T4","data":[{"name":"text","data":"表4"}]}},{"name":"text","data":"中给出了数据集中6种动作的识别准确率。"}]},{"name":"table","data":{"id":"T3","caption":[{"lang":"zh","label":[{"name":"text","data":"表3"}],"title":[{"name":"text","data":"不同识别方法的对比结果"}]},{"lang":"en","label":[{"name":"text","data":"Tab.3"}],"title":[{"name":"text","data":"Comparison results of different recognition 
methods"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"方法"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"参数量/MB"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"平均识别准确率/%"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"最大识别速度/fps"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"Slow Fusion CNN"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"R14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"17.03"}]},{"align":"center","data":[{"name":"text","data":"65.4"}]},{"align":"center","data":[{"name":"text","data":"15.5"}]}],[{"align":"center","data":[{"name":"text","data":"VA"},{"name":"italic","data":[{"name":"text","data":"-"}]},{"name":"text","data":"CNN"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"15","type":"bibr","rid":"R15","data":[{"name":"text","data":"15"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"24.03"}]},{"align":"center","data":[{"name":"text","data":"82.8"}]},{"align":"center","data":[{"name":"text","data":"9.2"}]}],[{"align":"center","data":[{"name":"text","data":"ST"},{"name":"italic","data":[{"name":"text","data":"-"}]},{"name":"text","data":"GCN"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"7","type":"bibr","rid":"R7","data":[{"name":"text","data":"7"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"3.10"}]},{"align":"center","data":[{"name":"text","data":"85.6"}]},{"align":"center","data":[{"name":"text","data":"12.7"}]}],[{"align":"center","data":[{"name":"text","data":"Ours(无注意力模块)"}]},{"align":"center","data":[{"name":"text
","data":"1.25"}]},{"align":"center","data":[{"name":"text","data":"86.8"}]},{"align":"center","data":[{"name":"text","data":"18.7"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"Ours(含注意力模块)"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"0.78"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"89.7"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"19.3"}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305764&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305762&type=","width":"77.00000000","height":"34.87708664","fontsize":""}}},{"name":"table","data":{"id":"T4","caption":[{"lang":"zh","label":[{"name":"text","data":"表4"}],"title":[{"name":"text","data":"数据集中6种动作的识别结果"}]},{"lang":"en","label":[{"name":"text","data":"Tab.4"}],"title":[{"name":"text","data":"Recognition results of six actions in the dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"动作类别"}]},{"align":"center","style":"border-top:solid;border-bottom:solid;","data":[{"name":"text","data":"识别准确率/%"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"拳打"}]},{"align":"center","data":[{"name":"text","data":"85.2"}]}],[{"align":"center","data":[{"name":"text","data":"脚踢"}]},{"align":"center","data":[{"name":"text","data":"90.7"}]}],[{"align":"center","data":[{"name":"text","data":"行走"}]},{"align":"center","data":[{"name":"text","data":"92.4"}]}],[{"align":"center","data":[{"name":"text","data":"下蹲"}]},{"align":"center","data":[{"name":"text","data":"88.2"}]}],[{"align":"center","data":[{"name":"text","data":"跳远"}]},{"align":"center","data":[{"name":"text","data":"87.6"}]}],[{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"推"}]},{"align":"center","style":"border-bottom:solid;","data":[{"name":"text","data":"91.5"}]}]],"foot":[]}],"graphics":{"small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305783&type=","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=29305767&type=","width":"77.00000000","height":"34.82500076","fontsize":""}}},{"name":"p","data":[{"name":"xref","data":{"text":"表3","type":"table","rid":"T3","data":[{"name":"text","data":"表3"}]}},{"name":"text","data":"数据表明:本文方法(无注意力模块)相对于两种卷积神经网络的方法,参数量分别减少约92.6%和94.7%,而识别准确率提高21.4%和4.0%;相对于改进前时空图卷积网络的方法,参数量减少约59.6%,而准确率提高1.2%。说明本文的多信息流数据融合方法可有效减少网络参数量,实现网络轻量化。其中,使用基于非局部运算的时空注意力机制相对于未使用时参数量减少约37.6%,准确率提高2.9%,说明改进后的时空注意力机制可有效减少冗余关节点信息,提高了特征的利用率,从而提高了识别准确率。"},{"name":"xref","data":{"text":"表4","type":"table","rid":"T4","data":[{"name":"text","data":"表4"}]}},{"name":"text","data":"列出了改进后方法在UCF101数据集中6种动作的识别准确率。由于该数据集中的动作片段来源于不受约束的网络视频,存在相机运动、部分遮挡和低分辨率等导致视频质量差的因素,实验中OpenPose进行人体关节点提取时,csv文件中所存的关节点坐标有部分缺失,因此相较于"},{"name":"xref","data":{"text":"表1","type":"table","rid":"T1
","data":[{"name":"text","data":"表1"}]}},{"name":"text","data":"中实测数据集识别准确率均偏低。"}]},{"name":"p","data":[{"name":"text","data":"综上,本文方法在保证准确率提升的同时实现了网络的轻量化,从而提高了可靠性与实时性。"}]}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"title":[{"name":"text","data":"4 结论"}],"level":"1","id":"s4"}},{"name":"p","data":[{"name":"text","data":"针对校园智能安防识别速度和识别率不高导致可靠性和实时性差的问题,本文提出了一种基于轻量级图卷积的人体骨架数据的行为识别方法,通过多信息流数据融合与自适应图卷积相结合的方式,同时通过嵌入时空注意力模块提高特征的利用率,在校园安防实景中对拳打、脚踢、倒地、推搡、打耳光和跪地6种典型动作识别准确率分别为94.5%,97.0%,98.5%,95.0%,94.5%,95.5%,识别速度最快为20.6 fps,且验证了模型的泛化能力。同时在行为识别数据集UCF101上验证了方法的通用性,可以扩展至人体其他动作,在参数量比原始时空图卷积网络减少了74.8%的情况下,平均识别准确率由85.6%提高到89.7%,识别速度最大提高至19.3 fps,能够较好地完成校园实际安防中出现最多的典型暴力行为识别任务。"}]}]}],"footnote":[],"reflist":{"title":[{"name":"text","data":"参考文献"}],"data":[{"id":"R1","label":"1","citation":[{"lang":"zh","text":[{"name":"text","data":"林国军"},{"name":"text","data":","},{"name":"text","data":"杨明中"},{"name":"text","data":","},{"name":"text","data":"陈明举"},{"name":"text","data":","},{"name":"text","data":"等"},{"name":"text","data":"."},{"name":"text","data":"一种肤色定位的人脸检测算法"},{"name":"text","data":"[J]."},{"name":"text","data":"液晶与显示"},{"name":"text","data":","},{"name":"text","data":"2019"},{"name":"text","data":","},{"name":"text","data":"34"},{"name":"text","data":"("},{"name":"text","data":"1"},{"name":"text","data":"):"},{"name":"text","data":"70"},{"name":"text","data":"-"},{"name":"text","data":"73"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3788/yjyxs20193401.0070"}],"href":"http://dx.doi.org/10.3788/yjyxs20193401.0070"}}],"title":"一种肤色定位的人脸检测算法"},{"lang":"en","text":[{"name":"text","data":"LIN G J"},{"name":"text","data":", "},{"name":"text","data":"YANG M Z"},{"name":"text","data":", "},{"name":"text","data":"CHEN M J"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". 
"},{"name":"text","data":"Face detection algorithm based on skin color location"},{"name":"text","data":" [J]. "},{"name":"text","data":"Chinese Journal of Liquid Crystals and Displays"},{"name":"text","data":", "},{"name":"text","data":"2019"},{"name":"text","data":", "},{"name":"text","data":"34"},{"name":"text","data":"("},{"name":"text","data":"1"},{"name":"text","data":"): "},{"name":"text","data":"70"},{"name":"text","data":"-"},{"name":"text","data":"73"},{"name":"text","data":". "},{"name":"text","data":"(in Chinese)"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3788/yjyxs20193401.0070"}],"href":"http://dx.doi.org/10.3788/yjyxs20193401.0070"}}],"title":"Face detection algorithm based on skin color location"}]},{"id":"R2","label":"2","citation":[{"lang":"zh","text":[{"name":"text","data":"李鹏飞"},{"name":"text","data":","},{"name":"text","data":"许金凯"},{"name":"text","data":","},{"name":"text","data":"韩文波"},{"name":"text","data":","},{"name":"text","data":"等"},{"name":"text","data":"."},{"name":"text","data":"基于S3C2440的人脸识别平台的设计"},{"name":"text","data":"[J]."},{"name":"text","data":"液晶与显示"},{"name":"text","data":","},{"name":"text","data":"2014"},{"name":"text","data":","},{"name":"text","data":"29"},{"name":"text","data":"("},{"name":"text","data":"3"},{"name":"text","data":"):"},{"name":"text","data":"417"},{"name":"text","data":"-"},{"name":"text","data":"421"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3788/yjyxs20142903.0417"}],"href":"http://dx.doi.org/10.3788/yjyxs20142903.0417"}}],"title":"基于S3C2440的人脸识别平台的设计"},{"lang":"en","text":[{"name":"text","data":"LI P F"},{"name":"text","data":", "},{"name":"text","data":"XU J K"},{"name":"text","data":", "},{"name":"text","data":"HAN W B"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". 
"},{"name":"text","data":"Design of face identification platform based on S3C2440"},{"name":"text","data":" [J]. "},{"name":"text","data":"Chinese Journal of Liquid Crystals and Displays"},{"name":"text","data":", "},{"name":"text","data":"2014"},{"name":"text","data":", "},{"name":"text","data":"29"},{"name":"text","data":"("},{"name":"text","data":"3"},{"name":"text","data":"): "},{"name":"text","data":"417"},{"name":"text","data":"-"},{"name":"text","data":"421"},{"name":"text","data":". "},{"name":"text","data":"(in Chinese)"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3788/yjyxs20142903.0417"}],"href":"http://dx.doi.org/10.3788/yjyxs20142903.0417"}}],"title":"Design of face identification platform based on S3C2440"}]},{"id":"R3","label":"3","citation":[{"lang":"zh","text":[{"name":"text","data":"胡韬"},{"name":"text","data":"."},{"name":"text","data":"基于深度学习卷积神经网络的人体行为识别研究"},{"name":"text","data":"[J]."},{"name":"text","data":"科技传播"},{"name":"text","data":","},{"name":"text","data":"2020"},{"name":"text","data":","},{"name":"text","data":"12"},{"name":"text","data":"("},{"name":"text","data":"6"},{"name":"text","data":"):"},{"name":"text","data":"130"},{"name":"text","data":"-"},{"name":"text","data":"131"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3969/j.issn.1674-6708.2020.06.062"}],"href":"http://dx.doi.org/10.3969/j.issn.1674-6708.2020.06.062"}}],"title":"基于深度学习卷积神经网络的人体行为识别研究"},{"lang":"en","text":[{"name":"text","data":"HU T"},{"name":"text","data":". "},{"name":"text","data":"Research on human behavior recognition based on deep learning convolutional neural network"},{"name":"text","data":" [J]. 
"},{"name":"text","data":"Public Communication of Science & Technology"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"12"},{"name":"text","data":"("},{"name":"text","data":"6"},{"name":"text","data":"): "},{"name":"text","data":"130"},{"name":"text","data":"-"},{"name":"text","data":"131"},{"name":"text","data":". "},{"name":"text","data":"(in Chinese)"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3969/j.issn.1674-6708.2020.06.062"}],"href":"http://dx.doi.org/10.3969/j.issn.1674-6708.2020.06.062"}}],"title":"Research on human behavior recognition based on deep learning convolutional neural network"}]},{"id":"R4","label":"4","citation":[{"lang":"zh","text":[{"name":"text","data":"蔡强"},{"name":"text","data":","},{"name":"text","data":"邓毅彪"},{"name":"text","data":","},{"name":"text","data":"李海生"},{"name":"text","data":","},{"name":"text","data":"等"},{"name":"text","data":"."},{"name":"text","data":"基于深度学习的人体行为识别方法综述"},{"name":"text","data":"[J]."},{"name":"text","data":"计算机科学"},{"name":"text","data":","},{"name":"text","data":"2020"},{"name":"text","data":","},{"name":"text","data":"47"},{"name":"text","data":"("},{"name":"text","data":"4"},{"name":"text","data":"):"},{"name":"text","data":"85"},{"name":"text","data":"-"},{"name":"text","data":"93"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.11896/jsjkx.190300005"}],"href":"http://dx.doi.org/10.11896/jsjkx.190300005"}}],"title":"基于深度学习的人体行为识别方法综述"},{"lang":"en","text":[{"name":"text","data":"CAI Q"},{"name":"text","data":", "},{"name":"text","data":"DENG Y B"},{"name":"text","data":", "},{"name":"text","data":"LI H S"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". 
"},{"name":"text","data":"Survey on human action recognition based on deep learning"},{"name":"text","data":" [J]. "},{"name":"text","data":"Computer Science"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"47"},{"name":"text","data":"("},{"name":"text","data":"4"},{"name":"text","data":"): "},{"name":"text","data":"85"},{"name":"text","data":"-"},{"name":"text","data":"93"},{"name":"text","data":". "},{"name":"text","data":"(in Chinese)"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.11896/jsjkx.190300005"}],"href":"http://dx.doi.org/10.11896/jsjkx.190300005"}}],"title":"Survey on human action recognition based on deep learning"}]},{"id":"R5","label":"5","citation":[{"lang":"en","text":[{"name":"text","data":"YANG X D"},{"name":"text","data":", "},{"name":"text","data":"TIAN Y L"},{"name":"text","data":". "},{"name":"text","data":"Effective 3D action recognition using EigenJoints"},{"name":"text","data":" [J]. "},{"name":"text","data":"Journal of Visual Communication and Image Representation"},{"name":"text","data":", "},{"name":"text","data":"2014"},{"name":"text","data":", "},{"name":"text","data":"25"},{"name":"text","data":"("},{"name":"text","data":"1"},{"name":"text","data":"): "},{"name":"text","data":"2"},{"name":"text","data":"-"},{"name":"text","data":"11"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1016/j.jvcir.2013.03.001"}],"href":"http://dx.doi.org/10.1016/j.jvcir.2013.03.001"}}],"title":"Effective 3D action recognition using EigenJoints"}]},{"id":"R6","label":"6","citation":[{"lang":"en","text":[{"name":"text","data":"LI B"},{"name":"text","data":", "},{"name":"text","data":"DAI Y C"},{"name":"text","data":", "},{"name":"text","data":"CHENG X L"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". 
"},{"name":"text","data":"Skeleton based action recognition using translation-scale invariant image mapping and multi-scale deep CNN"},{"name":"text","data":" [C]//"},{"name":"text","data":"Proceedings of 2017 IEEE International Conference on Multimedia & Expo Workshops"},{"name":"text","data":". "},{"name":"text","data":"Hong Kong, China"},{"name":"text","data":": "},{"name":"text","data":"IEEE"},{"name":"text","data":", "},{"name":"text","data":"2017"},{"name":"text","data":": "},{"name":"text","data":"601"},{"name":"text","data":"-"},{"name":"text","data":"604"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/icmew.2017.8026282"}],"href":"http://dx.doi.org/10.1109/icmew.2017.8026282"}}],"title":"Skeleton based action recognition using translation-scale invariant image mapping and multi-scale deep CNN"}]},{"id":"R7","label":"7","citation":[{"lang":"en","text":[{"name":"text","data":"YAN S J"},{"name":"text","data":", "},{"name":"text","data":"XIONG Y J"},{"name":"text","data":", "},{"name":"text","data":"LIN D H"},{"name":"text","data":". "},{"name":"text","data":"Spatial temporal graph convolutional networks for skeleton-based action recognition"},{"name":"text","data":" [C]//"},{"name":"text","data":"Proceedings of the 32nd AAAI Conference on Artificial Intelligence"},{"name":"text","data":". 
"},{"name":"text","data":"New Orleans"},{"name":"text","data":": "},{"name":"text","data":"AAAI Press"},{"name":"text","data":", "},{"name":"text","data":"2018"},{"name":"text","data":": "},{"name":"text","data":"7444"},{"name":"text","data":"-"},{"name":"text","data":"7452"},{"name":"text","data":"."}],"title":"Spatial temporal graph convolutional networks for skeleton-based action recognition"}]},{"id":"R8","label":"8","citation":[{"lang":"zh","text":[{"name":"text","data":"赵明富"},{"name":"text","data":","},{"name":"text","data":"刘帅"},{"name":"text","data":","},{"name":"text","data":"宋涛"},{"name":"text","data":","},{"name":"text","data":"等"},{"name":"text","data":"."},{"name":"text","data":"基于残差独立循环神经网络的空间增强人体骨架行为识别"},{"name":"text","data":"[J]."},{"name":"text","data":"激光杂志"},{"name":"text","data":","},{"name":"text","data":"2020"},{"name":"text","data":","},{"name":"text","data":"41"},{"name":"text","data":"("},{"name":"text","data":"7"},{"name":"text","data":"):"},{"name":"text","data":"37"},{"name":"text","data":"-"},{"name":"text","data":"43"},{"name":"text","data":"."}],"title":"基于残差独立循环神经网络的空间增强人体骨架行为识别"},{"lang":"en","text":[{"name":"text","data":"ZHAO M F"},{"name":"text","data":", "},{"name":"text","data":"LIU S"},{"name":"text","data":", "},{"name":"text","data":"SONG T"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Spatial enhancement of human skeleton behavior recognition based on residual independent recurrent neural network"},{"name":"text","data":" [J]. "},{"name":"text","data":"Laser Journal"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"41"},{"name":"text","data":"("},{"name":"text","data":"7"},{"name":"text","data":"): "},{"name":"text","data":"37"},{"name":"text","data":"-"},{"name":"text","data":"43"},{"name":"text","data":". 
"},{"name":"text","data":"(in Chinese)"}],"title":"Spatial enhancement of human skeleton behavior recognition based on residual independent recurrent neural network"}]},{"id":"R9","label":"9","citation":[{"lang":"zh","text":[{"name":"text","data":"曹毅"},{"name":"text","data":","},{"name":"text","data":"刘晨"},{"name":"text","data":","},{"name":"text","data":"黄子龙"},{"name":"text","data":","},{"name":"text","data":"等"},{"name":"text","data":"."},{"name":"text","data":"时空自适应图卷积神经网络的骨架行为识别"},{"name":"text","data":"[J]."},{"name":"text","data":"华中科技大学学报(自然科学版)"},{"name":"text","data":","},{"name":"text","data":"2020"},{"name":"text","data":","},{"name":"text","data":"48"},{"name":"text","data":"("},{"name":"text","data":"11"},{"name":"text","data":"):"},{"name":"text","data":"5"},{"name":"text","data":"-"},{"name":"text","data":"10"},{"name":"text","data":"."}],"title":"时空自适应图卷积神经网络的骨架行为识别"},{"lang":"en","text":[{"name":"text","data":"CAO Y"},{"name":"text","data":", "},{"name":"text","data":"LIU C"},{"name":"text","data":", "},{"name":"text","data":"HUANG Z L"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Skeleton-based action recognition based on spatio-temporal adaptive graph convolutional neural-network"},{"name":"text","data":" [J]. "},{"name":"text","data":"Journal of Huazhong University of Science and Technology (Nature Science Edition)"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"48"},{"name":"text","data":"("},{"name":"text","data":"11"},{"name":"text","data":"): "},{"name":"text","data":"5"},{"name":"text","data":"-"},{"name":"text","data":"10"},{"name":"text","data":". 
"},{"name":"text","data":"(in Chinese)"}],"title":"Skeleton-based action recognition based on spatio-temporal adaptive graph convolutional neural-network"}]},{"id":"R10","label":"10","citation":[{"lang":"zh","text":[{"name":"text","data":"王健宗"},{"name":"text","data":","},{"name":"text","data":"孔令炜"},{"name":"text","data":","},{"name":"text","data":"黄章成"},{"name":"text","data":","},{"name":"text","data":"等"},{"name":"text","data":"."},{"name":"text","data":"图神经网络综述"},{"name":"text","data":"[J]."},{"name":"text","data":"计算机工程"},{"name":"text","data":","},{"name":"text","data":"2021"},{"name":"text","data":","},{"name":"text","data":"47"},{"name":"text","data":"("},{"name":"text","data":"4"},{"name":"text","data":"):"},{"name":"text","data":"1"},{"name":"text","data":"-"},{"name":"text","data":"12"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.19678/j.issn.1000-3428.0058382"}],"href":"http://dx.doi.org/10.19678/j.issn.1000-3428.0058382"}}],"title":"图神经网络综述"},{"lang":"en","text":[{"name":"text","data":"WANG J Z"},{"name":"text","data":", "},{"name":"text","data":"KONG L W"},{"name":"text","data":", "},{"name":"text","data":"HUANG Z C"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Survey of graph neural network"},{"name":"text","data":" [J]. "},{"name":"text","data":"Computer Engineering"},{"name":"text","data":", "},{"name":"text","data":"2021"},{"name":"text","data":", "},{"name":"text","data":"47"},{"name":"text","data":"("},{"name":"text","data":"4"},{"name":"text","data":"): "},{"name":"text","data":"1"},{"name":"text","data":"-"},{"name":"text","data":"12"},{"name":"text","data":". "},{"name":"text","data":"(in Chinese)"},{"name":"text","data":". 
"},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.19678/j.issn.1000-3428.0058382"}],"href":"http://dx.doi.org/10.19678/j.issn.1000-3428.0058382"}}],"title":"Survey of graph neural network"}]},{"id":"R11","label":"11","citation":[{"lang":"zh","text":[{"name":"text","data":"孔玮"},{"name":"text","data":","},{"name":"text","data":"刘云"},{"name":"text","data":","},{"name":"text","data":"李辉"},{"name":"text","data":","},{"name":"text","data":"等"},{"name":"text","data":"."},{"name":"text","data":"基于图卷积网络的行为识别方法综述"},{"name":"text","data":"[J]."},{"name":"text","data":"控制与决策"},{"name":"text","data":","},{"name":"text","data":"2021"},{"name":"text","data":","},{"name":"text","data":"36"},{"name":"text","data":"("},{"name":"text","data":"7"},{"name":"text","data":"):"},{"name":"text","data":"1537"},{"name":"text","data":"-"},{"name":"text","data":"1546"},{"name":"text","data":"."}],"title":"基于图卷积网络的行为识别方法综述"},{"lang":"en","text":[{"name":"text","data":"KONG W"},{"name":"text","data":", "},{"name":"text","data":"LIU Y"},{"name":"text","data":", "},{"name":"text","data":"LI H"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"A survey of action recognition methods based on graph convolutional network"},{"name":"text","data":" [J]. "},{"name":"text","data":"Control and Decision"},{"name":"text","data":", "},{"name":"text","data":"2021"},{"name":"text","data":", "},{"name":"text","data":"36"},{"name":"text","data":"("},{"name":"text","data":"7"},{"name":"text","data":"): "},{"name":"text","data":"1537"},{"name":"text","data":"-"},{"name":"text","data":"1546"},{"name":"text","data":". 
"},{"name":"text","data":"(in Chinese)"}],"title":"A survey of action recognition methods based on graph convolutional network"}]},{"id":"R12","label":"12","citation":[{"lang":"zh","text":[{"name":"text","data":"朱洪堃"},{"name":"text","data":","},{"name":"text","data":"殷佳炜"},{"name":"text","data":","},{"name":"text","data":"冯文宇"},{"name":"text","data":","},{"name":"text","data":"等"},{"name":"text","data":"."},{"name":"text","data":"一种轻量化实时人体姿势检测模型研究与应用"},{"name":"text","data":"[J]."},{"name":"text","data":"系统仿真学报"},{"name":"text","data":","},{"name":"text","data":"2020"},{"name":"text","data":","},{"name":"text","data":"32"},{"name":"text","data":"("},{"name":"text","data":"11"},{"name":"text","data":"):"},{"name":"text","data":"2155"},{"name":"text","data":"-"},{"name":"text","data":"2165"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.16182/j.issn1004731x.joss.20-FZ0308"}],"href":"http://dx.doi.org/10.16182/j.issn1004731x.joss.20-FZ0308"}}],"title":"一种轻量化实时人体姿势检测模型研究与应用"},{"lang":"en","text":[{"name":"text","data":"ZHU H K"},{"name":"text","data":", "},{"name":"text","data":"YIN J W"},{"name":"text","data":", "},{"name":"text","data":"FENG W Y"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Research and application of a lightweight real-time human posture detection model"},{"name":"text","data":" [J]. "},{"name":"text","data":"Journal of System Simulation"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"32"},{"name":"text","data":"("},{"name":"text","data":"11"},{"name":"text","data":"): "},{"name":"text","data":"2155"},{"name":"text","data":"-"},{"name":"text","data":"2165"},{"name":"text","data":". "},{"name":"text","data":"(in Chinese)"},{"name":"text","data":". 
"},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.16182/j.issn1004731x.joss.20-FZ0308"}],"href":"http://dx.doi.org/10.16182/j.issn1004731x.joss.20-FZ0308"}}],"title":"Research and application of a lightweight real-time human posture detection model"}]},{"id":"R13","label":"13","citation":[{"lang":"zh","text":[{"name":"text","data":"朱建宝"},{"name":"text","data":","},{"name":"text","data":"许志龙"},{"name":"text","data":","},{"name":"text","data":"孙玉玮"},{"name":"text","data":","},{"name":"text","data":"等"},{"name":"text","data":"."},{"name":"text","data":"基于OpenPose人体姿态识别的变电站危险行为检测"},{"name":"text","data":"[J]."},{"name":"text","data":"自动化与仪表"},{"name":"text","data":","},{"name":"text","data":"2020"},{"name":"text","data":","},{"name":"text","data":"35"},{"name":"text","data":"("},{"name":"text","data":"2"},{"name":"text","data":"):"},{"name":"text","data":"47"},{"name":"text","data":"-"},{"name":"text","data":"51"},{"name":"text","data":"."}],"title":"基于OpenPose人体姿态识别的变电站危险行为检测"},{"lang":"en","text":[{"name":"text","data":"ZHU J B"},{"name":"text","data":", "},{"name":"text","data":"XU Z L"},{"name":"text","data":", "},{"name":"text","data":"SUN Y W"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Detection of dangerous behaviors in power stations based on OpenPose multi-person attitude recognition"},{"name":"text","data":" [J]. "},{"name":"text","data":"Automation & Instrumentation"},{"name":"text","data":", "},{"name":"text","data":"2020"},{"name":"text","data":", "},{"name":"text","data":"35"},{"name":"text","data":"("},{"name":"text","data":"2"},{"name":"text","data":"): "},{"name":"text","data":"47"},{"name":"text","data":"-"},{"name":"text","data":"51"},{"name":"text","data":". 
"},{"name":"text","data":"(in Chinese)"}],"title":"Detection of dangerous behaviors in power stations based on OpenPose multi-person attitude recognition"}]},{"id":"R14","label":"14","citation":[{"lang":"en","text":[{"name":"text","data":"KARPATHY A"},{"name":"text","data":", "},{"name":"text","data":"TODERICI G"},{"name":"text","data":", "},{"name":"text","data":"SHETTY S"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"Large-scale video classification with convolutional neural networks"},{"name":"text","data":" [C]//"},{"name":"text","data":"Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition"},{"name":"text","data":". "},{"name":"text","data":"Columbus"},{"name":"text","data":": "},{"name":"text","data":"IEEE"},{"name":"text","data":", "},{"name":"text","data":"2014"},{"name":"text","data":": "},{"name":"text","data":"1725"},{"name":"text","data":"-"},{"name":"text","data":"1732"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/cvpr.2014.223"}],"href":"http://dx.doi.org/10.1109/cvpr.2014.223"}}],"title":"Large-scale video classification with convolutional neural networks"}]},{"id":"R15","label":"15","citation":[{"lang":"en","text":[{"name":"text","data":"ZHANG P F"},{"name":"text","data":", "},{"name":"text","data":"LAN C L"},{"name":"text","data":", "},{"name":"text","data":"XING J L"},{"name":"text","data":", "},{"name":"text","data":"et al"},{"name":"text","data":". "},{"name":"text","data":"View adaptive neural networks for high performance skeleton-based human action recognition"},{"name":"text","data":" [J]. 
"},{"name":"text","data":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"name":"text","data":", "},{"name":"text","data":"2019"},{"name":"text","data":", "},{"name":"text","data":"41"},{"name":"text","data":"("},{"name":"text","data":"8"},{"name":"text","data":"): "},{"name":"text","data":"1963"},{"name":"text","data":"-"},{"name":"text","data":"1978"},{"name":"text","data":". "},{"name":"text","data":" doi: "},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/tpami.2019.2896631"}],"href":"http://dx.doi.org/10.1109/tpami.2019.2896631"}}],"title":"View adaptive neural networks for high performance skeleton-based human action recognition"}]}]},"response":[],"contributions":[],"acknowledgements":[],"conflict":[],"supportedby":[],"articlemeta":{"doi":"10.37188/CJLCD.2021-0229","clc":[[{"name":"text","data":"TP391.4"}]],"dc":[{"name":"text","data":"A"}],"publisherid":"1007-2780(2022)04-0530-09","citeme":[],"fundinggroup":[{"lang":"zh","text":[{"name":"text","data":"西安市科技计划(No.201806117YF05NC13(1));西安市未央区科技计划(No.201305);陕西科技大学博士科研启动基金(No.BJ13-15)"}]},{"lang":"en","text":[{"name":"text","data":"Supported by Xi’an Science and Technology Plan Project(No.201806117YF05NC13(1));Xi’an Weiyang District Science and Technology Plan Project(No.201305);Doctoral Research Start-up Fund of Shaanxi University of Science and Technology(No.BJ13-15)"}]}],"history":{"received":"2021-08-31","revised":"2021-10-27","ppub":"2022-04-05","opub":"2022-06-14"}},"appendix":[],"type":"research-article","ethics":[],"backSec":[],"supplementary":[],"journalTitle":"液晶与显示","issue":"4","volume":"37","originalSource":[]}