{"defaultlang":"zh","titlegroup":{"articletitle":[{"lang":"zh","data":[{"name":"text","data":"基于时空上下文和随机森林的人眼跟踪定位算法研究"}]},{"lang":"en","data":[{"name":"text","data":"Human eye locating and tracking using space-time context and random forest"}]}]},"contribgroup":{"author":[{"name":[{"lang":"zh","surname":"刘","givenname":"林涛","namestyle":"eastern","prefix":""},{"lang":"en","surname":"LIU","givenname":"Lin-tao","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":"1"}],"role":["first-author"],"bio":[{"lang":"zh","text":["刘林涛(1990-), 男, 河南开封人, 硕士研究生, 2015年进入电子科技大学电子科学技术研究院攻读硕士学位。主要从事3D显示中的图像处理及人脸识别人眼识别等方面的研究。E-mail:lintaoliu@foxmail.com"],"graphic":[],"data":[[{"name":"text","data":"刘林涛(1990-), 男, 河南开封人, 硕士研究生, 2015年进入电子科技大学电子科学技术研究院攻读硕士学位。主要从事3D显示中的图像处理及人脸识别人眼识别等方面的研究。E-mail:"},{"name":"text","data":"lintaoliu@foxmail.com"}]]}],"email":"lintaoliu@foxmail.com","deceased":false},{"name":[{"lang":"zh","surname":"董","givenname":"雪莹","namestyle":"eastern","prefix":""},{"lang":"en","surname":"DONG","givenname":"Xue-ying","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff2","text":"2"}],"role":[],"deceased":false},{"name":[{"lang":"zh","surname":"刘","givenname":"俊","namestyle":"eastern","prefix":""},{"lang":"en","surname":"LIU","givenname":"Jun","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":"1"}],"role":[],"deceased":false},{"name":[{"lang":"zh","surname":"汪","givenname":"相如","namestyle":"eastern","prefix":""},{"lang":"en","surname":"WANG","givenname":"Xiang-ru","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff3","text":"3"}],"role":[],"deceased":false},{"name":[{"lang":"zh","surname":"黄","givenname":"子强","namestyle":"eastern","prefix":""},{"lang":"en","surname":"HUANG","givenname":"Zi-qiang","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":"1"}],"role":["corresp"],"corresp":[{"rid":"cor1","lang":"zh","text":"黄子强, 
E-mail:zqhuang@188.com","data":[{"name":"text","data":"黄子强, E-mail:zqhuang@188.com"}]}],"bio":[{"lang":"zh","text":["黄子强(1956-), 男, 四川成都人, 教授, 硕士生导师, 1986年从电子科技大学光电子技术系获得硕士学位, 同年留在电子科技大学光电子技术系工作。1992获得意大利政府奖学金以访问学者身份赴Calabria大学。2000年进入电子科技大学任教至今。主要从事性液晶显示器、可变视角液晶显示器、激光扫描用液晶光栅、机载、雷达用液晶显示器等方面的研究。E-mail:zqhuang@188.com"],"graphic":[],"data":[[{"name":"text","data":"黄子强(1956-), 男, 四川成都人, 教授, 硕士生导师, 1986年从电子科技大学光电子技术系获得硕士学位, 同年留在电子科技大学光电子技术系工作。1992获得意大利政府奖学金以访问学者身份赴Calabria大学。2000年进入电子科技大学任教至今。主要从事性液晶显示器、可变视角液晶显示器、激光扫描用液晶光栅、机载、雷达用液晶显示器等方面的研究。E-mail:"},{"name":"text","data":"zqhuang@188.com"}]]}],"email":"zqhuang@188.com","deceased":false}],"aff":[{"id":"aff1","intro":[{"lang":"zh","label":"1","text":"电子科技大学 电子科学技术研究院, 四川 成都 611731","data":[{"name":"text","data":"电子科技大学 电子科学技术研究院, 四川 成都 611731"}]},{"lang":"en","label":"1","text":"Research Institute Electronic Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China","data":[{"name":"text","data":"Research Institute Electronic Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China"}]}]},{"id":"aff2","intro":[{"lang":"zh","label":"2","text":"贵州大学 大数据与信息工程学院, 贵州 贵阳 550025","data":[{"name":"text","data":"贵州大学 大数据与信息工程学院, 贵州 贵阳 550025"}]},{"lang":"en","label":"2","text":"College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, China","data":[{"name":"text","data":"College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, China"}]}]},{"id":"aff3","intro":[{"lang":"zh","label":"3","text":"电子科技大学 光电科学与工程学院, 四川 成都 610054","data":[{"name":"text","data":"电子科技大学 光电科学与工程学院, 四川 成都 610054"}]},{"lang":"en","label":"3","text":"Research Institute School of Optoelectrong Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China","data":[{"name":"text","data":"Research Institute School of Optoelectrong Science and Engineering, University of 
Electronic Science and Technology of China, Chengdu 610054, China"}]}]}]},"abstracts":[{"lang":"zh","data":[{"name":"p","data":[{"name":"text","data":"为了得到人眼跟踪过程中更好的鲁棒性和实时性以及跟踪精度,提出一种基于自适应增强分类算法(AdaBoost)、随机森林(RF)和时空上下文(STC)的重定位跟踪算法。该算法结构分为3层,分别为AdaBoost人脸检测、STC人脸跟踪和RF人眼定位。首先,利用AdaBoost在第一帧识别出人脸,从而提取出人脸窗口。接着,使用时空上下文跟踪算法进行人脸跟踪。然后,联合定向梯度直方图(HOG)算法进行相似度判断,以达到目标丢失后继续跟踪的目的。最后,采用随机森林算法进行人眼定位。实验结果表明,与传统的随机森林人眼跟踪算法相比,该算法的跟踪速度达到原方法的2倍左右,并且在跟踪精度和鲁棒性上与原算法相同,基本满足裸眼3D显示时人脸跟踪和人眼定位的精度高、实时性快、鲁棒性好的要求。"}]}]},{"lang":"en","data":[{"name":"p","data":[{"name":"text","data":"In order to obtain better robustness, real-time performance and tracking accuracy in the human eye tracking process, this paper proposed a relocation tracking method based on adaptive boosting (AdaBoost), random forest (RF) and space-time context (STC). The algorithm structure was divided into three layers, which were AdaBoost face detection, STC face tracking and RF eye positioning. First, AdaBoost was used to recognize faces in the first frame to extract face windows. Then, the spatio-temporal context algorithm was employed for face tracking. Afterwards, the histogram of oriented gradients (HOG) was used to judge the similarity, so as to continue tracking after the target is lost. Finally, the random forest algorithm was used to locate the human eye. Experimental results indicate that the algorithm has a tracking speed about twice that of the original method. Moreover, this method has the same tracking accuracy and robustness as the original algorithm. 
It can basically satisfy the requirements of high precision, real-time performance and good robustness for human eye location in naked-eye 3D display."}]}]}],"keyword":[{"lang":"zh","data":[[{"name":"text","data":"级联分类器"}],[{"name":"text","data":"随机森林"}],[{"name":"text","data":"时空上下文"}],[{"name":"text","data":"人脸检测"}],[{"name":"text","data":"人眼定位"}]]},{"lang":"en","data":[[{"name":"text","data":"AdaBoost"}],[{"name":"text","data":"random forest"}],[{"name":"text","data":"spatio-temporal context"}],[{"name":"text","data":"face detection"}],[{"name":"text","data":"human eye location"}]]}],"highlights":[],"body":[{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"1"}],"title":[{"name":"text","data":"引言"}],"level":"1","id":"s1"}},{"name":"p","data":[{"name":"text","data":"人脸探测和人眼跟踪是近年来计算机视觉跟踪的研究热点,已广泛应用到人机交互、人工智能、3D显示、全息显示等各个领域。例如,熊晶莹"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"1","type":"bibr","rid":"b1","data":[{"name":"text","data":"1"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等人提出了应用于移动智能设备的目标跟踪器。近年来在3D显示和全息显示应用上,对人脸和人眼跟踪算法的研究有很多。人眼作为面部最具特征的一个部分,在人与机器的交流中起着非常重要的作用,尤其是在裸眼3D显示器上,需要减轻人移动时串扰的影响。曾校蝴"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"2","type":"bibr","rid":"b2","data":[{"name":"text","data":"2"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、王嘉辉"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"3","type":"bibr","rid":"b3","data":[{"name":"text","data":"3"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、谢雨桐"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"4","type":"bibr","rid":"b4","data":[{"name":"text","data":"4"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"都在裸眼3D显示效果的关键指标中提到串扰的影响。解决串扰问题,一方面是3D裸眼显示器本身的优化,另一方面人眼跟踪定位的实时性非常重要。因此,为了增强用户的体验和舒适度,当前对人眼定位的准确性和实时性提出了更高的要求。"}]},{"name":"p","data":[{"name":"text","data":"人眼定位的方法大致分为3类:基于几何特征的算法、基于模板匹配的算法和基于机器学习的算法。R. Valenti提出了利用面部的灰度等高线曲率的几何特征,并以曲度为权重对等高线中心进行投票
来确定瞳孔的位置"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"5","type":"bibr","rid":"b5","data":[{"name":"text","data":"5"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。K.Peng使用了基于积分投影模板的算法"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"6","type":"bibr","rid":"b6","data":[{"name":"text","data":"6"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",利用积分投影的局部极值定位双眼的位置。近些年,机器学习在人眼定位研究中得到了广泛的应用。机器学习算法通过自身的训练过程得到人眼的特征。Fasel提出了利用Haar特征和AdaBoost级联分类器的人眼识别方法,通过将多个简单分类器连接成一个强分类器"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"7","type":"bibr","rid":"b7","data":[{"name":"text","data":"7"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。对于人眼的跟踪精度和速度,上述3种方法在3D显示应用中仍然存在一些缺陷,在跟踪精度和跟踪速度上已无法满足现在的需求。Ren, S提出的基于LBP局部和全局线性回归的随机森林算法大大提高了人脸和人眼的检测精度"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。Ren, S首先使用AdaBoost级联分类器初步进行人脸检测,在人脸检测的基础上再使用随机森林进行人脸标定。Ren, S的方法虽然提高了精度和鲁棒性,但在跟踪速度上仍然有稍许不足。因此本文提出一种基于时空上下文跟踪算法和随机森林的人眼跟踪改进型新方法。新方法使用人脸检测-人脸跟踪-人眼定位结构:首先利用级联的AdaBoost分类器找到人脸位置,再用STC算法对人脸进行跟踪,最后用基于LBP局部与全局回归的随机森林算法对每帧跟踪的人脸位置做人眼定位。实验结果表明,新方法使整体人眼跟踪速度提高了近2倍,解决了人眼跟踪过程中的实时性问题。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2"}],"title":[{"name":"text","data":"人脸跟踪"}],"level":"1","id":"s2"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.1"}],"title":[{"name":"text","data":"人脸探测"}],"level":"2","id":"s2-1"}},{"name":"p","data":[{"name":"text","data":"在人眼检测中,要确定人眼的位置,通常的做法是先确定人脸的位置,这样可以最大限度地缩小搜索区域。如果直接对每帧图像的所有区域都进行人眼检测,势必会浪费大量的时间,并且会降低检测的正确率。因此,首先利用速度快的AdaBoost算法对人脸区域进行检测,并标出人脸区域,然后在人脸区域进行人眼的定位。这对于保证人眼定位的精准度和实时性都非常重要。"}]},{"name":"p","data":[{"name":"text","data":"AdaBoost是一种迭代算法,其核心思想是针对同一个训练集训练不同的分类器(弱分类器),然后把这些弱分类器集合起来,构成一个更强的最终分类器(强分类器)。AdaBoost人脸检测首先要挑选弱分类器。弱
分类器的选择需要先从多个人脸样本中训练并提取有效的人脸特征,然后对这些特征进行排序,挑选出一个最优的特征"},{"name":"italic","data":[{"name":"text","data":"f"},{"name":"sub","data":[{"name":"text","data":"i"}]}]},{"name":"text","data":",该特征在样本分类中的错误率是最低的。最后将挑选出的若干个弱分类器级联成强分类器。具体计算过程如下。首先,对每一帧图像进行归一化处理,如公式(1)所示。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"1"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644521&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644521&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644521&type=middle"}}}],"id":"yjyxs-33-5-443-E1"}}]},{"name":"p","data":[{"name":"text","data":"对于每一个特征"},{"name":"italic","data":[{"name":"text","data":"f"},{"name":"sub","data":[{"name":"text","data":"i"}]}]},{"name":"text","data":",训练弱分类器"},{"name":"italic","data":[{"name":"text","data":"h"}]},{"name":"text","data":",并且计算特征的错误率"},{"name":"italic","data":[{"name":"text","data":"ε"},{"name":"sub","data":[{"name":"text","data":"f"}]}]},{"name":"text","data":",如公式(2)所示。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"2"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644526&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644526&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644526&type=middle"}}}],"id":"yjyxs-33-5-443-E2"}}]},{"name":"p","data":[{"name":"text","data":"权值"},{"name":"italic","data":[{"name":"text","data":"w"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"+1, 
"},{"name":"italic","data":[{"name":"text","data":"i"}]}]},{"name":"text","data":"在分类器"},{"name":"italic","data":[{"name":"text","data":"H"},{"name":"sub","data":[{"name":"text","data":"i"}]}]},{"name":"text","data":"中的大小是根据错误率"},{"name":"italic","data":[{"name":"text","data":"ε"},{"name":"sub","data":[{"name":"text","data":"f"}]}]},{"name":"text","data":"的改变而改变,具体计算如公式(3)所示。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"3"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644531&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644531&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644531&type=middle"}}}],"id":"yjyxs-33-5-443-E3"}}]},{"name":"p","data":[{"name":"text","data":"从公式中可以看出,错误率"},{"name":"italic","data":[{"name":"text","data":"ε"},{"name":"sub","data":[{"name":"text","data":"f"}]}]},{"name":"text","data":"越大其所对应的权重值"},{"name":"italic","data":[{"name":"text","data":"w"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"+1, "},{"name":"italic","data":[{"name":"text","data":"i"}]}]},{"name":"text","data":"就越小。"}]},{"name":"p","data":[{"name":"text","data":"然后将确定好所有的最优弱分类器级联成AdaBoost强分类器。具体公式如式(4)所示:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"4"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644537&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644537&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644537&type=middle"}}}],"id":"yjyxs-33-5-443-E4"}}]},{"name":"p","data":[{"name":"text","data":"最后,通过不断调整检测窗口的位置和比例,在检测过程中找到人脸,从而实现人脸检测的目的。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.2"}],"title":[{"name":"text","data":"改进型基于AdaBoost-STC人脸跟踪算法"}],"level":"2","id":"s2-2"}},{"name":"p","data":[{"name":"text","data":"为了解决AdaBoost人脸探测中跟踪速度慢的问题,提出利用基于时空上下文的快速跟踪算法来提高人脸的跟踪速度。"}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.2.1"}],"title":[{"name":"text","data":"时空上下文跟踪算法"}],"level":"3","id":"s2-2-1"}},{"name":"p","data":[{"name":"text","data":"基于时空上下文的跟踪算法利用了视频帧序列图像的连续性。在跟踪过程中,上下文包含跟踪目标及其周围的局部区域。因为在连续帧间目标的局部场景有强烈的时空关系,所以可以利用这些时空关系进行局部图像跟踪。"}]},{"name":"p","data":[{"name":"text","data":"跟踪问题可以描述为计算一个估计目标位置"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"的似然函数置信图,如公式(5)所示。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"5"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644542&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644542&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644542&type=middle"}}}],"id":"yjyxs-33-5-443-E5"}}]},{"name":"p","data":[{"name":"text","data":"置信图"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")最大的那个位置"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"sup","data":[{"name":"text","data":"*"}]},{"name":"text","data":"就是目标的位置。从公式(5)可以看到,似然函数可以分解为两个概率部分。一个是建模目标与周围上下文信息的空间关系的条件概率"},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"|"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"z"}]},{"name":"text","data":"), "},{"name":"italic","data":[{"name":"text","data":"o"}]},{"name":"text","data":"),一个是建模局部上下文各个点"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"的上下文先验概率"},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"), "},{"name":"italic","data":[{"name":"text","data":"o"}]},{"name":"text","data":")。而条件概率"},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"|"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"z"}]},{"name":"text","data":"), 
"},{"name":"italic","data":[{"name":"text","data":"o"}]},{"name":"text","data":"),也就是目标位置和它的空间上下文的关系我们需要学习出来。"}]},{"name":"p","data":[{"name":"text","data":"空间的上下文描述是条件概率的函数,具体如公式(6)所示。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"6"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644546&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644546&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644546&type=middle"}}}],"id":"yjyxs-33-5-443-E6"}}]},{"name":"p","data":[{"name":"italic","data":[{"name":"text","data":"h"},{"name":"sup","data":[{"name":"text","data":"sc"}]}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"-"},{"name":"italic","data":[{"name":"text","data":"z"}]},{"name":"text","data":")是一个关于目标"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"和局部上下文位置"},{"name":"italic","data":[{"name":"text","data":"z"}]},{"name":"text","data":"的相对距离和方向的函数,它编码了目标和它的空间上下文的空间关系。计算公式如(7)所示。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"7"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644550&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644550&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644550&type=middle"}}}],"id":"yjyxs-33-5-443-E7"}}]},{"name":"p","data":[{"name":"italic","data":[{"name":"text","data":"I"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")是某帧图像,"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"sup","data":[{"name":"text","data":"*"}]},{"name":"text","data":"是目标位置,更新跟踪下一帧目标需要的时空上下文模型"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"+1"}]},{"name":"sup","data":[{"name":"italic","data":[{"name":"text","data":"stc"}]}]},{"name":"text","data":"和尺度参数。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"8"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644553&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644553&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644553&type=middle"}}}],"id":"yjyxs-33-5-443-E8"}}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"9"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644556&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644556&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644556&type=middle"}}}],"id":"yjyxs-33-5-443-E9"}}]},{"name":"p","data":[{"name":"text","data":"然后开始计算图像下一帧的置信图"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"+1"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"10"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644559&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644559&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644559&type=middle"}}}],"id":"yjyxs-33-5-443-E10"}}]},{"name":"p","data":[{"name":"text","data":"通过计算第"},{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"+1帧的置信图找到最大值,这个最大值的位置就是我们要求的目标位置"},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"+1"}]},{"name":"sup","data":[{"name":"text","data":"*"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"11"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644562&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644562&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644562&type=middle"}}}],"id":"yjyxs-33-5-443-E11"}}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.2.2"}],"title":[{"name":"text","data":"HOG相似度计算"}],"level":"3","id":"s2-2-2"}},{"name":"p","data":[{"name":"text","data":"STC算法在人脸跟踪过程中大大提高了跟踪速度"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"9","type":"bibr","rid":"b9","data":[{"name":"text","data":"9"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。然而,STC算法的鲁棒性较差,在目标消失、重复出现和严重遮挡等情况下容易导致目标丢失。为了解决这个问题,我们使用梯度直方图(HOG)在人脸跟踪过程中"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"10","type":"bibr","rid":"b10","data":[{"name":"text","data":"10"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"做相似性判断,当相似度低于标准值时,再用AdaBoost重新定位人脸位置。"}]},{"name":"p","data":[{"name":"text","data":"梯度直方图(HOG)相似度判断主要通过计算目标区域的水平方向梯度和垂直方向梯度来判断跟踪是否正确,两个梯度的计算公式如(12)、(13)、(14)所示。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"12"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644565&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644565&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644565&type=middle"}}}],"id":"yjyxs-33-5-443-E12"}}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"13"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644566&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644566&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644566&type=middle"}}}],"id":"yjyxs-33-5-443-E13"}}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"14"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644569&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644569&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644569&type=middle"}}}],"id":"yjyxs-33-5-443-E14"}}]},{"name":"p","data":[{"name":"text","data":"相似度判断因子为Δ"},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"h"}]}]},{"name":"text","data":",Δ"},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"v"}]}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"15"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644571&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644571&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644571&type=middle"}}}],"id":"yjyxs-33-5-443-E15"}}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"16"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644573&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644573&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644573&type=middle"}}}],"id":"yjyxs-33-5-443-E16"}}]},{"name":"p","data":[{"name":"text","data":"从而求得最终的相似判断值Δ"},{"name":"sub","data":[{"name":"text","data":"HOG"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"17"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644574&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644574&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644574&type=middle"}}}],"id":"yjyxs-33-5-443-E17"}}]},{"name":"p","data":[{"name":"text","data":"利用以上算法,在David-indoor图像序列中随机挑选两张图片进行相似度判断,具体效果图如"},{"name":"xref","data":{"text":"图 1","type":"fig","rid":"Figure1","data":[{"name":"text","data":"图 1"}]}},{"name":"text","data":"所示。其中左上图片为David-indoor图像序列第320帧,左下图片为David-indoor图像序列第570帧。"}]},{"name":"fig","data":{"id":"Figure1","caption":[{"lang":"zh","label":[{"name":"text","data":"图1"}],"title":[{"name":"text","data":"HOG相似度效果直方图"}]},{"lang":"en","label":[{"name":"text","data":"Fig 1"}],"title":[{"name":"text","data":"HOG similarity effect 
histogram"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644575&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644575&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644575&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"其中,两张图片的相似度为:Δ"},{"name":"sub","data":[{"name":"text","data":"h"}]},{"name":"text","data":"=0.9777, Δ"},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"v"}]}]},{"name":"text","data":"=0.9559, Δ"},{"name":"sub","data":[{"name":"text","data":"HOG"}]},{"name":"text","data":"=0.9668。随机选择的两张照片的相似度满足判断的要求。"}]},{"name":"p","data":[{"name":"text","data":"依据以上几点设计的改进型人脸跟踪系统框图如"},{"name":"xref","data":{"text":"图 2","type":"fig","rid":"Figure2","data":[{"name":"text","data":"图 2"}]}},{"name":"text","data":"所示。从"},{"name":"xref","data":{"text":"图 2","type":"fig","rid":"Figure2","data":[{"name":"text","data":"图 2"}]}},{"name":"text","data":"中可以清楚地看出,人脸跟踪由人脸检测、人脸跟踪和跟踪错误判断3部分组成。若判断结果为是,则持续跟踪;若判断结果为否,则返回至AdaBoost强分类器重新检测,然后再进行人脸跟踪。"}]},{"name":"fig","data":{"id":"Figure2","caption":[{"lang":"zh","label":[{"name":"text","data":"图2"}],"title":[{"name":"text","data":"AdaBoost-STC人脸跟踪系统结构图"}]},{"lang":"en","label":[{"name":"text","data":"Fig 2"}],"title":[{"name":"text","data":"Structure diagram of AdaBoost-STC face tracking system"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644576&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644576&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644576&type=middle"}]}}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3"}],"title":[{"name":"text","data":"随机森林人眼定位跟踪算法"}],"level":"1","id":"s3"}},{"name":"p","data":[{"name":"text","data":"随机森林(RF,Random 
Forests)是由Leo Breiman在2001年提出的算法"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"b11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。随机森林顾名思义,是用随机的方式建立一个森林,森林里面由很多棵决策树组成,随机森林的每一棵决策树之间是没有关联的。在得到森林之后,当有一个新的输入样本进入的时候,就让森林中的每一棵决策树分别进行判断,看看这个样本应该属于哪一类,然后看看哪一类被选择最多,就预测这个样本为那一类。"}]},{"name":"p","data":[{"name":"text","data":"近年来基于随机森林的人眼定位研究有以下几项:Wei, Y分析了人脸定位和全局线性回归,Burgos Artizzu"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"b12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"分析了人脸特征点标定,Xiong, X"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"b13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出了监督下降法人脸对齐。然而,它们的检测跟踪速度相对较慢,直到Ren的方法出现,面部对齐速度才得到显著提升。本章将以Ren的方法为基础,在人眼定位方面做些改进。"}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.1"}],"title":[{"name":"text","data":"随机森林人脸对齐训练算法"}],"level":"2","id":"s3-1"}},{"name":"p","data":[{"name":"text","data":"首先使用随机森林对每个人脸标记点进行学习,并抽取局部二值特征。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"18"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644578&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644578&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644578&type=middle"}}}],"id":"yjyxs-33-5-443-E18"}}]},{"name":"p","data":[{"name":"text","data":"其中:"},{"name":"italic","data":[{"name":"text","data":"φ"},{"name":"sub","data":[{"name":"text","data":"l"}]},{"name":"sup","data":[{"name":"text","data":"t"}]}]},{"name":"text","data":"表示第"},{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"级第l个人脸特征点的随机森林。最后再将局部二值特征输入到一个全局回归器中进行回归,从而预测出标记点的真实位置。"}]},{"name":"p","data":[{"name":"text","data":"训练的目标函数如公式(19)、(20)所示。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"19"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644580&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644580&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644580&type=middle"}}}],"id":"yjyxs-33-5-443-E19"}}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"20"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644581&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644581&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644581&type=middle"}}}],"id":"yjyxs-33-5-443-E20"}}]},{"name":"p","data":[{"name":"text","data":"其中:"},{"name":"italic","data":[{"name":"text","data":"Φ"},{"name":"sup","data":[{"name":"text","data":"t"}]}]},{"name":"text","data":"对应每个特征点,并且每个特征点都有一个对应的随机森林。在训练的过程中,将形状特征"},{"name":"italic","data":[{"name":"text","data":"S"}]},{"name":"sup","data":[{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"-1"}]},{"name":"text","data":"放入对应的随机森林中,求得对应的局部二值特征"},{"name":"italic","data":[{"name":"text","data":"Φ"}]},{"name":"sup","data":[{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"-1"}]},{"name":"text","data":",通过训练使公式(19)取得最小值,求出人脸的68个标定点。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.2"}],"title":[{"name":"text","data":"随机森林人脸标定及人眼定位"}],"level":"2","id":"s3-2"}},{"name":"p","data":[{"name":"text","data":"随机森林的人脸标定过程为:"}]},{"name":"p","data":[{"name":"text","data":"将一个形状"},{"name":"italic","data":[{"name":"text","data":"S"}]},{"name":"sup","data":[{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"-1"}]},{"name":"text","data":"和测试图像输入到训练好的随机森林中,得到局部二值特征(LBF)"},{"name":"italic","data":[{"name":"text","data":"Φ"}]},{"name":"sup","data":[{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"-1"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"text","data":"(1) 
将局部二值特征(LBF)输入到全局回归器中进行回归,得到"},{"name":"italic","data":[{"name":"text","data":"S"},{"name":"sup","data":[{"name":"text","data":"t"}]}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"21"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644582&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644582&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644582&type=middle"}}}],"id":"yjyxs-33-5-443-E21"}}]},{"name":"p","data":[{"name":"text","data":"(2) 判断公式(4)是否收敛:收敛则退出,不收敛则返回(1)再次运算,计算出所需的68个人脸标定点。"}]},{"name":"p","data":[{"name":"text","data":"(3) 通过68个标定点,精准计算人眼球的位置。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"22"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644583&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644583&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644583&type=middle"}}}],"id":"yjyxs-33-5-443-E22"}}]},{"name":"p","data":[{"name":"text","data":"经过以上4步过程,最终确定人眼球的精确位置,其效果图如"},{"name":"xref","data":{"text":"图 3","type":"fig","rid":"Figure3","data":[{"name":"text","data":"图 3"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure3","caption":[{"lang":"zh","label":[{"name":"text","data":"图3"}],"title":[{"name":"text","data":"人脸标定及人眼定位效果图"}]},{"lang":"en","label":[{"name":"text","data":"Fig 3"}],"title":[{"name":"text","data":"RF human eye positioning effect 
map"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644584&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644584&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644584&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"在"},{"name":"xref","data":{"text":"图 3(a)","type":"fig","rid":"Figure3","data":[{"name":"text","data":"图 3(a)"}]}},{"name":"text","data":"和(b)中,选择了4种不同情况下的图像:左1是复杂背景下的图片、左2是倾斜人脸的图片、左3是佩戴眼镜的图片、左4是暗光下的图片。从"},{"name":"xref","data":{"text":"图 3(a)","type":"fig","rid":"Figure3","data":[{"name":"text","data":"图 3(a)"}]}},{"name":"text","data":"中可以看出,无论是哪种情况,68个特征点都正确匹配了人脸外部特征和周围的局部特征。从"},{"name":"xref","data":{"text":"图 3(b)","type":"fig","rid":"Figure3","data":[{"name":"text","data":"图 3(b)"}]}},{"name":"text","data":"中的4张图片可以看出,无论是哪种情况,左眼和右眼位置均被准确定位。因此,新方法在精度和鲁棒性上都非常有效。"}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4"}],"title":[{"name":"text","data":"实验与结果"}],"level":"1","id":"s4"}},{"name":"p","data":[{"name":"text","data":"为了进一步验证新方法的有效性,分别进行了人脸跟踪实验和人眼跟踪实验。基于随机森林的人眼定位算法利用LFPW、AFW等数据集进行人脸特征点训练。使用David-Indoor测试集和两个自拍视频文件(命名为Occlusion-video1,Occlusion-video2)作为人脸跟踪和人眼跟踪的测试样本。"}]},{"name":"p","data":[{"name":"text","data":"为了测试新方法的鲁棒性和实时性,进行了大量的人脸和人眼跟踪实验,从环境亮度、侧脸、人脸倾斜、人脸遮挡等方面对新方法的鲁棒性进行了测试。实验结果如"},{"name":"xref","data":{"text":"图 4","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure4","caption":[{"lang":"zh","label":[{"name":"text","data":"图4"}],"title":[{"name":"text","data":"人脸及人眼跟踪效果图"}]},{"lang":"en","label":[{"name":"text","data":"Fig 4"}],"title":[{"name":"text","data":"Face and human eye tracking 
results"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644586&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644586&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1644586&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"在"},{"name":"xref","data":{"text":"图 4","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4"}]}},{"name":"text","data":"中,"},{"name":"xref","data":{"text":"图 4(a)","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4(a)"}]}},{"name":"text","data":"为脸部部分遮挡的测试视频帧,"},{"name":"xref","data":{"text":"图 4(b)","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4(b)"}]}},{"name":"text","data":"为持续遮挡后的正面测试视频帧,"},{"name":"xref","data":{"text":"图 4","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4"}]}},{"name":"text","data":"("},{"name":"xref","data":{"text":"c","type":"fig","rid":"Figure4","data":[{"name":"text","data":"c"}]}},{"name":"text","data":","},{"name":"xref","data":{"text":"d","type":"fig","rid":"Figure4","data":[{"name":"text","data":"d"}]}},{"name":"text","data":")用于对比亮度变化的影响,图4(d,e)用于测试侧脸的影响,"},{"name":"xref","data":{"text":"图 4","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4"}]}},{"name":"text","data":"("},{"name":"xref","data":{"text":"e","type":"fig","rid":"Figure4","data":[{"name":"text","data":"e"}]}},{"name":"text","data":","},{"name":"xref","data":{"text":"f","type":"fig","rid":"Figure4","data":[{"name":"text","data":"f"}]}},{"name":"text","data":")用于面部小角度倾斜的测试。从"},{"name":"xref","data":{"text":"图 4","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 
4"}]}},{"name":"text","data":"可以看到,在背景亮度变化、脸部小角度倾斜、小角度侧脸和人脸部分遮挡的情况下都可以正常跟踪人眼。这充分证明了新方法跟踪精度高、鲁棒性好。"}]},{"name":"p","data":[{"name":"text","data":"为了更加清晰地证明新方法的有效性,对比了两种方法的鲁棒性和跟踪速度,并且在跟踪类别中进行了人脸跟踪和人眼跟踪的比较。比较结果见表1~表4。其中,"},{"name":"xref","data":{"text":"表 1","type":"table","rid":"Table1","data":[{"name":"text","data":"表 1"}]}},{"name":"text","data":"展示了AdaBoost人脸跟踪方法和AdaBoost-STC人脸跟踪方法的鲁棒性对比。"},{"name":"xref","data":{"text":"表 2","type":"table","rid":"Table2","data":[{"name":"text","data":"表 2"}]}},{"name":"text","data":"显示了AdaBoost-RF人眼跟踪方法和AdaBoost-STC-RF人眼跟踪方法的鲁棒性对比。"},{"name":"xref","data":{"text":"表 3","type":"table","rid":"Table3","data":[{"name":"text","data":"表 3"}]}},{"name":"text","data":"显示了人脸跟踪中的AdaBoost方法和AdaBoost-STC方法的跟踪速度对比结果。"},{"name":"xref","data":{"text":"表 4","type":"table","rid":"Table4","data":[{"name":"text","data":"表 4"}]}},{"name":"text","data":"显示了在人眼跟踪中的AdaBoost-RF方法和AdaBoost-STC-RF方法的速度对比结果。"}]},{"name":"table","data":{"id":"Table1","caption":[{"lang":"zh","label":[{"name":"text","data":"表1"}],"title":[{"name":"text","data":"人脸跟踪的鲁棒性分析"}]},{"lang":"en","label":[{"name":"text","data":"Table 1"}],"title":[{"name":"text","data":"Robustness analysis of face 
tracking"}]}],"note":[],"table":[{"head":[[{"align":"center","data":[]},{"align":"center","data":[{"name":"text","data":"正面"}]},{"align":"center","data":[{"name":"text","data":"部分遮挡"}]},{"align":"center","data":[{"name":"text","data":"侧脸"}]},{"align":"center","data":[{"name":"text","data":"倾斜"}]},{"align":"center","data":[{"name":"text","data":"亮度变化"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"AdaBoost"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]}],[{"align":"center","data":[{"name":"text","data":"AdaBoost-STC"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]}]],"foot":[]}]}},{"name":"table","data":{"id":"Table2","caption":[{"lang":"zh","label":[{"name":"text","data":"表2"}],"title":[{"name":"text","data":"人眼跟踪的鲁棒性分析"}]},{"lang":"en","label":[{"name":"text","data":"Table 2"}],"title":[{"name":"text","data":"Robustness analysis of human eye 
tracking"}]}],"note":[],"table":[{"head":[[{"align":"center","data":[]},{"align":"center","data":[{"name":"text","data":"正面"}]},{"align":"center","data":[{"name":"text","data":"部分遮挡"}]},{"align":"center","data":[{"name":"text","data":"侧脸"}]},{"align":"center","data":[{"name":"text","data":"倾斜"}]},{"align":"center","data":[{"name":"text","data":"亮度变化"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"AdaBoost-RF"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]}],[{"align":"center","data":[{"name":"text","data":"AdaBoost-STC-RF"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]},{"align":"center","data":[{"name":"text","data":"是"}]}]],"foot":[]}]}},{"name":"table","data":{"id":"Table3","caption":[{"lang":"zh","label":[{"name":"text","data":"表3"}],"title":[{"name":"text","data":"人脸跟踪速度比较(单位:帧/s)"}]},{"lang":"en","label":[{"name":"text","data":"Table 3"}],"title":[{"name":"text","data":"Face tracking speed comparison(Unit: 
F/s)"}]}],"note":[],"table":[{"head":[[{"align":"center","data":[]},{"align":"center","data":[{"name":"text","data":"David-Indoor"}]},{"align":"center","data":[{"name":"text","data":"Occlusion-video1"}]},{"align":"center","data":[{"name":"text","data":"Occlusion-video2"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"AdaBoost"}]},{"align":"center","data":[{"name":"text","data":"12.04"}]},{"align":"center","data":[{"name":"text","data":"12.51"}]},{"align":"center","data":[{"name":"text","data":"12.51"}]}],[{"align":"center","data":[{"name":"text","data":"AdaBoost-STC"}]},{"align":"center","data":[{"name":"text","data":"39.39"}]},{"align":"center","data":[{"name":"text","data":"27.87"}]},{"align":"center","data":[{"name":"text","data":"27.85"}]}]],"foot":[]}]}},{"name":"table","data":{"id":"Table4","caption":[{"lang":"zh","label":[{"name":"text","data":"表4"}],"title":[{"name":"text","data":"人眼跟踪速度比较(单位:帧/s)"}]},{"lang":"en","label":[{"name":"text","data":"Table 4"}],"title":[{"name":"text","data":"Human eye tracking speed comparison(Unit: 
F/s)"}]}],"note":[],"table":[{"head":[[{"align":"center","data":[]},{"align":"center","data":[{"name":"text","data":"David-Indoor"}]},{"align":"center","data":[{"name":"text","data":"Occlusion-video1"}]},{"align":"center","data":[{"name":"text","data":"Occlusion-video2"}]}]],"body":[[{"align":"center","data":[{"name":"text","data":"AdaBoost-RF"}]},{"align":"center","data":[{"name":"text","data":"10.58"}]},{"align":"center","data":[{"name":"text","data":"10.94"}]},{"align":"center","data":[{"name":"text","data":"10.94"}]}],[{"align":"center","data":[{"name":"text","data":"AdaBoost-STC-RF"}]},{"align":"center","data":[{"name":"text","data":"27.48"}]},{"align":"center","data":[{"name":"text","data":"21.48"}]},{"align":"center","data":[{"name":"text","data":"21.65"}]}]],"foot":[]}]}},{"name":"p","data":[{"name":"text","data":"通过以上4个表可以看出,该方法在鲁棒性和跟踪精度不变的情况下,将人脸跟踪和人眼跟踪的速度提高了2~3倍。因此,该方法对提高人眼跟踪速度取得了非常好的效果。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5"}],"title":[{"name":"text","data":"结论"}],"level":"1","id":"s5"}},{"name":"p","data":[{"name":"text","data":"本文根据人眼跟踪精度高、速度快的要求,提出了先用AdaBoost检测、再用STC人脸跟踪、最后用随机森林人眼定位的三层结构方法。新方法在人眼跟踪精度和鲁棒性上与原方法相同。在软件环境为64位Windows 7和Matlab 2014a、硬件为Intel i5-4200双核处理器(1.6 GHz)和4 GB Kingston内存的实验条件下,将人脸跟踪速度从David-Indoor数据集的12.04帧/s提高到39.39帧/s,并且在其它数据集上也有相同的结果,跟踪速度提高到原来的3倍左右。人眼跟踪速度从David-Indoor数据集的10.58帧/s提高到27.48帧/s,同样在其它数据集上也有相同的结果,跟踪速度提高到原来的2倍以上。满足3D裸眼显示器消除串扰的跟踪要求。"}]}]}],"footnote":[],"reflist":{"title":[{"name":"text","data":"参考文献"}],"data":[{"id":"b1","label":"1","citation":[{"lang":"zh","text":[{"name":"text","data":"熊晶莹, 戴明.适应移动智能设备的目标跟踪器[J].光学精密工程, 2017, 25(12):3152-3159."}]},{"lang":"en","text":[{"name":"text","data":"XIONG J Y, DAI M. Design of tracker for mobile smart devices[J]."},{"name":"italic","data":[{"name":"text","data":"Opt. Precision Eng."}]},{"name":"text","data":", 2017, 25(12):3152-3159. 
(in Chinese)"}]}]},{"id":"b2","label":"2","citation":[{"lang":"zh","text":[{"name":"text","data":"田华, 曾小名, 戴涛涛, 等.柱透镜光栅投影3D显示的视点数与串扰容限[J].液晶与显示, 2013, 28(3):330-337."}]},{"lang":"en","text":[{"name":"text","data":"TIAN H, ZENG X M, DAI T T, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". New Problems about Number of Views and Crosstalk Tolerance in Projective Autostereoscopic Display Based on Lenticular Grating[J]."},{"name":"italic","data":[{"name":"text","data":"Chinese Journal of Liquid Crystals and Displays"}]},{"name":"text","data":", 2013, 28(3):330-337. (in Chinese)"}]}]},{"id":"b3","label":"3","citation":[{"lang":"zh","text":[{"name":"text","data":"王嘉辉, 邓玉桃, 苏剑邦, 等.全高清裸眼3D显示效果的评价与测量[J].液晶与显示, 2013, 28(5):805-809."}]},{"lang":"en","text":[{"name":"text","data":"WANG J H, DENG Y T, SU J B, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Evaluation and Measurement of Display Effect in Full High Resolution Autostereoscopic Display[J]."},{"name":"italic","data":[{"name":"text","data":"Chinese Journal of Liquid Crystals and Displays"}]},{"name":"text","data":", 2013, 28(5):805-809. (in Chinese)"}]}]},{"id":"b4","label":"4","citation":[{"lang":"zh","text":[{"name":"text","data":"谢雨桐, 苏晓煌, 郑集文, 等.裸眼3D显示设备关键指标测试方案的研究[J].液晶与显示, 2015, 30(5):888-893."}]},{"lang":"en","text":[{"name":"text","data":"XIE Y T, SU X H, ZHENG J W, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Key properties of autostereoscopic display[J]."},{"name":"italic","data":[{"name":"text","data":"Chinese Journal of Liquid Crystals and Displays"}]},{"name":"text","data":", 2015, 30(5):888-893. (in Chinese)"}]}]},{"id":"b5","label":"5","citation":[{"lang":"en","text":[{"name":"text","data":"VALENTI R, GEVERS T. Accurate eye center location and tracking using isophote curvature[C]."},{"name":"italic","data":[{"name":"text","data":"Computer Vision and Pattern Recognition. 
America"}]},{"name":"text","data":": "},{"name":"italic","data":[{"name":"text","data":"CVPR"}]},{"name":"text","data":" 2008: 1-8."}]}]},{"id":"b6","label":"6","citation":[{"lang":"en","text":[{"name":"text","data":"PENG K, CHEN L T, RUAN S, KUKHAREV G. A robust algorithm for eye detection on gray intensity face without spectacles[J]."},{"name":"italic","data":[{"name":"text","data":"J. Comput. Sci. Technol,"}]},{"name":"text","data":" 2005, 5(8):127-132."}]}]},{"id":"b7","label":"7","citation":[{"lang":"en","text":[{"name":"text","data":"FASEL I, FORTENBERRY B, MOVELLAN J. A generative framework for real time object detection and classification[J]."},{"name":"italic","data":[{"name":"text","data":"Computer Vision & Image Understanding"}]},{"name":"text","data":", 2005, 98(1):182-210."}]}]},{"id":"b8","label":"8","citation":[{"lang":"en","text":[{"name":"text","data":"REN S, CAO X, WEI Y, SUN J. Face Alignment at 3000 FPS via Regressing Local Binary Features[C]."},{"name":"italic","data":[{"name":"text","data":"Computer Vision and Pattern Recognition. America"}]},{"name":"text","data":": "},{"name":"italic","data":[{"name":"text","data":"CVPR"}]},{"name":"text","data":", 2014: 1685-1692."}]}]},{"id":"b9","label":"9","citation":[{"lang":"en","text":[{"name":"text","data":"ZHANG K, ZHANG L, YANG M, ZHANG D. Fast tracking via spatio-temporal context learning[J]."},{"name":"italic","data":[{"name":"text","data":"Computer Science,"}]},{"name":"text","data":" 2013, 11(8):1-16."}]}]},{"id":"b10","label":"10","citation":[{"lang":"en","text":[{"name":"text","data":"DALAL N, TRIGGS B. Histograms of Oriented Gradients for Human Detection[C]. "},{"name":"italic","data":[{"name":"text","data":"Computer Vision and Pattern Recognition"}]},{"name":"text","data":". 
"},{"name":"italic","data":[{"name":"text","data":"America"}]},{"name":"text","data":": "},{"name":"italic","data":[{"name":"text","data":"CVPR"}]},{"name":"text","data":", 2005: 886-893."}]}]},{"id":"b11","label":"11","citation":[{"lang":"en","text":[{"name":"text","data":"BREIMAN L. Random Forests[J]."},{"name":"italic","data":[{"name":"text","data":"Machine Learning."}]},{"name":"text","data":" 2001, 45(1):5-32."}]}]},{"id":"b12","label":"12","citation":[{"lang":"en","text":[{"name":"text","data":"BURGOS-ARTIZZU X P, PERONA P, DOLLAR P. Robust Face Landmark Estimation under Occlusion[C]."},{"name":"italic","data":[{"name":"text","data":"Computer Vision and Pattern Recognition. America"}]},{"name":"text","data":": "},{"name":"italic","data":[{"name":"text","data":"CVPR"}]},{"name":"text","data":", 2013: 1513-1520."}]}]},{"id":"b13","label":"13","citation":[{"lang":"en","text":[{"name":"text","data":"XIONG X, DE LA TORRE F. Supervised descent method and its applications to face alignment[C]."},{"name":"italic","data":[{"name":"text","data":"Computer Vision and Pattern Recognition. America"}]},{"name":"text","data":": "},{"name":"italic","data":[{"name":"text","data":"CVPR"}]},{"name":"text","data":", 2013: 532-539."}]}]}]},"response":[],"contributions":[],"acknowledgements":[],"conflict":[],"supportedby":[],"articlemeta":{"doi":"10.3788/YJYXS20183305.0443","clc":[[{"name":"text","data":"TP751.1"}]],"dc":[],"publisherid":"yjyxs-33-5-443","citeme":[],"fundinggroup":[{"lang":"zh","text":[{"name":"text","data":"国家自然科学基金(No.61775026);装备预研基金重大项目(No.6140923070101)"}]},{"lang":"en","text":[{"name":"text","data":"Supported by National Natural Science Foundation of China(No.61775026);Equipment Pre-research Fund Major Projects(No. 
6140923070101)"}]}],"history":{"received":"2018-03-07","accepted":"2018-03-15","opub":"2020-06-15"},"copyright":{"data":[{"lang":"zh","data":[{"name":"text","data":"版权所有©《液晶与显示》编辑部2018"}],"type":"copyright"},{"lang":"en","data":[{"name":"text","data":"Copyright ©2018 Chinese Journal of Liquid Crystals and Displays. All rights reserved."}],"type":"copyright"}],"year":"2018"}},"appendix":[],"type":"research-article","ethics":[],"backSec":[],"supplementary":[],"journalTitle":"液晶与显示","issue":"5","volume":"33","originalSource":[]}
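第3.2节描述的"级联回归得到68个标定点、再由标定点计算人眼位置"的流程,可以用下面的Python代码示意。这只是一个假设性的最小示例,并非论文的原始实现:其中左眼取特征点36~41、右眼取特征点42~47,是对LFPW/AFW等数据集常用的68点标注方案的假定;`W`表示某一级训练好的全局线性回归器,`lbf`表示该级的局部二值特征向量。

```python
# 示意性代码(假设):68点标注中,左眼轮廓为点36~41,右眼轮廓为点42~47。

def eye_center(landmarks, start, stop):
    """取[start, stop)范围内(x, y)特征点的均值作为眼睛中心。"""
    pts = landmarks[start:stop]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def eye_centers(landmarks):
    """由68个(x, y)特征点估计左、右眼中心(索引为假定值)。"""
    return eye_center(landmarks, 36, 42), eye_center(landmarks, 42, 48)

def cascade_update(shape, lbf, W):
    """级联回归的一级更新: S^t = S^(t-1) + W^t · Φ^t(I, S^(t-1))。

    shape: 展平的形状估计(长度2×68);
    lbf:   局部二值特征向量;
    W:     该级的全局线性回归器,每行对应形状的一个维度。
    """
    delta = [sum(w * f for w, f in zip(row, lbf)) for row in W]
    return [s + d for s, d in zip(shape, delta)]
```

使用时按第3.2节的步骤迭代调用`cascade_update`直至收敛,再用`eye_centers`从最终形状中取出双眼位置。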