{"defaultlang":"zh","titlegroup":{"articletitle":[{"lang":"zh","data":[{"name":"text","data":"双任务卷积神经网络的图像去模糊方法"}]},{"lang":"en","data":[{"name":"text","data":"Image deblurring method based on dual task convolution neural network"}]}]},"contribgroup":{"author":[{"name":[{"lang":"zh","surname":"陈","givenname":"清江","namestyle":"eastern","prefix":""},{"lang":"en","surname":"CHEN","givenname":"Qing-jiang","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":["first-author"],"bio":[{"lang":"zh","text":["陈清江(1966-),男,河南信阳人,博士,教授,2006年于西安交通大学获得博士学位,主要从事小波分析、图像处理与信号处理方面的研究。E-mail:qjchen66xytu@126.com"],"graphic":[],"data":[[{"name":"bold","data":[{"name":"text","data":"陈清江"}]},{"name":"text","data":"(1966-),男,河南信阳人,博士,教授,2006年于西安交通大学获得博士学位,主要从事小波分析、图像处理与信号处理方面的研究。E-mail:"},{"name":"text","data":"qjchen66xytu@126.com"}]]}],"email":"qjchen66xytu@126.com","deceased":false},{"name":[{"lang":"zh","surname":"胡","givenname":"倩楠","namestyle":"eastern","prefix":""},{"lang":"en","surname":"HU","givenname":"Qian-nan","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":["corresp"],"corresp":[{"rid":"cor1","lang":"zh","text":"胡倩楠, Email:1227409677@qq.com","data":[{"name":"text","data":"胡倩楠, Email:1227409677@qq.com"}]}],"bio":[{"lang":"zh","text":["胡倩楠(1996-), 女,陕西渭南人,硕士研究生,2019年于咸阳师范学院获得学士学位,主要从事小波分析、图像处理与信号处理方面的研究。Email:1227409677@qq.com"],"graphic":[],"data":[[{"name":"bold","data":[{"name":"text","data":"胡倩楠"}]},{"name":"text","data":"(1996-), 
女,陕西渭南人,硕士研究生,2019年于咸阳师范学院获得学士学位,主要从事小波分析、图像处理与信号处理方面的研究。Email:"},{"name":"text","data":"1227409677@qq.com"}]]}],"email":"1227409677@qq.com","deceased":false},{"name":[{"lang":"zh","surname":"李","givenname":"金阳","namestyle":"eastern","prefix":""},{"lang":"en","surname":"LI","givenname":"Jin-yang","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":[],"deceased":false}],"aff":[{"id":"aff1","intro":[{"lang":"zh","label":"","text":"西安建筑科技大学 理学院, 陕西 西安 710055","data":[{"name":"text","data":"西安建筑科技大学 理学院, 陕西 西安 710055"}]},{"lang":"en","label":"","text":"School of Science, Xi'an University of Architecture and Technology, Xi'an 710055, China","data":[{"name":"text","data":"School of Science, Xi'an University of Architecture and Technology, Xi'an 710055, China"}]}]}]},"abstracts":[{"lang":"zh","data":[{"name":"p","data":[{"name":"text","data":"针对现有图像去模糊卷积神经网络在图像复原过程中易出现纹理细节丢失、不加区分地对待所有通道和空间特征信息以及去模糊效果不佳等问题,本文提出了一种基于双任务卷积神经网络的图像去模糊方法。将图像去模糊任务分为去模糊子任务和高频细节恢复子任务来进行。其一,提出一种基于残差注意力模块和八度卷积残差块的编解码子网络模型,将此网络模型用于图像去模糊子任务;其二,提出一种基于双残差连接的高频细节恢复子网络模型,将此网络模型用于高频细节恢复子任务。将两个子网络采用并联方式组合起来,并使用平均绝对误差损失与结构损失来共同约束训练方向,实现图像去模糊。实验结果表明,本文方法具有较强的图像复原能力和较丰富的细节纹理,峰值信噪比(PSNR)为32.427 6 dB,结构相似度为0.947 0。与目前先进的去模糊算法相比能够有效提升图像去模糊效果。"}]}]},{"lang":"en","data":[{"name":"p","data":[{"name":"text","data":"In order to solve the problems of texture detail loss, indiscriminate treatment of all channel and spatial feature information, and poor deblurring effect in the process of image restoration, an image deblurring method based on a dual-task convolutional neural network is proposed. The image deblurring task is divided into a deblurring sub-task and a high-frequency detail restoration sub-task. Firstly, an encoder-decoder sub-network model based on a Residual Attention Module and Octave Convolution Residual Blocks is proposed, which is used in the image deblurring sub-task. 
Secondly, a high-frequency detail recovery sub-network model based on a Double Residual Connection is proposed, which is used in the high-frequency detail recovery sub-task. The two sub-networks are combined in parallel, and the mean absolute error loss and structural loss are jointly used to constrain the training direction to achieve image deblurring. Experimental results show that the proposed method has strong image restoration ability and rich detail texture; the peak signal-to-noise ratio (PSNR) reaches 32.427 6 dB, and the structural similarity (SSIM) reaches 0.947 0. Compared with current state-of-the-art deblurring algorithms, it can effectively improve the image deblurring effect."}]}]}],"keyword":[{"lang":"zh","data":[[{"name":"text","data":"卷积神经网络"}],[{"name":"text","data":"图像去模糊"}],[{"name":"text","data":"八度卷积"}],[{"name":"text","data":"注意力机制"}],[{"name":"text","data":"双残差"}]]},{"lang":"en","data":[[{"name":"text","data":"convolution neural network"}],[{"name":"text","data":"image deblurring"}],[{"name":"text","data":"octave convolution"}],[{"name":"text","data":"attention mechanism"}],[{"name":"text","data":"double 
residuals"}]]}],"highlights":[],"body":[{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"1"}],"title":[{"name":"text","data":"引言"}],"level":"1","id":"s1"}},{"name":"p","data":[{"name":"text","data":"近年来,随着手机等移动设备的普及,拍照成为了记录生活的主要方式。但不可避免地会出现相机抖动、物体移动等问题,获取的图像出现模糊现象,严重影响后续图像的使用。除了日常生活中的照片,图像模糊退化还会发生在刑事侦察、视频监控、卫星成像等"},{"name":"sup","data":[{"name":"text","data":"[1"}]},{"name":"sup","data":[{"name":"text","data":"-2"}]},{"name":"sup","data":[{"name":"text","data":"]"}]},{"name":"text","data":"方面,故图像去模糊技术具有重大的研究意义和应用价值。"}]},{"name":"p","data":[{"name":"text","data":"图像去模糊通常分为两类,非盲去模糊"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"3","type":"bibr","rid":"b3","data":[{"name":"text","data":"3"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"和盲去模糊"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"4","type":"bibr","rid":"b4","data":[{"name":"text","data":"4"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。早期的研究大多致力于非盲去模糊"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"3","type":"bibr","rid":"b3","data":[{"name":"text","data":"3"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",即模糊核已知。但在真实应用场景中模糊图像的模糊核往往是未知的。传统方法"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"5","type":"bibr","rid":"b5","data":[{"name":"text","data":"5"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"大多基于统计先验和正则化技术去除图像模糊,常见图像先验包括梯度稀疏先验"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"6","type":"bibr","rid":"b6","data":[{"name":"text","data":"6"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、低秩先验"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"7","type":"bibr","rid":"b7","data":[{"name":"text","data":"7"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等。但传统基于统计先验的去模糊方法采用的图像先验大多是假设的,存在假设不正确、细节丢失等问题。近年来,深度学习已广泛应用于解决计算机视觉问题,例如图像识别"}
,{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、图像分类"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"9","type":"bibr","rid":"b9","data":[{"name":"text","data":"9"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等问题。Xu等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"10","type":"bibr","rid":"b10","data":[{"name":"text","data":"10"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出基于卷积神经网络的图像去模糊方法,但该算法在去模糊过程中丢失了图像的细节纹理信息。Hradiš等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"b11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出基于卷积神经网络的图像去模糊算法,但该算法只适用于文本图像。Su等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"b12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出基于卷积神经网络的视频去模糊方法,但该算法不适用于单幅图像去模糊。Nah等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"4","type":"bibr","rid":"b4","data":[{"name":"text","data":"4"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出一种多尺度卷积网络,遵循传统的“从粗到细”的网络框架逐渐恢复出更高分辨率的清晰图像,该算法获得了较好的效果,但计算量大,运行速度较慢。Zhang等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"b13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出了一种深度分层多补丁网络,利用“从细到粗”的分层方式去除非均匀运动模糊,该算法降低了模型大小,提高了运行速度,但图像预处理过程较复杂,网络性能不稳定。以上各种算法均未加区分地处理空间与通道特征信息, 
事实上,模糊图像中不同通道以及各个空间位置上包含不同类型的信息,其中一些有助于消除模糊,另一些有助于恢复细节。"}]},{"name":"p","data":[{"name":"text","data":"针对目前用于图像去模糊的单任务卷积神经网络高频细节丢失、不加区分地处理空间和通道特征信息等问题,本文提出一种基于双任务卷积神经网络的图像去模糊方法,网络由去模糊子网络、高频细节恢复子网络以及特征重建模块组成。由于不同的空间区域以及不同通道包含不同类型的特征信息,故在去模糊子网络中将残差注意力模块放置在编码器末端引导解码器重建出更加清晰的图像,并在去模糊子网络中引入八度卷积残差块作为整个编解码子网络的基础构建块,增强特征提取能力的同时降低空间冗余,减少参数量。编解码网络结构可以增加网络的感受野以及获取更多不同层级的图像特征信息,但由于编解码网络结构会丢失特征图的高频信息,故提出一种用以恢复细节纹理的基于双残差连接的高频细节恢复子网络。本文将去模糊子网络与高频细节恢复子网络采用并联的方式进行组合,可达到更好的去模糊效果。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2"}],"title":[{"name":"text","data":"相关理论"}],"level":"1","id":"s2"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.1"}],"title":[{"name":"text","data":"注意力机制"}],"level":"2","id":"s2-1"}},{"name":"p","data":[{"name":"text","data":"注意力机制可被视为一种自适应地将计算资源分配给输入特征中最有用部分的方法。近年来,空间注意力机制和通道注意力机制被广泛应用于计算机视觉任务中,例如目标识别"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、图像去雾"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"b14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等。空间注意力机制可突出有用空间位置的特征,通道注意力机制可以识别通道特征之间的相互依赖性并找到有用的特征。"}]},{"name":"p","data":[{"name":"text","data":"通道注意力模块从通道的角度对特征进行加权,该模块结构如"},{"name":"xref","data":{"text":"图 1","type":"fig","rid":"Figure1","data":[{"name":"text","data":"图 1"}]}},{"name":"text","data":"所示,通道注意主要关注于输入图像中什么是有意义的。计算过程如下:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"1"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783507&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783507&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783507&type=middle"}}}],"id":"yjyxs-36-11-1486-E1"}}]},{"name":"fig","data":{"id":"Figure1","caption":[{"lang":"zh","label":[{"name":"text","data":"图1"}],"title":[{"name":"text","data":"通道注意力模块"}]},{"lang":"en","label":[{"name":"text","data":"Fig 1"}],"title":[{"name":"text","data":"Channel attention module"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783513&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783513&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783513&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"其中:"},{"name":"italic","data":[{"name":"text","data":"σ"}]},{"name":"text","data":"表示sigmoid激活函数,MLP表示多层感知机。"}]},{"name":"p","data":[{"name":"text","data":"与通道注意不同,空间注意侧重于输入图像‘何处’信息是有用的,是对通道注意的补充。该模块结构如"},{"name":"xref","data":{"text":"图 2","type":"fig","rid":"Figure2","data":[{"name":"text","data":"图 2"}]}},{"name":"text","data":"所示,计算过程如下:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"2"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783519&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783519&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783519&type=middle"}}}],"id":"yjyxs-36-11-1486-E2"}}]},{"name":"fig","data":{"id":"Figure2","caption":[{"lang":"zh","label":[{"name":"text","data":"图2"}],"title":[{"name":"text","data":"空间注意力模块"}]},{"lang":"en","label":[{"name":"text","data":"Fig 2"}],"title":[{"name":"text","data":"Spatial attention module"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783523&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783523&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783523&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"其中:"},{"name":"italic","data":[{"name":"text","data":"σ"}]},{"name":"text","data":"表示Sigmoid函数,"},{"name":"italic","data":[{"name":"text","data":"f"}]},{"name":"sup","data":[{"name":"text","data":"7×7"}]},{"name":"text","data":"表示滤波器大小为7×7的卷积运算。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.2"}],"title":[{"name":"text","data":"八度卷积"}],"level":"2","id":"s2-2"}},{"name":"p","data":[{"name":"text","data":"自然图像可以分解为低空间频率和高空间频率,卷积层的输出特征图以及输入通道也存在着高、低频分量。高频信号支撑的是丰富细节,而低频信号支撑的是整体结构,显然低频分量中存在冗余,在编码过程中可以节省。而传统的卷积并未分解低高频分量,导致空间冗余。为解决上述问题,Chen等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"15","type":"bibr","rid":"b15","data":[{"name":"text","data":"15"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出了八度卷积(Octave Convolution),其核心原理就是利用空间尺度化理论将图像高频低频部分分开,下采样低频部分,并在其相应的频率上用不同的卷积处理它们,间隔一个八度,可以大幅降低参数量。OctConv被定义为一个单一的、通用的、即插即用的卷积单元,并且可以完美地嵌入到神经网络中,同时减少内存和计算成本。该模块结构如"},{"name":"xref","data":{"text":"图 
3","type":"fig","rid":"Figure3","data":[{"name":"text","data":"图 3"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure3","caption":[{"lang":"zh","label":[{"name":"text","data":"图3"}],"title":[{"name":"text","data":"八度卷积模块"}]},{"lang":"en","label":[{"name":"text","data":"Fig 3"}],"title":[{"name":"text","data":"Octave convolution module"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783528&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783528&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783528&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"如"},{"name":"xref","data":{"text":"图 3","type":"fig","rid":"Figure3","data":[{"name":"text","data":"图 3"}]}},{"name":"text","data":"所示,卷积层的输入输出以及卷积核都被分为了两个部分,一部分为高频信息["},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"sup","data":[{"name":"text","data":"H"}]},{"name":"text","data":","},{"name":"italic","data":[{"name":"text","data":"Y"}]},{"name":"sup","data":[{"name":"text","data":"H"}]},{"name":"text","data":", "},{"name":"italic","data":[{"name":"text","data":"W"}]},{"name":"sup","data":[{"name":"text","data":"H"}]},{"name":"text","data":"],另一部分为低频信息["},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"sup","data":[{"name":"text","data":"L"}]},{"name":"text","data":", "},{"name":"italic","data":[{"name":"text","data":"Y"}]},{"name":"sup","data":[{"name":"text","data":"L"}]},{"name":"text","data":","},{"name":"italic","data":[{"name":"text","data":"W"}]},{"name":"sup","data":[{"name":"text","data":"L"}]},{"name":"text","data":"],低频特征图的分辨率为高频特征图的一半,对于高频特征图,它的频率内信息更新过程就是普通卷积过程,而频率间的信息交流过程,则使用上采样操作再进行卷积。类似地,对于低频特征图,它的频率内信息更新过程是普通卷积过程,而频率间的信息交流过程,则使用平均池化操作再进行卷积。其卷积过程可以描述如下:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"3"}],"data":[{"name":"text","data":" 
"},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783535&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783535&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783535&type=middle"}}}],"id":"yjyxs-36-11-1486-E3"}}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"4"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783541&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783541&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783541&type=middle"}}}],"id":"yjyxs-36-11-1486-E4"}}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3"}],"title":[{"name":"text","data":"本文网络模型"}],"level":"1","id":"s3"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.1"}],"title":[{"name":"text","data":"网络结构"}],"level":"2","id":"s3-1"}},{"name":"p","data":[{"name":"text","data":"针对现有去模糊算法存在的高频细节丢失、不加区分地处理所有空间和通道特征信息等问题,本文提出一种基于双任务卷积神经网络的图像去模糊方法,将整个去模糊任务分为两个子任务进行,首先利用基于残差注意力模块和八度卷积残差块的编解码网络去除图像模糊,将残差注意力模块添加至编码器末端来引导解码器重建出更加清晰的图像,并且利用八度卷积残差块来降低空间冗余以及更充分地提取特征信息,去模糊子网络保留了整体结构信息,但由于编码过程采用下采样操作,会丢失高频细节,故提出一种基于双残差连接模块的高频细节恢复子网络来恢复其高频细节,在双残差连接模块中利用成对操作潜力来进行细节恢复,在扩大感受野的同时提高了网络性能。本文采用并联方式将去模糊子网络和高频细节恢复子网络组合起来,从而达到更好的去模糊效果。整体网络结构如"},{"name":"xref","data":{"text":"图 4","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure4","caption":[{"lang":"zh","label":[{"name":"text","data":"图4"}],"title":[{"name":"text","data":"双任务卷积神经网络模型"}]},{"lang":"en","label":[{"name":"text","data":"Fig 4"}],"title":[{"name":"text","data":"Dual task 
convolution neural network model"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783546&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783546&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783546&type=middle"}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.2"}],"title":[{"name":"text","data":"编解码结构"}],"level":"2","id":"s3-2"}},{"name":"p","data":[{"name":"text","data":"目前,编解码网络结构常被应用于计算机视觉任务中"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"b16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",其对于图像复原任务有较强的适用性和实用性。本文采用一种新型的自编解码网络结构,自编码结构由卷积层、八度卷积残差块以及残差注意力模块组成,输入的模糊图像经过3次步长为2的卷积层进行下采样,每经过一次下采样得到尺寸减少一半的特征图。自解码结构由上采样层、卷积层以及八度卷积残差块组成,经过3次上采样操作将特征图恢复到原来的尺寸大小。由于利用注意力模块可以使网络自适应地学习不同通道以及空间的重要特征,故将残差注意力模块应用于编码器末端来引导解码器去除运动模糊,重建出更加清晰的图像。图像去模糊任务中,具有较大感受野是去除严重模糊的关键,增加网络层数可以增大感受野,但也会导致梯度消失或梯度爆炸等问题,故本文在编码阶段和对称的解码阶段添加跳跃连接,可以缓解由于网络过深而导致的梯度消失或爆炸问题。编解码器的参数配置如"},{"name":"xref","data":{"text":"表 1","type":"table","rid":"Table1","data":[{"name":"text","data":"表 1"}]}},{"name":"text","data":"、"},{"name":"xref","data":{"text":"表 2","type":"table","rid":"Table2","data":[{"name":"text","data":"表 2"}]}},{"name":"text","data":"所示,编解码结构所有卷积层均使用3×3的卷积核。"}]},{"name":"table","data":{"id":"Table1","caption":[{"lang":"zh","label":[{"name":"text","data":"表1"}],"title":[{"name":"text","data":"编码结构参数配置表"}]},{"lang":"en","label":[{"name":"text","data":"Table 1"}],"title":[{"name":"text","data":"Parameters configuration table of encoding 
structure"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"结构"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"层"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"通道数"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"层"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"通道数"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"编码器1"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"卷积层1"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"16"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"八度卷积残差块1-1,1-2"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"16"}]}],[{"align":"center","data":[{"name":"text","data":"编码器2"}]},{"align":"center","data":[{"name":"text","data":"卷积层2 ↓"}]},{"align":"center","data":[{"name":"text","data":"32"}]},{"align":"center","data":[{"name":"text","data":"八度卷积残差块2-1,2-2"}]},{"align":"center","data":[{"name":"text","data":"32"}]}],[{"align":"center","data":[{"name":"text","data":"编码器3"}]},{"align":"center","data":[{"name":"text","data":"卷积层3 ↓"}]},{"align":"center","data":[{"name":"text","data":"64"}]},{"align":"center","data":[{"name":"text","data":"八度卷积残差块3-1,3-2"}]},{"align":"center","data":[{"name":"text","data":"64"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"编码器4"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"卷积层4 
↓"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"96"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"残差注意力模块"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"64"}]}]],"foot":[]}]}},{"name":"table","data":{"id":"Table2","caption":[{"lang":"zh","label":[{"name":"text","data":"表2"}],"title":[{"name":"text","data":"解码结构参数配置表"}]},{"lang":"en","label":[{"name":"text","data":"Table 2"}],"title":[{"name":"text","data":"Parameters configuration table of decoding structure"}]}],"note":[{"lang":"zh","data":[{"name":"p","data":[{"name":"text","data":"↑表示将特征图进行2倍上采样;↓表示特征图进行步长为2的下采样。"}]}]}],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"结构"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"层"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"通道数"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"层"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"通道数"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"层"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"通道数"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"解码器1"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"上采样层1 
↑"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"64"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"卷积层5"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"64"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"八度卷积残差块4"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"128"}]}],[{"align":"center","data":[{"name":"text","data":"解码器2"}]},{"align":"center","data":[{"name":"text","data":"上采样层2 ↑"}]},{"align":"center","data":[{"name":"text","data":"32"}]},{"align":"center","data":[{"name":"text","data":"卷积层6"}]},{"align":"center","data":[{"name":"text","data":"32"}]},{"align":"center","data":[{"name":"text","data":"八度卷积残差块5"}]},{"align":"center","data":[{"name":"text","data":"64"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"解码器3"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"上采样层3 
↑"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"16"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"卷积层7"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"16"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"八度卷积残差块6"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"32"}]}]],"foot":[]}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.3"}],"title":[{"name":"text","data":"残差注意力模块"}],"level":"2","id":"s3-3"}},{"name":"p","data":[{"name":"text","data":"残差结构的网络已被广泛应用于图像去雾"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"17","type":"bibr","rid":"b17","data":[{"name":"text","data":"17"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、图像去噪"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"18","type":"bibr","rid":"b18","data":[{"name":"text","data":"18"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等领域,其产生的目的是为了解决较深网络容易出现梯度消失、梯度爆炸、网络性能退化等问题。本文将注意力机制(CBAM)"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"19","type":"bibr","rid":"b19","data":[{"name":"text","data":"19"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"与残差结构"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"20","type":"bibr","rid":"b20","data":[{"name":"text","data":"20"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"相结合并应用于编码器末端用于引导解码器重建出更清晰的图像,并且能够保证在整体结构完整的情况下突出更有用的信息。该模块结构如"},{"name":"xref","data":{"text":"图 5","type":"fig","rid":"Figure5","data":[{"name":"text","data":"图 5"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure5","caption":[{"lang":"zh","label":[{"name":"text","data":"图5"}],"title":[{"name":"text","data":"残差注意力模块"}]},{"lang":"en","label":[{"name":"text","data":"Fig 
5"}],"title":[{"name":"text","data":"Residual attention module"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783551&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783551&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783551&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"输入特征首先经过一个3×3的卷积层,得到的特征分别进入主干分支"},{"name":"italic","data":[{"name":"text","data":"M"}]},{"name":"text","data":"和CBAM分支"},{"name":"italic","data":[{"name":"text","data":"C"}]},{"name":"text","data":"。进入CBAM分支的特征首先经过通道注意力机制模块,得到的特征再与经过卷积层的特征进行相应元素相乘,得到输出特征"},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":"′,之后"},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":"′进入空间注意力机制模块,再与"},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":"′进行相应元素相乘,得到CBAM分支的最终输出特征"},{"name":"italic","data":[{"name":"text","data":"C"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":", "},{"name":"italic","data":[{"name":"text","data":"c"}]}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")。主干分支保留了原始的特征"},{"name":"italic","data":[{"name":"text","data":"M"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":", "},{"name":"italic","data":[{"name":"text","data":"c"}]}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"),与CBAM分支所得到的输出"},{"name":"italic","data":[{"name":"text","data":"C"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":", 
"},{"name":"italic","data":[{"name":"text","data":"c"}]}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")相加得到残差注意力模块的最终输出"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":", "},{"name":"italic","data":[{"name":"text","data":"c"}]}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")。上述结构可表述为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"5"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783556&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783556&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783556&type=middle"}}}],"id":"yjyxs-36-11-1486-E5"}}]},{"name":"p","data":[{"name":"text","data":"其中: 
"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"代表空间位置,"},{"name":"italic","data":[{"name":"text","data":"c"}]},{"name":"text","data":"代表特征通道的索引。"}]},{"name":"p","data":[{"name":"text","data":"该模块中的主干分支"},{"name":"italic","data":[{"name":"text","data":"M"}]},{"name":"text","data":"保留了原始输入特征信息,而CBAM分支生成的注意力机制特征图显著突出了对结果有用的区域,故此模块有利于提高网络的表达能力。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.4"}],"title":[{"name":"text","data":"八度卷积残差模块"}],"level":"2","id":"s3-4"}},{"name":"p","data":[{"name":"text","data":"Chen等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"15","type":"bibr","rid":"b15","data":[{"name":"text","data":"15"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"已经证明了八度卷积可以有效减少参数量,降低空间冗余,并且可以使一部分卷积专注提取高频信息,另一部分卷积提取低频信息。ResNet是He等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"20","type":"bibr","rid":"b20","data":[{"name":"text","data":"20"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"在2015年所提出的,它解决了深层网络中梯度弥散和精度下降的问题,使网络能够越来越深,既保证了精度,又控制了速度。本文将八度卷积引入残差块,将残差连接单元中普通卷积替换为八度卷积,在解决梯度消失、精度下降等问题的同时降低空间冗余,减少计算量。本文将八度卷积残差块作为编解码网络的基础构建块。该模块结构如"},{"name":"xref","data":{"text":"图 6","type":"fig","rid":"Figure6","data":[{"name":"text","data":"图 6"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure6","caption":[{"lang":"zh","label":[{"name":"text","data":"图6"}],"title":[{"name":"text","data":"八度卷积残差模块"}]},{"lang":"en","label":[{"name":"text","data":"Fig 6"}],"title":[{"name":"text","data":"Octave convolution residual 
module"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783561&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783561&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783561&type=middle"}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.5"}],"title":[{"name":"text","data":"双残差连接模块"}],"level":"2","id":"s3-5"}},{"name":"p","data":[{"name":"text","data":"由于编解码网络结构会丢失部分细节特征,故本文设计了一个基于双残差连接方式的高频细节恢复子网络。这种连接方式配备了两个容器,可以插入任何操作。为扩大感受野,本文将第一个容器里放入一个3×3,步长为1的卷积层和膨胀因子为3的扩张卷积层;第二个容器里放入一个5×5,步长为1的卷积层,利用这两个容器的成对运算潜力来恢复出高频细节信息。在此模块中并未进行下采样操作,整个过程特征图大小与输入大小保持一致,只进行模糊图像高频特征的提取和融合,为最终的图像恢复过程提供高频信息。本文在高频细节恢复子网络中设计了5对成对操作。该模块的网络结构如"},{"name":"xref","data":{"text":"图 7","type":"fig","rid":"Figure7","data":[{"name":"text","data":"图 7"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure7","caption":[{"lang":"zh","label":[{"name":"text","data":"图7"}],"title":[{"name":"text","data":"双残差连接模型"}]},{"lang":"en","label":[{"name":"text","data":"Fig 7"}],"title":[{"name":"text","data":"Double residual connection 
model"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783567&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783567&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783567&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"特征"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"进入第"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"个成对操作后,首先经过第一个容器,也即经过一个3×3的卷积层和一个膨胀因子为3的扩张卷积层,所得到的特征"},{"name":"italic","data":[{"name":"text","data":"g"}]},{"name":"sub","data":[{"name":"text","data":"1"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")与第"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"-1个成对操作中经过第一个容器后所得到的特征"},{"name":"italic","data":[{"name":"text","data":"M"}]},{"name":"text","data":"进行相应元素相加,得到输出特征"},{"name":"italic","data":[{"name":"text","data":"G"}]},{"name":"text","data":", 
之后"},{"name":"italic","data":[{"name":"text","data":"G"}]},{"name":"text","data":"进入第"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"个成对操作中的第二个容器得到特征"},{"name":"italic","data":[{"name":"text","data":"g"}]},{"name":"sub","data":[{"name":"text","data":"2"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"),"},{"name":"italic","data":[{"name":"text","data":"g"}]},{"name":"sub","data":[{"name":"text","data":"2"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")再与第"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"-1个成对操作中经过第二个容器后所得到的特征"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"text","data":"进行相应元素相加,得到最终输出特征"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")。上述过程可以表述为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"6"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783572&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783572&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783572&type=middle"}}}],"id":"yjyxs-36-11-1486-E6"}}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"7"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
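上述成对操作的连接方式可用如下示意代码表达。为便于演示,这里用两个简单的占位函数代替容器中的卷积运算(假设性示例,并非论文原始实现),仅体现式(6)、式(7)所述的双残差加和连接:

```python
import numpy as np

def pairwise_op(x, M, N, g1, g2):
    """双残差连接中的一次成对操作。

    g1, g2 -- 两个"容器"(可插入任意操作,如卷积层)
    M, N   -- 第 i-1 个成对操作中两个容器各自的输出特征
    返回 (G, H):G 供下一对的第一容器残差支路使用,H 为本次输出。
    """
    G = g1(x) + M          # 式(6):第一容器输出与上一对特征 M 相加
    H = g2(G) + N          # 式(7):第二容器输出与上一对特征 N 相加
    return G, H

# 占位"容器":用简单的逐元素运算代替真实卷积,仅演示连接方式
g1 = lambda t: 0.5 * t
g2 = lambda t: t + 1.0

x = np.ones((4, 4))
M = np.zeros((4, 4))       # 第一对成对操作的残差输入取零
N = np.zeros((4, 4))
for _ in range(5):         # 子网络中堆叠 5 对成对操作
    M, N = pairwise_op(x, M, N, g1, g2)
    x = N                  # 下一对以当前输出特征作为输入
```

该演示只保留"成对加和"这一连接骨架;论文中容器内为 3×3 卷积加扩张卷积与 5×5 卷积,特征图大小全程不变。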
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783577&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783577&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783577&type=middle"}}}],"id":"yjyxs-36-11-1486-E7"}}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4"}],"title":[{"name":"text","data":"损失函数"}],"level":"1","id":"s4"}},{"name":"p","data":[{"name":"text","data":"本文总损失函数由平均绝对误差损失(Mean Absolute Error,MAE)、结构损失(Structural Loss,SL)组成。其计算公式为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"8"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783583&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783583&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783583&type=middle"}}}],"id":"yjyxs-36-11-1486-E8"}}]},{"name":"p","data":[{"name":"text","data":"其中:"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"表示平均绝对误差损失(MAE),"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"表示结构损失(SL), "},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"与"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"分别为对应的平衡权重。"}]},{"name":"p","data":[{"name":"text","data":"平均绝对误差(MAE)损失用于测量清晰图像和输出去模糊图像之间像素方面的差异。其计算过程可以表述为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"9"}],"data":[{"name":"text","data":" 
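式(8)所示的加权总损失可按如下方式示意实现。需要说明的是,此处结构损失采用简化的全图单尺度 1−SSIM 形式(假设性实现,论文中结构损失的具体形式由式(10)~式(12)给出),权重取后文实验所选的 λA=0.3、λS=0.7:

```python
import numpy as np

def mae_loss(y, y_hat):
    """L_A:清晰图像 y 与网络输出 y_hat 之间的平均绝对误差。"""
    return np.mean(np.abs(y - y_hat))

def ssim_global(y, y_hat, c1=0.01 ** 2, c2=0.03 ** 2):
    """简化的全图单尺度 SSIM(不做滑窗统计,仅示意)。"""
    mu_x, mu_y = y.mean(), y_hat.mean()
    var_x, var_y = y.var(), y_hat.var()
    cov = ((y - mu_x) * (y_hat - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def total_loss(y, y_hat, lam_a=0.3, lam_s=0.7):
    """式(8):平均绝对误差损失与结构损失的加权和。"""
    return lam_a * mae_loss(y, y_hat) + lam_s * (1.0 - ssim_global(y, y_hat))

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
# 两图完全相同时,MAE 为 0 且 SSIM 为 1,总损失趋于 0
loss_same = total_loss(sharp, sharp)
loss_diff = total_loss(sharp, 0.5 * sharp)
```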
"},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783587&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783587&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783587&type=middle"}}}],"id":"yjyxs-36-11-1486-E9"}}]},{"name":"p","data":[{"name":"text","data":"其中,"},{"name":"italic","data":[{"name":"text","data":"I"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]}]},{"name":"text","data":"为真实清晰图像,"},{"name":"inlineformula","data":[{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783606&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783606&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783606&type=middle"}}}]},{"name":"text","data":"为本文网络输出的去模糊图像。"}]},{"name":"p","data":[{"name":"text","data":"平均绝对误差损失往往会丢失细节纹理,故在网络中引入结构损失作为约束项,保持图像细节,避免细节模糊。其计算过程可以表述为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"10"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783591&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783591&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783591&type=middle"}}}],"id":"yjyxs-36-11-1486-E10"}}]},{"name":"p","data":[{"name":"text","data":"其中,"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"SSIM"}]},{"name":"text","data":"可以表示为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"11"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783594&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783594&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783594&type=middle"}}}],"id":"yjyxs-36-11-1486-E11"}}]},{"name":"p","data":[{"name":"text","data":"多尺度结构相似损失(MS-SSIM)基于多层的SSIM损失函数,考虑了不同分辨率下的结构信息,"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"MS-SSIM"}]},{"name":"text","data":"可以表示为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"12"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783597&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783597&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783597&type=middle"}}}],"id":"yjyxs-36-11-1486-E12"}}]},{"name":"p","data":[{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"text","data":"为像素块的中心像素值。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5"}],"title":[{"name":"text","data":"实验结果与分析"}],"level":"1","id":"s5"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5.1"}],"title":[{"name":"text","data":"数据集与实验配置"}],"level":"2","id":"s5-1"}},{"name":"p","data":[{"name":"text","data":"本文实验训练与测试数据均来自Nah等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"4","type":"bibr","rid":"b4","data":[{"name":"text","data":"4"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"在2017年所提出的GoPro数据集,随机选取2 013张清晰-模糊图像对,并按8∶2的比例划分为训练集和测试集,即训练集为1 610张,测试集为403张。为增加整个网络模型的鲁棒性,将训练集中的1 610张图像分别旋转90°与180°,连同原图共得到4 830张清晰-模糊图像对。最终用于实验的训练集为4 
830张,测试集为403张。"}]},{"name":"p","data":[{"name":"text","data":"本文所做实验基于TensorFlow 2.0深度学习框架与Python 3.7环境进行训练与测试,所用计算机GPU为Nvidia GeForce 1660Ti,显存为6 GB。在整个实验过程中,所有图像一律裁剪为256×256×3,batch_size=2,epoch=5 000。采用Adam优化器来优化损失函数,动量参数分别为0.9、0.999,实验设置固定学习率为0.000 1。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5.2"}],"title":[{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"与"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"取不同值对实验结果的影响"}],"level":"2","id":"s5-2"}},{"name":"p","data":[{"name":"text","data":"本文所用总损失函数为平均绝对误差损失与结构损失的加权和,其中不同的权重系数对实验结果有一定的影响,故本文分别验证"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"与"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"取不同值时对实验结果的影响。依然使用峰值信噪比(PSNR)和结构相似度(SSIM)这两项指标来作为性能衡量标准。为了快速验证,"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"与"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"取不同值的对比实验均在仅于编码器末端添加残差注意力模块的情况下进行。"},{"name":"xref","data":{"text":"表 3","type":"table","rid":"Table3","data":[{"name":"text","data":"表 3"}]}},{"name":"text","data":"为"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"与"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"取不同值时网络模型在GoPro测试集上的性能对比结果。"},{"name":"xref","data":{"text":"图 8","type":"fig","rid":"Figure8","data":[{"name":"text","data":"图 
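文中训练所用Adam优化器(动量参数0.9、0.999,固定学习率0.000 1)的单步更新过程可示意如下。这是一个最小化的numpy演示实现,仅用于说明各超参数的含义,并非TensorFlow的内部实现:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """按文中超参数执行一次 Adam 参数更新(t 从 1 开始计数)。"""
    m = beta1 * m + (1 - beta1) * grad        # 一阶矩(动量)估计,beta1=0.9
    v = beta2 * v + (1 - beta2) * grad ** 2   # 二阶矩估计,beta2=0.999
    m_hat = m / (1 - beta1 ** t)              # 偏差修正
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0])
m = np.zeros(1)
v = np.zeros(1)
# 第一步更新:偏差修正后步长近似等于学习率 lr
w, m, v = adam_step(w, np.array([2.5]), m, v, t=1)
```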
8"}]}},{"name":"text","data":"为"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"与"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"取不同值时的网络模型在GoPro测试集上可视化对比示例图。"},{"name":"xref","data":{"text":"表 3","type":"table","rid":"Table3","data":[{"name":"text","data":"表 3"}]}},{"name":"text","data":"表明,当"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"=0.3, "},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"=0.7时网络性能最好,故本文其他实验均选择"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"=0.3, "},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"=0.7。由"},{"name":"xref","data":{"text":"图 8","type":"fig","rid":"Figure8","data":[{"name":"text","data":"图 8"}]}},{"name":"text","data":"也可看出当"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"=0.3, "},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"=0.7时,图像纹理细节更加丰富,更加接近真实清晰图像。"}]},{"name":"table","data":{"id":"Table3","caption":[{"lang":"zh","label":[{"name":"text","data":"表3"}],"title":[{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"与"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"取不同值下的实验指标表"}]},{"lang":"en","label":[{"name":"text","data":"Table 3"}],"title":[{"name":"text","data":"Experimental indexes under different values of 
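实验中作为性能衡量标准的峰值信噪比(PSNR)可按如下方式计算(示意代码,假设像素值已归一化到[0, 1];并非论文原始实现):

```python
import numpy as np

def psnr(y, y_hat, max_val=1.0):
    """计算两幅图像之间的峰值信噪比(单位:dB)。"""
    mse = np.mean((y - y_hat) ** 2)
    if mse == 0:
        return float("inf")          # 两图完全相同时 PSNR 为无穷大
    return 10.0 * np.log10(max_val ** 2 / mse)

sharp = np.full((8, 8), 0.5)
restored = sharp + 0.1               # 恒定误差 0.1,对应 MSE = 0.01
value = psnr(sharp, restored)        # 10*log10(1/0.01) ≈ 20 dB
```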
"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":" and "},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"PSNR/dB"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"SSIM"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"=0.5, "},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"=0.5"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"31.046 0"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.922 1"}]}],[{"align":"center","data":[{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"=0.3, "},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"=0.7"}]},{"align":"center","data":[{"name":"text","data":"32.427 6"}]},{"align":"center","data":[{"name":"text","data":"0.947 0"}]}],[{"align":"center","data":[{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"=0.4, "},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"=0.6"}]},{"align":"center","data":[{"name":"text","data":"31.079 8"}]},{"align":"center","data":[{"name":"text","data":"0.923 
7"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"=0.7, "},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"=0.3"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"30.030 1"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.910 2"}]}]],"foot":[]}]}},{"name":"fig","data":{"id":"Figure8","caption":[{"lang":"zh","label":[{"name":"text","data":"图8"}],"title":[{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"与"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"取不同值时可视化对比示例图"}]},{"lang":"en","label":[{"name":"text","data":"Fig 8"}],"title":[{"name":"text","data":"Example of visual comparison when "},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":" and "},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":" take different 
values"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783600&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783600&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783600&type=middle"}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5.3"}],"title":[{"name":"text","data":"残差注意力模块在编解码网络结构中所处位置对训练结果的影响"}],"level":"2","id":"s5-3"}},{"name":"p","data":[{"name":"text","data":"残差注意力模块可以加入到编解码网络结构的不同位置。为验证本文所选的放置位置最优,先进行对比实验,所比较的位置分别为:"}]},{"name":"p","data":[{"name":"text","data":"(1) C0:不添加残差注意力模块;"}]},{"name":"p","data":[{"name":"text","data":"(2) C1:在编码器末端添加残差注意力模块;"}]},{"name":"p","data":[{"name":"text","data":"(3) C2:在编码器的第3,4块添加残差注意力模块;"}]},{"name":"p","data":[{"name":"text","data":"(4) C3:在编码器的第2,3,4块添加残差注意力模块;"}]},{"name":"p","data":[{"name":"text","data":"(5) C4:在编码器后均添加残差注意力模块;"}]},{"name":"p","data":[{"name":"text","data":"(6) C5:在编码器末端和解码器第一块添加残差注意力模块;"}]},{"name":"p","data":[{"name":"text","data":"(7) C6:在解码器的第1,2块添加残差注意力模块;"}]},{"name":"p","data":[{"name":"text","data":"(8) C7:在解码器后均添加残差注意力模块。"}]},{"name":"p","data":[{"name":"text","data":"本文将验证这8种情况,"},{"name":"xref","data":{"text":"表 4","type":"table","rid":"Table4","data":[{"name":"text","data":"表 4"}]}},{"name":"text","data":"展示了残差注意力模块所处位置对网络性能的影响。"},{"name":"xref","data":{"text":"图 9","type":"fig","rid":"Figure9","data":[{"name":"text","data":"图 9"}]}},{"name":"text","data":"为残差注意力模块放置不同位置的网络模型在GoPro测试集上可视化对比示例图。"}]},{"name":"table","data":{"id":"Table4","caption":[{"lang":"zh","label":[{"name":"text","data":"表4"}],"title":[{"name":"text","data":"残差注意力模块不同位置对性能影响"}]},{"lang":"en","label":[{"name":"text","data":"Table 4"}],"title":[{"name":"text","data":"Impact of different positions of residual attention module on 
performance"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"残差注意力模块位置"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"PSNR/dB"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"SSIM"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"C0"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"29.87"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.921 7"}]}],[{"align":"center","data":[{"name":"text","data":"C1"}]},{"align":"center","data":[{"name":"text","data":"32.427 6"}]},{"align":"center","data":[{"name":"text","data":"0.947 0"}]}],[{"align":"center","data":[{"name":"text","data":"C2"}]},{"align":"center","data":[{"name":"text","data":"31.936 9"}]},{"align":"center","data":[{"name":"text","data":"0.941 8"}]}],[{"align":"center","data":[{"name":"text","data":"C3"}]},{"align":"center","data":[{"name":"text","data":"32.000 3"}]},{"align":"center","data":[{"name":"text","data":"0.942 6"}]}],[{"align":"center","data":[{"name":"text","data":"C4"}]},{"align":"center","data":[{"name":"text","data":"31.126 6"}]},{"align":"center","data":[{"name":"text","data":"0.943 2"}]}],[{"align":"center","data":[{"name":"text","data":"C5"}]},{"align":"center","data":[{"name":"text","data":"30.614 4"}]},{"align":"center","data":[{"name":"text","data":"0.919 8"}]}],[{"align":"center","data":[{"name":"text","data":"C6"}]},{"align":"center","data":[{"name":"text","data":"30.322 4"}]},{"align":"center","data":[{"name":"text","data":"0.920 2"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"C7"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"31.547 7"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.935 
6"}]}]],"foot":[]}]}},{"name":"fig","data":{"id":"Figure9","caption":[{"lang":"zh","label":[{"name":"text","data":"图9"}],"title":[{"name":"text","data":"残差注意力模块放置不同位置的可视化对比"}]},{"lang":"en","label":[{"name":"text","data":"Fig 9"}],"title":[{"name":"text","data":"Visual comparison of different positions of residual attention module"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783602&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783602&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783602&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"如"},{"name":"xref","data":{"text":"表 4","type":"table","rid":"Table4","data":[{"name":"text","data":"表 4"}]}},{"name":"text","data":"所示,将残差注意力模块放置在C1位置时,能够更好地提升网络去模糊性能,获得了最高的PSNR和SSIM。同时观察到,在编码器或解码器中添加多个残差注意力模块时,网络性能反而有所下降,故本文的其余实验均将残差注意力模块放置在编码器末端。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5.4"}],"title":[{"name":"text","data":"不同损失函数对实验结果的影响"}],"level":"2","id":"s5-4"}},{"name":"p","data":[{"name":"text","data":"为考察不同损失函数对实验结果的影响,本文分别测试了各损失函数下的实验结果。"},{"name":"xref","data":{"text":"表 5","type":"table","rid":"Table5","data":[{"name":"text","data":"表 5"}]}},{"name":"text","data":"展示了不同损失函数对网络性能的影响。"},{"name":"xref","data":{"text":"图 10","type":"fig","rid":"Figure10","data":[{"name":"text","data":"图 10"}]}},{"name":"text","data":"为应用不同损失函数的情况下在GoPro测试集上可视化对比示例图。"}]},{"name":"table","data":{"id":"Table5","caption":[{"lang":"zh","label":[{"name":"text","data":"表5"}],"title":[{"name":"text","data":"不同损失函数对网络性能的影响"}]},{"lang":"en","label":[{"name":"text","data":"Table 5"}],"title":[{"name":"text","data":"Influence of different loss functions on network 
performance"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"PSNR/dB"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"SSIM"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"A"}]}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"31.827 9"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.927 0"}]}],[{"align":"center","data":[{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"S"}]}]},{"align":"center","data":[{"name":"text","data":"31.171 3"}]},{"align":"center","data":[{"name":"text","data":"0.938 2"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"+"},{"name":"italic","data":[{"name":"text","data":"λ"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"S"}]}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"32.427 6"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.947 0"}]}]],"foot":[]}]}},{"name":"fig","data":{"id":"Figure10","caption":[{"lang":"zh","label":[{"name":"text","data":"图10"}],"title":[{"name":"text","data":"应用不同损失函数的可视化对比"}]},{"lang":"en","label":[{"name":"text","data":"Fig 10"}],"title":[{"name":"text","data":"Visual comparison of different loss 
functions"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783604&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783604&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783604&type=middle"}]}},{"name":"p","data":[{"name":"xref","data":{"text":"表 5","type":"table","rid":"Table5","data":[{"name":"text","data":"表 5"}]}},{"name":"text","data":"表明,"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"与"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"的加权和相比于"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"和"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"在两项指标上有略微的提升,由"},{"name":"xref","data":{"text":"图 10","type":"fig","rid":"Figure10","data":[{"name":"text","data":"图 
10"}]}},{"name":"text","data":"也可看出单纯地使用"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"或"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"并不能有效去除模糊,细节纹理不够丰富,而利用"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"与"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"的加权和可以更好地去除模糊,因此选择"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"A"}]},{"name":"text","data":"与"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"的加权和作为损失函数。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5.5"}],"title":[{"name":"text","data":"不同对比算法在GoPro测试集上的结果分析"}],"level":"2","id":"s5-5"}},{"name":"p","data":[{"name":"text","data":"为了验证本文算法的有效性,选取6种目前先进的图像去模糊卷积神经网络与本文方法进行对比,使用峰值信噪比(PSNR)、结构相似度(SSIM)、模型大小以及网络模型的运行时间作为性能衡量标准,"},{"name":"xref","data":{"text":"表 6","type":"table","rid":"Table6","data":[{"name":"text","data":"表 6"}]}},{"name":"text","data":"展示了不同算法在测试集上的性能对比结果。"},{"name":"xref","data":{"text":"图 11","type":"fig","rid":"Figure11","data":[{"name":"text","data":"图 11"}]}},{"name":"text","data":"为不同算法在测试集上可视化对比示例图。"}]},{"name":"table","data":{"id":"Table6","caption":[{"lang":"zh","label":[{"name":"text","data":"表6"}],"title":[{"name":"text","data":"不同算法的性能对比"}]},{"lang":"en","label":[{"name":"text","data":"Table 6"}],"title":[{"name":"text","data":"Performance comparison of different 
algorithms"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"算法"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"PSNR/dB"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"SSIM"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Size/MB"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Runtime/s"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"Sun"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"24.64"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.842 9"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"54.1"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"12 000"}]}],[{"align":"center","data":[{"name":"text","data":"Nah"}]},{"align":"center","data":[{"name":"text","data":"29.08"}]},{"align":"center","data":[{"name":"text","data":"0.913 5"}]},{"align":"center","data":[{"name":"text","data":"303.6"}]},{"align":"center","data":[{"name":"text","data":"4 300"}]}],[{"align":"center","data":[{"name":"text","data":"Tao"}]},{"align":"center","data":[{"name":"text","data":"30.26"}]},{"align":"center","data":[{"name":"text","data":"0.934 2"}]},{"align":"center","data":[{"name":"text","data":"33.6"}]},{"align":"center","data":[{"name":"text","data":"1 600"}]}],[{"align":"center","data":[{"name":"text","data":"Zhang"}]},{"align":"center","data":[{"name":"text","data":"31.20"}]},{"align":"center","data":[{"name":"text","data":"0.945 
4"}]},{"align":"center","data":[{"name":"text","data":"86.8"}]},{"align":"center","data":[{"name":"text","data":"424"}]}],[{"align":"center","data":[{"name":"text","data":"Ye"}]},{"align":"center","data":[{"name":"text","data":"30.28"}]},{"align":"center","data":[{"name":"text","data":"0.904 6"}]},{"align":"center","data":[{"name":"text","data":"24.5"}]},{"align":"center","data":[{"name":"text","data":"367.9"}]}],[{"align":"center","data":[{"name":"text","data":"Zhou"}]},{"align":"center","data":[{"name":"text","data":"30.55"}]},{"align":"center","data":[{"name":"text","data":"0.940 0"}]},{"align":"center","data":[{"name":"text","data":"29.8"}]},{"align":"center","data":[{"name":"text","data":"320"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"Ours"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"32.427 6"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.947 0"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"25.4"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"268"}]}]],"foot":[]}]}},{"name":"fig","data":{"id":"Figure11","caption":[{"lang":"zh","label":[{"name":"text","data":"图11"}],"title":[{"name":"text","data":"不同算法的可视化对比"}]},{"lang":"en","label":[{"name":"text","data":"Fig 11"}],"title":[{"name":"text","data":"Visual comparison of different algorithms"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783605&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783605&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=21783605&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"由"},{"name":"xref","data":{"text":"表 6","type":"table","rid":"Table6","data":[{"name":"text","data":"表 
6"}]}},{"name":"text","data":"可以看出本文所提出的方法较其他6种算法得到了更高的PSNR和SSIM;并且除Ye算法外,本文的网络模型小于其余5种算法,虽然本文模型比Ye算法的模型略大,但运行时间更短,且能达到更高的PSNR和SSIM。由"},{"name":"xref","data":{"text":"图 11","type":"fig","rid":"Figure11","data":[{"name":"text","data":"图 11"}]}},{"name":"text","data":"也可看出Sun等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"21","type":"bibr","rid":"b21","data":[{"name":"text","data":"21"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出的算法不能有效去除模糊,Nah等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"4","type":"bibr","rid":"b4","data":[{"name":"text","data":"4"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、Tao等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"b16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、Zhang等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"b13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、Ye等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"22","type":"bibr","rid":"b22","data":[{"name":"text","data":"22"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"以及Zhou等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"23","type":"bibr","rid":"b23","data":[{"name":"text","data":"23"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"的去模糊效果虽较Sun等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"21","type":"bibr","rid":"b21","data":[{"name":"text","data":"21"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"的算法有一定的提升,但纹理细节恢复较差,没有达到更好的去模糊效果。本文方法在GoPro数据集上有着良好的表现,所得到的去模糊图像更接近清晰图像。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5.6"}],"title":[{"name":"text","data":"双任务网络模型复杂度分析"}],"level":"2","id":"s5-6"}},
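表6中的模型大小可以由参数量粗略估算。下面的示意代码给出单个卷积层的参数量及其以float32存储时的开销(通用估算方法,示例中的层配置为假设值,并非表6中各模型大小的实际计算依据):

```python
def conv_params(k, c_in, c_out, bias=True):
    """k×k 卷积层的参数量:k*k*输入通道*输出通道,外加偏置。"""
    return k * k * c_in * c_out + (c_out if bias else 0)

def fp32_megabytes(n_params):
    """n_params 个 float32 权重占用的存储空间(MB)。"""
    return n_params * 4 / (1024 ** 2)

# 示例:一个 3×3、输入输出通道均为 64 的卷积层
n = conv_params(3, 64, 64)       # 3*3*64*64 + 64 = 36 928 个参数
size_mb = fp32_megabytes(n)      # 约 0.14 MB
```

将网络各层参数量按此方式累加,即可得到整个模型以MB计的近似大小;八度卷积通过降低部分特征图的分辨率与通道冗余减少计算量,这也是本文模型较小的原因之一。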
{"name":"p","data":[{"name":"text","data":"本文使用同一台电脑来评估本文算法及目前先进的6种算法。如"},{"name":"xref","data":{"text":"表 6","type":"table","rid":"Table6","data":[{"name":"text","data":"表 6"}]}},{"name":"text","data":"所示,Sun等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"21","type":"bibr","rid":"b21","data":[{"name":"text","data":"21"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"利用CNN算法来估计运动模糊核,但仍需要传统的非盲去模糊算法来生成干净的图像,这增加了计算成本;为增大感受野,Nah等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"4","type":"bibr","rid":"b4","data":[{"name":"text","data":"4"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"和Tao等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"b16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"使用了多尺度CNN算法来估计干净图像,与传统的算法相比花费了更少的时间,但多尺度框架不可避免地增加了计算量,与本文所提方法相比,其效率仍然不高;Zhang等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"b13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"使用了深度分层多补丁CNN算法来去除模糊,但其图像预处理过程较为复杂,花费时间更多,并且此模型比本文模型大得多;Ye等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"22","type":"bibr","rid":"b22","data":[{"name":"text","data":"22"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"使用一种新的尺度循环网络,降低了模型大小,但本文方法的运行时间约为该方法的0.73倍。本文提出的双残差连接模块利用不同膨胀率的卷积以较低的计算成本扩大感受野。另外,本文提出的网络中使用了八度卷积,进一步减小了网络的规模。从"},{"name":"xref","data":{"text":"表 6","type":"table","rid":"Table6","data":[{"name":"text","data":"表 
6"}]}},{"name":"text","data":" it can be seen that, compared with existing CNN-based methods, the proposed network model is smaller, runs faster and is more efficient."}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"6"}],"title":[{"name":"text","data":"Conclusion"}],"level":"1","id":"s6"}},{"name":"p","data":[{"name":"text","data":"To address the problems in existing deblurring convolutional neural networks, such as the loss of detail textures and the indiscriminate treatment of channel and spatial feature information, this paper proposes a parallel dual-task convolutional neural network for image deblurring. First, a novel encoder-decoder structure based on residual attention modules and octave convolution residual blocks is used to remove blur, and skip connections are added between the encoder and decoder to enrich the information available to the decoder; experimental results show that placing the residual attention module at the end of the encoder helps improve deblurring performance. Second, a sub-network based on dual residual connection modules is proposed for detail restoration, as the dual residual connection is more effective for image detail restoration. The two sub-networks are connected in parallel, and finally one convolution layer is used to reconstruct a sharper image. Experimental results show that the proposed algorithm achieves a peak signal-to-noise ratio of 32.427 6 dB and a structural similarity of 0.947 0, both of which are improvements over current state-of-the-art image deblurring techniques."}]}]}],"footnote":[],"reflist":{"title":[{"name":"text","data":"References"}],"data":[{"id":"b1","label":"1","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"A KOUR"},{"name":"text","data":" , "},{"name":"text","data":"V K YADA"},{"name":"text","data":" , "},{"name":"text","data":"V MAHESHWARI"},{"name":"text","data":" , "},{"name":"text","data":"et al"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"A review on image processing"},{"name":"text","data":" . "},{"name":"text","data":"International Journal of Electronics Communication and Computer Engineering"},{"name":"text","data":" , "},{"name":"text","data":"2013"},{"name":"text","data":" ."},{"name":"text","data":"4"},{"name":"text","data":" ( "},{"name":"text","data":"1"},{"name":"text","data":" ): "},{"name":"text","data":"270"},{"name":"text","data":" - "},{"name":"text","data":"275"},{"name":"text","data":" . 
"},{"name":"uri","data":{"text":[{"name":"text","data":"http://www.researchgate.net/publication/271770488_a_review_on_image_processing"}],"href":"http://www.researchgate.net/publication/271770488_a_review_on_image_processing"}},{"name":"text","data":"."}],"title":"A review on image processing"}]},{"id":"b2","label":"2","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"MAIRAL J, BACH F, PONCE J, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Non-local sparse models for image restoration[C]//2009"},{"name":"italic","data":[{"name":"text","data":"IEEE"}]},{"name":"text","data":" 12"},{"name":"italic","data":[{"name":"text","data":"th International Conference on Computer Vision"}]},{"name":"text","data":". Kyoto, Japan: IEEE, 2009: 2272-2279."}]}]}]},{"id":"b3","label":"3","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"W S DONG"},{"name":"text","data":" , "},{"name":"text","data":"P Y WANG"},{"name":"text","data":" , "},{"name":"text","data":"W T YIN"},{"name":"text","data":" , "},{"name":"text","data":"et al"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Denoising prior driven deep neural network for image restoration"},{"name":"text","data":" . "},{"name":"text","data":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"41"},{"name":"text","data":" ( "},{"name":"text","data":"10"},{"name":"text","data":" ): "},{"name":"text","data":"2305"},{"name":"text","data":" - "},{"name":"text","data":"2318"},{"name":"text","data":" . 
"},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/TPAMI.2018.2873610"}],"href":"http://doi.org/10.1109/TPAMI.2018.2873610"}},{"name":"text","data":"."}],"title":"Denoising prior driven deep neural network for image restoration"}]},{"id":"b4","label":"4","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"NAH S, KIM T H, LEE K M. Deep multi-scale convolutional neural network for dynamic scene deblurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Honolulu, HI, USA: IEEE, 2017: 257-265."}]}]}]},{"id":"b5","label":"5","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"PAN J S, HU Z, SU Z X, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Deblurring text images "},{"name":"italic","data":[{"name":"text","data":"via"}]},{"name":"text","data":" L"},{"name":"sub","data":[{"name":"text","data":"0"}]},{"name":"text","data":"-regularized intensity and gradient prior[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 2014 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Columbus, OH, USA: IEEE, 2014: 2901-2908."}]}]}]},{"id":"b6","label":"6","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"A LEVIN"},{"name":"text","data":" , "},{"name":"text","data":"R FERGUS"},{"name":"text","data":" , "},{"name":"text","data":"F DURAND"},{"name":"text","data":" , "},{"name":"text","data":"et al"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Image and depth from a conventional camera with a coded aperture"},{"name":"text","data":" . 
"},{"name":"text","data":"ACM Transactions on Graphics"},{"name":"text","data":" , "},{"name":"text","data":"2007"},{"name":"text","data":" . "},{"name":"text","data":"26"},{"name":"text","data":" ( "},{"name":"text","data":"3"},{"name":"text","data":" ): "},{"name":"text","data":"70"},{"name":"text","data":" "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1145/1276377.1276464"}],"href":"http://doi.org/10.1145/1276377.1276464"}},{"name":"text","data":"."}],"title":"Image and depth from a conventional camera with a coded aperture"}]},{"id":"b7","label":"7","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"W Q REN"},{"name":"text","data":" , "},{"name":"text","data":"X C CAO"},{"name":"text","data":" , "},{"name":"text","data":"J S PAN"},{"name":"text","data":" , "},{"name":"text","data":"et al"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Image deblurring "},{"name":"italic","data":[{"name":"text","data":"via"}]},{"name":"text","data":" enhanced low-rank prior"},{"name":"text","data":" . "},{"name":"text","data":"IEEE Transactions on Image Processing"},{"name":"text","data":" , "},{"name":"text","data":"2016"},{"name":"text","data":" . "},{"name":"text","data":"25"},{"name":"text","data":" ( "},{"name":"text","data":"7"},{"name":"text","data":" ): "},{"name":"text","data":"3426"},{"name":"text","data":" - "},{"name":"text","data":"3437"},{"name":"text","data":" . 
"},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/TIP.2016.2571062"}],"href":"http://doi.org/10.1109/TIP.2016.2571062"}},{"name":"text","data":"."}],"title":"Image deblurring via enhanced low-rank prior"}]},{"id":"b8","label":"8","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"张 万征"},{"name":"text","data":" , "},{"name":"text","data":"胡 志坤"},{"name":"text","data":" , "},{"name":"text","data":"李 小龙"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"基于LeNet-5的卷积神经图像识别算法"},{"name":"text","data":" . "},{"name":"text","data":"液晶与显示"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . "},{"name":"text","data":"35"},{"name":"text","data":" ( "},{"name":"text","data":"5"},{"name":"text","data":" ): "},{"name":"text","data":"486"},{"name":"text","data":" - "},{"name":"text","data":"490"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"http://cjlcd.lightpublishing.cn/thesisDetails#10.3788/YJYXS20203505.0486"}],"href":"http://cjlcd.lightpublishing.cn/thesisDetails#10.3788/YJYXS20203505.0486"}},{"name":"text","data":"."}],"title":"基于LeNet-5的卷积神经图像识别算法"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"W Z ZHANG"},{"name":"text","data":" , "},{"name":"text","data":"Z K HU"},{"name":"text","data":" , "},{"name":"text","data":"X L LI"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Convolutional neural image recognition algorithm based on LeNet-5"},{"name":"text","data":" . "},{"name":"text","data":"Chinese Journal of Liquid Crystals and Displays"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . 
"},{"name":"text","data":"35"},{"name":"text","data":" ( "},{"name":"text","data":"5"},{"name":"text","data":" ): "},{"name":"text","data":"486"},{"name":"text","data":" - "},{"name":"text","data":"490"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"http://cjlcd.lightpublishing.cn/thesisDetails#10.3788/YJYXS20203505.0486"}],"href":"http://cjlcd.lightpublishing.cn/thesisDetails#10.3788/YJYXS20203505.0486"}},{"name":"text","data":"."}],"title":"Convolutional neural image recognition algorithm based on LeNet-5"}]},{"id":"b9","label":"9","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"王 旖旎"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"基于Inception V3的图像状态分类技术"},{"name":"text","data":" . "},{"name":"text","data":"液晶与显示"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . "},{"name":"text","data":"35"},{"name":"text","data":" ( "},{"name":"text","data":"4"},{"name":"text","data":" ): "},{"name":"text","data":"389"},{"name":"text","data":" - "},{"name":"text","data":"394"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"http://cjlcd.lightpublishing.cn/thesisDetails#10.3788/YJYXS20203504.0389"}],"href":"http://cjlcd.lightpublishing.cn/thesisDetails#10.3788/YJYXS20203504.0389"}},{"name":"text","data":"."}],"title":"基于Inception V3的图像状态分类技术"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"Y N WANG"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Image classification technology based on Inception V3"},{"name":"text","data":" . "},{"name":"text","data":"Chinese Journal of Liquid Crystals and Displays"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . 
"},{"name":"text","data":"35"},{"name":"text","data":" ( "},{"name":"text","data":"4"},{"name":"text","data":" ): "},{"name":"text","data":"389"},{"name":"text","data":" - "},{"name":"text","data":"394"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"http://cjlcd.lightpublishing.cn/thesisDetails#10.3788/YJYXS20203504.0389"}],"href":"http://cjlcd.lightpublishing.cn/thesisDetails#10.3788/YJYXS20203504.0389"}},{"name":"text","data":"."}],"title":"Image classification technology based on Inception V3"}]},{"id":"b10","label":"10","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"XU L, REN J S J, LIU C, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Deep convolutional neural network for image deconvolution[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 27"},{"name":"italic","data":[{"name":"text","data":"th International Conference on Neural Information Processing Systems"}]},{"name":"text","data":". Montreal, Canada: MIT Press, 2014: 1790-1798."}]}]}]},{"id":"b11","label":"11","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"HRADIŠ M, KOTERA J, ZEMČÍK P, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Convolutional neural networks for direct text deblurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the British Machine Vision Conference"}]},{"name":"text","data":". Swansea, UK: BMVA Press, 2015: 6.1-6.13."}]}]}]},{"id":"b12","label":"12","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"SU S C, DELBRACIO M, WANG J, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". 
Deep video deblurring for hand-held cameras[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Honolulu, HI, USA: IEEE, 2017: 237-246."}]}]}]},{"id":"b13","label":"13","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"ZHANG H G, DAI Y C, LI H D, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Deep stacked hierarchical multi-patch network for image deblurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 2019 "},{"name":"italic","data":[{"name":"text","data":"IEEE/CVF Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Long Beach, CA, USA: IEEE, 2019: 5978-5986."}]}]}]},{"id":"b14","label":"14","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"S B YIN"},{"name":"text","data":" , "},{"name":"text","data":"Y B WANG"},{"name":"text","data":" , "},{"name":"text","data":"Y H YANG"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"A novel image-dehazing network with a parallel attention block"},{"name":"text","data":" . "},{"name":"text","data":"Pattern Recognition"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . 
"},{"name":"text","data":"102"},{"name":"text","data":" "},{"name":"text","data":"107255"},{"name":"text","data":" "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1016/j.patcog.2020.107255"}],"href":"http://doi.org/10.1016/j.patcog.2020.107255"}},{"name":"text","data":"."}],"title":"A novel image-dehazing network with a parallel attention block"}]},{"id":"b15","label":"15","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"CHEN Y P, FAN H Q, XU B, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Drop an octave: Reducing spatial redundancy in convolutional neural networks with octave convolution[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 2019 "},{"name":"italic","data":[{"name":"text","data":"IEEE/CVF International Conference on Computer Vision"}]},{"name":"text","data":". Seoul, Korea (South): IEEE, 2019: 3434-3443."}]}]}]},{"id":"b16","label":"16","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"TAO X, GAO H Y, SHEN X Y, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Scale-recurrent network for deep image deblurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 2018 "},{"name":"italic","data":[{"name":"text","data":"IEEE/CVF Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Salt Lake City, UT, USA: IEEE, 2018: 8174-8182."}]}]}]},{"id":"b17","label":"17","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"陈 清江"},{"name":"text","data":" , "},{"name":"text","data":"张 雪"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"混合残差学习与导向滤波算法在图像去雾中的应用"},{"name":"text","data":" . 
"},{"name":"text","data":"光学 精密工程"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"27"},{"name":"text","data":" ( "},{"name":"text","data":"12"},{"name":"text","data":" ): "},{"name":"text","data":"2702"},{"name":"text","data":" - "},{"name":"text","data":"2712"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-GXJM201912023.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-GXJM201912023.htm"}},{"name":"text","data":"."}],"title":"混合残差学习与导向滤波算法在图像去雾中的应用"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"Q J CHEN"},{"name":"text","data":" , "},{"name":"text","data":"X ZHANG"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Application of hybrid residual learning and guided filtering algorithm in image defogging"},{"name":"text","data":" . "},{"name":"text","data":"Optics and Precision Engineering"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"27"},{"name":"text","data":" ( "},{"name":"text","data":"12"},{"name":"text","data":" ): "},{"name":"text","data":"2702"},{"name":"text","data":" - "},{"name":"text","data":"2712"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"https://www.cnki.com.cn/Article/CJFDTOTAL-GXJM201912023.htm"}],"href":"https://www.cnki.com.cn/Article/CJFDTOTAL-GXJM201912023.htm"}},{"name":"text","data":"."}],"title":"Application of hybrid residual learning and guided filtering algorithm in image defogging"}]},{"id":"b18","label":"18","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"LIU X, SUGANUMA M, SUN Z, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". 
Dual residual networks leveraging the potential of paired operations for image restoration[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 2019 "},{"name":"italic","data":[{"name":"text","data":"IEEE/CVF Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Long Beach, CA, USA: IEEE, 2019: 7000-7009."}]}]}]},{"id":"b19","label":"19","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"WOO S, PARK J, LEE J Y, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". CBAM: Convolutional block attention module[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the European Conference on Computer Vision"}]},{"name":"text","data":". Munich, Germany: Springer, 2018: 3-19."}]}]}]},{"id":"b20","label":"20","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"HE K M, ZHANG X Y, REN S Q,"},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Deep residual learning for image recognition[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 2016 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Las Vegas, NV, USA: IEEE, 2016: 770-778."}]}]}]},{"id":"b21","label":"21","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"SUN J, CAO W F, XU Z B, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Learning a convolutional neural network for non-uniform motion blur removal[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 2015 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". 
Boston, MA, USA: IEEE, 2015: 769-777."}]}]}]},{"id":"b22","label":"22","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"text","data":"M Y YE"},{"name":"text","data":" , "},{"name":"text","data":"D LYU"},{"name":"text","data":" , "},{"name":"text","data":"G S CHEN"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Scale-iterative upscaling network for image deblurring"},{"name":"text","data":" . "},{"name":"text","data":"IEEE Access"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . "},{"name":"text","data":"8"},{"name":"text","data":" "},{"name":"text","data":"18316"},{"name":"text","data":" - "},{"name":"text","data":"18325"},{"name":"text","data":" . "},{"name":"uri","data":{"text":[{"name":"text","data":"http://ieeexplore.ieee.org/document/8963625"}],"href":"http://ieeexplore.ieee.org/document/8963625"}},{"name":"text","data":"."}],"title":"Scale-iterative upscaling network for image deblurring"}]},{"id":"b23","label":"23","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"ZHOU S C, ZHANG J W, ZUO W M, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". DAVANet: Stereo deblurring with view aggregation[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 2019 "},{"name":"italic","data":[{"name":"text","data":"IEEE/CVF Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". 
Long Beach, CA, USA: IEEE, 2019: 10988-10997."}]}]}]}]},"response":[],"contributions":[],"acknowledgements":[],"conflict":[],"supportedby":[],"articlemeta":{"doi":"10.37188/CJLCD.2021-0001","clc":[[{"name":"text","data":"TP391.41"}]],"dc":[],"publisherid":"yjyxs-36-11-1486","citeme":[],"fundinggroup":[{"lang":"zh","text":[{"name":"text","data":"国家自然科学基金(No.61403298);陕西省自然科学基金(No.2015JM1024)"}]},{"lang":"en","text":[{"name":"text","data":"Supported by National Natural Foundation of China(No.61403298); Natural Science Foundation of Shaanxi Province(No.2015JM1024)"}]}],"history":{"received":"2021-01-02","revised":"2021-04-20","opub":"2021-11-08"},"copyright":{"data":[{"lang":"zh","data":[{"name":"text","data":"版权所有©《液晶与显示》编辑部2021"}],"type":"copyright"},{"lang":"en","data":[{"name":"text","data":"Copyright ©2021 Chinese Journal of Liquid Crystals and Displays. All rights reserved."}],"type":"copyright"}],"year":"2021"}},"appendix":[],"type":"research-article","ethics":[],"backSec":[],"supplementary":[],"journalTitle":"液晶与显示","issue":"11","volume":"36","originalSource":[]}