{"defaultlang":"zh","titlegroup":{"articletitle":[{"lang":"zh","data":[{"name":"text","data":"基于深度残差生成对抗网络的运动图像去模糊"}]},{"lang":"en","data":[{"name":"text","data":"Motion image deblurring based on depth residual generative adversarial network"}]}]},"contribgroup":{"author":[{"name":[{"lang":"zh","surname":"魏","givenname":"丙财","namestyle":"eastern","prefix":""},{"lang":"en","surname":"WEI","givenname":"Bing-cai","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":["first-author"],"bio":[{"lang":"zh","text":["魏丙财(1997-), 男, 山东济南人, 硕士研究生, 2020年于曲阜师范大学获得学士学位, 主要从事图像复原及深度学习方面的研究。E-mail: 1394594109@qq.com"],"graphic":[],"data":[[{"name":"bold","data":[{"name":"text","data":"魏丙财"}]},{"name":"text","data":"(1997-), 男, 山东济南人, 硕士研究生, 2020年于曲阜师范大学获得学士学位, 主要从事图像复原及深度学习方面的研究。E-mail: "},{"name":"text","data":"1394594109@qq.com"}]]}],"email":"1394594109@qq.com","deceased":false},{"name":[{"lang":"zh","surname":"张","givenname":"立晔","namestyle":"eastern","prefix":""},{"lang":"en","surname":"ZHANG","givenname":"Li-ye","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":["corresp"],"corresp":[{"rid":"cor1","lang":"zh","text":"张立晔, E-mail: zhangliye@sdut.edu.cn","data":[{"name":"text","data":"张立晔, E-mail: zhangliye@sdut.edu.cn"}]}],"bio":[{"lang":"zh","text":["张立晔(1986-), 男, 山东济南人, 博士, 讲师, 2018年于哈尔滨工业大学获得博士学位, 主要从事机器学习与图像处理方面的研究。E-mail: zhangliye@sdut.edu.cn"],"graphic":[],"data":[[{"name":"bold","data":[{"name":"text","data":"张立晔"}]},{"name":"text","data":"(1986-), 男, 山东济南人, 博士, 讲师, 2018年于哈尔滨工业大学获得博士学位, 主要从事机器学习与图像处理方面的研究。E-mail: "},{"name":"text","data":"zhangliye@sdut.edu.cn"}]]}],"email":"zhangliye@sdut.edu.cn","deceased":false},{"name":[{"lang":"zh","surname":"孟","givenname":"晓亮","namestyle":"eastern","prefix":""},{"lang":"en","surname":"MENG","givenname":"Xiao-liang","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":[],"deceased":false},{"name":[{"lang":"zh","surname":"王","givenname":"康涛","namestyle":"eastern","prefix":""},{"lang":"en","surname":"WANG","givenname":"Kang-tao","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":""}],"role":[],"deceased":false}],"aff":[{"id":"aff1","intro":[{"lang":"zh","label":"","text":"山东理工大学 计算机科学与技术学院, 山东 淄博 255000","data":[{"name":"text","data":"山东理工大学 计算机科学与技术学院, 山东 淄博 255000"}]},{"lang":"en","label":"","text":"College of Computer Science and Technology, Shandong University of Technology, Zibo 255000, China","data":[{"name":"text","data":"College of Computer Science and Technology, Shandong University of Technology, Zibo 255000, China"}]}]}]},"abstracts":[{"lang":"zh","data":[{"name":"p","data":[{"name":"text","data":"针对图像拍摄过程中由于运动、抖动、电子干扰等产生的运动图像模糊问题,提出一种基于深度残差生成对抗网络的运动图像去模糊算法。对图像模糊模型与盲去模糊过程进行了研究,介绍了生成对抗网络,改进了残差块的结构。改进的残差块包含3个卷积层,两个ReLU激活函数,一个Dropout层以及一个跳跃连接块,提升了复原图像的质量。改进了PatchGAN的结构,在只增加少量参数与网络复杂性的情况下,将最底层感受野变为原先的两倍以上。利用GOPRO数据集和Lai数据集进行测试,测试结果表明,本文提出的基于深度残差生成对抗网络的去模糊算法复原图像可达到较高的客观评价指标,可以恢复出较高质量的清晰图像。在GOPRO数据集上,相比于其他同类方法,本文提出的算法具有较好的复原能力,可达到更高的峰值信噪比(28.31 dB)和较高的结构相似度(0.831 7);而在Lai数据集上,可以恢复出较高质量的图像。"}]}]},{"lang":"en","data":[{"name":"p","data":[{"name":"text","data":"A motion image deblurring algorithm based on a deep residual generative adversarial network is proposed for the motion image blurring problem arising from motion, jitter and electronic interference during image capture. Firstly, this paper investigates the image blurring model and the blind deblurring process. Secondly, the generative adversarial network is introduced, and the structure of the residual block is improved. The improved residual block contains three convolutional layers, two ReLU activation functions, a Dropout layer, and a skip connection block, which improves the quality of the recovered image. Thirdly, the structure of PatchGAN is improved, and the receptive field of the lowest layer is more than twice of the original one with only a few additional paramters and network complexity. The tests are conducted using the GOPRO dataset and Lai dataset. The test results show that the deblurring algorithm based on deep residual generation adversarial network proposed in this paper can achieve high objective evaluation indexes and can recover clear images of high quality. On the GOPRO dataset, compared with other similar methods, the algorithm proposed in this paper has better recovery ability and can achieve higher peak signal-to-noise ratio (28.31 dB) and higher structural similarity (0.831 7). On the Lai dataset, the higher quality images can be recovered."}]}]}],"keyword":[{"lang":"zh","data":[[{"name":"text","data":"图像去模糊"}],[{"name":"text","data":"运动模糊"}],[{"name":"text","data":"生成对抗网络"}],[{"name":"text","data":"残差块"}],[{"name":"text","data":"图像复原"}]]},{"lang":"en","data":[[{"name":"text","data":"image deblurring"}],[{"name":"text","data":"motion blur"}],[{"name":"text","data":"generative adversarial network"}],[{"name":"text","data":"residual block"}],[{"name":"text","data":"image restoration"}]]}],"highlights":[],"body":[{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"1"}],"title":[{"name":"text","data":"引言"}],"level":"1","id":"s1"}},{"name":"p","data":[{"name":"text","data":"由于相对运动、镜头抖动、相机内部传感器噪声、天气因素(雾霾等)、相机散焦等原因,导致图像在拍摄、传输和储存时会产生一定的退化,造成图像质量下降,产生模糊"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"1","type":"bibr","rid":"b1","data":[{"name":"text","data":"1"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。其中运动模糊图像主要是由于相机与物体在短曝光时间内发生相对运动造成的。为了从运动模糊图像中提取有用的信息,图像复原已成为图像处理的一个重要研究方向,也是数字图像处理的一个重要应用。图像复原技术可以消除或减少图像退化的问题,获得更清晰的图像。"}]},{"name":"p","data":[{"name":"text","data":"早期的图像去模糊研究,一般是在去模糊过程中假设模糊特征,利用图像的先验知识估计模糊核。因此,图像去模糊的重点之一是确定模糊核。根据模糊核的已知与否,去模糊方法可以分为两大类:一类模糊核已知,称为非盲复原;另一类模糊核未知,称为盲复原。"}]},{"name":"p","data":[{"name":"text","data":"非盲复原又称为传统图像复原算法,此种方法会根据已知的模糊核,进行解卷积操作,如逆滤波、L-R算法、维纳滤波等算法。由于在实际应用中很难获得精确的模糊核,因此非盲复原表现较差,无法得到清晰的复原图像。"}]},{"name":"p","data":[{"name":"text","data":"现实场景中盲复原的应用场景更广泛。早期的研究大多使用图像先验,包括全变差、重尾梯度先验或超拉普拉斯先验,它们通常以由粗到细的方式应用于图像,如Pan等人提出了基于图像暗通道先验的模糊核估计方法"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"2","type":"bibr","rid":"b2","data":[{"name":"text","data":"2"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",Levin等利用一种超拉普拉斯先验建模图像的梯度来估计模糊核"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"3","type":"bibr","rid":"b3","data":[{"name":"text","data":"3"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"text","data":"近年来,随着深度学习算法的发展,以卷积神经网络(Convolutional Neural Network, CNN)为代表的深度学习算法被大量应用到图像盲去模糊领域。相比于早期根据图像先验信息的盲去模糊算法,深度学习算法可以做到比图像先验更好的效果。Xu等人引入了一种新颖的、可分离结构的卷积结构来进行反卷积,取得了不错的去模糊效果"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"4","type":"bibr","rid":"b4","data":[{"name":"text","data":"4"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。Su等人利用CNN进行端到端训练,利用视频中帧与帧之间信息,实现了视频去模糊"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"5","type":"bibr","rid":"b5","data":[{"name":"text","data":"5"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"text","data":"在真实数据集上,由于图像模糊核未知,文献["},{"name":"xref","data":{"text":"6","type":"bibr","rid":"b6","data":[{"name":"text","data":"6"}]}},{"name":"text","data":"]和文献["},{"name":"xref","data":{"text":"7","type":"bibr","rid":"b7","data":[{"name":"text","data":"7"}]}},{"name":"text","data":"]中提出利用CNN预测图像模糊核,实现模糊数据集的合成,最终实现了图像去模糊。然而,核估计涉及到几个问题。首先,假设简单的核卷积不能模拟一些具有挑战性的情况,如闭塞区域或深度变化。其次,核估计过程是微妙的,对噪声和饱和度敏感,所以模糊模型必须花费大量精力进行精心设计。第三,为动态场景中的每个像素寻找空间变化的模糊核需要大量的内存和算力。当模糊核参数无法进行准确估计时,上述方法都无法获得理想的效果"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。因此,文献["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]和文献["},{"name":"xref","data":{"text":"9","type":"bibr","rid":"b9","data":[{"name":"text","data":"9"}]}},{"name":"text","data":"]摒弃了模糊核的估计过程,直接使用CNN实现了端到端的动态去模糊。"}]},{"name":"p","data":[{"name":"text","data":"2014年,Goodfellow等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"10","type":"bibr","rid":"b10","data":[{"name":"text","data":"10"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出了生成对抗网络(Generative Adversarial Networks, GAN)。GAN由两个相互竞争的网络构成,一个称为生成器,一个称为判别器。生成器负责接收随机噪声输入,然后合成数据样本,它的目标是令其尽量像正式数据样本,以“欺骗”判别器。判别器负责判断输入数据是生成器合成的“伪造”样本还是真实样本,它的目标是尽量将二者区分开。一个好的生成对抗网络目标就是让判别器判断真伪的概率接近0.5,即无法判断是否是生成器产生的样本。"}]},{"name":"p","data":[{"name":"text","data":"由于性能强大,GAN很快被用于图像去模糊领域。生成对抗网络中的生成器负责接受模糊图片,将其复原,目标是生成类似清晰图像的去模糊图像,以骗过判别器;而判别器负责分别接收原始清晰图片以及生成器去模糊后的图片,尽量将二者区分。"}]},{"name":"p","data":[{"name":"text","data":"Mao等人针对的是标准GAN生成的图片质量不高以及训练过程不稳定这两个缺陷进行改进,在判别器中使用最小二乘损失,提出了LSGAN,能够生成较高质量的图像"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"b11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。Johnson等人提出用于风格迁移任务的网络,提出了感知损失函数,可以较好地衡量模型的质量"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"b12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。Kupyn等人提出DeblurGAN"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"b13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",运用条件生成式对抗网络和内容损失函数(Content Loss Function)实现了运动图像模糊的去除。由于单个神经网络同时用于模糊与清晰数据的训练得到的合成模糊图像不能精确模拟真实场景的模糊过程,Zhang等人提出利用两个GAN,将其中一个负责图像的模糊,另一个负责图像的去模糊,实现了真实模糊去模糊"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"b14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"text","data":"受到上述研究的启发,本文对GAN进行了改进。首先,改进了PatchGAN的结构,在网络参数只增加2.38%的前提下,将其最底层感受野提升至原先的两倍以上。其次,改进了残差块的结构,增加了卷积层数量,用以提升复原图像的质量。最后,基于GOPRO数据集和Lai数据集的仿真结果验证了本文提出的算法的有效性。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2"}],"title":[{"name":"text","data":"图像模糊与去模糊"}],"level":"1","id":"s2"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.1"}],"title":[{"name":"text","data":"图像的模糊"}],"level":"2","id":"s2-1"}},{"name":"p","data":[{"name":"text","data":"图像模糊模型可以表示为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"1"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714507&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714507&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714507&type=middle"}}}],"id":"yjyxs-36-12-1693-E1"}}]},{"name":"p","data":[{"name":"text","data":"其中:"},{"name":"italic","data":[{"name":"text","data":"I"}]},{"name":"sub","data":[{"name":"text","data":"B"}]},{"name":"text","data":"为模糊后的图像,"},{"name":"italic","data":[{"name":"text","data":"I"}]},{"name":"sub","data":[{"name":"text","data":"S"}]},{"name":"text","data":"为原图像,"},{"name":"italic","data":[{"name":"text","data":"K"}]},{"name":"text","data":"为卷积核,"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"text","data":"为加性噪声,*为卷积操作。"}]},{"name":"p","data":[{"name":"text","data":"另外,模糊图像也可以通过逐帧模糊产生。对于模糊空间变化的图像,没有相机响应函数(Camera Response Function, CRF)的估计技术"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"15","type":"bibr","rid":"b15","data":[{"name":"text","data":"15"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",CRF可以近似为已知的CRF的平均值"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"b14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",如公式(2)所示:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"2"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714510&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714510&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714510&type=middle"}}}],"id":"yjyxs-36-12-1693-E2"}}]},{"name":"p","data":[{"name":"text","data":"其中"},{"name":"italic","data":[{"name":"text","data":"γ"}]},{"name":"text","data":"是一个参数,一般认为其等于2.2。潜在的清晰图像"},{"name":"italic","data":[{"name":"text","data":"I"}]},{"name":"sub","data":[{"name":"text","data":"S("},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":")"}]},{"name":"text","data":"可通过观察到的清晰图像"},{"name":"italic","data":[{"name":"text","data":"I"}]},{"name":"sub","data":[{"name":"text","data":"S′("},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":")"}]},{"name":"text","data":"得到。仿真的模糊图像"},{"name":"italic","data":[{"name":"text","data":"I"}]},{"name":"sub","data":[{"name":"text","data":"B"}]},{"name":"text","data":"可以通过式(3)得到:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"3"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714513&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714513&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714513&type=middle"}}}],"id":"yjyxs-36-12-1693-E3"}}]},{"name":"p","data":[{"name":"text","data":"其中: "},{"name":"italic","data":[{"name":"text","data":"M"}]},{"name":"text","data":"代表清晰帧的个数,"},{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"代表某个时间,"},{"name":"italic","data":[{"name":"text","data":"I"}]},{"name":"sub","data":[{"name":"text","data":"S("},{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":")"}]},{"name":"text","data":"代表时间"},{"name":"italic","data":[{"name":"text","data":"t"}]},{"name":"text","data":"对应的清晰帧。"}]},{"name":"p","data":[{"name":"text","data":"而真实的模糊图像实际上是多帧清晰图像的集成"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"b16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",可表示为式(4):"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"4"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714516&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714516&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714516&type=middle"}}}],"id":"yjyxs-36-12-1693-E4"}}]},{"name":"p","data":[{"name":"text","data":"其中"},{"name":"italic","data":[{"name":"text","data":"T"}]},{"name":"text","data":"为曝光时间周期。"}]},{"name":"p","data":[{"name":"text","data":"现实世界的真实模糊图像如"},{"name":"xref","data":{"text":"图 1","type":"fig","rid":"Figure1","data":[{"name":"text","data":"图 1"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure1","caption":[{"lang":"zh","label":[{"name":"text","data":"图1"}],"title":[{"name":"text","data":"现实生活中的运动模糊图片"}]},{"lang":"en","label":[{"name":"text","data":"Fig 1"}],"title":[{"name":"text","data":"Motion blurred images in real life"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714518&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714518&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714518&type=middle"}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.2"}],"title":[{"name":"text","data":"图像的去模糊"}],"level":"2","id":"s2-2"}},{"name":"p","data":[{"name":"text","data":"图像去模糊就是对给定的模糊图像进行复原,得出相应的原始图像"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"17","type":"bibr","rid":"b17","data":[{"name":"text","data":"17"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"text","data":"非盲去模糊是指通过给定的已知模糊核进行图像的去模糊,而盲去模糊问题是指从给定噪声图像Y中估计出原图像"},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"和模糊核"},{"name":"italic","data":[{"name":"text","data":"Z"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"text","data":"盲去模糊过程可以表示为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"5"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714521&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714521&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714521&type=middle"}}}],"id":"yjyxs-36-12-1693-E5"}}]},{"name":"p","data":[{"name":"text","data":"其中: "},{"name":"italic","data":[{"name":"text","data":"φ"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":")和"},{"name":"italic","data":[{"name":"text","data":"θ"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"Z"}]},{"name":"text","data":")分别是预期的清晰图像的正则化项和可能的模糊核。"}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3"}],"title":[{"name":"text","data":"相关工作"}],"level":"1","id":"s3"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.1"}],"title":[{"name":"text","data":"生成对抗网络"}],"level":"2","id":"s3-1"}},{"name":"p","data":[{"name":"text","data":"生成对抗网络(GAN)中包含两个相互竞争、相互对抗的网络——生成器和判别器("},{"name":"xref","data":{"text":"图 2","type":"fig","rid":"Figure2","data":[{"name":"text","data":"图 2"}]}},{"name":"text","data":")。GAN中的对抗思想可以追溯到博弈论的纳什均衡,对抗的双方分别是生成器和判别器。二者对抗的目标函数可以描述为:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"6"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714523&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714523&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714523&type=middle"}}}],"id":"yjyxs-36-12-1693-E6"}}]},{"name":"fig","data":{"id":"Figure2","caption":[{"lang":"zh","label":[{"name":"text","data":"图2"}],"title":[{"name":"text","data":"GAN的结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig 2"}],"title":[{"name":"text","data":"Structure of GAN"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714527&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714527&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714527&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"其中: "},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"表示来自"},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"sub","data":[{"name":"text","data":"data("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")"}]},{"name":"text","data":"真实样本,"},{"name":"italic","data":[{"name":"text","data":"E"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"~"},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"text","data":"data("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")"}]},{"name":"text","data":"为输入清晰图像的期望,"},{"name":"italic","data":[{"name":"text","data":"D"}]},{"name":"text","data":"(·)表示判别器"},{"name":"italic","data":[{"name":"text","data":"D"}]},{"name":"text","data":"的输出,"},{"name":"italic","data":[{"name":"text","data":"G"}]},{"name":"text","data":"(·)表示生成器"},{"name":"italic","data":[{"name":"text","data":"G"}]},{"name":"text","data":"的输出。"}]},{"name":"p","data":[{"name":"text","data":"GAN自发明以来,一直是深度学习领域研究的重点,又有多种变体,如DCGAN,将生成对抗网络与卷积神经网络结合,几乎完全使用卷积神经网络代替全连接层;条件生成对抗网络CGAN,在原始GAN的输入上进行改进,将额外条件信息如标签在输入阶段即传递给生成器与判别器;WGAN,在损失函数方面对GAN进行改进,提出wassertein距离损失函数与权重截断(Weight Clipping)措施,进一步提升了GAN的性能"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"18","type":"bibr","rid":"b18","data":[{"name":"text","data":"18"}]}},{"name":"text","data":"]"}]},{"name":"text","data":";WGAN-GP"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"19","type":"bibr","rid":"b19","data":[{"name":"text","data":"19"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",在WGAN基础上进行改进,提出权重惩罚措施,有效防止WGAN可能发生的梯度消失、梯度爆炸以及权重限制困难等问题。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.2"}],"title":[{"name":"text","data":"残差块及其改进"}],"level":"2","id":"s3-2"}},{"name":"p","data":[{"name":"text","data":"残差块"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"20","type":"bibr","rid":"b20","data":[{"name":"text","data":"20"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"包括:两个权重层,中间包含一个ReLU激活函数,然后是一个跳跃连接块,之后是一个ReLU激活函数。跳跃连接块可实现梯度的跨层传播,有助于克服梯度衰减现象。通过添加残差块,可以在加深网络结构的情况下,较好地解决梯度消失和梯度爆炸的问题。残差块结构如"},{"name":"xref","data":{"text":"图 3","type":"fig","rid":"Figure3","data":[{"name":"text","data":"图 3"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure3","caption":[{"lang":"zh","label":[{"name":"text","data":"图3"}],"title":[{"name":"text","data":"残差块的结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig 3"}],"title":[{"name":"text","data":"Structure of res-block"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714533&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714533&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714533&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"本文改进的残差块包括:3个卷积层,每个卷积层都是3×3的卷积核。使用两个ReLU激活函数,这样可以达到较快的收敛速度,并且在第一个卷积层与第二个卷积层之间添加一个概率为0.5的Dropout层,这样有助于防止模型过拟合,同时加快模型训练速度。最后是一个跳跃连接模块,有助于解决梯度消失问题以及梯度爆炸问题。同时由于BN层已被证明会增加计算复杂性,并且降低性能"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",因此本文的判别器去除了批归一化(Batch Normalization, BN)层,同时,本研究领域内使用深度学习去模糊的研究,大都使用小批次进行训练,如Nah"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等,训练批次为2;Kupyn"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"b13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等提出的DeblurGAN, 训练批次为1;Zhang"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"b14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等提出的基于真实模糊去模糊,批次为4;使用小批次训练时不适合使用批归一化层。结构如"},{"name":"xref","data":{"text":"图 4","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure4","caption":[{"lang":"zh","label":[{"name":"text","data":"图4"}],"title":[{"name":"text","data":"改进残差块的结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig 4"}],"title":[{"name":"text","data":"Structure of improved Res-block"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714537&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714537&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714537&type=middle"}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.3"}],"title":[{"name":"text","data":"损失函数"}],"level":"2","id":"s3-3"}},{"name":"p","data":[{"name":"text","data":"本文使用WGAN "},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"18","type":"bibr","rid":"b18","data":[{"name":"text","data":"18"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"中的Wassertein距离为判别器的损失函数,其定义如下:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"7"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714541&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714541&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714541&type=middle"}}}],"id":"yjyxs-36-12-1693-E7"}}]},{"name":"p","data":[{"name":"text","data":"其中,"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"和"},{"name":"italic","data":[{"name":"text","data":"y"}]},{"name":"text","data":"分别表示真实样本和生成样本,∏("},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"sub","data":[{"name":"text","data":"data"}]},{"name":"text","data":", "},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"sub","data":[{"name":"text","data":"g"}]},{"name":"text","data":")是"},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"sub","data":[{"name":"text","data":"data"}]},{"name":"text","data":"和"},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"sub","data":[{"name":"text","data":"g"}]},{"name":"text","data":"的联合分布的集合,("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":", "},{"name":"italic","data":[{"name":"text","data":"y"}]},{"name":"text","data":")~"},{"name":"italic","data":[{"name":"text","data":"γ"}]},{"name":"text","data":"表示其中的采样,inf表示对采样出的真实样本和生成样本的距离期望,即"},{"name":"italic","data":[{"name":"text","data":"E"}]},{"name":"sub","data":[{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":", "},{"name":"italic","data":[{"name":"text","data":"y"}]},{"name":"text","data":")~"},{"name":"italic","data":[{"name":"text","data":"γ"}]}]},{"name":"text","data":"[‖"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"-"},{"name":"italic","data":[{"name":"text","data":"y"}]},{"name":"text","data":"‖]取下限。"}]},{"name":"p","data":[{"name":"text","data":"同时,本文使用内容损失"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"b12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"(Content loss)作为生成器的损失函数。内容损失是一种基于生成图像和目标图像的CNN特征图差异的L2损失,不同于普通的L2损失,内容损失通过预训练的网络某一层的输出特征来定义:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"8"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714544&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714544&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714544&type=middle"}}}],"id":"yjyxs-36-12-1693-E8"}}]},{"name":"p","data":[{"name":"text","data":"其中: "},{"name":"italic","data":[{"name":"text","data":"Φ"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":", "},{"name":"italic","data":[{"name":"text","data":"j"}]}]},{"name":"text","data":"代表通过预训练卷积神经网络提取的特征图,本文使用的预训练模型为VGG16。"},{"name":"italic","data":[{"name":"text","data":"W"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":", "},{"name":"italic","data":[{"name":"text","data":"j"}]}]},{"name":"text","data":"和"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":", "},{"name":"italic","data":[{"name":"text","data":"j"}]}]},{"name":"text","data":"为特征图的大小。"}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4"}],"title":[{"name":"text","data":"网络结构"}],"level":"1","id":"s4"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4.1"}],"title":[{"name":"text","data":"生成器的网络结构"}],"level":"2","id":"s4-1"}},{"name":"p","data":[{"name":"text","data":"受到Johnson"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"b12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等提出的用于风格迁移任务的网络的启发,本文生成器网络结构如"},{"name":"xref","data":{"text":"图 5","type":"fig","rid":"Figure5","data":[{"name":"text","data":"图 5"}]}},{"name":"text","data":"所示。其中的编码器层(Encoder)和解码器层(Decoder)均包含3层卷积层,每个卷积层后面还包括一个ReLU激活函数层。生成器中所有卷积层的填充方式均为same。最后的激活函数使用tanh激活函数,除此层外,生成器的激活函数均为ReLU激活函数。在这些结构之上,生成器中还包含一个跳跃连接块,用于解决由网络深度过深带来的梯度消失、梯度爆炸等问题。"}]},{"name":"fig","data":{"id":"Figure5","caption":[{"lang":"zh","label":[{"name":"text","data":"图5"}],"title":[{"name":"text","data":"生成器的网络结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig 5"}],"title":[{"name":"text","data":"Network structure of generator"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714547&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714547&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714547&type=middle"}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4.2"}],"title":[{"name":"text","data":"判别器的网络结构"}],"level":"2","id":"s4-2"}},{"name":"p","data":[{"name":"text","data":"PatchGAN是由Phillip Isola等"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"21","type":"bibr","rid":"b21","data":[{"name":"text","data":"21"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出的一种马尔科夫判别器。马尔科夫判别器可以将图像有效地建模为马尔科夫随机场。PatchGAN判别器试图对图像中的每个"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"text","data":"×"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"text","data":"块进行分类,以确定其真假,在图像上卷积运行这个鉴别器,对所有响应进行平均,作为最终的判别器输出。Patch通过5层卷积层的叠加,将最底层卷积层的感受野扩展为70×70。"}]},{"name":"p","data":[{"name":"text","data":"受到PatchGAN判别器的启发,本文对其进行改进,在参数数量只增加2.38%的前提下,将最底层感受野提升至142,而运行时间几乎没有增加。在PatchGAN中,最后对网络输出的特征图取均值作为判别器的最后输出。为了进一步降低算法复杂度,改进的网络在网络最后使用全局平均池化层代替均值操作,同样可以做到求取特征图均值的效果。PatchGAN结构以及本文改进的PatchGAN结构如"},{"name":"xref","data":{"text":"表 1","type":"table","rid":"Table1","data":[{"name":"text","data":"表 1"}]}},{"name":"text","data":"和"},{"name":"xref","data":{"text":"表 2","type":"table","rid":"Table2","data":[{"name":"text","data":"表 2"}]}},{"name":"text","data":"所示。"}]},{"name":"table","data":{"id":"Table1","caption":[{"lang":"zh","label":[{"name":"text","data":"表1"}],"title":[{"name":"text","data":"PatchGAN结构图"}]},{"lang":"en","label":[{"name":"text","data":"Table 1"}],"title":[{"name":"text","data":"Structure of PatchGAN"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"层"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"特征图尺寸"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"步长"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"参数"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"感受野"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"Input"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"256×265×3"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"-"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"-"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"-"}]}],[{"align":"center","data":[{"name":"text","data":"Conv"}]},{"align":"center","data":[{"name":"text","data":"128×128×64"}]},{"align":"center","data":[{"name":"text","data":"2"}]},{"align":"center","data":[{"name":"text","data":"3 136"}]},{"align":"center","data":[{"name":"text","data":"70"}]}],[{"align":"center","data":[{"name":"text","data":"Conv"}]},{"align":"center","data":[{"name":"text","data":"64×64×128"}]},{"align":"center","data":[{"name":"text","data":"2"}]},{"align":"center","data":[{"name":"text","data":"131 200"}]},{"align":"center","data":[{"name":"text","data":"34"}]}],[{"align":"center","data":[{"name":"text","data":"Conv"}]},{"align":"center","data":[{"name":"text","data":"32×32×256"}]},{"align":"center","data":[{"name":"text","data":"2"}]},{"align":"center","data":[{"name":"text","data":"524 544"}]},{"align":"center","data":[{"name":"text","data":"16"}]}],[{"align":"center","data":[{"name":"text","data":"Conv"}]},{"align":"center","data":[{"name":"text","data":"32×32×512"}]},{"align":"center","data":[{"name":"text","data":"1"}]},{"align":"center","data":[{"name":"text","data":"2 097 644"}]},{"align":"center","data":[{"name":"text","data":"7"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"Conv"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"16×16×1"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"1"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"8 193"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"4"}]}]],"foot":[]}]}},{"name":"table","data":{"id":"Table2","caption":[{"lang":"zh","label":[{"name":"text","data":"表2"}],"title":[{"name":"text","data":"本文提出的改进PatchGAN结构图"}]},{"lang":"en","label":[{"name":"text","data":"Table 2"}],"title":[{"name":"text","data":"Structure of improved PatchGAN proposed in this paper"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"层"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"特征图尺寸"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"步长"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"参数"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"感受野"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"Input"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"256×265×3"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"-"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"-"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"-"}]}],[{"align":"center","data":[{"name":"text","data":"Conv"}]},{"align":"center","data":[{"name":"text","data":"128×128×64"}]},{"align":"center","data":[{"name":"text","data":"2"}]},{"align":"center","data":[{"name":"text","data":"3 136"}]},{"align":"center","data":[{"name":"text","data":"142"}]}],[{"align":"center","data":[{"name":"text","data":"Conv"}]},{"align":"center","data":[{"name":"text","data":"64×64×64"}]},{"align":"center","data":[{"name":"text","data":"2"}]},{"align":"center","data":[{"name":"text","data":"65 600"}]},{"align":"center","data":[{"name":"text","data":"70"}]}],[{"align":"center","data":[{"name":"text","data":"Conv"}]},{"align":"center","data":[{"name":"text","data":"32×32×128"}]},{"align":"center","data":[{"name":"text","data":"2"}]},{"align":"center","data":[{"name":"text","data":"131 200"}]},{"align":"center","data":[{"name":"text","data":"34"}]}],[{"align":"center","data":[{"name":"text","data":"Conv"}]},{"align":"center","data":[{"name":"text","data":"16×16×256"}]},{"align":"center","data":[{"name":"text","data":"2"}]},{"align":"center","data":[{"name":"text","data":"524 544"}]},{"align":"center","data":[{"name":"text","data":"16"}]}],[{"align":"center","data":[{"name":"text","data":"Conv"}]},{"align":"center","data":[{"name":"text","data":"16×16×512"}]},{"align":"center","data":[{"name":"text","data":"1"}]},{"align":"center","data":[{"name":"text","data":"2 097 644"}]},{"align":"center","data":[{"name":"text","data":"7"}]}],[{"align":"center","data":[{"name":"text","data":"Conv"}]},{"align":"center","data":[{"name":"text","data":"16×16×1"}]},{"align":"center","data":[{"name":"text","data":"1"}]},{"align":"center","data":[{"name":"text","data":"8 193"}]},{"align":"center","data":[{"name":"text","data":"4"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"Pooling"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"1"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"-"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"-"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"-"}]}]],"foot":[]}]}}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5"}],"title":[{"name":"text","data":"实验结果与分析"}],"level":"1","id":"s5"}},{"name":"p","data":[{"name":"text","data":"本文的仿真实验在配置有Tesla-P100的服务器上进行,服务器系统为CentOS 7,使用TensorFlow2框架与Adam优化器,初始学习率设置为10"},{"name":"sup","data":[{"name":"text","data":"-4"}]},{"name":"text","data":"。经过若干次迭代训练,最终学习率线性衰减到10"},{"name":"sup","data":[{"name":"text","data":"-7"}]},{"name":"text","data":"。"}]},{"name":"p","data":[{"name":"text","data":"由于本文网络结构的输入要求,需要将训练数据集中的图片剪裁成256×256大小的图片。而生成器中全部都是卷积层,不存在全连接层,属于全卷积(Fully Convolutional Networks, FCN)神经网络,可以应用于任意大小的图像。"}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5.1"}],"title":[{"name":"text","data":"数据集信息"}],"level":"2","id":"s5-1"}},{"name":"p","data":[{"name":"text","data":"本文采用GOPRO数据集和Lai数据集"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"24","type":"bibr","rid":"b24","data":[{"name":"text","data":"24"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"对本文算法的复原效果进行测试。"}]},{"name":"p","data":[{"name":"text","data":"GOPRO数据集是目前进行图像去模糊研究的最常用的数据集之一,其使用GOPRO4相机拍摄240帧/s的视频,然后生成模糊图片来模拟真实的运动模糊。该数据集由3 214对清晰和模糊的图像组成,每张图像的分辨率都是1 280×720。我们采用其中2 103张图片作为训练集,其余1 111张图片作为测试集,并将其剪裁为256×256大小的图片,作为神经网络的输入。"}]},{"name":"p","data":[{"name":"text","data":"Lai数据集是一系列真实世界的模糊图像,是在真实场景中由不同的相机、不同的设置与不同的用户处捕捉的,没有清晰的对照物,无法进行定量分析。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5.2"}],"title":[{"name":"text","data":"图像质量客观评价"}],"level":"2","id":"s5-2"}},{"name":"p","data":[{"name":"text","data":"图像复原仿真实验的结果通常选用峰值信噪比(Peak Signal-to-Noise Ratio, PSNR)和结构相似性(Structural Similarity, SSIM)两项指标进行衡量。"}]},{"name":"p","data":[{"name":"xref","data":{"text":"表 1","type":"table","rid":"Table1","data":[{"name":"text","data":"表 1"}]}},{"name":"text","data":"的实验结果表明,在GOPRO数据集上,本文提出的基于深度残差生成对抗网络的运动去模糊算法具有更好的复原能力,可达到更高的PSNR和较高的SSIM。部分实验效果如"},{"name":"xref","data":{"text":"图 6","type":"fig","rid":"Figure6","data":[{"name":"text","data":"图 6"}]}},{"name":"text","data":"所示,其中左侧部分是模糊图像,中间部分是去模糊效果图,最右侧是清晰图像。"}]},{"name":"fig","data":{"id":"Figure6","caption":[{"lang":"zh","label":[{"name":"text","data":"图6"}],"title":[{"name":"text","data":"GOPRO数据集复原效果"}]},{"lang":"en","label":[{"name":"text","data":"Fig 6"}],"title":[{"name":"text","data":"Restoration effects of GOPRO dataset"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714548&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714548&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714548&type=middle"}]}},{"name":"table","data":{"id":"Table3","caption":[{"lang":"zh","label":[{"name":"text","data":"表3"}],"title":[{"name":"text","data":"GOPRO数据集上不同算法质量评估结果"}]},{"lang":"en","label":[{"name":"text","data":"Table 3"}],"title":[{"name":"text","data":"Quality evaluation results of different algorithms on GOPRO dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"方法"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"PSNR/dB"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"SSIM"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"Kim"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"22","type":"bibr","rid":"b22","data":[{"name":"text","data":"22"}]}},{"name":"text","data":"]"}]}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"23.64"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.823 9"}]}],[{"align":"center","data":[{"name":"text","data":"Sun"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"24.64"}]},{"align":"center","data":[{"name":"text","data":"0.842 9"}]}],[{"align":"center","data":[{"name":"text","data":"Wieschollek"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"23","type":"bibr","rid":"b23","data":[{"name":"text","data":"23"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"25.19"}]},{"align":"center","data":[{"name":"text","data":"0.779 4"}]}],[{"align":"center","data":[{"name":"text","data":"Kim"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"25","type":"bibr","rid":"b25","data":[{"name":"text","data":"25"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"26.82"}]},{"align":"center","data":[{"name":"text","data":"0.842 5"}]}],[{"align":"center","data":[{"name":"text","data":"Su"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"5","type":"bibr","rid":"b5","data":[{"name":"text","data":"5"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"27.31"}]},{"align":"center","data":[{"name":"text","data":"0.825 5"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"本文算法"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"28.31"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.831 7"}]}]],"foot":[]}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5.3"}],"title":[{"name":"text","data":"Lai数据集主观评价"}],"level":"2","id":"s5-3"}},{"name":"p","data":[{"name":"xref","data":{"text":"图 7","type":"fig","rid":"Figure7","data":[{"name":"text","data":"图 7"}]}},{"name":"text","data":"是Lai数据集中测试图像face2去模糊效果比较图,第一行图片从左至右依次是模糊图像,Sun"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"26","type":"bibr","rid":"b26","data":[{"name":"text","data":"26"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等、Krishnan"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"27","type":"bibr","rid":"b27","data":[{"name":"text","data":"27"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等、Whyte"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"28","type":"bibr","rid":"b28","data":[{"name":"text","data":"28"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等的结果;第二行图像从左至右依次是Nah"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等、Pan"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"2","type":"bibr","rid":"b2","data":[{"name":"text","data":"2"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等、Xu"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"29","type":"bibr","rid":"b29","data":[{"name":"text","data":"29"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等、本文算法的结果。从图中可以看出,本文提出的算法能够很好地获得复原效果,从图中可以清楚地获取图片人物的细节信息。"}]},{"name":"fig","data":{"id":"Figure7","caption":[{"lang":"zh","label":[{"name":"text","data":"图7"}],"title":[{"name":"text","data":"Lai数据集去模糊效果定性比较"}]},{"lang":"en","label":[{"name":"text","data":"Fig 7"}],"title":[{"name":"text","data":"Qualitative comparison of deblurring effects of Lai datasets"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714550&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714550&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714550&type=middle"}]}}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"6"}],"title":[{"name":"text","data":"结论"}],"level":"1","id":"s6"}},{"name":"p","data":[{"name":"text","data":"本文对图像去模糊领域进行了研究,提出了一种基于深度残差生成对抗网络的运动去模糊算法,实现了精度更高的运动模糊图像盲复原。改进了残差块的结构,使之能更好的地适应图像去模糊领域的应用,改进了PatchGAN的结构,在网络参数只增加2.38%的前提下,将其最底层感受野提升至原先的两倍以上。实验结果表明,在GOPRO数据集中,本文提出算法复原的图像可达到较高的客观评价指标,峰值信噪比PSNR可达到28.31 dB,结构相似性SSIM可达到0.831 7,可以恢复出较高质量的清晰图像。在Lai数据集上,复原的图像可以达到较好的主观视觉效果。"}]}]}],"footnote":[],"reflist":{"title":[{"name":"text","data":"参考文献"}],"data":[{"id":"b1","label":"1","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"王 海峰"},{"name":"text","data":"\t , "},{"name":"text","data":"李 萍"},{"name":"text","data":"\t , "},{"name":"text","data":"王 博"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"灰狼算法优化BP神经网络的图像去模糊复原"},{"name":"text","data":" . "},{"name":"text","data":"液晶与显示"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"34"},{"name":"text","data":" ("},{"name":"text","data":"10"},{"name":"text","data":" ):"},{"name":"text","data":"992"},{"name":"text","data":" -"},{"name":"text","data":"999"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3788/YJYXS20193410.0992"}],"href":"http://doi.org/10.3788/YJYXS20193410.0992"}},{"name":"text","data":"."}],"title":"灰狼算法优化BP神经网络的图像去模糊复原"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"H F WANG"},{"name":"text","data":"\t , "},{"name":"text","data":"P LI"},{"name":"text","data":"\t , "},{"name":"text","data":"B WANG"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Image deblurring restoration of BP neural network based on grey wolf algorithm"},{"name":"text","data":" . "},{"name":"text","data":"Chinese Journal of Liquid Crystals and Displays"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"34"},{"name":"text","data":" ("},{"name":"text","data":"10"},{"name":"text","data":" ):"},{"name":"text","data":"992"},{"name":"text","data":" -"},{"name":"text","data":"999"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3788/YJYXS20193410.0992"}],"href":"http://doi.org/10.3788/YJYXS20193410.0992"}},{"name":"text","data":"."}],"title":"Image deblurring restoration of BP neural network based on grey wolf algorithm"}]},{"id":"b2","label":"2","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"PAN J S, SUN D Q, PFISTER H, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Blind image deblurring using dark channel prior[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2016 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Las Vegas, NV, USA: IEEE, 2016: 1628-1636."}]}]}]},{"id":"b3","label":"3","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"LEVIN A, WEISS Y, DURAND F, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Understanding and evaluating blind deconvolution algorithms[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2009 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Miami, FL, USA: IEEE, 2009: 1964-1971."}]}]}]},{"id":"b4","label":"4","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"L XU"},{"name":"text","data":"\t , "},{"name":"text","data":"J S J REN"},{"name":"text","data":"\t , "},{"name":"text","data":"C LIU"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Deep convolutional neural network for image deconvolution"},{"name":"text","data":" . "},{"name":"text","data":"Advances in Neural Information Processing Systems"},{"name":"text","data":" , "},{"name":"text","data":"2014"},{"name":"text","data":" . "},{"name":"text","data":"2"},{"name":"text","data":" "},{"name":"text","data":"1790"},{"name":"text","data":" -"},{"name":"text","data":"1798"},{"name":"text","data":"\t\t . "},{"name":"uri","data":{"text":[{"name":"text","data":"http://ieeexplore.ieee.org/document/6843274/"}],"href":"http://ieeexplore.ieee.org/document/6843274/"}},{"name":"text","data":"."}],"title":"Deep convolutional neural network for image deconvolution"}]},{"id":"b5","label":"5","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"SU S C, DELBRACIO M, WANG J, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Deep video deblurring for hand-held cameras[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Honolulu, HI, USA: IEEE, 2017: 237-246."}]}]}]},{"id":"b6","label":"6","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"XU L, JIA J Y. Two-phase kernel estimation for robust motion deblurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 11"},{"name":"italic","data":[{"name":"text","data":"th European Conference on Computer Vision"}]},{"name":"text","data":". Berlin, Heidelberg: Springer, 2010: 157-170."}]}]}]},{"id":"b7","label":"7","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"YAN Y Y, REN W Q, GUO Y F, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Image deblurring via extreme channels prior[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Honolulu, HI, USA: IEEE, 2017: 4003-4011."}]}]}]},{"id":"b8","label":"8","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"NAH S, KIM T H, LEE K M. Deep multi-scale convolutional neural network for dynamic scene deblurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Honolulu, HI, USA: IEEE, 2017: 3883-3891."}]}]}]},{"id":"b9","label":"9","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"TAO X, GAO H Y, SHEN X Y, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Scale-recurrent network for deep image deblurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2018 "},{"name":"italic","data":[{"name":"text","data":"IEEE"}]},{"name":"text","data":"/"},{"name":"italic","data":[{"name":"text","data":"CVF Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Salt Lake City, USA: IEEE, 2018: 8174-8182."}]}]}]},{"id":"b10","label":"10","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"I GOODFELLOW"},{"name":"text","data":"\t , "},{"name":"text","data":"J POUGET-ABADIE"},{"name":"text","data":"\t , "},{"name":"text","data":"M MIRZA"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Generative adversarial networks"},{"name":"text","data":" . "},{"name":"text","data":"Advances in Neural Information Processing Systems"},{"name":"text","data":" , "},{"name":"text","data":"2014"},{"name":"text","data":" . "},{"name":"text","data":"27"},{"name":"text","data":" "},{"name":"text","data":"2672"},{"name":"text","data":" -"},{"name":"text","data":"2680"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1145/3422622"}],"href":"http://doi.org/10.1145/3422622"}},{"name":"text","data":"."}],"title":"Generative adversarial networks"}]},{"id":"b11","label":"11","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"MAO X D, LI Q, XIE H R, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Least squares generative adversarial networks[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE International Conference on Computer Vision"}]},{"name":"text","data":". Venice, Italy: IEEE, 2017: 2794-2802."}]}]}]},{"id":"b12","label":"12","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"JOHNSON J, ALAHI A, LI F F. Perceptual losses for real-time style transfer and super-resolution[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 14"},{"name":"italic","data":[{"name":"text","data":"th European Conference on Computer Vision"}]},{"name":"text","data":". Amsterdam, The Netherlands: Springer, 2016: 694-711."}]}]}]},{"id":"b13","label":"13","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"KUPYN O, BUDZAN V, MYKHAILYCH M, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". DeblurGAN: blind motion deblurring using conditional adversarial networks[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2018 "},{"name":"italic","data":[{"name":"text","data":"IEEE"}]},{"name":"text","data":"/"},{"name":"italic","data":[{"name":"text","data":"CVF Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Salt Lake City, USA: IEEE, 2018: 8183-8192."}]}]}]},{"id":"b14","label":"14","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"ZHANG K H, LUO W H, ZHONG Y R, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Deblurring by realistic blurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2020 "},{"name":"italic","data":[{"name":"text","data":"IEEE"}]},{"name":"text","data":"/"},{"name":"italic","data":[{"name":"text","data":"CVF Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Seattle, WA, USA: IEEE, 2020: 2734-2743."}]}]}]},{"id":"b15","label":"15","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"Y W TAI"},{"name":"text","data":"\t , "},{"name":"text","data":"X G CHEN"},{"name":"text","data":"\t , "},{"name":"text","data":"S KIM"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Nonlinear camera response functions and image deblurring: theoretical analysis and practice"},{"name":"text","data":" . "},{"name":"text","data":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"name":"text","data":" , "},{"name":"text","data":"2013"},{"name":"text","data":" . "},{"name":"text","data":"35"},{"name":"text","data":" ("},{"name":"text","data":"10"},{"name":"text","data":" ):"},{"name":"text","data":"2498"},{"name":"text","data":" -"},{"name":"text","data":"2512"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/TPAMI.2013.40"}],"href":"http://doi.org/10.1109/TPAMI.2013.40"}},{"name":"text","data":"."}],"title":"Nonlinear camera response functions and image deblurring: theoretical analysis and practice"}]},{"id":"b16","label":"16","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"HIRSCH M, SCHULER C J, HARMELING S, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Fast removal of non-uniform camera shake[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2011 "},{"name":"italic","data":[{"name":"text","data":"International Conference on Computer Vision"}]},{"name":"text","data":". Barcelona, Spain: IEEE, 2011: 463-470."}]}]}]},{"id":"b17","label":"17","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"李 俊山"},{"name":"text","data":"\t , "},{"name":"text","data":"杨 亚威"},{"name":"text","data":"\t , "},{"name":"text","data":"张 姣"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"退化图像复原方法研究进展"},{"name":"text","data":" . "},{"name":"text","data":"液晶与显示"},{"name":"text","data":" , "},{"name":"text","data":"2018"},{"name":"text","data":" . "},{"name":"text","data":"33"},{"name":"text","data":" ("},{"name":"text","data":"8"},{"name":"text","data":" ):"},{"name":"text","data":"676"},{"name":"text","data":" -"},{"name":"text","data":"689"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3788/YJYXS20183308.0676"}],"href":"http://doi.org/10.3788/YJYXS20183308.0676"}},{"name":"text","data":"."}],"title":"退化图像复原方法研究进展"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"J S LI"},{"name":"text","data":"\t , "},{"name":"text","data":"Y W YANG"},{"name":"text","data":"\t , "},{"name":"text","data":"J ZHANG"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Progress of degraded image restoration methods"},{"name":"text","data":" . "},{"name":"text","data":"Chinese Journal of Liquid Crystals and Displays"},{"name":"text","data":" , "},{"name":"text","data":"2018"},{"name":"text","data":" . "},{"name":"text","data":"33"},{"name":"text","data":" ("},{"name":"text","data":"8"},{"name":"text","data":" ):"},{"name":"text","data":"676"},{"name":"text","data":" -"},{"name":"text","data":"689"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3788/YJYXS20183308.0676"}],"href":"http://doi.org/10.3788/YJYXS20183308.0676"}},{"name":"text","data":"."}],"title":"Progress of degraded image restoration methods"}]},{"id":"b18","label":"18","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"ARJOVSKY M, CHINTALA S, BOTTOU L. Wasserstein GAN[J]. "},{"name":"italic","data":[{"name":"text","data":"arXiv preprint arXiv"}]},{"name":"text","data":": 1701.07875, 2017."}]}]}]},{"id":"b19","label":"19","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"GULRAJANI I, AHMED F, ARJOVSKY M, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Improved training of wasserstein GANs[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 31"},{"name":"italic","data":[{"name":"text","data":"st International Conference on Neural Information Processing Systems"}]},{"name":"text","data":". Red Hook, NY, USA: ACM, 2017: 5769-5779."}]}]}]},{"id":"b20","label":"20","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"HE K M, ZHANG X Y, REN S Q, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Deep residual learning for image recognition[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2016 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Las Vegas, USA: IEEE, 2016: 770-778."}]}]}]},{"id":"b21","label":"21","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"ISOLA P, ZHU J Y, ZHOU T H, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Image-to-image translation with conditional adversarial networks[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Honolulu, HI, USA: IEEE, 2017: 5967-5976."}]}]}]},{"id":"b22","label":"22","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"KIM T H, AHN B, LEE K M. Dynamic scene deblurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2013 "},{"name":"italic","data":[{"name":"text","data":"IEEE International Conference on Computer Vision"}]},{"name":"text","data":". Sydney, NSW, Australia: IEEE, 2013: 3160-3167."}]}]}]},{"id":"b23","label":"23","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"WIESCHOLLEK P, HIRSCH M, SCHÖLKOPF B, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Learning blind motion deblurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE International Conference on Computer Vision"}]},{"name":"text","data":". Venice, Italy: IEEE, 2017: 231-240."}]}]}]},{"id":"b24","label":"24","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"LAI W S, HUANG J B, HU Z, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". A comparative study for single image blind deblurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2016 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". LAS Vegas, USA: IEEE, 2016: 1701-1709."}]}]}]},{"id":"b25","label":"25","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"KIM T H, LEE K M, SCHÖLKOPF B, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Online video deblurring via dynamic temporal blending network[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE International Conference on Computer Vision"}]},{"name":"text","data":". Venice, Italy: IEEE, 2017: 4058-4067."}]}]}]},{"id":"b26","label":"26","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"SUN J, CAO W F, XU Z B, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Learning a convolutional neural network for non-uniform motion blur removal[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2015 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Boston, MA, USA: IEEE, 2015: 769-777."}]}]}]},{"id":"b27","label":"27","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"KRISHNAN D, TAY T, FERGUS R. Blind deconvolution using a normalized sparsity measure[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the CVPR"}]},{"name":"text","data":" 2011. Colorado Springs, USA: IEEE, 2011: 233-240."}]}]}]},{"id":"b28","label":"28","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"WHYTE O, SIVIC J, ZISSERMAN A, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Non-uniform deblurring for shaken images[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2010 "},{"name":"italic","data":[{"name":"text","data":"IEEE Computer Society Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". San Francisco, California, USA: IEEE, 2010: 491-498."}]}]}]},{"id":"b29","label":"29","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"XU L, ZHENG S C, JIA J Y. Unnatural "},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"0"}]},{"name":"text","data":" sparse representation for natural image deblurring[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of"}]},{"name":"text","data":" 2013 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":". Portland, Oregon, USA: IEEE, 2013: 1107-1114."}]}]}]}]},"response":[],"contributions":[],"acknowledgements":[],"conflict":[],"supportedby":[],"articlemeta":{"doi":"10.37188/CJLCD.2021-0120","clc":[[{"name":"text","data":"TP391.4"}]],"dc":[],"publisherid":"yjyxs-36-12-1693","citeme":[],"fundinggroup":[{"lang":"zh","text":[{"name":"text","data":"国家自然科学基金(No.62001272);山东省自然科学基金(No.ZR2019BF022)"}]},{"lang":"en","text":[{"name":"text","data":"Supported by National Natural Science Foundation of China(No.62001272); Shandong Provincial Natural Science Foundation (No.ZR2019BF022)"}]}],"history":{"received":"2021-04-08","revised":"2021-06-17","opub":"2021-12-10"},"copyright":{"data":[{"lang":"zh","data":[{"name":"text","data":"版权所有©《液晶与显示》编辑部2021"}],"type":"copyright"},{"lang":"en","data":[{"name":"text","data":"Copyright ©2021 Chinese Journal of Liquid Crystals and Displays. All rights reserved."}],"type":"copyright"}],"year":"2021"}},"appendix":[],"type":"research-article","ethics":[],"backSec":[],"supplementary":[],"journalTitle":"液晶与显示","issue":"12","volume":"36","originalSource":[]}