{"defaultlang":"zh","titlegroup":{"articletitle":[{"lang":"zh","data":[{"name":"text","data":"基于密集循环网络的视网膜图像分割方法"}]},{"lang":"en","data":[{"name":"text","data":"Retinal image segmentation method based on dense cycle networks"}]}]},"contribgroup":{"author":[{"name":[{"lang":"zh","surname":"杨","givenname":"云","namestyle":"eastern","prefix":""},{"lang":"en","surname":"YANG","givenname":"Yun","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":"1"}],"role":["corresp","first-author"],"corresp":[{"rid":"cor1","lang":"zh","text":"杨云, E-mail: yangyun3190@163.com","data":[{"name":"text","data":"杨云, E-mail: yangyun3190@163.com"}]}],"bio":[{"lang":"zh","text":["杨云(1965-), 女, 陕西西安人, 博士, 教授, 2009年于陕西科技大学获得博士学位, 主要从事图像处理及智能信息处理方面的研究。E-mail: yangyun3190@163.com"],"graphic":[],"data":[[{"name":"bold","data":[{"name":"text","data":"杨云"}]},{"name":"text","data":"(1965-), 女, 陕西西安人, 博士, 教授, 2009年于陕西科技大学获得博士学位, 主要从事图像处理及智能信息处理方面的研究。E-mail: "},{"name":"text","data":"yangyun3190@163.com"}]]}],"email":"yangyun3190@163.com","deceased":false},{"name":[{"lang":"zh","surname":"周","givenname":"舒婕","namestyle":"eastern","prefix":""},{"lang":"en","surname":"ZHOU","givenname":"Shu-jie","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff2","text":"2"}],"role":[],"deceased":false},{"name":[{"lang":"zh","surname":"李","givenname":"程辉","namestyle":"eastern","prefix":""},{"lang":"en","surname":"LI","givenname":"Cheng-hui","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff2","text":"2"}],"role":[],"deceased":false},{"name":[{"lang":"zh","surname":"张","givenname":"娟娟","namestyle":"eastern","prefix":""},{"lang":"en","surname":"ZHANG","givenname":"Juan-juan","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff2","text":"2"}],"role":[],"deceased":false}],"aff":[{"id":"aff1","intro":[{"lang":"zh","label":"1","text":"陕西科技大学 人工智能研究所, 陕西 西安 710021","data":[{"name":"text","data":"陕西科技大学 人工智能研究所, 陕西 西安 710021"}]},{"lang":"en","label":"1","text":"Institute of Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China","data":[{"name":"text","data":"Institute of Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China"}]}]},{"id":"aff2","intro":[{"lang":"zh","label":"2","text":"陕西科技大学 电子信息与人工智能学院, 陕西 西安 710021","data":[{"name":"text","data":"陕西科技大学 电子信息与人工智能学院, 陕西 西安 710021"}]},{"lang":"en","label":"2","text":"College of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China","data":[{"name":"text","data":"College of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China"}]}]}]},"abstracts":[{"lang":"zh","data":[{"name":"p","data":[{"name":"text","data":"针对视网膜血管在分割过程容易出现细节特征信息丢失、血管轮廓模糊等问题,提出一种改进的循环分割对抗网络算法。该算法改进了分割器的网络模型,在U型网络上、下采样过程中添加了密集连接结构,充分保留了图像的特征信息,提升了模型的泛化能力以及鲁棒性,缓解了过度分割现象。为防止网络退化,将损失函数替换为最小二乘函数,提高了图像的分割质量,提升了网络模型训练的稳定性。实验结果表明,本文的网络模型在DRIVE以及CHASE数据集中,两者分割的准确性、敏感性分别达到了96.93%、84.30%以及96.94%、79.92%。该算法具有较好的网络泛化能力以及分割准确率,可以为疾病诊断提供重要的依据。"}]}]},{"lang":"en","data":[{"name":"p","data":[{"name":"text","data":"Aiming at the problems of loss of detailed feature information and blurred contours of blood vessels in the segmentation process of retinal vessels, an improved cycle segmentation confrontation network algorithm is proposed. 
The algorithm improves the segmenter's network model by adding dense connection structures to the up- and down-sampling paths of the U-Net, which fully retains image feature information, improves the generalization ability and robustness of the model, and alleviates over-segmentation. To prevent network degradation, the adversarial loss is replaced with a least-squares function, which improves segmentation quality and the stability of training. Experimental results show that the segmentation accuracy and sensitivity of the network reach 96.93% and 84.30% on the DRIVE dataset and 96.94% and 79.92% on the CHASE dataset, respectively. The algorithm has good generalization ability and segmentation accuracy, and can provide an important basis for disease diagnosis."}]}]}],"keyword":[{"lang":"zh","data":[[{"name":"text","data":"视网膜血管分割"}],[{"name":"text","data":"循环分割对抗网络"}],[{"name":"text","data":"U型网络"}],[{"name":"text","data":"密集连接结构"}],[{"name":"text","data":"损失函数"}]]},{"lang":"en","data":[[{"name":"text","data":"retinal blood vessel segmentation"}],[{"name":"text","data":"cycle segmentation adversarial networks"}],[{"name":"text","data":"U-Net"}],[{"name":"text","data":"densely connected structure"}],[{"name":"text","data":"loss function"}]]}],"highlights":[],"body":[{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"1"}],"title":[{"name":"text","data":"引言"}],"level":"1","id":"s1"}},{"name":"p","data":[{"name":"text","data":"眼底视网膜血管作为人体中唯一可以直接肉眼观察到的血管,是许多慢性疾病诊断的一项重要指标。传统方法凭借医生的个人经验和主观判断对视网膜血管分割存在诸多的弊端,如研究人员前期需要花费大量的时间和精力学习微细血管和整体结构的分割工作、不同研究人员在分割同一张视网膜血管图像时往往存在较大的差异性等。因此,将计算机作为辅助技术对于准确、快速分割视网膜血管图像起着至关重要的作用。"}]},{"name":"p","data":[{"name":"text","data":"目前研究人员根据是否需要人工标记训练集,将眼底视网膜图像的血管分割方法大致分为两类:无监督学习方法和监督学习方法。常见的无监督学习方法有血管跟踪法"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"1","type":"bibr","rid":"b1","data":[{"name":"text","data":"1"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、形态学处理方法"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"2","type":"bibr","rid":"b2","data":[{"name":"text","data":"2"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、匹配滤波方法"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"3","type":"bibr","rid":"b3","data":[{"name":"text","data":"3"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等。基于深度学习的分割方法是一种常见的监督学习方法,例如:Oliveira"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"4","type":"bibr","rid":"b4","data":[{"name":"text","data":"4"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等人提出了将平稳小波变换提供的多尺度分析与多尺度全卷积神经网络相结合的方法来分割视网膜血管,从而提高网络性能;Guo"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"5","type":"bibr","rid":"b5","data":[{"name":"text","data":"5"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等人提出了基于全卷积U-Net的网络结构,该方法摒弃了传统卷积神经网络的Dropout层,而是使用正则化方法来提高模型性能;Roy"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"6","type":"bibr","rid":"b6","data":[{"name":"text","data":"6"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等人提出了ReLayNet网络结构,该网络在训练过程中同时使用了交叉熵和Dice损失函数进行优化;Gu"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"7","type":"bibr","rid":"b7","data":[{"name":"text","data":"7"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等人提出了上下文编码器网络来捕获更多的深层信息,该网络结构在特征编码器和特征解码器之间加入了上下文提取器,该方法不仅适用于视网膜图像的分割还可以用于其他医
学图像的分割。"}]},{"name":"p","data":[{"name":"text","data":"由于视网膜血管中布满了丰富的细小血管,上述方法在分割过程中容易出现边缘细节粗糙、梯度复杂、识别分辨率低等问题。因此,本文在循环生成对抗网络的基础上进行改进设计,具体工作如下:"}]},{"name":"p","data":[{"name":"text","data":"(1) 使用生成对抗网络的特殊变体循环生成对抗网络框架,通过两个生成器、两个判别器进行对抗训练,优化分割模型并设计一种全新的基于循环分割对抗网络的视网膜血管分割方法。"}]},{"name":"p","data":[{"name":"text","data":"(2) 将生成网络替换成改进后的分割网络,分割网络的编码器、解码器部分的网络结构替换成U型网络和密集连接相结合的模型结构,保证在提高分割精度的同时减少模型分割所需要的时间。"}]},{"name":"p","data":[{"name":"text","data":"(3) 将对抗损失函数替换为最小二乘损失函数,在还原图像分割细节的同时,保留血管末梢的细节信息,提高分割效率。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2"}],"title":[{"name":"text","data":"相关知识"}],"level":"1","id":"s2"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.1"}],"title":[{"name":"text","data":"循环生成对抗网络"}],"level":"2","id":"s2-1"}},{"name":"p","data":[{"name":"text","data":"近些年,由Goodfellow"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等人提出的生成对抗网络(Generative Adversarial Network,GAN)超越了传统神经网络,可以生成更加真实、清晰的图像。在此基础上,研究人员对GAN的改进模型进行了设计与研究,例如CGAN"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"9","type":"bibr","rid":"b9","data":[{"name":"text","data":"9"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、DCGAN"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"10","type":"bibr","rid":"b10","data":[{"name":"text","data":"10"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"、WGAN"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"b11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等不同的模型结构,在图像处理任务中可获得更好的性能。2017年由Zhu"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"b12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等人提出的循环生成对抗网络(Cycle-consistent Generative Adversarial Networks,CycleGAN)是传统生成对抗网络的变体,它包含两个相互对称的生成对抗网络,并通过共用两个生成器、两个判别器来实现图像端到端的相互映射。CycleGAN对架构的扩展之处在于使用了循环一致性,其第一个生成器输出的图像可以用作第二个生成器的输入图像,第二个生成器的输出图像应与原始图像匹配,反之亦然。它最大的特点是不需要图片的一对一配对,就能将一张图像从源领域映射到目标领域,以此方式来提高训练网络的稳定性。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2.2"}],"title":[{"name":"text","data":"生成器以及判别器"}],"level":"2","id":"s2-2"}},{"name":"p","data":[{"name":"text","data":"CycleGAN网络结构是由生成器模型和判别器模型组成。生成器包括编码器、转换器以及解码器。编码器通过卷积层提取图像的特征信息;转换器由6个残差块组成,每个残差块包含两个卷积层,可以保留更多图像的原始信息;解码器使用3层反卷积操作将图像还原为原始尺寸。"}]},{"name":"p","data":[{"name":"text","data":"CycleGAN网络的判别器由5层卷积神经网络组成,可以从图像中提取特征信息并且预测其为原始图像或是生成器生成的图像。"}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3"}],"title":[{"name":"text","data":"本文方法"}],"level":"1","id":"s3"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.1"}],"title":[{"name":"text","data":"基于改进的DU-CycleGAN网络模型"}],"level":"2","id":"s3-1"}},{"name":"p","data":[{"name":"text","data":"DU-CycleGAN网络训练图如"},{"name":"xref","data":{"text":"图 1","type":"fig","rid":"Figure1","data":[{"name":"text","data":"图 
1"}]}},{"name":"text","data":"所示。该网络由两个分割器"},{"name":"italic","data":[{"name":"text","data":"G"}]},{"name":"text","data":"和"},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":"、两个判别器"},{"name":"italic","data":[{"name":"text","data":"D"},{"name":"sub","data":[{"name":"text","data":"X"}]}]},{"name":"text","data":"和"},{"name":"italic","data":[{"name":"text","data":"D"},{"name":"sub","data":[{"name":"text","data":"Y"}]}]},{"name":"text","data":"组成,该模型包含"},{"name":"italic","data":[{"name":"text","data":"G"}]},{"name":"text","data":":"},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"→"},{"name":"italic","data":[{"name":"text","data":"Y"}]},{"name":"text","data":"的映射以及"},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":":"},{"name":"italic","data":[{"name":"text","data":"Y"}]},{"name":"text","data":"→"},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"的逆映射,并且加入一个循环一致性损失函数以确保"},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"G"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"))≈"},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"。同时将原始图像和分割图像送入"},{"name":"italic","data":[{"name":"text","data":"D"},{"name":"sub","data":[{"name":"text","data":"X"}]}]},{"name":"text","data":"和"},{"name":"italic","data":[{"name":"text","data":"D"},{"name":"sub","data":[{"name":"text","data":"Y"}]}]},{"name":"text","data":"中来判别分割网络分割的真伪,这种循环的训练模式,分割器和判别器通过博弈式动态竞争,使判别器无法区分是真实的图像还是分割的图像,最终达到网络的动态均衡状态。"}]},{"name":"fig","data":{"id":"Figure1","caption":[{"lang":"zh","label":[{"name":"text","data":"图1"}],"title":[{"name":"text","data":"DU-CycleGAN网络训练图"}]},{"lang":"en","label":[{"name":"text","data":"Fig 1"}],"title":[{"name":"text","data":"DU-CycleGAN network training diagram"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714553&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714553&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714553&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"本文使用了由两个1×1卷积层和一个3×3卷积层组成的固定残差块。如"},{"name":"xref","data":{"text":"图 2","type":"fig","rid":"Figure2","data":[{"name":"text","data":"图 2"}]}},{"name":"text","data":"所示,改进后的残差块具有一个瓶颈结构可以进行降维操作,减少图像的通道数。与原模型相比,通过修改残差块的数目和结构,在提高输出图像质量的同时减少了参数,从而减少了计算量和处理时间。"}]},{"name":"fig","data":{"id":"Figure2","caption":[{"lang":"zh","label":[{"name":"text","data":"图2"}],"title":[{"name":"text","data":"ResNet网络结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig 2"}],"title":[{"name":"text","data":"ResNet network 
structure"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714557&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714557&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714557&type=middle"}]}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.1.1"}],"title":[{"name":"text","data":"U型分割网络"}],"level":"3","id":"s3-1-1"}},{"name":"p","data":[{"name":"text","data":"U型卷积神经网络(U-Net)是由Ronneberger"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"b13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等人提出,其方法特点就是通过上、下采样以及跳跃连接将浅层特征信息和深层特征信息融合,扩充特征信息,从而减轻训练负担,使图像中的边缘信息更加准确。由于视网膜血管具有复杂的表现特征,在分割的过程中容易产生血管不连续、边缘区域分割较为模糊等问题。因此,本文将DU-CycleGAN网络的分割器替换成改进后的U-Net网络,以此提高模型的分割准确率。"}]},{"name":"p","data":[{"name":"text","data":"本文改进后的分割器模型如"},{"name":"xref","data":{"text":"图 3","type":"fig","rid":"Figure3","data":[{"name":"text","data":"图 3"}]}},{"name":"text","data":"所示,分割器网络由编码器、转换器和解码器组成。编码器包含3个密集连接模块和3个下采样卷积操作,每经过一次下采样操作其特征图会缩小为原图像的1/2,经过3次下采样操作之后,特征图缩小为原图的1/8并输入到转换器中。特征转换器由6个残差块构成,它不改变特征图大小,特征图经过残差模块后输出到解码器部分。解码器同样包含了3个密集连接模块,采用了与下采样对称的反卷积操作,其特征通道的变化与下采样操作相反,每次经过反卷积层后特征图的尺寸都会变为原来的2倍,最终特征图像还原到原始尺寸。当编码阶段和解码阶段特征图像尺寸相同时,会通过跳跃连接将深层特征信息和浅层特征信息融合,从而增强细节信息的补充。"}]},{"name":"fig","data":{"id":"Figure3","caption":[{"lang":"zh","label":[{"name":"text","data":"图3"}],"title":[{"name":"text","data":"DU-CycleGAN分割器网络结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig 3"}],"title":[{"name":"text","data":"Network structure of DU-CycleGAN splitter"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714560&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714560&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714560&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"与此同时,为了加快网络模型的收敛,在每一层卷积操作后使用批量归一化(BN)层。BN可以对网络的输入数据归一化,使得特征输入值的均值与方差都在规定范围内,在一定程度上缓解了梯度消失现象。激活函数每一层均使用LReLU激活函数来替代原模型中的ReLU激活函数。ReLU激活函数在训练过程中很可能会导致神经元死亡,相应的参数无法更新。本文所使用的LReLU激活函数在特征输入小于0时会有一个负数的输出,可以缓解神经元死亡的问题。公式定义如下:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"1"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714561&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714561&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714561&type=middle"}}}],"id":"yjyxs-36-12-1702-E1"}}]},{"name":"p","data":[{"name":"text","data":"DU-CycleGAN分割网络在训练过程中的各层参数的设置以及具体输出如"},{"name":"xref","data":{"text":"表 1","type":"table","rid":"Table1","data":[{"name":"text","data":"表 1"}]}},{"name":"text","data":"所示。"}]},{"name":"table","data":{"id":"Table1","caption":[{"lang":"zh","label":[{"name":"text","data":"表1"}],"title":[{"name":"text","data":"DU-CycleGAN分割网络各层参数设置"}]},{"lang":"en","label":[{"name":"text","data":"Table 1"}],"title":[{"name":"text","data":"Parameter setting of each layer of 
DU-CycleGAN"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"网络层"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"特征图尺寸"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"尺寸/步长"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"输入"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"512×512"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"None"}]}],[{"align":"center","data":[{"name":"text","data":"密集连接模块"}]},{"align":"center","data":[{"name":"text","data":"512×512"}]},{"align":"center","data":[{"name":"text","data":"[3×3 Conv-32]×3"}]}],[{"align":"center","data":[{"name":"text","data":"下采样层"}]},{"align":"center","data":[{"name":"text","data":"256×256"}]},{"align":"center","data":[{"name":"text","data":"2×2/2"}]}],[{"align":"center","data":[{"name":"text","data":"密集连接模块"}]},{"align":"center","data":[{"name":"text","data":"256×256"}]},{"align":"center","data":[{"name":"text","data":"[3×3 Conv-64]×3"}]}],[{"align":"center","data":[{"name":"text","data":"下采样层"}]},{"align":"center","data":[{"name":"text","data":"128×128"}]},{"align":"center","data":[{"name":"text","data":"2×2/2"}]}],[{"align":"center","data":[{"name":"text","data":"密集连接模块"}]},{"align":"center","data":[{"name":"text","data":"128×128"}]},{"align":"center","data":[{"name":"text","data":"[3×3 Conv-128]×3"}]}],[{"align":"center","data":[{"name":"text","data":"下采样层"}]},{"align":"center","data":[{"name":"text","data":"64×64"}]},{"align":"center","data":[{"name":"text","data":"2×2/2"}]}],[{"align":"center","data":[{"name":"text","data":"Residual blocks"}]},{"align":"center","data":[{"name":"text","data":"64×64"}]},{"align":"center","data":[{"name":"text","data":"[1×1],[3×3],[1×1]× 6"}]}],[{"align":"center","data":[{"name":"text","data":"上采样层"}]},{"align":"center","data":[{"name":"text","data":"128×128"}]},{"align":"center","data":[{"name":"text","data":"2×2/2"}]}],[{"align":"center","data":[{"name":"text","data":"密集连接模块"}]},{"align":"center","data":[{"name":"text","data":"128×128"}]},{"align":"center","data":[{"name":"text","data":"[3×3 Conv-128]×3"}]}],[{"align":"center","data":[{"name":"text","data":"上采样层"}]},{"align":"center","data":[{"name":"text","data":"256×256"}]},{"align":"center","data":[{"name":"text","data":"2×2/2"}]}],[{"align":"center","data":[{"name":"text","data":"密集连接模块"}]},{"align":"center","data":[{"name":"text","data":"256×256"}]},{"align":"center","data":[{"name":"text","data":"[3×3 Conv-64]×3"}]}],[{"align":"center","data":[{"name":"text","data":"上采样"}]},{"align":"center","data":[{"name":"text","data":"512×512"}]},{"align":"center","data":[{"name":"text","data":"2×2/2"}]}],[{"align":"center","data":[{"name":"text","data":"密集连接模块"}]},{"align":"center","data":[{"name":"text","data":"512×512"}]},{"align":"center","data":[{"name":"text","data":"[3×3 Conv-32]×3"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"卷积层"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"512×512"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"1×1 
Conv"}]}]],"foot":[]}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.1.2"}],"title":[{"name":"text","data":"密集连接网络"}],"level":"3","id":"s3-1-2"}},{"name":"p","data":[{"name":"text","data":"U-Net通过下采样减少空间维度,通过上采样恢复图像的细节及空间维度,经过上、下采样操作后会对数据信息造成一定的损失并且训练精度和测试精度会呈下降趋势。针对上述问题,本算法引入了密集卷积网络(Densely Connected Convolutional Networks,DenseNet)"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"b14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",它在减少了参数数量的同时加强了特征信息的传播和重复利用。"}]},{"name":"p","data":[{"name":"text","data":"传统的卷积网络有"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"text","data":"层时会有"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"text","data":"个网络连接,而DenseNet有"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"text","data":"层时包含"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"text","data":"+1)/2个连接,可以更好地加强特征传播以及特征重复利用,提高网络的分割准确率"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"15","type":"bibr","rid":"b15","data":[{"name":"text","data":"15"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。DenseNet改变了网络模型结构的梯度流动方式,将每层网络的最初输入信息和损失函数直接连接起来,使得整个网络模型结构更加清晰。DenseNet公式定义如下:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"2"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714562&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714562&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714562&type=middle"}}}],"id":"yjyxs-36-12-1702-E2"}}]},{"name":"p","data":[{"name":"text","data":"其中: "},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"表示特征信息的输出,"},{"name":"italic","data":[{"name":"text","data":"l"}]},{"name":"text","data":"表示网络结构层数,"},{"name":"italic","data":[{"name":"text","data":"H"},{"name":"sub","data":[{"name":"text","data":"l"}]}]},{"name":"text","data":"(·)代表非线性转化函数,它包括3个相同的3×3卷积操作,同时在每个卷积层后添加了BN层以及ReLU激活函数进行激活操作,以此提高密集连接模块的性能。DenseNet网络结构如"},{"name":"xref","data":{"text":"图 4","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure4","caption":[{"lang":"zh","label":[{"name":"text","data":"图4"}],"title":[{"name":"text","data":"DenseNet网络结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig 4"}],"title":[{"name":"text","data":"Densenet network structure"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714563&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714563&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714563&type=middle"}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.1.3"}],"title":[{"name":"text","data":"判别网络"}],"level":"3","id":"s3-1-3"}},{"name":"p","data":[{"name":"text","data":"判别器模型由5层卷积神经网络构成,其主要目的是预测分割的结果是真实图像还是分割网络分割的图像。判别器实际相当于二分类器,用来判断图像的分布是否一致,判断为真实图像输出结果为1,判断为生成图像输出结果为0。判别器网络结构如"},{"name":"xref","data":{"text":"图 5","type":"fig","rid":"Figure5","data":[{"name":"text","data":"图 
5"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure5","caption":[{"lang":"zh","label":[{"name":"text","data":"图5"}],"title":[{"name":"text","data":"DU-CycleGAN判别器网络结构"}]},{"lang":"en","label":[{"name":"text","data":"Fig 5"}],"title":[{"name":"text","data":"Network structure of DU-CycleGAN discriminator"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714564&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714564&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714564&type=middle"}]}}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.2"}],"title":[{"name":"text","data":"损失函数"}],"level":"2","id":"s3-2"}},{"name":"p","data":[{"name":"text","data":"损失函数可以衡量吻合度、调整参数以及权重,使得映射的结果和实际类别相吻合,训练的结果更加准确。本文中损失函数是由对抗损失和循环一致性损失两部分组成。"}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.2.1"}],"title":[{"name":"text","data":"对抗损失函数"}],"level":"3","id":"s3-2-1"}},{"name":"p","data":[{"name":"text","data":"传统生成对抗网络使用的损失函数在训练过程中容易出现梯度弥漫、网络训练不稳定等问题。本文使用最小二乘损失函数来构成对抗损失,使分割结果更加接近于真实图像。最小二乘损失函数定义如下:其中,"},{"name":"italic","data":[{"name":"text","data":"E"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"y"}]},{"name":"text","data":"~"},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"text","data":"data("},{"name":"italic","data":[{"name":"text","data":"y"}]},{"name":"text","data":")"}]},{"name":"text","data":"表示样本"},{"name":"italic","data":[{"name":"text","data":"Y"}]},{"name":"text","data":"分布的期望值,"},{"name":"italic","data":[{"name":"text","data":"E"}]},{"name":"sub","data":[{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"~"},{"name":"italic","data":[{"name":"text","data":"P"}]},{"name":"text","data":"data("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":")"}]},{"name":"text","data":"表示样本"},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"分布的期望值,分割模型"},{"name":"italic","data":[{"name":"text","data":"G"}]},{"name":"text","data":": "},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"→"},{"name":"italic","data":[{"name":"text","data":"Y"}]},{"name":"text","data":"及对应的判别模型"},{"name":"italic","data":[{"name":"text","data":"D"},{"name":"sub","data":[{"name":"text","data":"Y"}]}]},{"name":"text","data":",式(3)代表映射函数"},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"→"},{"name":"italic","data":[{"name":"text","data":"Y"}]},{"name":"text","data":"的过程。"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"3"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714566&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714566&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714566&type=middle"}}}],"id":"yjyxs-36-12-1702-E3"}}]},{"name":"p","data":[{"name":"text","data":"对于分割模型"},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":": 
"},{"name":"italic","data":[{"name":"text","data":"Y"}]},{"name":"text","data":"→"},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"及对应的判别模型"},{"name":"italic","data":[{"name":"text","data":"D"},{"name":"sub","data":[{"name":"text","data":"X"}]}]},{"name":"text","data":",式(4)代表映射函数"},{"name":"italic","data":[{"name":"text","data":"Y"}]},{"name":"text","data":"→"},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"的过程:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"4"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714569&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714569&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714569&type=middle"}}}],"id":"yjyxs-36-12-1702-E4"}}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.2.2"}],"title":[{"name":"text","data":"循环一致性损失函数"}],"level":"3","id":"s3-2-2"}},{"name":"p","data":[{"name":"text","data":"为了保持原始图像和转换后图像的高度一致性,本文采用了循环一致性损失函数。目标域"},{"name":"italic","data":[{"name":"text","data":"X"}]},{"name":"text","data":"的图像通过循环性可以将图像转换为原始图像,公式可表示为"},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"G"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":"))≈"},{"name":"italic","data":[{"name":"text","data":"x"}]},{"name":"text","data":";同理,对于向后的循环一致性可采用公式"},{"name":"italic","data":[{"name":"text","data":"G"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"F"}]},{"name":"text","data":"("},{"name":"italic","data":[{"name":"text","data":"y"}]},{"name":"text","data":"))≈"},{"name":"italic","data":[{"name":"text","data":"y"}]},{"name":"text","data":"表示"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"b12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。本文引入"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"1"}]},{"name":"text","data":"范数来衡量原始图像和生成图像之间的差距,公式定义如下:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"5"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714572&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714572&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714572&type=middle"}}}],"id":"yjyxs-36-12-1702-E5"}}]},{"name":"p","data":[{"name":"text","data":"综上,本文网络的复合损失函数用公式(6)表示:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"6"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714574&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714574&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714574&type=middle"}}}],"id":"yjyxs-36-12-1702-E6"}}]},{"name":"p","data":[{"name":"text","data":"最终对网络的损失函数进行优化,为使分割图像与真实图像达到最大相似性,网络的总训练目标如下:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"7"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714577&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714577&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714577&type=middle"}}}],"id":"yjyxs-36-12-1702-E7"}}]}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4"}],"title":[{"name":"text","data":"实验结果与分析"}],"level":"1","id":"s4"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4.1"}],"title":[{"name":"text","data":"实验环境与参数分析"}],"level":"2","id":"s4-1"}},{"name":"p","data":[{"name":"text","data":"实验平台基于Python3.6环境搭建,使用TensorFlow来构建网络框架,实验环境均在Intel(R) i7-8565U CPU以及显卡RTX2080Ti进行,使用MATLAB R2017b对图像进行预处理操作。网络采用Adam优化算法,学习率"},{"name":"italic","data":[{"name":"text","data":"L"},{"name":"sub","data":[{"name":"text","data":"r"}]}]},{"name":"text","data":"设为0.000 2,以指数衰减的方式迭代更新学习率,网络迭代次数设为200个周期,训练时batch_size设为1。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4.2"}],"title":[{"name":"text","data":"数据集以及数据增强"}],"level":"2","id":"s4-2"}},{"name":"p","data":[{"name":"text","data":"为了验证本文算法的有效性和实用性,采用DRIVE和CHASE_DB1公共视网膜数据集。DRIVE数据集目前在视网膜图像处理领域中应用较为广泛。该数据集共包含40张眼底彩色视网膜图像,图像的分辨率为584×565大小像素,其中20张用于训练,其余20张用于测试。CHASE数据集共有28张眼底彩色视网膜图像,收集自14名儿童的双眼,每幅图像的分辨率为999×960,其中14张用于训练,其余14张用于测试。"}]},{"name":"p","data":[{"name":"text","data":"在深度学习中,需要大规模的训练样本才能提升模型的鲁棒性,使模型具有更强的泛化能力。由于视网膜血管样本集数量较少,在网络模型的训练过程中,极易出现过拟合现象。因此,为了获得显著的训练结果,需要对数据集进行数据增强操作,通过图像的旋转、平移变换等操作对训练集进行扩充。此外,对于扩充后的图像进行剪裁,通过向四周平移像素点将每张图像随机剪裁, 最终DRIVE和CHASE数据集分别得到15 620张和12 000张尺寸大小为512×512的patch块,其中90%用作网络训练,10%用作网络验证。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4.3"}],"title":[{"name":"text","data":"数据预处理"}],"level":"2","id":"s4-3"}},{"name":"p","data":[{"name":"text","data":"由于原始视网膜图像存在噪声干扰、光照不均以及血管特征不明显等现象,直接分割会影响图像的分割效果。为了改善模型性能,需要对视网膜图像进行如下预处理操作:"}]},{"name":"p","data":[{"name":"text","data":"(1) 为了提高血管树对比度,本文提取视网膜绿色通道。由"},{"name":"xref","data":{"text":"图 6(c)","type":"fig","rid":"Figure6","data":[{"name":"text","data":"图 6(c)"}]}},{"name":"text","data":"可以看出视网膜血管在绿色通道中血管轮廓与背景的对比度较为明显。"}]},{"name":"fig","data":{"id":"Figure6","caption":[{"lang":"zh","label":[{"name":"text","data":"图6"}],"title":[{"name":"text","data":"图像预处理前后效果对比图。(a) 原图像;(b) 红色通道;(c) 绿色通道;(d) 蓝色通道;(e) 预处理前;(f) 预处理后。"}]},{"lang":"en","label":[{"name":"text","data":"Fig 6"}],"title":[{"name":"text","data":"Image preprocessing before and after the effect comparison. 
]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4.4"}],"title":[{"name":"text","data":"评价指标"}],"level":"2","id":"s4-4"}},{"name":"p","data":[{"name":"text","data":"为了系统验证本文算法对视网膜血管分割的效果,实验采用分割准确性(Accuracy,Acc)、特异性(Specificity,Spe)和敏感性(Sensitivity,Sen)对网络性能评估。"}]},{"name":"p","data":[{"name":"text","data":"Acc表示预测正确的像素数目占总像素数目的百分比,其取值范围为0~1,Acc的值越接近1,分割的准确率越高。公式如下:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"9"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714583&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714583&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714583&type=middle"}}}],"id":"yjyxs-36-12-1702-E9"}}]},{"name":"p","data":[{"name":"text","data":"在公式(9)中,真阳性(True Positive,TP)表示将视网膜血管图像正确分类的像素个数;真阴性(True Negative,TN)表示将非血管正确分类的像素个数;假阳性(False Positive,FP)表示将非血管错误分类为血管的像素数量;假阴性(False Negative,FN)表示将血管错误分类为非血管的像素数量。同时,上述4个值也用于计算特异性以及敏感性。特异性是指实际为非血管的样本占非血管总样本数目的百分比;敏感性是指正确的血管数目占血管总样本数目的百分比。特异性、敏感性的计算如公式(10)、(11)所示:"}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"10"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714586&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714586&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714586&type=middle"}}}],"id":"yjyxs-36-12-1702-E10"}}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"11"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714591&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714591&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714591&type=middle"}}}],"id":"yjyxs-36-12-1702-E11"}}]},{"name":"p","data":[{"name":"text","data":"除此之外,本文还引入模型评估指标AUC(Area Under Curve)。AUC的值可以判断模型的性能,AUC的值越大,正确率就越高,它的取值范围为[0, 1]。"}]}
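,{"name":"p","data":[{"name":"text","data":"式(9)~(11)的指标可由混淆矩阵直接计算,示例如下(输入为二值化的预测掩膜与金标准掩膜;AUC此处借助sklearn计算,属示意性假设):"}]},{"name":"code","data":"import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(pred_mask, gt_mask, pred_prob=None):
    # 统计TP/TN/FP/FN(血管像素为正类, 背景像素为负类)
    p, g = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.sum(p & g)
    tn = np.sum(~p & ~g)
    fp = np.sum(p & ~g)
    fn = np.sum(~p & g)
    acc = (tp + tn) / (tp + tn + fp + fn)  # 式(9): 准确性
    spe = tn / (tn + fp)                   # 式(10): 特异性
    sen = tp / (tp + fn)                   # 式(11): 敏感性
    auc = (roc_auc_score(g.ravel(), pred_prob.ravel())
           if pred_prob is not None else None)
    return acc, sen, spe, auc"}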
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714586&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714586&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714586&type=middle"}}}],"id":"yjyxs-36-12-1702-E10"}}]},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"11"}],"data":[{"name":"text","data":" "},{"name":"text","data":" "},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714591&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714591&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714591&type=middle"}}}],"id":"yjyxs-36-12-1702-E11"}}]},{"name":"p","data":[{"name":"text","data":"除此之外,本文还引入模型评估指标AUC(Area Under Curve)。AUC的值可以判断模型的性能,AUC的值越大,正确率就越高,它的取值范围为[0, 1]。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4.5"}],"title":[{"name":"text","data":"实验结果分析"}],"level":"2","id":"s4-5"}},{"name":"p","data":[{"name":"text","data":"本实验使用DU-CycleGAN网络、U-Net以及DenseNet和U-Net(DU-Net)相结合的网络模型进行分割效果对比。考虑到实验严谨性,在训练网络模型时,采用与本文方法相同的数据扩充和数据预处理方法。实验结果如"},{"name":"xref","data":{"text":"表 2","type":"table","rid":"Table2","data":[{"name":"text","data":"表 2"}]}},{"name":"text","data":"所示。在DRIVE数据集中,本文网络模型分割的准确性较U-Net提高了1.31%,达到了96.93%;本文提出的算法在敏感性、特异性以及AUC指标上均有所提升,优于U-Net以及其他算法,说明本算法改进网络的有效性。同理,"},{"name":"xref","data":{"text":"表 3","type":"table","rid":"Table3","data":[{"name":"text","data":"表 3"}]}},{"name":"text","data":"列举了CHASE数据集实验对比结果,从中可以看出,本文算法的评估指标要略高于其他算法,其中,准确性具有一定的优势。实验结果表明该网络模型具有较好的视网膜血管图像分割性能。"}]},{"name":"table","data":{"id":"Table2","caption":[{"lang":"zh","label":[{"name":"text","data":"表2"}],"title":[{"name":"text","data":"DRIVE数据集上的分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Table 2"}],"title":[{"name":"text","data":"Segmentation results on the DRIVE dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Acc"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Sen"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Spe"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"AUC"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"U-Net"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.956 2"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.827 8"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.970 1"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.963 2"}]}],[{"align":"center","data":[{"name":"text","data":"DU-Net"}]},{"align":"center","data":[{"name":"text","data":"0.961 4"}]},{"align":"center","data":[{"name":"text","data":"0.832 7"}]},{"align":"center","data":[{"name":"text","data":"0.975 8"}]},{"align":"center","data":[{"name":"text","data":"0.970 9"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"Ours"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.969 
3"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.843 0"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.981 7"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.979 5"}]}]],"foot":[]}]}},{"name":"table","data":{"id":"Table3","caption":[{"lang":"zh","label":[{"name":"text","data":"表3"}],"title":[{"name":"text","data":"CHASE数据集上的分割结果"}]},{"lang":"en","label":[{"name":"text","data":"Table 3"}],"title":[{"name":"text","data":"Segmentation results on CHASE dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Acc"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Sen"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Spe"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"AUC"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"U-Net"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.953 2"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.786 0"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.962 1"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.968 2"}]}],[{"align":"center","data":[{"name":"text","data":"DU-Net"}]},{"align":"center","data":[{"name":"text","data":"0.961 0"}]},{"align":"center","data":[{"name":"text","data":"0.789 3"}]},{"align":"center","data":[{"name":"text","data":"0.970 8"}]},{"align":"center","data":[{"name":"text","data":"0.972 5"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"Ours"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.969 4"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.799 2"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.975 1"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.980 1"}]}]],"foot":[]}]}},{"name":"p","data":[{"name":"text","data":"为了更直观地表现出DU-CycleGAN算法的优越性,"},{"name":"xref","data":{"text":"图 7","type":"fig","rid":"Figure7","data":[{"name":"text","data":"图 7"}]}},{"name":"text","data":"、"},{"name":"xref","data":{"text":"图 8","type":"fig","rid":"Figure8","data":[{"name":"text","data":"图 8"}]}},{"name":"text","data":"是本文算法与DU-Net、U-Net算法分别在DRIVE和CHASE数据集的受试者工作特征曲线(ROC)对比图,从图中可以看出本文算法的AUC面积要大于DU-Net以及U-Net网络。本算法最靠近左上角的临界值,纵坐标的真阳性率(True Positive Rate, TPR)较高,横坐标的假阳性率(False Positive Rate, FPR)低,表明本实验正确分割血管图像的可能性高,实验的诊断更有价值。"}]},{"name":"fig","data":{"id":"Figure7","caption":[{"lang":"zh","label":[{"name":"text","data":"图7"}],"title":[{"name":"text","data":"DRIVE数据集ROC曲线对比图"}]},{"lang":"en","label":[{"name":"text","data":"Fig 7"}],"title":[{"name":"text","data":"ROC curve comparison chart of DRIVE 
dataset"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714597&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714597&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714597&type=middle"}]}},{"name":"fig","data":{"id":"Figure8","caption":[{"lang":"zh","label":[{"name":"text","data":"图8"}],"title":[{"name":"text","data":"CHASE数据集ROC曲线对比图"}]},{"lang":"en","label":[{"name":"text","data":"Fig 8"}],"title":[{"name":"text","data":"ROC curve comparison chart of CHASE dataset"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714601&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714601&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714601&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"为了对DU-CycleGAN算法进行更加详细的比较和分析,"},{"name":"xref","data":{"text":"图 9","type":"fig","rid":"Figure9","data":[{"name":"text","data":"图 9"}]}},{"name":"text","data":"从左到右分别列出了原始图像、金标准图像、基于U-Net算法、基于DU-Net算法以及本文算法的微细血管末端局部放大图。通过局部分析3种不同模型的分割结果可得出,U-Net算法以及DU-Net算法对于视网膜血管的细小区域处理效果不佳。由于血管树的形态结构过于复杂,U-Net算法在经过上、下采样操作后会对血管的细节特征以及空间维度造成一定的损失,从而出现漏分割、分割断裂等现象。如"},{"name":"xref","data":{"text":"图 9(e)","type":"fig","rid":"Figure9","data":[{"name":"text","data":"图 9(e)"}]}},{"name":"text","data":"所示,本文经过优化改进后的模型可以捕捉更多微细血管的特征信息,不仅可以分割出粗细合理的血管树,还可以有效缓解漏分割现象。"}]},{"name":"fig","data":{"id":"Figure9","caption":[{"lang":"zh","label":[{"name":"text","data":"图9"}],"title":[{"name":"text","data":"各分割算法末端微细血管局部放大。(a) 原图像;(b) 金标准图像;(c) U-Net算法;(d) DU-Net算法;(e) 本文算法。"}]},{"lang":"en","label":[{"name":"text","data":"Fig 9"}],"title":[{"name":"text","data":"Each segmentation algorithm enlarges the end microvascular. 
(a) Original image; (b) Gold standard image; (c) U-Net algorithm; (d) DU-Net algorithm; (e) Our algorithm."}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714605&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714605&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714605&type=middle"}]}},{"name":"p","data":[{"name":"xref","data":{"text":"图 10","type":"fig","rid":"Figure10","data":[{"name":"text","data":"图 10"}]}},{"name":"text","data":"列举了本文算法以及U-Net算法在DRIVE和CHASE数据集上视网膜血管分割的效果图。从"},{"name":"xref","data":{"text":"图 10(d)","type":"fig","rid":"Figure10","data":[{"name":"text","data":"图 10(d)"}]}},{"name":"text","data":"可以看出,U-Net分割结果会将部分细小血管划分到背景区域中,虽然图像中含有的噪声较少,但血管连续性较差,对于血管末端的分叉处无法进行准确的分割,同时会出现血管断裂、细小血管分割较为模糊等细节问题。"},{"name":"xref","data":{"text":"图 10(e)","type":"fig","rid":"Figure10","data":[{"name":"text","data":"图 10(e)"}]}},{"name":"text","data":"列出了本文的分割结果,DU-CycleGAN网络在上、下采样过程中添加了密集连接结构,进一步提高了分割精度,使分割器可以更加有效地训练数据;同时,改进后的分割模型增加了网络的层数,使网络模型可以提取并拟合更多用于分割的特征信息。如"},{"name":"xref","data":{"text":"图 10(e)","type":"fig","rid":"Figure10","data":[{"name":"text","data":"图 10(e)"}]}},{"name":"text","data":"所示,DU-CycleGAN网络可以更完整地对细小血管及其分叉处进行分割,网络模型准确地拟合了特征信息,保留了更多的血管细节信息,使血管具有较好的连续性,更加接近金标准图像。"}]},{"name":"fig","data":{"id":"Figure10","caption":[{"lang":"zh","label":[{"name":"text","data":"图10"}],"title":[{"name":"text","data":"视网膜分割效果图。(a) 原图像;(b) 预处理后图像;(c) 金标准图像;(d) U-Net算法;(e) 本文算法。"}]},{"lang":"en","label":[{"name":"text","data":"Fig 10"}],"title":[{"name":"text","data":"Effect picture of retinal segmentation. (a) Original image; (b) Preprocessed image; (c) Gold standard image; (d) U-Net algorithm; (e) Our algorithm."}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714610&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714610&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=22714610&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"为了更好地体现DU-CycleGAN算法的有效性,"},{"name":"xref","data":{"text":"表 4","type":"table","rid":"Table4","data":[{"name":"text","data":"表 4"}]}},{"name":"text","data":"、"},{"name":"xref","data":{"text":"表 5","type":"table","rid":"Table5","data":[{"name":"text","data":"表 5"}]}},{"name":"text","data":"分别列举了不同分割算法使用DRIVE和CHASE数据集的性能指标。如"},{"name":"xref","data":{"text":"表 4","type":"table","rid":"Table4","data":[{"name":"text","data":"表 
4"}]}},{"name":"text","data":"所示,在DRIVE数据集中,本文算法无论在准确性还是特异性上都具有一定的优势,尤其是敏感性比文献["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"b16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]高了7.39%。而在CHASE数据集中,文献["},{"name":"xref","data":{"text":"19","type":"bibr","rid":"b19","data":[{"name":"text","data":"19"}]}},{"name":"text","data":"]提出了一种基于U-Net模型的递归残差卷积神经网络,该算法的特异性指标相比于其他算法具有较好的表现,但该算法分割的准确度不够高,对于细小血管的分割容易出现断裂现象。文献["},{"name":"xref","data":{"text":"22","type":"bibr","rid":"b22","data":[{"name":"text","data":"22"}]}},{"name":"text","data":"]提出了基于U-Net的多分支卷机神经网络,并且增加了特征信息流路径,该模型与其他算法相比具有较好的分割性能,在CHASE数据集中AUC值也达到了98.39%,但在具体分割过程中,该算法存在微细血管分割较为模糊的现象。本文算法与文献["},{"name":"xref","data":{"text":"19","type":"bibr","rid":"b19","data":[{"name":"text","data":"19"}]}},{"name":"text","data":"]、文献["},{"name":"xref","data":{"text":"22","type":"bibr","rid":"b22","data":[{"name":"text","data":"22"}]}},{"name":"text","data":"]的算法模型相比,在总体性能指标相差不大的情况下,可以准确地拟合特征信息,保留更多的血管细节信息。由此可见,本文网络模型具有良好的鲁棒性和泛化能力。"}]},{"name":"table","data":{"id":"Table4","caption":[{"lang":"zh","label":[{"name":"text","data":"表4"}],"title":[{"name":"text","data":"DRIVE数据集分割结果比较"}]},{"lang":"en","label":[{"name":"text","data":"Table 4"}],"title":[{"name":"text","data":"Comparison of segmentation results of DRIVE dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Acc"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Sen"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Spe"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"AUC"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"Dasgupta"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"b16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]"}]}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.953 3"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.769 1"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.980 1"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.974 4"}]}],[{"align":"center","data":[{"name":"text","data":"Lahiri"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"17","type":"bibr","rid":"b17","data":[{"name":"text","data":"17"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"—"}]},{"align":"center","data":[{"name":"text","data":"0.750 0"}]},{"align":"center","data":[{"name":"text","data":"0.980 0"}]},{"align":"center","data":[{"name":"text","data":"0.948 0"}]}],[{"align":"center","data":[{"name":"text","data":"Vega"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"18","type":"bibr","rid":"b18","data":[{"name":"text","data":"18"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.941 2"}]},{"align":"center","data":[{"name":"text","data":"0.744 4"}]},{"align":"center","data":[{"name":"text","data":"0.960 
0"}]},{"align":"center","data":[{"name":"text","data":"—"}]}],[{"align":"center","data":[{"name":"text","data":"Alom"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"19","type":"bibr","rid":"b19","data":[{"name":"text","data":"19"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.955 6"}]},{"align":"center","data":[{"name":"text","data":"0.779 2"}]},{"align":"center","data":[{"name":"text","data":"0.981 3"}]},{"align":"center","data":[{"name":"text","data":"0.978 4"}]}],[{"align":"center","data":[{"name":"text","data":"Li"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"20","type":"bibr","rid":"b20","data":[{"name":"text","data":"20"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.952 7"}]},{"align":"center","data":[{"name":"text","data":"0.756 9"}]},{"align":"center","data":[{"name":"text","data":"0.981 6"}]},{"align":"center","data":[{"name":"text","data":"0.973 8"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"Ours"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.9693"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.8430"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.9817"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.9795"}]}]],"foot":[]}]}},{"name":"table","data":{"id":"Table5","caption":[{"lang":"zh","label":[{"name":"text","data":"表5"}],"title":[{"name":"text","data":"CHASE数据集分割结果比较"}]},{"lang":"en","label":[{"name":"text","data":"Table 5"}],"title":[{"name":"text","data":"Comparison of segmentation results of CHASE dataset"}]}],"note":[],"table":[{"head":[[{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Method"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Acc"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Sen"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"Spe"}]},{"align":"center","style":"class:table_top_border","data":[{"name":"text","data":"AUC"}]}]],"body":[[{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"Alom"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"19","type":"bibr","rid":"b19","data":[{"name":"text","data":"19"}]}},{"name":"text","data":"]"}]}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.963 4"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.775 6"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.982 0"}]},{"align":"center","style":"class:table_top_border2","data":[{"name":"text","data":"0.981 5"}]}],[{"align":"center","data":[{"name":"text","data":"Li"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"20","type":"bibr","rid":"b20","data":[{"name":"text","data":"20"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.958 1"}]},{"align":"center","data":[{"name":"text","data":"0.750 7"}]},{"align":"center","data":[{"name":"text","data":"0.979 3"}]},{"align":"center","data":[{"name":"text","data":"0.971 
6"}]}],[{"align":"center","data":[{"name":"text","data":"Orlando"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"21","type":"bibr","rid":"b21","data":[{"name":"text","data":"21"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"—"}]},{"align":"center","data":[{"name":"text","data":"0.727 7"}]},{"align":"center","data":[{"name":"text","data":"0.971 2"}]},{"align":"center","data":[{"name":"text","data":"0.952 4"}]}],[{"align":"center","data":[{"name":"text","data":"Zhuang"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"22","type":"bibr","rid":"b22","data":[{"name":"text","data":"22"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.965 6"}]},{"align":"center","data":[{"name":"text","data":"0.797 8"}]},{"align":"center","data":[{"name":"text","data":"0.981 8"}]},{"align":"center","data":[{"name":"text","data":"0.983 9"}]}],[{"align":"center","data":[{"name":"text","data":"Yan"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"23","type":"bibr","rid":"b23","data":[{"name":"text","data":"23"}]}},{"name":"text","data":"]"}]}]},{"align":"center","data":[{"name":"text","data":"0.961 0"}]},{"align":"center","data":[{"name":"text","data":"0.763 3"}]},{"align":"center","data":[{"name":"text","data":"0.980 9"}]},{"align":"center","data":[{"name":"text","data":"0.978 1"}]}],[{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"Ours"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.969 4"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.799 2"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.975 1"}]},{"align":"center","style":"class:table_bottom_border","data":[{"name":"text","data":"0.980 1"}]}]],"foot":[]}]}}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"5"}],"title":[{"name":"text","data":"结论"}],"level":"1","id":"s5"}},{"name":"p","data":[{"name":"text","data":"本文提出了一种基于循环分割对抗网络的视网膜图像分割算法,使用了U型网络和密集连接相结合的模型作为分割器的网络模型结构,并添加了最小二乘损失函数。实验结果证明,所改进的网络模型在DRIVE和CHASE数据集中,两者分割的准确性、敏感性分别达到了96.93%、84.30%以及96.94%、79.92%,说明同其他深度学习算法模型相比,本文网络模型分割效果更为精细。但在具体实验中,本文的网络模型过于复杂,对实验设备有着较高的要求,并且网络因为层数过多导致分割效率低于预期。在未来的工作中还需要对网络模型进一步优化,在不影响性能的情况下,降低网络复杂度,从而获得更优的分割结果。"}]}]}],"footnote":[],"reflist":{"title":[{"name":"text","data":"参考文献"}],"data":[{"id":"b1","label":"1","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"Y YIN"},{"name":"text","data":"\t , "},{"name":"text","data":"M ADEL"},{"name":"text","data":"\t , "},{"name":"text","data":"S BOURENNANE"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Retinal vessel segmentation using a probabilistic tracking method"},{"name":"text","data":" . "},{"name":"text","data":"Pattern Recognition"},{"name":"text","data":" , "},{"name":"text","data":"2012"},{"name":"text","data":" . "},{"name":"text","data":"45"},{"name":"text","data":" ("},{"name":"text","data":"4"},{"name":"text","data":" ):"},{"name":"text","data":"1235"},{"name":"text","data":" -"},{"name":"text","data":"1244"},{"name":"text","data":"\t\t . 
"},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1016/j.patcog.2011.09.019"}],"href":"http://doi.org/10.1016/j.patcog.2011.09.019"}},{"name":"text","data":"."}],"title":"Retinal vessel segmentation using a probabilistic tracking method"}]},{"id":"b2","label":"2","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"Y YANG"},{"name":"text","data":"\t , "},{"name":"text","data":"S Y HUANG"},{"name":"text","data":"\t , "},{"name":"text","data":"N N RAO"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"An automatic hybrid method for retinal blood vessel extraction"},{"name":"text","data":" . "},{"name":"text","data":"International Journal of Applied Mathematics and Computer Science"},{"name":"text","data":" , "},{"name":"text","data":"2008"},{"name":"text","data":" . "},{"name":"text","data":"18"},{"name":"text","data":" ("},{"name":"text","data":"3"},{"name":"text","data":" ):"},{"name":"text","data":"399"},{"name":"text","data":" -"},{"name":"text","data":"407"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.2478/v10006-008-0036-5"}],"href":"http://doi.org/10.2478/v10006-008-0036-5"}},{"name":"text","data":"."}],"title":"An automatic hybrid method for retinal blood vessel extraction"}]},{"id":"b3","label":"3","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"Q LI"},{"name":"text","data":"\t , "},{"name":"text","data":"J YOU"},{"name":"text","data":"\t , "},{"name":"text","data":"D ZHANG"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Vessel segmentation and width estimation in retinal images using multiscale production of matched filter responses"},{"name":"text","data":" . "},{"name":"text","data":"Expert Systems with Applications"},{"name":"text","data":" , "},{"name":"text","data":"2012"},{"name":"text","data":" . "},{"name":"text","data":"39"},{"name":"text","data":" ("},{"name":"text","data":"9"},{"name":"text","data":" ):"},{"name":"text","data":"7600"},{"name":"text","data":" -"},{"name":"text","data":"7610"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1016/j.eswa.2011.12.046"}],"href":"http://doi.org/10.1016/j.eswa.2011.12.046"}},{"name":"text","data":"."}],"title":"Vessel segmentation and width estimation in retinal images using multiscale production of matched filter responses"}]},{"id":"b4","label":"4","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"A OLIVEIRA"},{"name":"text","data":"\t , "},{"name":"text","data":"S PEREIRA"},{"name":"text","data":"\t , "},{"name":"text","data":"C A SILVA"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Retinal vessel segmentation based on fully convolutional neural networks"},{"name":"text","data":" . "},{"name":"text","data":"Expert Systems with Application"},{"name":"text","data":" , "},{"name":"text","data":"2018"},{"name":"text","data":" . "},{"name":"text","data":"112"},{"name":"text","data":" "},{"name":"text","data":"229"},{"name":"text","data":" -"},{"name":"text","data":"242"},{"name":"text","data":"\t\t . 
"},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1016/j.eswa.2018.06.034"}],"href":"http://doi.org/10.1016/j.eswa.2018.06.034"}},{"name":"text","data":"."}],"title":"Retinal vessel segmentation based on fully convolutional neural networks"}]},{"id":"b5","label":"5","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"GUO C L, SZEMENYEI M, PEI Y, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". SD-Unet: a structured dropout U-Net for retinal vessel segmentation[C]//2019 "},{"name":"italic","data":[{"name":"text","data":"IEEE"}]},{"name":"text","data":" 19"},{"name":"italic","data":[{"name":"text","data":"th International Conference on Bioinformatics and Bioengineering"}]},{"name":"text","data":" ("},{"name":"italic","data":[{"name":"text","data":"BIBE"}]},{"name":"text","data":"). Athens: IEEE, 2019: 439-444."}]}]}]},{"id":"b6","label":"6","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"A G ROY"},{"name":"text","data":"\t , "},{"name":"text","data":"S CONJETI"},{"name":"text","data":"\t , "},{"name":"text","data":"S P K KARRI"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks"},{"name":"text","data":" . "},{"name":"text","data":"Biomedical Optics Express"},{"name":"text","data":" , "},{"name":"text","data":"2017"},{"name":"text","data":" . "},{"name":"text","data":"8"},{"name":"text","data":" ("},{"name":"text","data":"8"},{"name":"text","data":" ):"},{"name":"text","data":"3627"},{"name":"text","data":" -"},{"name":"text","data":"3642"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1364/BOE.8.003627"}],"href":"http://doi.org/10.1364/BOE.8.003627"}},{"name":"text","data":"."}],"title":"ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks"}]},{"id":"b7","label":"7","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"Z W GU"},{"name":"text","data":"\t , "},{"name":"text","data":"J CHENG"},{"name":"text","data":"\t , "},{"name":"text","data":"H Z FU"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"CE-Net: context encoder network for 2D medical image segmentation"},{"name":"text","data":" . "},{"name":"text","data":"IEEE Transactions on Medical Imaging"},{"name":"text","data":" , "},{"name":"text","data":"2019"},{"name":"text","data":" . "},{"name":"text","data":"38"},{"name":"text","data":" ("},{"name":"text","data":"10"},{"name":"text","data":" ):"},{"name":"text","data":"2281"},{"name":"text","data":" -"},{"name":"text","data":"2292"},{"name":"text","data":"\t\t . 
"},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/TMI.2019.2903562"}],"href":"http://doi.org/10.1109/TMI.2019.2903562"}},{"name":"text","data":"."}],"title":"CE-Net: context encoder network for 2D medical image segmentation"}]},{"id":"b8","label":"8","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"I J GOODFELLOW"},{"name":"text","data":"\t , "},{"name":"text","data":"J POUGET-ABADIE"},{"name":"text","data":"\t , "},{"name":"text","data":"M MIRZA"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Generative adversarial networks"},{"name":"text","data":" . "},{"name":"text","data":"Advances in Neural Information Processing Systems"},{"name":"text","data":" , "},{"name":"text","data":"2014"},{"name":"text","data":" . "},{"name":"text","data":"3"},{"name":"text","data":" ("},{"name":"text","data":"11"},{"name":"text","data":" ):"},{"name":"text","data":"2672"},{"name":"text","data":" -"},{"name":"text","data":"2680"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1145/3422622"}],"href":"http://doi.org/10.1145/3422622"}},{"name":"text","data":"."}],"title":"Generative adversarial networks"}]},{"id":"b9","label":"9","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"Y Y CHEN"},{"name":"text","data":"\t , "},{"name":"text","data":"W Y JIN"},{"name":"text","data":"\t , "},{"name":"text","data":"M WANG"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Metallographic image segmentation of GCr15 bearing steel based on CGAN"},{"name":"text","data":" . "},{"name":"text","data":"International Journal of Applied Electromagnetics and Mechanics"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . "},{"name":"text","data":"64"},{"name":"text","data":" ("},{"name":"text","data":"1/4"},{"name":"text","data":" ):"},{"name":"text","data":"1237"},{"name":"text","data":" -"},{"name":"text","data":"1243"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.3233/JAE-209441"}],"href":"http://doi.org/10.3233/JAE-209441"}},{"name":"text","data":"."}],"title":"Metallographic image segmentation of GCr15 bearing steel based on CGAN"}]},{"id":"b10","label":"10","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"M Y LI"},{"name":"text","data":"\t , "},{"name":"text","data":"H L TANG"},{"name":"text","data":"\t , "},{"name":"text","data":"M D CHAN"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"DC-AL GAN: pseudoprogression and true tumor progression of glioblastoma multiform image classification based on DCGAN and AlexNet"},{"name":"text","data":" . "},{"name":"text","data":"Medical Physics"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . "},{"name":"text","data":"47"},{"name":"text","data":" ("},{"name":"text","data":"3"},{"name":"text","data":" ):"},{"name":"text","data":"1139"},{"name":"text","data":" -"},{"name":"text","data":"1150"},{"name":"text","data":"\t\t . 
"},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1002/mp.14003"}],"href":"http://doi.org/10.1002/mp.14003"}},{"name":"text","data":"."}],"title":"DC-AL GAN: pseudoprogression and true tumor progression of glioblastoma multiform image classification based on DCGAN and AlexNet"}]},{"id":"b11","label":"11","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"S KADAMBI"},{"name":"text","data":"\t , "},{"name":"text","data":"Z Y WANG"},{"name":"text","data":"\t , "},{"name":"text","data":"E XING"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"WGAN domain adaptation for the joint optic disc-and-cup segmentation in fundus images"},{"name":"text","data":" . "},{"name":"text","data":"International Journal of Computer Assisted Radiology and Surgery"},{"name":"text","data":" , "},{"name":"text","data":"2020"},{"name":"text","data":" . "},{"name":"text","data":"15"},{"name":"text","data":" ("},{"name":"text","data":"7"},{"name":"text","data":" ):"},{"name":"text","data":"1205"},{"name":"text","data":" -"},{"name":"text","data":"1213"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1007/s11548-020-02144-9"}],"href":"http://doi.org/10.1007/s11548-020-02144-9"}},{"name":"text","data":"."}],"title":"WGAN domain adaptation for the joint optic disc-and-cup segmentation in fundus images"}]},{"id":"b12","label":"12","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"ZHU J Y, PARK T, ISOLA P, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceedings of the"}]},{"name":"text","data":" 2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE International Conference on Computer Vision"}]},{"name":"text","data":" ("},{"name":"italic","data":[{"name":"text","data":"ICCV"}]},{"name":"text","data":"). Venice: IEEE, 2017: 2242-2251."}]}]}]},{"id":"b13","label":"13","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]//"},{"name":"italic","data":[{"name":"text","data":"Proceeding of the"}]},{"name":"text","data":" 18"},{"name":"italic","data":[{"name":"text","data":"th International Conference on Medical Image Computing and Computer"}]},{"name":"text","data":"-"},{"name":"italic","data":[{"name":"text","data":"assisted Intervention"}]},{"name":"text","data":" ("},{"name":"italic","data":[{"name":"text","data":"MICCAI"}]},{"name":"text","data":"). Munich: Springer, 2015: 234-241."}]}]}]},{"id":"b14","label":"14","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"HUANG G, LIU Z, VAN DER MAATEN L, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Densely connected convolutional networks[C]//2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE Conference on Computer Vision and Pattern Recognition"}]},{"name":"text","data":" ("},{"name":"italic","data":[{"name":"text","data":"CVPR"}]},{"name":"text","data":"). 
{"id":"b15","label":"15","citation":[{"lang":"zh","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"陈 宗航"},{"name":"text","data":"\t , "},{"name":"text","data":"胡 海龙"},{"name":"text","data":"\t , "},{"name":"text","data":"姚 剑敏"},{"name":"text","data":"\t , "},{"name":"text","data":"等"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"基于改进生成对抗网络的单帧图像超分辨率重建"},{"name":"text","data":" . "},{"name":"text","data":"液晶与显示"},{"name":"text","data":" , "},{"name":"text","data":"2021"},{"name":"text","data":" . "},{"name":"text","data":"36"},{"name":"text","data":" ("},{"name":"text","data":"5"},{"name":"text","data":" ):"},{"name":"text","data":"705"},{"name":"text","data":" -"},{"name":"text","data":"712"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.37188/CJLCD.2020-0250"}],"href":"http://doi.org/10.37188/CJLCD.2020-0250"}},{"name":"text","data":"."}],"title":"基于改进生成对抗网络的单帧图像超分辨率重建"},{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"Z H CHEN"},{"name":"text","data":"\t , "},{"name":"text","data":"H L HU"},{"name":"text","data":"\t , "},{"name":"text","data":"J M YAO"},{"name":"text","data":"\t , "},{"name":"text","data":"et al"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Single frame image super-resolution reconstruction based on improved generative adversarial network"},{"name":"text","data":" . "},{"name":"text","data":"Chinese Journal of Liquid Crystals and Displays"},{"name":"text","data":" , "},{"name":"text","data":"2021"},{"name":"text","data":" . "},{"name":"text","data":"36"},{"name":"text","data":" ("},{"name":"text","data":"5"},{"name":"text","data":" ):"},{"name":"text","data":"705"},{"name":"text","data":" -"},{"name":"text","data":"712"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.37188/CJLCD.2020-0250"}],"href":"http://doi.org/10.37188/CJLCD.2020-0250"}},{"name":"text","data":"."}],"title":"Single frame image super-resolution reconstruction based on improved generative adversarial network"}]},{"id":"b16","label":"16","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"DASGUPTA A, SINGH S. A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation[C]//2017 "},{"name":"italic","data":[{"name":"text","data":"IEEE"}]},{"name":"text","data":" 14"},{"name":"italic","data":[{"name":"text","data":"th International Symposium on Biomedical Imaging"}]},{"name":"text","data":" ("},{"name":"italic","data":[{"name":"text","data":"ISBI"}]},{"name":"text","data":" 2017). Melbourne: IEEE, 2017: 248-251."}]}]}]},{"id":"b17","label":"17","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"LAHIRI A, ROY A G, SHEET D, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Deep neural ensemble for retinal vessel segmentation in fundus images towards achieving label-free angiography[C]//2016 38"},{"name":"italic","data":[{"name":"text","data":"th Annual International Conference of the IEEE Engineering in Medicine and Biology Society"}]},{"name":"text","data":" ("},{"name":"italic","data":[{"name":"text","data":"EMBC"}]},{"name":"text","data":"). Orlando: IEEE, 2016: 1340-1343."}]}]}]},
{"id":"b18","label":"18","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"R VEGA"},{"name":"text","data":"\t , "},{"name":"text","data":"G SANCHEZ-ANTE"},{"name":"text","data":"\t , "},{"name":"text","data":"L E FALCON-MORALES"},{"name":"text","data":"\t , "},{"name":"text","data":"et al"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Retinal vessel extraction using lattice neural networks with dendritic processing"},{"name":"text","data":" . "},{"name":"text","data":"Computers in Biology and Medicine"},{"name":"text","data":" , "},{"name":"text","data":"2015"},{"name":"text","data":" . "},{"name":"text","data":"58"},{"name":"text","data":" "},{"name":"text","data":"20"},{"name":"text","data":" -"},{"name":"text","data":"30"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1016/j.compbiomed.2014.12.016"}],"href":"http://doi.org/10.1016/j.compbiomed.2014.12.016"}},{"name":"text","data":"."}],"title":"Retinal vessel extraction using lattice neural networks with dendritic processing"}]},{"id":"b19","label":"19","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"ALOM M Z, HASAN M, YAKOPCIC C, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":". Recurrent residual convolutional neural network based on U-Net (R2U-Net) for medical image segmentation[J]. "},{"name":"italic","data":[{"name":"text","data":"arXiv preprint arXiv"}]},{"name":"text","data":": 1802.06955, 2018."}]}]}]},{"id":"b20","label":"20","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"Q L LI"},{"name":"text","data":"\t , "},{"name":"text","data":"B W FENG"},{"name":"text","data":"\t , "},{"name":"text","data":"L P XIE"},{"name":"text","data":"\t , "},{"name":"text","data":"et al"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"A cross-modality learning approach for vessel segmentation in retinal images"},{"name":"text","data":" . "},{"name":"text","data":"IEEE Transactions on Medical Imaging"},{"name":"text","data":" , "},{"name":"text","data":"2016"},{"name":"text","data":" . "},{"name":"text","data":"35"},{"name":"text","data":" ("},{"name":"text","data":"1"},{"name":"text","data":" ):"},{"name":"text","data":"109"},{"name":"text","data":" -"},{"name":"text","data":"118"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/TMI.2015.2457891"}],"href":"http://doi.org/10.1109/TMI.2015.2457891"}},{"name":"text","data":"."}],"title":"A cross-modality learning approach for vessel segmentation in retinal images"}]},{"id":"b21","label":"21","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"J I ORLANDO"},{"name":"text","data":"\t , "},{"name":"text","data":"E PROKOFYEVA"},{"name":"text","data":"\t , "},{"name":"text","data":"M B BLASCHKO"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images"},{"name":"text","data":" . "},{"name":"text","data":"IEEE Transactions on Biomedical Engineering"},{"name":"text","data":" , "},{"name":"text","data":"2017"},{"name":"text","data":" . 
"},{"name":"text","data":"64"},{"name":"text","data":" ("},{"name":"text","data":"1"},{"name":"text","data":" ):"},{"name":"text","data":"16"},{"name":"text","data":" -"},{"name":"text","data":"27"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/TBME.2016.2535311"}],"href":"http://doi.org/10.1109/TBME.2016.2535311"}},{"name":"text","data":"."}],"title":"A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images"}]},{"id":"b22","label":"22","citation":[{"lang":"en","text":[{"name":"p","data":[{"name":"text","data":"ZHUANG J. LadderNet: Multi-path networks based on U-Net for medical image segmentation[J]. "},{"name":"italic","data":[{"name":"text","data":"arXiv preprint arXiv"}]},{"name":"text","data":": 1810.07810, 2018."}]}]}]},{"id":"b23","label":"23","citation":[{"lang":"en","text":[{"name":"text","data":" "},{"name":"text","data":"\t "},{"name":"text","data":"Z Q YAN"},{"name":"text","data":"\t , "},{"name":"text","data":"X YANG"},{"name":"text","data":"\t , "},{"name":"text","data":"K T CHENG"},{"name":"text","data":" "},{"name":"text","data":" . "},{"name":"text","data":"Joint segment-level and pixel-wise losses for deep learning based retinal vessel segmentation"},{"name":"text","data":" . "},{"name":"text","data":"IEEE Transactions on Biomedical Engineering"},{"name":"text","data":" , "},{"name":"text","data":"2018"},{"name":"text","data":" . "},{"name":"text","data":"65"},{"name":"text","data":" ("},{"name":"text","data":"9"},{"name":"text","data":" ):"},{"name":"text","data":"1912"},{"name":"text","data":" -"},{"name":"text","data":"1923"},{"name":"text","data":"\t\t . "},{"name":"text","data":"DOI:"},{"name":"extlink","data":{"text":[{"name":"text","data":"10.1109/TBME.2018.2828137"}],"href":"http://doi.org/10.1109/TBME.2018.2828137"}},{"name":"text","data":"."}],"title":"Joint segment-level and pixel-wise losses for deep learning based retinal vessel segmentation"}]}]},"response":[],"contributions":[],"acknowledgements":[],"conflict":[],"supportedby":[],"articlemeta":{"doi":"10.37188/CJLCD.2021-0142","clc":[[{"name":"text","data":"TP391.4;TP183"}]],"dc":[],"publisherid":"yjyxs-36-12-1702","citeme":[],"fundinggroup":[{"lang":"zh","text":[{"name":"text","data":"国家自然科学基金(No.61971272);国家重点研发计划重点专项(No.2019YFC1520204);国家自然科学基金青年科学基金(No.61601271);陕西省教育厅专项科研计划(No.15JK1086)"}]},{"lang":"en","text":[{"name":"text","data":"Supported by National Natural Science Foundation of China(No.61971272);National Key Research and Development Program of China(No.2019YFC1520204);National Science Foundation for Young Scholars of China(No.61601271);Education Department Research and Development Program of Shaanxi Province(No.15JK1086)"}]}],"history":{"received":"2021-05-22","revised":"2021-07-01","opub":"2021-12-10"},"copyright":{"data":[{"lang":"zh","data":[{"name":"text","data":"版权所有©《液晶与显示》编辑部2021"}],"type":"copyright"},{"lang":"en","data":[{"name":"text","data":"Copyright ©2021 Chinese Journal of Liquid Crystals and Displays. All rights reserved."}],"type":"copyright"}],"year":"2021"}},"appendix":[],"type":"research-article","ethics":[],"backSec":[],"supplementary":[],"journalTitle":"液晶与显示","issue":"12","volume":"36","originalSource":[]}