1. School of Computer Engineering, Nanjing Institute of Technology, Nanjing 211167, Jiangsu, China
2. Joint International Research Laboratory of Information Display and Visualization, School of Electronic Science and Engineering, Southeast University, Nanjing 210096, Jiangsu, China
[ "赵健(1989—),男,山东枣庄人,博士,讲师,2019年于东南大学获得博士学位,主要从事新型显示技术的研究。E-mail:zhaojian@njit.edu.cn" ]
[ "夏军(1974—),男,江苏淮安人,博士,教授,2004年于东南大学获得博士学位,主要从事三维显示技术的研究。E-mail:xiajun@seu.edu.cn" ]
ZHAO Jian, DAI Zi-yao, DING Yi-quan, et al. Research progress of virtual view rendering technology for light field display[J]. Chinese Journal of Liquid Crystals and Displays, 2023, 38(10): 1361-1371. DOI: 10.37188/CJLCD.2023-0228.
With the advancement of the metaverse industry, light field display technology has emerged as a research focus in the field of information display because it can seamlessly merge the digital and physical worlds, and it is trending toward more viewpoints, higher viewpoint density, and faster rendering. However, existing light field display technologies still face numerous challenges, including low resolution, limited depth range, and visual fatigue. Starting from the principles of human stereoscopic vision, this paper reviews the imaging principles and representative schemes of current dual-view, multi-view, and super-multi-view light field displays. It then surveys and compares existing virtual view generation techniques based on single-source and multi-source inputs, focusing on the feasibility and efficiency of each technique in terms of virtual view quality and rendering speed, and concludes with an outlook on future directions for light field display technology.
Keywords: near-eye display; light field display; virtual viewpoint; neural network
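Among the single-source virtual view generation techniques the review surveys, depth-image-based rendering (DIBR) is the classic baseline: each pixel of a reference view is shifted by a disparity derived from its depth to synthesize a nearby virtual viewpoint. The sketch below is only a minimal illustration of this idea under a rectified pinhole-camera assumption; the function name and parameters are ours, not the paper's, and the hole-filling (inpainting) stage that practical DIBR pipelines require is deliberately omitted.

```python
import numpy as np

def dibr_shift(ref_img, depth, focal, baseline):
    """Forward-warp a reference view to a horizontally shifted virtual
    viewpoint (simplified DIBR). For rectified cameras the per-pixel
    disparity is d = focal * baseline / depth."""
    h, w = depth.shape
    virt = np.zeros_like(ref_img)
    hole = np.ones((h, w), dtype=bool)              # pixels left unfilled
    disp = np.round(focal * baseline / depth).astype(int)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = xs - disp                                  # target column index
    valid = (xt >= 0) & (xt < w)
    # Write far-to-near so nearer pixels overwrite farther ones (z-ordering)
    order = np.argsort(-depth.ravel())
    yv = ys.ravel()[order]
    xv = xs.ravel()[order]
    xtv = xt.ravel()[order]
    ok = valid.ravel()[order]
    virt[yv[ok], xtv[ok]] = ref_img[yv[ok], xv[ok]]
    hole[yv[ok], xtv[ok]] = False
    return virt, hole
```

The returned `hole` mask marks disocclusions, which is exactly where DIBR methods differ: the hole-generation and hole-filling strategies compared in the review determine the final virtual view quality.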