1. Tianjin Key Laboratory of Intelligent Control of Electrical Equipment, Tiangong University, Tianjin 300387, China
[ "宋丽梅(1976—),女,河北秦皇岛人,博士,教授,2004年于天津大学获得博士学位,主要从事三维重建、机器视觉、人工智能等方面的研究。E-mail:songlimei@tiangong.edu.cn" ]
SONG Li-mei, ZHANG Ji-peng, LI Yun-peng, et al. 3D reconstruction method based on multi-view infrared sensor[J]. Chinese Journal of Liquid Crystals and Displays, 2023,38(6):759-769. DOI: 10.37188/CJLCD.2023-0026.
In order to achieve more accurate 3D reconstruction of both featured and featureless objects, this paper studies an automatic splicing algorithm for 3D point clouds acquired by multi-view sensors. First, the axis data are calibrated after binocular calibration of the sensors at different viewpoints. Then, the multiple axis estimates are analyzed in three-dimensional space, and a point cloud splicing method based on multi-view sensor axis fusion is proposed to compute the optimal axis with the minimum error. Finally, the fitted axis is used as the rotation center to splice the 3D point clouds in the world coordinate system. Experimental results show that, within a measurement range of 1.3~1.9 m, the proposed splicing method reconstructs a standard sphere with a diameter of 144.954 2 mm with an error of less than 0.037 mm; both featureless objects and objects with feature points are spliced well, and the splicing time is not limited by the total number of points. The proposed method essentially meets the 3D-reconstruction requirements of good stability, high efficiency, and high precision.
Keywords: three-dimensional reconstruction; point cloud splicing; axis
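To illustrate the splicing idea described in the abstract, the following is a minimal sketch, not the authors' implementation. It assumes each calibrated sensor provides an estimate of the common rotation axis (a point on the axis and a unit direction), fuses these estimates into a single axis by simple averaging (the paper computes an error-minimizing optimal axis), and then rotates each view's point cloud about the fused axis by its known view angle before concatenating the clouds in the world coordinate system. All function names and the averaging step are illustrative assumptions.

```python
import numpy as np

def fuse_axes(points, directions):
    """Fuse per-sensor axis estimates into one axis.
    points: (N, 3) points lying on each estimated axis.
    directions: (N, 3) unit direction vectors of each estimated axis.
    Returns a point on the fused axis and its unit direction."""
    dirs = np.asarray(directions, dtype=float).copy()
    # Flip directions that point opposite to the first one so they do not cancel.
    dirs[np.einsum('ij,j->i', dirs, dirs[0]) < 0] *= -1.0
    d = dirs.mean(axis=0)
    d /= np.linalg.norm(d)
    p = np.asarray(points, dtype=float).mean(axis=0)
    return p, d

def rotate_about_axis(cloud, p, d, angle_rad):
    """Rotate an (M, 3) point cloud about the axis (p, d) by angle_rad
    using Rodrigues' rotation formula."""
    k = d / np.linalg.norm(d)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    R = np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)
    return (cloud - p) @ R.T + p

# Usage sketch: splice two views taken at known angles around the fused axis.
axis_pts = [np.zeros(3), np.array([0.01, 0.0, 0.0])]
axis_dirs = [np.array([0.0, 0.0, 1.0]),
             np.array([0.0, 0.01, 1.0]) / np.linalg.norm([0.0, 0.01, 1.0])]
p, d = fuse_axes(axis_pts, axis_dirs)

clouds = [np.random.rand(100, 3), np.random.rand(100, 3)]   # stand-ins for measured clouds
angles = [0.0, np.deg2rad(120.0)]                           # known view angles about the axis
merged = np.vstack([rotate_about_axis(c, p, d, a) for c, a in zip(clouds, angles)])
```

Because each cloud is transformed by a closed-form rotation about the fused axis rather than iteratively registered against the others, the splicing cost grows only linearly with the number of points, which is consistent with the claim that splicing time is not limited by the total point count.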