Faster-LIO Paper Translation

Abstract

This paper presents an incremental voxel-based LiDAR-inertial odometry (LIO) method for fast tracking of spinning and solid-state LiDARs. To achieve a high tracking speed, we neither use complex tree-based structures to partition the point cloud in space, nor use strict k-nearest-neighbor (k-NN) queries to compute point correspondences. Instead, we use incremental voxels (iVox) as our spatial point-cloud data structure; iVox is a modification of the traditional voxel that supports incremental insertion and parallel approximate k-NN queries. We provide linear iVox and PHC (pseudo Hilbert curve) iVox as two alternative underlying data structures of our algorithm. Experimental results show that, using only a modern CPU, iVox achieves 1000-2000 Hz per scan for solid-state LiDARs and over 200 Hz for a 32-line spinning LiDAR, while maintaining the same level of accuracy.
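To make the abstract's description concrete for readers of this translation, the following is a minimal conceptual sketch of an iVox-style structure, assuming a single voxel resolution, a fixed 27-voxel search neighborhood, and no LRU cache, PHC ordering, or parallelism; all class and function names are illustrative and are not taken from the Faster-LIO codebase.

```cpp
// Conceptual sketch only (not the Faster-LIO implementation): voxels live in a
// hash map keyed by integer grid coordinates, insertion is incremental, and
// k-NN is approximate -- only the query voxel and its 26 neighbors are examined.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

struct Point { float x, y, z; };

struct VoxelKey {
    int64_t x, y, z;
    bool operator==(const VoxelKey& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};

struct VoxelKeyHash {
    size_t operator()(const VoxelKey& k) const {
        // Spatial hash in the spirit of Teschner et al. [38].
        return (static_cast<size_t>(k.x) * 73856093u) ^
               (static_cast<size_t>(k.y) * 19349669u) ^
               (static_cast<size_t>(k.z) * 83492791u);
    }
};

class IVoxSketch {
public:
    explicit IVoxSketch(float voxel_size) : inv_size_(1.0f / voxel_size) {}

    // Incremental insertion: each point is appended to the voxel it falls in.
    void AddPoints(const std::vector<Point>& pts) {
        for (const auto& p : pts) grid_[KeyOf(p)].push_back(p);
    }

    // Approximate k-NN: gather candidates from the query voxel and its
    // neighbors, then keep the k closest; distant voxels are never touched.
    std::vector<Point> KnnSearch(const Point& q, size_t k) const {
        std::vector<std::pair<float, Point>> cands;
        const VoxelKey c = KeyOf(q);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dz = -1; dz <= 1; ++dz) {
                    auto it = grid_.find({c.x + dx, c.y + dy, c.z + dz});
                    if (it == grid_.end()) continue;
                    for (const auto& p : it->second)
                        cands.emplace_back(Dist2(p, q), p);
                }
        std::sort(cands.begin(), cands.end(),
                  [](const std::pair<float, Point>& a,
                     const std::pair<float, Point>& b) { return a.first < b.first; });
        if (cands.size() > k) cands.resize(k);
        std::vector<Point> out;
        for (const auto& best : cands) out.push_back(best.second);
        return out;
    }

private:
    VoxelKey KeyOf(const Point& p) const {
        return {static_cast<int64_t>(std::floor(p.x * inv_size_)),
                static_cast<int64_t>(std::floor(p.y * inv_size_)),
                static_cast<int64_t>(std::floor(p.z * inv_size_))};
    }
    static float Dist2(const Point& a, const Point& b) {
        const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    float inv_size_;
    std::unordered_map<VoxelKey, std::vector<Point>, VoxelKeyHash> grid_;
};
```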

1 Introduction

2 Related Work

3 iVox: Incremental Sparse Voxels

A. The Data Structure of iVox

B. k-NN Search

C. Incremental Map Update

4 iVox-PHC

A. The Underlying Structure of iVox-PHC

B. k-NN Search in iVox-PHC

C. Complexity of iVox-PHC and iVox

5 Experiments

6 Conclusion

7 References

[1] C. Le Gentil, T. Vidal-Calleja, and S. Huang, “IN2LAAMA: Inertial LiDAR localization autocalibration and mapping,” IEEE Trans. Robot., vol. 37, no. 1, pp. 275–290, Feb. 2021.
[2] M. Yokozuka, K. Koide, S. Oishi, and A. Banno, “LiTAMIN: LiDAR-based tracking and mapping by stabilized ICP for geometry approximation with normal distributions,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2020, pp. 5143–5150.
[3] G. Xiang et al., “Fully automatic large-scale point cloud mapping for low-speed self-driving vehicles in unstructured environments,” in Proc. IEEE Intell. Veh. Symp., 2021, pp. 881–888.
[4] P. Wei, X. Wang, and Y. Guo, “3D-LIDAR feature based localization for autonomous vehicles,” in Proc. IEEE 16th Int. Conf. Automat. Sci. Eng., 2020, pp. 288–293.
[5] X. Zheng and J. Zhu, “Efficient LiDAR odometry for autonomous driving,” IEEE Robot. Automat. Lett., vol. 6, no. 4, pp. 8458–8465, Oct. 2021, arXiv:2104.10879.
[6] J. Zhang and S. Singh, “LOAM: LiDAR odometry and mapping in real-time,” in Robot.: Sci. Syst., vol. 2, no. 9, pp. 1–9, 2014.
[7] T. Shan and B. Englot, “LeGO-LOAM: Lightweight and ground-optimized LiDAR odometry and mapping on variable terrain,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2018, pp. 4758–4765.
[8] Z. Liu and F. Zhang, “BALM: Bundle adjustment for LiDAR mapping,” IEEE Robot. Automat. Lett., vol. 6, no. 2, pp. 3184–3191, Apr. 2021.
[9] Z. Liu, F. Zhang, and X. Hong, “Low-cost retina-like robotic LiDARs based on incommensurable scanning,” IEEE/ASME Trans. Mechatronics, vol. 27, no. 1, pp. 58–68, Feb. 2022.
[10] D. Wang, C. Watkins, and H. Xie, “MEMS mirrors for LiDAR: A review,” Micromachines, vol. 11, no. 5, p. 456, 2020.
[11] K. Li, M. Li, and U. D. Hanebeck, “Towards high-performance solid-state-LiDAR-inertial odometry and mapping,” IEEE Robot. Automat. Lett., vol. 6, no. 3, pp. 5167–5174, Jul. 2021.
[12] D. V. Nam and K. Gon-Woo, “Solid-state LiDAR based-SLAM: A concise review and application,” in Proc. IEEE Int. Conf. Big Data Smart Comput., 2021, pp. 302–305.
[13] P. Geneva, K. Eckenhoff, Y. Yang, and G. Huang, “Lips: LiDAR-inertial 3D plane SLAM,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2018, pp. 123–130.
[14] N. Rufus, U. K. R. Nair, A. S. B. Kumar, V. Madiraju, and K. M. Krishna, “SROM: Simple real-time odometry and mapping using LiDAR data for autonomous vehicles,” in Proc. IEEE Intell. Veh. Symp., 2020, pp. 1867–1872.
[15] W. Wang, J. Liu, C. Wang, B. Luo, and C. Zhang, “DV-LOAM: Direct visual LiDAR odometry and mapping,” Remote Sens., vol. 13, no. 16, 2021, Art. no. 3340.
[16] S. Hening, C. A. Ippolito, K. S. Krishnakumar, V. Stepanyan, and M. Teodorescu, “3D LiDAR SLAM integration with GPS/INS for UAVs in urban GPS-degraded environments,” in Proc. AIAA Inf. Syst.-AIAA Infotech, Aerosp., 2017, Art. no. 0448.
[17] C. Qian et al., “An integrated GNSS/INS/LiDAR-SLAM positioning method for highly accurate forest stem mapping,” Remote Sens., vol. 9, no. 1, p. 3, 2017.
[18] W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, “FAST-LIO2: Fast direct LiDAR-inertial odometry,” IEEE Trans. Robot., 2022.
[19] X. Huang, G. Mei, J. Zhang, and R. Abbas, “A comprehensive survey on point cloud registration,” 2021, arXiv:2103.02690.
[20] N. Beckmann, H.-P. Kriegel, R. Schneider, and B. Seeger, “The R*-tree: An efficient and robust access method for points and rectangles,” in Proc. ACM SIGMOD Int. Conf. Manage. Data, 1990, pp. 322–331.
[21] M. Dolatshah, A. Hadian, and B. Minaei-Bidgoli, “Ball*-tree: Efficient spatial indexing for constrained nearest-neighbor search in metric spaces,” 2015, arXiv:1511.00628.
[22] K. Koide, M. Yokozuka, S. Oishi, and A. Banno, “Voxelized GICP for fast and accurate 3D point cloud registration,” in Proc. IEEE Int. Conf. Robot. Automat., 2021, pp. 11054–11059.
[23] Y. Cai, W. Xu, and F. Zhang, “ikd-Tree: An incremental KD tree for robotic applications,” 2021, arXiv:2102.10808.
[24] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, “LIO-SAM: Tightly-coupled LiDAR inertial odometry via smoothing and mapping,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2020, pp. 5135–5142.
[25] X. Zuo, P. Geneva, W. Lee, Y. Liu, and G. Huang, “LIC-Fusion: LiDARinertial-camera odometry,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2019, pp. 5848–5854.
[26] W. Xu and F. Zhang, “FAST-LIO: A fast, robust LiDAR-inertial odometry package by tightly-coupled iterated Kalman filter,” IEEE Robot. Automat. Lett., vol. 6, no. 2, pp. 3317–3324, Apr. 2021.
[27] Y. Pan, P. Xiao, Y. He, Z. Shao, and Z. Li, “MULLS: Versatile LiDAR SLAM via multi-metric linear least square,” in Proc. IEEE Int. Conf. Robot. Automat., 2021, pp. 11633–11640, arXiv:2102.03771.
[28] S. Zhao, Z. Fang, H. Li, and S. Scherer, “A robust laser-inertial odometry and mapping method for large-scale highway environments,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2019, pp. 1285–1292.

[29] M. Yokozuka, K. Koide, S. Oishi, and A. Banno, “LiTAMIN2: Ultra light LiDAR-based SLAM using geometric approximation applied with KL-divergence,” in Proc. IEEE Int. Conf. Robot. Automat., 2021, pp. 11619–11625, arXiv:2103.00784.
[30] K. Koide, M. Yokozuka, S. Oishi, and A. Banno, “Globally consistent 3D LiDAR mapping with GPU-accelerated GICP matching cost factors,” IEEE Robot. Automat. Lett., vol. 6, no. 4, pp. 8591–8598, Oct. 2021.
[31] M. Karimi, M. Oelsch, O. Stengel, E. Babaians, and E. Steinbach, “LoLaSLAM: Low-latency LiDAR SLAM using continuous scan slicing,” IEEE Robot. Automat. Lett., vol. 6, no. 2, pp. 2248–2255, Apr. 2021.
[32] C. Qu, S. S. Shivakumar, W. Liu, and C. J. Taylor, “LLOL: Low-latency odometry for spinning LiDARs,” 2021, arXiv:2110.01725.
[33] Q. Li et al., “LO-Net: Deep real-time LiDAR odometry,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 8473–8482.
[34] C. Choy, W. Dong, and V. Koltun, “Deep global registration,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 2514–2523.
[35] M. Muglikar, Z. Zhang, and D. Scaramuzza, “Voxel map for visual SLAM,” in Proc. IEEE Int. Conf. Robot. Automat., 2020, pp. 4181–4187.
[36] M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, “Real-time 3D reconstruction at scale using voxel hashing,” ACM Trans. Graph., vol. 32, no. 6, pp. 1–11, 2013.
[37] K. Daun, S. Kohlbrecher, J. Sturm, and O. von Stryk, “Large scale 2D laser SLAM using truncated signed distance functions,” in Proc. IEEE Int. Symp. Saf., Secur., Rescue Robot., 2019, pp. 222–228.
[38] M. Teschner, B. Heidelberger, M. Müller, D. Pomerantes, and M. H. Gross, “Optimized spatial hashing for collision detection of deformable objects,” in Proc. Vis., Model., Visual. Conf., 2003, vol. 3, pp. 47–54.
[39] H.-L. Chen and Y.-I. Chang, “Neighbor-finding based on space-filling curves,” Inf. Syst., vol. 30, no. 3, pp. 205–226, 2005.
[40] H.-L. Chen and Y.-I. Chang, “All-nearest-neighbors finding based on the Hilbert curve,” Expert Syst. Appl., vol. 38, no. 6, pp. 7462–7475, 2011.
[41] N. Carlevaris-Bianco, A. K. Ushani, and R. M. Eustice, “University of Michigan North Campus long-term vision and LiDAR dataset,” Int. J. Robot. Res., vol. 35, no. 9, pp. 1023–1035, 2015.
[42] W. Wen et al., “UrbanLoco: A full sensor suite dataset for mapping and localization in urban scenes,” in Proc. IEEE Int. Conf. Robot. Automat., 2020, pp. 2310–2316.
[43] Z. Yan, L. Sun, T. Krajnik, and Y. Ruichek, “EU long-term dataset with multiple sensors for autonomous driving,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2020, pp. 10697–10704.
[44] R. B. Rusu and S. Cousins, “3D is here: Point cloud library (PCL),” in Proc. IEEE Int. Conf. Robot. Automat., 2011, pp. 1–4.
[45] J. Johnson, M. Douze, and H. Jégou, “Billion-scale similarity search with GPUs,” IEEE Trans. Big Data, vol. 7, no. 3, pp. 535–547, Jul. 2017, arXiv:1702.08734.
[46] L. Boytsov, D. Novak, Y. Malkov, and E. Nyberg, “Off the beaten path: Let’s replace term-based retrieval with k-NN search,” in Proc. 25th ACM Int. Conf. Inf. Knowl. Manage., Indianapolis, IN, USA, 2016, pp. 1099–1108. [Online]. Available: https://doi.org/10.1145/2983323.2983815
[47] J. L. Blanco and P. K. Rai, “nanoflann: A C++ header-only fork of FLANN, a library for nearest neighbor (NN) search with KD-trees,” 2014. [Online]. Available: https://github.com/jlblancoc/nanoflann

### Steps for Implementing Faster-LIO with a RoboSense LiDAR

Faster-LIO is an efficient **LiDAR-inertial odometry (LIO)** algorithm that fuses LiDAR and IMU (inertial measurement unit) data for real-time localization and mapping. RoboSense LiDARs (e.g., the RS-LiDAR series), with their high-accuracy point clouds, are often used in such systems. The core integration steps are as follows:

---

#### **1. Hardware Setup and Synchronization**

- **Hardware selection**
  Use a RoboSense LiDAR (e.g., RS-LiDAR-16/32) together with an IMU (e.g., the Xsens MTi series); both should support hardware time synchronization (e.g., via the PTP protocol).
- **Time synchronization**
  LiDAR and IMU timestamps must be strictly aligned to avoid fusion errors. If the LiDAR supports external triggering, microsecond-level synchronization can be achieved with a hardware signal (e.g., GPS PPS).

---

#### **2. Data Interfaces and Drivers**

- **Acquiring sensor data**
  - **LiDAR data**: read the point cloud in real time through the official RoboSense SDK or ROS driver (e.g., `rslidar_sdk`), delivered as `sensor_msgs/PointCloud2` messages.
  - **IMU data**: read angular velocity, acceleration, and timestamps over a serial port or through a ROS node.
- **Data rates**
  - The LiDAR typically runs at 10-20 Hz and the IMU at 100-1000 Hz, so the algorithm must process the multi-rate data asynchronously.

---

#### **3. Sensor Calibration**

- **Intrinsic calibration**
  - **IMU intrinsics**: calibrate biases and noise parameters (using Allan-variance analysis or tools such as Kalibr).
  - **LiDAR intrinsics**: correct point-cloud distortion (e.g., temperature-drift compensation).
- **Extrinsic calibration**
  Calibrate the relative pose between LiDAR and IMU (rotation matrix $R$ and translation vector $t$), which is critical for sensor fusion. Common approaches:
  - **Manual calibration** against known targets (e.g., a checkerboard).
  - **Automatic calibration**, letting the LIO algorithm refine the extrinsics online (e.g., the `LI_Init` module used with FAST-LIO2).

---

#### **4. Algorithm Integration and Parameter Configuration**

- **Faster-LIO's core optimizations**
  - **Efficient data structure**: Faster-LIO replaces the ikd-Tree used by FAST-LIO2 with the **iVox** incremental voxel map described in the paper above, which supports incremental insertion and parallel approximate k-NN search via hashed voxel lookup.
  - **Tightly-coupled fusion**: an error-state Kalman filter (ESKF) fuses LiDAR observations with IMU predictions; the prediction step is
    $$\hat{x}_{k+1} = f(\hat{x}_k, u_k) + w_k$$
    where $u_k$ is the IMU input and $w_k$ is process noise.
- **Parameter adjustments for RoboSense LiDARs**
  - `lidar_type`: set to the RoboSense model in use (e.g., RS16).
  - `point_filter_num`: the downsampling rate, tuned to the LiDAR's high point density (e.g., keep every 2nd point).
  - `max_iteration`: the number of filter iterations, trading convergence speed against accuracy.

---

#### **5. Real-Time Processing and Map Update**

- **Motion compensation (deskew)**
  Points captured during one LiDAR sweep are distorted by the sensor's own motion; IMU data is used to undistort them (see the sketch after this section):
  $$p_{\text{corrected}} = R_{\text{imu}}^{t} \cdot p_{\text{raw}} + t_{\text{imu}}^{t}$$
  where $R_{\text{imu}}^{t}$ and $t_{\text{imu}}^{t}$ are the IMU pose at time $t$.
- **Map update**
  Nearest-neighbor points are retrieved quickly from the incremental voxel map to match the current scan against the map and update the pose estimate.

---
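To make the deskew step above concrete, here is a hedged sketch assuming each point carries a relative timestamp in [0, 1] within the sweep and that the IMU-propagated poses at sweep start and end are available; the types and function names are illustrative (using Eigen) and are not Faster-LIO's actual code.

```cpp
// Hedged sketch of per-point motion compensation; names are illustrative.
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <vector>

struct TimedPoint {
    Eigen::Vector3f p;  // raw point in the LiDAR frame at its capture time
    float rel_time;     // in [0, 1]: 0 = sweep start, 1 = sweep end
};

// Transform every point into the LiDAR frame at sweep end by interpolating
// between the pose (R0, t0) at sweep start and (R1, t1) at sweep end.
void Deskew(std::vector<TimedPoint>& sweep,
            const Eigen::Quaternionf& R0, const Eigen::Vector3f& t0,
            const Eigen::Quaternionf& R1, const Eigen::Vector3f& t1) {
    const Eigen::Quaternionf R1_inv = R1.conjugate();  // inverse of a unit quaternion
    for (auto& pt : sweep) {
        // Pose at the point's capture time: slerp the rotation, lerp the translation.
        const Eigen::Quaternionf Rt = R0.slerp(pt.rel_time, R1);
        const Eigen::Vector3f tt = (1.0f - pt.rel_time) * t0 + pt.rel_time * t1;
        // p_corrected = R_imu^t * p_raw + t_imu^t, then re-expressed at sweep end.
        const Eigen::Vector3f p_world = Rt * pt.p + tt;
        pt.p = R1_inv * (p_world - t1);
    }
}
```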
#### **6. Performance Tuning and Testing**

- **Resource allocation**
  Adjust the number of threads (e.g., `NUM_MATCHING_THREADS`) to match the CPU.
- **Robustness to dynamic objects**
  Enable `remove_dynamic_objects` to filter out points on moving objects.
- **Accuracy evaluation**
  Evaluate on benchmark datasets (e.g., KITTI) or compare field SLAM trajectories against ground truth (e.g., RTK-GPS), computing the absolute trajectory error (ATE).

---

### Key Challenges and Remedies

- **Challenge 1: time-synchronization offsets.**
  **Remedy**: use hardware synchronization or software interpolation (e.g., B-spline fitting of the IMU data).
- **Challenge 2: real-time processing of large-scale point clouds.**
  **Remedy**: enable GPU acceleration (e.g., CUDA) or downsample further.
- **Challenge 3: mapping in dynamic environments.**
  **Remedy**: combine with semantic segmentation (e.g., RangeNet++) to separate static from dynamic points.

---

### Example Code Snippet (ROS Node)

The snippet below sketches how such a node is typically wired up; the class `FAST_LIO::FastLio` and its methods are illustrative placeholders rather than the actual Faster-LIO API.

```cpp
// Illustrative sketch only: FAST_LIO::FastLio and its methods are placeholders.
// Initialize the LIO object and bind it to ROS.
std::shared_ptr<FAST_LIO::FastLio> lio_obj;
lio_obj.reset(new FAST_LIO::FastLio());
lio_obj->set_ros_param(nh);          // read ROS parameters
lio_obj->init(imu_sub, cloud_sub);   // bind the IMU and LiDAR subscribers

// Main processing loop.
while (ros::ok()) {
    lio_obj->run();        // run the ESKF predict-update step
    publish_odometry();    // publish odometry
    publish_map();         // publish the incremental voxel map
}
```

---

With these steps, a RoboSense LiDAR can be integrated efficiently into a Faster-LIO system for real-time applications such as autonomous driving and robot navigation. In actual deployments, parameters should be tuned carefully to the hardware, and the system's robustness should be validated in complex environments.
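As a concrete complement to the ATE evaluation mentioned in step 6, the following is a minimal sketch, assuming the estimated and ground-truth positions are already time-associated and expressed in a common frame (the usual trajectory alignment step, e.g. Umeyama, is omitted); the function name is illustrative.

```cpp
// Minimal ATE (RMSE of translational error) sketch; assumes pre-aligned,
// time-associated trajectories. The function name is illustrative.
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <vector>

double ComputeATE(const std::vector<Eigen::Vector3d>& estimated,
                  const std::vector<Eigen::Vector3d>& ground_truth) {
    const size_t n = std::min(estimated.size(), ground_truth.size());
    if (n == 0) return 0.0;
    double sum_sq = 0.0;
    for (size_t i = 0; i < n; ++i)
        sum_sq += (estimated[i] - ground_truth[i]).squaredNorm();
    return std::sqrt(sum_sq / static_cast<double>(n));
}
```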