Large-Scale Place Recognition Based on Camera-LiDAR Fused Descriptor
Blog Article
In the field of autonomous driving, vehicles are equipped with a variety of sensors, including cameras and LiDARs. However, cameras suffer from illumination changes and occlusion, while LiDARs are affected by motion distortion, degenerate environments, and limited ranging distance. Fusing the information from these two sensors is therefore well worth exploring. In this paper, we propose a fusion network that robustly captures both image and point cloud descriptors to solve the place recognition problem. Our contributions can be summarized as follows: (1) applying a trimmed strategy to point cloud global feature aggregation to improve recognition performance, (2) building a compact fusion framework that captures robust representations of both the image and the 3D point cloud, and (3) learning a proper metric to measure the similarity of the fused global features.
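To make contributions (1) and (2) more concrete, below is a minimal PyTorch-style sketch of how a trimmed point cloud aggregation and a camera-LiDAR fused descriptor might be assembled. The backbone networks, feature dimensions, and keep_ratio are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TrimmedAggregation(nn.Module):
    """One possible trimmed aggregation: discard the weakest per-point
    features before pooling, so outlier points contribute less to the
    global point cloud descriptor (keep_ratio is an assumed parameter)."""
    def __init__(self, keep_ratio=0.8):
        super().__init__()
        self.keep_ratio = keep_ratio

    def forward(self, point_feats):              # (B, N, C) per-point features
        scores = point_feats.norm(dim=-1)        # activation strength per point
        k = max(1, int(self.keep_ratio * point_feats.shape[1]))
        idx = scores.topk(k, dim=1).indices      # keep the strongest k points
        kept = torch.gather(
            point_feats, 1,
            idx.unsqueeze(-1).expand(-1, -1, point_feats.shape[-1]))
        return kept.max(dim=1).values            # max-pool the trimmed set -> (B, C)


class FusedDescriptor(nn.Module):
    """Concatenate an image descriptor with the trimmed point cloud
    descriptor and project to a compact, unit-norm global descriptor."""
    def __init__(self, img_backbone, pc_backbone,
                 img_dim=512, pc_dim=256, out_dim=256):
        super().__init__()
        self.img_backbone = img_backbone         # assumed: CNN returning (B, img_dim)
        self.pc_backbone = pc_backbone           # assumed: point net returning (B, N, pc_dim)
        self.trim = TrimmedAggregation()
        self.proj = nn.Linear(img_dim + pc_dim, out_dim)

    def forward(self, image, points):
        f_img = self.img_backbone(image)                 # (B, img_dim)
        f_pc = self.trim(self.pc_backbone(points))       # (B, pc_dim)
        fused = torch.cat([f_img, f_pc], dim=-1)
        return F.normalize(self.proj(fused), dim=-1)     # fused global descriptor
```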
Experiments on the KITTI and KAIST datasets show that the proposed fused descriptor is more robust and discriminative than single-sensor descriptors.
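Contribution (3) concerns learning a similarity metric over the fused descriptors, which is what makes retrieval-style place recognition possible. A hypothetical metric-learning and retrieval sketch with a standard triplet margin loss is shown below; the actual loss, mining strategy, and margin used in the paper may differ.

```python
import torch
import torch.nn.functional as F


def training_step(model, optimizer, anchor, positive, negative, margin=0.3):
    """One assumed metric-learning step: pull descriptors of the same place
    together and push descriptors of different places apart."""
    d_a = model(*anchor)      # fused descriptor of the query frame
    d_p = model(*positive)    # same place, different traversal
    d_n = model(*negative)    # different place
    loss = F.triplet_margin_loss(d_a, d_p, d_n, margin=margin)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def retrieve(query_desc, database_descs):
    """Place recognition as nearest-neighbour search: with unit-norm
    descriptors, cosine similarity reduces to a dot product."""
    sims = database_descs @ query_desc          # (M,) similarity scores
    return torch.argmax(sims).item()            # index of the best-matching place
```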