Abstract:
Environmental perception is one of the key technologies for the intelligent navigation of Unmanned Surface Vehicles (USVs). It currently relies on LiDAR, which acquires the spatial position of objects, and on optical devices, which provide precise object category information. To obtain multi-dimensional perception information about objects in complex maritime environments, we propose a LiDAR-camera fusion perception method for USVs, which fuses PR-YOLOv8 visual detection results with the LiDAR 3D point cloud to achieve high-precision recognition and spatial localization of maritime objects. First, a calibration board is used for the joint calibration of the LiDAR and the camera, constructing the projection relationship between the two sensors. Second, in the LiDAR branch, a clustering method fits the point cloud to extract object feature information, which is then projected onto the image; in the camera branch, the PR-YOLOv8 detection model, built on YOLOv8, produces object detection bounding boxes with high recognition accuracy. Finally, the detection results of the two branches are combined: a new cost construction factor, DSIoU (Distance-Scale Intersection over Union), is used to associate objects, and Bayesian theory is applied to fuse the multi-source perception information. The feasibility and validity of the proposed method were verified in ship perception experiments on the Qingdao inland sea and an inland lake.
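The projection relationship established by joint calibration can be sketched with a standard pinhole camera model. The extrinsics `R`, `t` and intrinsic matrix `K` below are illustrative placeholders, not the paper's calibrated values; the actual calibration procedure and parameters are described in the body of the paper.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    R (3x3) and t (3,) are assumed LiDAR-to-camera extrinsics from
    joint calibration; K (3x3) is the camera intrinsic matrix.
    Points behind the camera (z <= 0) are discarded.
    """
    pts_cam = points_lidar @ R.T + t      # transform into the camera frame
    in_front = pts_cam[:, 2] > 0          # keep only points in front of the camera
    pts_cam = pts_cam[in_front]
    pix = pts_cam @ K.T                   # apply intrinsics
    pix = pix[:, :2] / pix[:, 2:3]        # perspective divide -> (u, v)
    return pix, in_front

# Example with identity extrinsics and a hypothetical intrinsic matrix:
R = np.eye(3)
t = np.zeros(3)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 10.0],   # 10 m ahead -> image center
                [1.0, 0.0, -5.0]])  # behind the camera -> discarded
pix, mask = project_lidar_to_image(pts, R, t, K)
```

With these placeholder values the forward point lands at the principal point (320, 240), and the point behind the camera is filtered out.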