Abstract:
Environmental perception is a pivotal technology for the intelligent navigation of Unmanned Surface Vehicles (USVs). Current systems rely primarily on LiDAR for spatial positioning and on optical sensors for precise object classification. To obtain multi-dimensional perception information about objects in complex maritime environments, we propose a LiDAR-camera fusion perception method for USVs that fuses PR-YOLOv8 visual detection results with LiDAR 3D point clouds to achieve high-precision recognition and spatial localization of maritime objects. First, a calibration board is used for the joint calibration of the LiDAR and camera, establishing the projection relationship between the two sensors. Second, in the LiDAR branch, a clustering method fits the point cloud to extract object feature information, which is then projected onto the image; in the camera branch, the PR-YOLOv8 detection model, built on YOLOv8, produces object-detection bounding boxes with high recognition accuracy. Finally, the detection results of the two branches are associated using a new cost-construction factor, DSIoU (Distance-Scale Intersection over Union), and the multi-source perception information is fused on the basis of Bayesian theory. The feasibility and validity of the proposed method were verified through ship perception experiments conducted on Qingdao's inland sea and a lake.
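The abstract does not define DSIoU. As an illustrative sketch only, the following assumes a DIoU-style association cost augmented with a scale-mismatch penalty; the function name, the weighting of the terms, and the exact scale penalty are assumptions for illustration, not the authors' definition:

```python
def dsiou(box_a, box_b):
    """Hypothetical DSIoU-style association score between two axis-aligned
    boxes given as (x1, y1, x2, y2): IoU minus a normalized center-distance
    penalty (as in DIoU) minus a scale-mismatch penalty.
    Illustrative guess only, not the paper's definition."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter + 1e-9)
    # Normalized squared distance between box centers (DIoU penalty)
    cx_a, cy_a = (ax1 + ax2) / 2.0, (ay1 + ay2) / 2.0
    cx_b, cy_b = (bx1 + bx2) / 2.0, (by1 + by2) / 2.0
    d2 = (cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2
    # Squared diagonal of the smallest enclosing box
    ex1, ey1 = min(ax1, bx1), min(ay1, by1)
    ex2, ey2 = max(ax2, bx2), max(ay2, by2)
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9
    # Assumed scale term: relative area mismatch between the two boxes
    scale = abs(area_a - area_b) / max(area_a, area_b)
    return iou - d2 / c2 - scale
```

Under this reading, a projected LiDAR cluster box and a PR-YOLOv8 box would be matched when their DSIoU score is maximal (near 1 for identical boxes, negative for distant ones), before the Bayesian fusion step.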