Joint project MuSEAS

Boat on the water with experimental vehicle on the roof. © TITUS Research GmbH

Multi-sensor dataset with annotations for ship automation

01/03/2023

Key Info

Basic Information

Duration:
01.03.2023 to 31.08.2024
Acronym:
MuSEAS
Group:
Navigation
Funding:
BMDV

Contact

Phone (work): +49 241 80-28033


Motivation

In ship automation, there is a high demand for shipping-specific measurement datasets. However, data availability for ship automation lags far behind that for road applications. In particular, the development of methods for robust estimation of the ship's state (especially position, orientation, and speed) and of methods for reliable detection and classification of objects from optical sensor data depends on such datasets, including ground-truth annotations.

 

Project Goals and Methods

Funded by the Federal Ministry for Digital and Transport (BMDV)

The goal of the project is to create and publish measurement datasets for inland navigation with synchronized data from common ship sensors. Both cleaned, anonymized raw data and reference solutions for vessel localization and environment detection will be made available. The reference solutions are computed with newly developed, publicly released methods that can serve as baseline implementations. A sketch of one possible sample layout is shown below.
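One time-synchronized sample in such a dataset could, for instance, be organized as in the following sketch. The record layout, field names, and units are purely illustrative assumptions, not the published schema:

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class ShipSensorFrame:
    """One time-synchronized sample; all names and units are illustrative."""
    t: float                      # common timestamp [s]
    gnss_llh: np.ndarray          # latitude [deg], longitude [deg], height [m]
    imu_gyro: np.ndarray          # angular rates [rad/s], body frame
    imu_accel: np.ndarray         # specific forces [m/s^2], body frame
    lidar_points: np.ndarray      # (N, 3) point cloud [m], sensor frame
    image: np.ndarray             # (H, W, 3) anonymized camera frame
    point_labels: Optional[np.ndarray] = None  # (N,) class IDs in annotated splits
```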

 

Innovations and Perspectives

In the present project, a robust navigation solution is implemented using online factor graph optimization. The method was developed at RWTH Aachen University and has already been validated for inland navigation applications with a significantly reduced sensor setup [1]. This state estimation is complemented by incorporating additional sensor data, in particular LiDAR and camera data. The additional data support the state estimate especially when other sensor data are unavailable, e.g., when GNSS signals are disturbed at bridge crossings (see the factor-graph sketch below).

The automatic annotation process follows an approach first introduced in the automotive domain, LATTE [2]. In this approach, the low-resolution point cloud is projected onto the camera image. Using the object classifier from the predecessor project DataSOW, the image regions are semantically labeled, and a network is generated that projects these labels back into the point cloud. A clustering algorithm (e.g., [3]) can then be used to segment the point cloud, which yields, for example, the distance to each detected object (see the second sketch below).
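The online factor-graph fusion described above can be illustrated with the open-source GTSAM library. The following is a minimal 2D sketch with placeholder values, not the project's continuous-time GNSS/INS implementation from [1]: relative motion enters as between-factors, GNSS fixes as loose pose priors, and iSAM2 updates the trajectory estimate incrementally even while GNSS factors are missing:

```python
# Minimal 2D illustration of online factor-graph fusion with GTSAM's iSAM2.
# All numbers are placeholders; this is not the project's implementation.
import numpy as np
import gtsam

isam = gtsam.ISAM2()
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.05]))
# Loose heading sigma: a GNSS fix observes position only (common approximation).
gnss_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1.0, 1.0, 1e3]))

graph, initial = gtsam.NonlinearFactorGraph(), gtsam.Values()
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), gnss_noise))
initial.insert(0, gtsam.Pose2(0, 0, 0))
isam.update(graph, initial)  # first incremental optimization step

for k in range(1, 10):
    graph, initial = gtsam.NonlinearFactorGraph(), gtsam.Values()
    # Relative motion from odometry / integrated INS (placeholder: 1 m ahead).
    graph.add(gtsam.BetweenFactorPose2(k - 1, k, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
    if k % 3 != 0:  # emulate GNSS outages, e.g. while passing under a bridge
        graph.add(gtsam.PriorFactorPose2(k, gtsam.Pose2(float(k), 0, 0), gnss_noise))
    initial.insert(k, gtsam.Pose2(float(k), 0, 0))
    isam.update(graph, initial)  # online update, also during the outage

print(isam.calculateEstimate().atPose2(9))
```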

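The label-transfer and clustering step of the annotation pipeline can be sketched as follows, assuming a calibrated camera (intrinsics K, extrinsics T_cam_lidar) and a per-pixel class image from the classifier. All names and parameters are illustrative, and DBSCAN merely stands in for a clustering algorithm in the spirit of [3]:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def label_and_cluster(points, semantic_map, K, T_cam_lidar, eps=1.5, min_pts=10):
    """Transfer per-pixel labels to LiDAR points, then cluster labeled points.

    points:       (N, 3) LiDAR points in the LiDAR frame [m]
    semantic_map: (H, W) integer class image (0 = background; layout assumed)
    K:            (3, 3) camera intrinsics
    T_cam_lidar:  (4, 4) rigid transform LiDAR -> camera
    """
    h, w = semantic_map.shape
    # LiDAR -> camera frame via homogeneous coordinates.
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3].T
    front = pts_cam[:, 2] > 0.1                     # keep points in front of the camera

    # Pinhole projection onto the image plane.
    uv = (K @ pts_cam[front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)

    # Project the image labels back into the point cloud.
    labels = np.zeros(len(points), dtype=int)
    hit = np.flatnonzero(front)[inside]
    labels[hit] = semantic_map[uv[inside, 1], uv[inside, 0]]

    # Segment labeled points per class and report the range to each object.
    objects = []
    for cls in np.unique(labels[labels > 0]):
        pts = points[labels == cls]
        cluster_ids = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(pts)
        for cid in set(cluster_ids) - {-1}:         # -1 marks noise points
            cluster = pts[cluster_ids == cid]
            objects.append((int(cls), float(np.linalg.norm(cluster, axis=1).min())))
    return labels, objects                          # per-point labels, (class, distance)
```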
  1. H. Zhang, X. Xia, M. Nitsch, D. Abel: "Continuous-Time Factor Graph Optimization for Trajectory Smoothness of GNSS/INS Navigation in Temporarily GNSS-Denied Environments", IEEE Robotics and Automation Letters (RA-L), pp. 9115-9122, 2022.
  2. B. Wang, V. Wu, B. Wu, K. Keutzer: "LATTE: Accelerating LiDAR Point Cloud Annotation via Sensor Fusion, One-Click Annotation, and Tracking", IEEE Intelligent Transportation Systems Conference (ITSC), pp. 265-272, 2019.
  3. F. Wirth, J. Quehl, J. M. Ota, C. Stiller: "PointAtMe: Efficient 3D Point Cloud Labeling in Virtual Reality", IEEE Intelligent Vehicles Symposium (IV), pp. 1693-1698, 2019.
 
Project partner