Abstract
Autonomously controlling the position of remotely operated underwater vehicles (ROVs) is crucial for a wide range of underwater engineering applications, such as the inspection and maintenance of underwater industrial structures. Consequently, vision-based underwater robot navigation and control has recently gained increasing attention, as it must counter the numerous challenges of underwater conditions: lighting variability, turbidity, camera image distortions (due to bubbles), and ROV positional disturbances (due to underwater currents). In this article, we propose, to the best of the authors' knowledge, the first rigorous unified benchmark of more than seven machine learning-based one-shot object tracking algorithms for vision-based position locking of ROV platforms. We propose a position-locking system that processes images of an object of interest in front of which the ROV must be kept stable, and then uses the output of each object tracking algorithm to automatically correct the position of the ROV against external disturbances. Numerous real-world experiments are conducted with a BlueROV2 platform in an indoor pool to clearly demonstrate the strengths and weaknesses of each tracking approach. Finally, to help alleviate the scarcity of underwater ROV data, we release our acquired database as open source in the hope of benefiting future research.
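The abstract describes using a tracker's output to correct the ROV's position against disturbances. The sketch below illustrates one common way such a correction can be computed from a tracked bounding box (proportional errors on the image-plane offset and apparent object size); all names, gains, and the control scheme here are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a tracker-driven position correction, as one
# might pair with a one-shot tracker's bounding-box output. The gain
# kp and the use of apparent area as a distance proxy are assumptions.

def position_correction(bbox, frame_w, frame_h, ref_area, kp=0.5):
    """Proportional correction from a tracked bounding box.

    bbox: (x, y, w, h) of the tracked object in pixels.
    ref_area: bounding-box area (pixels^2) when the ROV is at the
        desired standoff distance.
    Returns (yaw, heave, surge) commands, each clamped to [-1, 1].
    """
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    # Offset of the object from the image centre, normalised to [-1, 1].
    err_x = (cx - frame_w / 2.0) / (frame_w / 2.0)
    err_y = (cy - frame_h / 2.0) / (frame_h / 2.0)
    # Apparent-size error stands in for distance: a shrinking box
    # means the ROV has drifted away from the object.
    err_z = (ref_area - w * h) / ref_area

    def clamp(v):
        return max(-1.0, min(1.0, v))

    return clamp(kp * err_x), clamp(kp * err_y), clamp(kp * err_z)
```

A centred box of the reference size yields zero commands; an object drifting left of centre produces a negative yaw command, steering the ROV back toward it.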
| Original language | English |
|---|---|
| Pages (from-to) | 2770-2781 |
| Number of pages | 12 |
| Journal | IEEE Journal of Oceanic Engineering |
| Volume | 50 |
| Issue number | 4 |
| Early online date | Aug 2025 |
| DOIs | |
| Publication status | Published - 2025 |
Keywords
- Object tracking
- Position locking
- Underwater robot control
- One-shot machine learning (ML)-based detection