Speakers shown in alphabetical order.
Download presentation slides HERE.


Andreas Birk

Prof. Andreas Birk (TBC)

Jacobs University Bremen

Underwater Perception using Continuous System Integration & Human in the Loop

Deep-sea robot operations and diver missions demand a high level of safety, efficiency and reliability. As a consequence, measures have to be implemented within the development stage to extensively evaluate and benchmark system components ranging from data acquisition, perception and localization to control. This session will describe an approach based on high-fidelity simulation that embeds spatial and environmental conditions from recorded real-world data. This simulation-in-the-loop (SIL) methodology mitigates the discrepancy between simulation and real-world conditions, e.g. regarding sensor noise. As a result, the platform makes it possible to thoroughly investigate and benchmark the behavior of system components concurrently under real and simulated conditions. In addition, a system that integrates the diver (human) in the loop for assistance in exploratory missions and safety checking is introduced.
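The concurrent real-vs-simulated benchmarking described above can be sketched as running one component on paired data streams and measuring the per-frame discrepancy. This is an illustrative sketch, not the actual SIL platform; the time-of-flight range estimator and the sample values are hypothetical.

```python
def benchmark_component(component, real_stream, sim_stream):
    """Run the same perception component on recorded real data and on
    simulated data, and report the output discrepancy per frame."""
    return [abs(component(r) - component(s))
            for r, s in zip(real_stream, sim_stream)]

# Hypothetical range estimator evaluated on paired real/simulated echo delays
estimator = lambda delay: 1500.0 * delay / 2.0   # two-way travel time -> range (m)
real = [0.010, 0.012, 0.014]
sim  = [0.010, 0.013, 0.014]
gaps = benchmark_component(estimator, real, sim)
```

A large per-frame gap flags a simulation fidelity problem (e.g. an unmodeled sensor noise source) before the component is trusted in the field.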

Fabio Bonsignorio

Prof. Fabio Bonsignorio

CEO of Heron Robots

Underwater multiagent environmental low-frequency sensing

The environmental chemical conditions - in terms of substances with positive and negative impacts on the sustainability of local ecological networks - of confined environments with weak currents change over time in probabilistically predictable ways. We present and discuss a reproducible and measurable approach to the deployment of a network of underwater gliders, ROVs, and fixed or mobile sensors placed on the seafloor. The approach exploits Voronoi maps, multi-sensor fusion of different kinds of chemical and non-chemical sensors, and a multi-agent Belief Space Planning methodology.
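A minimal sketch of the Voronoi-map idea, assuming each sensing agent is made responsible for the survey points nearest to it (a discrete Voronoi partition); the grid and agent positions are hypothetical, not from the talk.

```python
import math

def voronoi_partition(grid_points, agent_positions):
    """Assign each survey point to its nearest agent (discrete Voronoi cells)."""
    assignment = {}
    for p in grid_points:
        nearest = min(range(len(agent_positions)),
                      key=lambda i: math.dist(p, agent_positions[i]))
        assignment.setdefault(nearest, []).append(p)
    return assignment

# Hypothetical 10x10 survey grid and three agents (glider, ROV, fixed sensor)
grid = [(x, y) for x in range(10) for y in range(10)]
agents = [(1.0, 1.0), (8.0, 2.0), (5.0, 8.0)]
cells = voronoi_partition(grid, agents)
```

Each agent then samples only within its own cell, which keeps coverage non-overlapping as agents are added or moved.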

Yogesh A. Girdhar

Dr. Yogesh A. Girdhar

Woods Hole Oceanographic Institution

Co-robotic exploration in underwater environments

Vision-based exploration of extreme environments with communication bottlenecks, such as underwater, is challenging due to the lack of high-resolution mission state information available to the human operator. One approach to enabling co-robotic exploration in such conditions is to summarize the visual data collected by the robot using a semantic scene map. Our work explores the use of an unsupervised approach to learning a generative model of the sensor data that can grow with the size and complexity of the observed input and produce compact scene maps that can be used for communicating the current mission state. This talk will discuss the generative model along with techniques to enable online, life-long learning of these models, and extensions of the work to multi-robot settings.
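One simple way to illustrate a compact summary that grows with the complexity of the input (this is a toy novelty filter, not the speaker's generative model) is to keep an observation only when it differs sufficiently from everything already in the summary; the feature vectors and threshold below are hypothetical.

```python
import math

def update_summary(summary, obs, threshold):
    """Keep an observation only if it is sufficiently novel
    relative to what the summary already contains."""
    if not summary or min(math.dist(obs, s) for s in summary) > threshold:
        summary.append(obs)
    return summary

# Hypothetical stream of 2D image features from three distinct scene regions
stream = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (0.0, 9.0)]
summary = []
for feature in stream:
    update_summary(summary, feature, threshold=1.0)
# summary now holds one representative per distinct region
```

Only the summary, not the full image stream, would need to cross the low-bandwidth link to the operator.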

Michael Kaess

Prof. Michael Kaess

Carnegie Mellon University Robot Perception Lab

Localization and mapping with imaging sonar

Localization and mapping with imaging sonar, or forward-looking sonar, has so far been mostly limited to environments with locally planar surfaces, such as the seafloor and the central hull sections of large ships. This limitation arises from imaging sonar not providing full 3D measurements: while range and bearing angle are directly available, any surfaces along an elevation arc given by the aperture of the sonar project to the same image point. The ability to recover 3D geometry from multiple overlapping imaging sonar measurements is highly desirable. Unlike other sonar types, such as profiling or bathymetric sonar, imaging sonar covers a much larger volume of water in a single measurement, which is advantageous because of the relatively low sound speed in water. I will present our work on localizing the sonar and recovering 3D geometry of point features, which is inspired by the problem of structure-from-motion in computer vision. I will discuss our recent non-parametric formulation to address the non-Gaussian nature of distributions along the elevation arc. Finally, I will outline our ongoing exploratory work on dense surface reconstruction from imaging sonar and present some initial results.
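The elevation ambiguity described above can be illustrated numerically: an imaging sonar measures range and bearing, so any two points on the same elevation arc produce identical image coordinates. A minimal sketch (the measurement model is the standard range/bearing projection; the point coordinates are made up):

```python
import math

def sonar_measurement(x, y, z):
    """Imaging sonar measures range and bearing; elevation is lost."""
    r = math.sqrt(x * x + y * y + z * z)
    bearing = math.atan2(y, x)
    return r, bearing

# Two points on the same elevation arc (same range and bearing, different
# elevation) map to the identical sonar image point.
p1 = sonar_measurement(3.0, 4.0, 0.0)            # range 5, zero elevation
elev = math.radians(10.0)
p2 = sonar_measurement(3.0 * math.cos(elev),
                       4.0 * math.cos(elev),
                       5.0 * math.sin(elev))      # same arc, 10 deg elevation
```

Recovering the lost elevation therefore requires multiple overlapping views from different poses, analogous to structure-from-motion.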

Ayoung Kim

Dr. Ayoung Kim

Korea Advanced Institute of Science and Technology (KAIST)

Optical image visibility enhancement for turbid water SLAM

As humans rely heavily on optical imagery for perception, obtaining optical images underwater benefits many underwater applications (e.g., visual SLAM, object detection, and remote operation). However, optical images often deteriorate with water turbidity, which limits their wide application and makes image enhancement a necessary preprocessing step. In this talk, we present three categories of optical image visibility enhancement, based respectively on 1) the image projection model, 2) image processing and 3) deep learning. Experimental validations will be provided, such as underwater hull inspection and operation video from severely turbid water. We would like to share some interesting research findings that we learned while applying each method, including the advantages and disadvantages of each approach.
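As an illustration of the model-based category, a simplified single-pixel underwater image formation model (attenuation plus backscatter) can be inverted to restore the scene radiance. This is a sketch under assumed, hypothetical coefficients, not the speaker's method:

```python
import math

def restore_pixel(observed, depth, beta, backscatter):
    """Invert a simplified underwater image formation model:
    I = J * exp(-beta * d) + B * (1 - exp(-beta * d)),  solved for J."""
    t = math.exp(-beta * depth)          # transmission along the water column
    return (observed - backscatter * (1.0 - t)) / t

# Hypothetical red-channel value attenuated over 3 m of turbid water
J_true, beta_r, B_r, d = 0.8, 0.4, 0.2, 3.0
t = math.exp(-beta_r * d)
I_obs = J_true * t + B_r * (1.0 - t)     # what the camera actually records
J_rec = restore_pixel(I_obs, d, beta_r, B_r)
```

In practice the attenuation coefficient and backscatter vary per channel and with turbidity, which is precisely why the talk also considers image-processing and learning-based alternatives.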

Pere Ridao

Prof. Pere Ridao

University of Girona (UdG) ViCOROB

Real-time Laser Scanner for Autonomous IMR applications

While AUVs are routinely used for survey missions, Inspection, Maintenance and Repair (IMR) applications are nowadays carried out by ROVs due to their intervention requirements. In spite of recent advances, a significant improvement in the sensing capabilities of current AUVs is required to achieve autonomous intervention. While mobile and aerial manipulators can take advantage of commercial off-the-shelf 3D cameras, those systems cannot work underwater at the required ranges. In this presentation, we will introduce the 3LS real-time laser scanner, a recently developed and highly reconfigurable system with programmable resolution/speed trade-offs, able to produce up to 500,000 points per second. The system has been designed with two applications in mind: 1) inspection and 2) 3D perception for subsea manipulation. Experimental results corresponding to both applications using the GIRONA 500 AUV will be presented and discussed.
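An underwater laser scanner of this kind recovers range by triangulating between the projected laser and the camera ray. A minimal law-of-sines sketch, with hypothetical geometry (the actual 3LS calibration and optics are of course more involved):

```python
import math

def triangulate_range(baseline, laser_angle, camera_angle):
    """Camera-to-target range from laser-camera triangulation.
    Both angles are measured from the laser-camera baseline;
    the third triangle angle sits at the illuminated target point."""
    gamma = math.pi - laser_angle - camera_angle   # angle at the target
    return baseline * math.sin(laser_angle) / math.sin(gamma)

# Hypothetical setup: 0.2 m baseline, laser fired perpendicular to the
# baseline, target 1.0 m away from the laser head
r = triangulate_range(0.2, math.pi / 2, math.atan(1.0 / 0.2))
```

The resolution/speed trade-off mentioned above comes from how finely the laser is swept: more stripe positions per sweep give denser points at a lower frame rate.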

Jakob Schwendner

Dr. Jakob Schwendner

Kraken Robotik GmbH

Industry Speaker -- Real-time 3D inspection of underwater structures using the SeaVision system

Laser scanners are becoming increasingly interesting for industrial underwater applications. They offer higher resolution and accuracy compared to acoustic sensors, but also face limitations in challenging conditions. The optical properties of water also limit the transferability of technologies from the terrestrial domain. Laser stripers are the current state of the art in affordable underwater laser sensing; time-of-flight systems exist, but are still expensive and bulky. The Kraken SeaVision™ system combines a high-speed low-light color camera with steerable red, green and blue lasers and a color LED light source. This allows for a variety of deployment options and is especially suitable for vertical inspections of underwater structures. This talk gives an overview of the technological and operational challenges, along with examples of lab and field data.

Katherine A. Skinner

Katherine A. Skinner

University of Michigan Robotics Institute

Unsupervised Learning for Underwater Image Restoration

In recent years, deep learning has led to impressive advances in robotic perception. State-of-the-art methods rely on gathering large datasets with hand-annotated labels for network training. However, in underwater environments, dynamic environmental conditions and operational challenges hinder efforts to collect and manually label large training sets that are representative of all possible environmental conditions a robot might encounter. This limits the performance of existing learning-based approaches to robot vision in marine environments. This talk discusses approaches for unsupervised learning to advance the perceptual capabilities of underwater robots; specifically, a learning-based approach for underwater image restoration and dense depth estimation from raw underwater imagery. Physics-based models and cross-disciplinary knowledge about the physical environment and the data collection process are leveraged to provide constraints that relax the need for ground truth labels. This leads to a hybrid model-based, data-driven framework for unsupervised learning.
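The hybrid model-based, data-driven idea can be sketched as a self-supervised loss: re-render the raw pixel from the network's predicted scene radiance and depth through a simplified formation model, then penalize the reconstruction error, so no ground-truth labels are needed. The model form and coefficients below are illustrative assumptions, not the speaker's exact formulation:

```python
import math

def unsupervised_loss(raw, pred_clean, pred_depth, beta, backscatter):
    """Self-supervised objective: render the raw observation from the
    predicted radiance and depth via the formation model
    I = J * exp(-beta * d) + B * (1 - exp(-beta * d)),
    and penalize the reconstruction error."""
    t = math.exp(-beta * pred_depth)
    rendered = pred_clean * t + backscatter * (1.0 - t)
    return (rendered - raw) ** 2

# A prediction consistent with the physics yields zero loss
t = math.exp(-0.4 * 3.0)
raw_pixel = 0.8 * t + 0.2 * (1.0 - t)
loss = unsupervised_loss(raw_pixel, 0.8, 3.0, beta=0.4, backscatter=0.2)
```

Because the only supervision signal is the raw image itself, the same objective applies in any water conditions the robot encounters, which is the point of relaxing the need for labels.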