November 26, 2020
AIP Open Seminar #3: Geoinformatics Unit (PI: Naoto Yokoya)

Description

Geoinformatics Unit (https://aip.riken.jp/labs/goalorient_tech/geoinf/) at RIKEN AIP

Speaker 1 (15:00-15:30): Naoto Yokoya
Title: Overview of Geoinformatics Unit
Abstract: The goal of the Geoinformatics Unit is to understand what is happening on the Earth in a timely manner from a large and diverse set of geospatial data. This talk will present an overview of our recent progress in the reconstruction and recognition of remote sensing images, with applications to disaster management, addressing data incompleteness, limited training data, and multimodality.

Speaker 2 (15:30-16:00): Tatsumi Uezato
Title: Unsupervised deep learning for image restoration
Abstract: In the field of image restoration, it is challenging to collect large training datasets because of hardware limitations or costs. Insufficient training data leads to poor performance of deep learning methods for image restoration. In this talk, we present an unsupervised deep learning method that does not require training on large datasets. We show that the unsupervised method is effective for a variety of applications in remote sensing where sufficient training data are not available.
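To make the idea of training-data-free restoration concrete, the following is a minimal sketch of one well-known unsupervised approach (a deep-image-prior-style fit to a single degraded observation). It is an illustrative assumption, not necessarily the method presented in this talk; the network size, step count, and learning rate are placeholders.

```python
# Illustrative sketch only: a deep-image-prior-style unsupervised restorer.
# This is NOT necessarily the speaker's method; all hyperparameters are assumptions.
import torch
import torch.nn as nn

def small_cnn(channels=3, width=64):
    """A small convolutional network mapping a fixed noise tensor to an image."""
    return nn.Sequential(
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, channels, 3, padding=1),
    )

def restore(degraded, steps=2000, lr=1e-3, width=64):
    """Fit the network to a single degraded observation; no external training data.

    `degraded` is a (1, C, H, W) tensor, e.g. a noisy remote sensing patch.
    Stopping after a limited number of steps acts as the regularizer
    that keeps the network from fitting the noise.
    """
    _, c, h, w = degraded.shape
    net = small_cnn(channels=c, width=width)
    z = torch.randn(1, width, h, w)            # fixed random code input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = net(z)
        loss = ((out - degraded) ** 2).mean()  # data-fit term only
        loss.backward()
        opt.step()
    return net(z).detach()                     # restored estimate
```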

Speaker 3 (16:00-16:30): Bruno Adriano
Title: Multimodal deep learning for disaster damage mapping
Abstract: Earth observation technologies, such as optical imagery and synthetic aperture radar, provide complementary information for remotely monitoring urban environments. Following large-scale disasters, the two modalities can complement each other to convey the full damage situation accurately. However, due to several factors, such as weather conditions and satellite coverage, it is often uncertain which data modality will be the first to become available for rapid disaster response. Hence, methodologies that can utilize all accessible modalities are essential for disaster management. Here, we first introduce the complex characteristics of remote sensing datasets acquired before and after disaster events. Then, we analyze different modality settings, such as cross-modal and fusion modes, for damage mapping using deep convolutional neural networks.
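As a rough illustration of the fusion-mode setting, the sketch below shows a simple two-branch convolutional network that encodes optical and SAR inputs separately and concatenates their features before per-pixel damage classification. The channel counts, depths, class definitions, and late-fusion strategy are assumptions for illustration, not the architecture presented in the talk.

```python
# Illustrative sketch only: a simple two-branch fusion CNN for optical + SAR
# damage mapping. Channel counts, depth, and fusion strategy are assumptions.
import torch
import torch.nn as nn

class FusionDamageNet(nn.Module):
    def __init__(self, opt_channels=3, sar_channels=2, num_classes=4):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
        self.opt_branch = branch(opt_channels)   # optical encoder
        self.sar_branch = branch(sar_channels)   # SAR encoder
        # Fuse by concatenating the two 64-channel feature maps, then predict a
        # per-pixel damage class map (e.g. no damage / minor / major / destroyed).
        self.head = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, optical, sar):
        f = torch.cat([self.opt_branch(optical), self.sar_branch(sar)], dim=1)
        return self.head(f)                      # (B, num_classes, H, W) logits

# Usage example:
# logits = FusionDamageNet()(torch.randn(1, 3, 256, 256), torch.randn(1, 2, 256, 256))
```

A cross-modal setting could reuse the same branches but train or evaluate with only one modality available at inference time.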
