December 7, 2017 10:39

Abstract

Topic: Improve Low-Shot Visual Recognition by Bridging Visual-Semantic Gap

Abstract: In this talk, Yao-Hung will discuss learning visual and semantic embeddings to improve low-shot visual object recognition. First, he will introduce a learning architecture that combines unsupervised representation learning models (i.e., auto-encoders) with cross-domain learning criteria (i.e., a Maximum Mean Discrepancy loss). This architecture yields more robust joint embeddings from visual and semantic features. Second, he will introduce a learning system that maximizes the dependence between the semantic relationships among visual objects and the output embedding of an arbitrary deep regression model. If time permits, he will also talk about his recent work on recovering order in non-sequenced data.
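For background on the cross-domain criterion mentioned above: the Maximum Mean Discrepancy (MMD) compares two sets of samples (e.g., visual and semantic embeddings) via the distance between their kernel mean embeddings. A minimal NumPy sketch of the standard biased MMD^2 estimate with a Gaussian kernel is below; this is generic background, not the specific architecture from the talk, and the function names and sigma value are illustrative.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of x and rows of y.
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd_squared(x, y, sigma=1.0):
    # Biased estimate of squared MMD between samples x and y:
    # mean(k(x,x)) + mean(k(y,y)) - 2 * mean(k(x,y)).
    k_xx = gaussian_kernel(x, x, sigma)
    k_yy = gaussian_kernel(y, y, sigma)
    k_xy = gaussian_kernel(x, y, sigma)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()
```

Minimizing this quantity between embeddings from the two domains encourages their distributions to match, which is the role the abstract assigns to the MMD loss.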

Short Bio: Yao-Hung Hubert Tsai is a second-year Ph.D. student in the Machine Learning Department at Carnegie Mellon University, working with Ruslan Salakhutdinov. His research interests lie in deep learning and its applications to transfer learning.

More Information

Date January 11, 2018 (Thu) 15:00 - 16:00
URL https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/68447

Venue

Nihonbashi 1-chome Mitsui Building, 15th floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan