2017/12/7 10:39

Summary

Topic: Improve Low-Shot Visual Recognition by Bridging Visual-Semantic Gap

Abstract: In this talk, Yao-Hung will discuss learning joint visual and semantic embeddings to improve low-shot visual object recognition. First, he will introduce a learning architecture that combines unsupervised representation learning models (i.e., auto-encoders) with a cross-domain learning criterion (i.e., the Maximum Mean Discrepancy loss). This architecture yields more robust joint embeddings from visual and semantic features. Second, he will introduce a learning system that maximizes the dependency between the semantic relationships among visual objects and the output embedding of an arbitrary deep regression model. If time permits, he will also talk about his recent work on recovering order in non-sequenced data.
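To make the cross-domain criterion concrete, the following is a minimal sketch of a (biased) squared Maximum Mean Discrepancy estimate between a batch of visual embeddings and a batch of semantic embeddings, using a Gaussian kernel. The function names, the choice of kernel, and the fixed bandwidth are illustrative assumptions, not details from the talk.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of x and rows of y,
    # passed through a Gaussian (RBF) kernel. sigma is an assumed bandwidth.
    d2 = (np.sum(x ** 2, axis=1)[:, None]
          + np.sum(y ** 2, axis=1)[None, :]
          - 2.0 * x @ y.T)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(vis, sem, sigma=1.0):
    """Biased estimate of squared MMD between visual and semantic
    embedding batches (one sample per row). Zero when the two batches
    come from the same distribution; larger as they diverge."""
    k_vv = gaussian_kernel(vis, vis, sigma)
    k_ss = gaussian_kernel(sem, sem, sigma)
    k_vs = gaussian_kernel(vis, sem, sigma)
    return k_vv.mean() + k_ss.mean() - 2.0 * k_vs.mean()
```

In a training setup of the kind described, a loss like this would be added to the auto-encoder reconstruction objectives so that the visual and semantic encoders map into a shared embedding space.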

Short Bio: Yao-Hung Hubert Tsai is a second-year Ph.D. student in the Machine Learning Department at Carnegie Mellon University, working with Ruslan Salakhutdinov. His research interests lie in deep learning in general and its applications to transfer learning.

Details

Date & Time: 2018/01/11 (Thu) 15:00 - 16:00
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/68447

Venue

Nihonbashi 1-chome Mitsui Building 15F, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027 (Google Maps)