Topic: Improve Low-Shot Visual Recognition by Bridging Visual-Semantic Gap
Abstract: In this talk, Yao-Hung will discuss learning visual and semantic embeddings to improve low-shot visual object recognition. First, Yao-Hung will introduce a learning architecture that combines unsupervised representation learning models (i.e., auto-encoders) with cross-domain learning criteria (i.e., the Maximum Mean Discrepancy loss). This architecture enables us to obtain more robust joint embeddings from visual and semantic features. Second, Yao-Hung will introduce another learning system that maximizes the dependency between the semantic relationships among visual objects and the output embeddings of an arbitrary deep regression model. If time permits, Yao-Hung will also talk about his recent work on recovering order in non-sequenced data.
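As a rough illustration of the cross-domain criterion mentioned above (not code from the talk itself), the sketch below computes a biased empirical estimate of the squared Maximum Mean Discrepancy between two sets of embeddings; the Gaussian kernel and its bandwidth are assumptions for the example.

```python
# Illustrative sketch: empirical squared MMD between two feature sets,
# the kind of cross-domain criterion used to align visual and semantic
# embeddings. Kernel choice (Gaussian) and bandwidth are assumptions.
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of a and b.
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased empirical estimate of squared MMD between samples x and y:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
visual = rng.normal(0.0, 1.0, size=(100, 8))    # stand-in "visual" features
semantic = rng.normal(0.0, 1.0, size=(100, 8))  # matched distribution
shifted = rng.normal(2.0, 1.0, size=(100, 8))   # mismatched distribution

print(mmd2(visual, semantic))  # small for matched distributions
print(mmd2(visual, shifted))   # larger for mismatched distributions
```

Minimizing such a discrepancy between the visual and semantic embedding distributions is one way to encourage the two domains to share a common representation space.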
Short Bio: Yao-Hung Hubert Tsai is a second-year Ph.D. student in the Machine Learning Department at Carnegie Mellon University, working with Ruslan Salakhutdinov. His research interests lie in general Deep Learning and its applications to Transfer Learning.
Date: January 11, 2018 (Thu), 15:00 - 16:00