Meeting ID: 945 652 3132
Out-of-distribution generalization
Supervised learning typically requires that test data be drawn from the same distribution as the training data. This assumption does not hold in many real-world settings. As an example, consider a model trained in one hospital and deployed in another. In such a setting, the test distribution is related to the training distribution, but the two are not the same, and models need to generalize out of distribution. In this talk, I will discuss our work on representation learning for out-of-distribution generalization. I will construct a family of representations that generalize under changing spurious correlations, with applications to images and chest X-rays. Additionally, I will go into some geometric aspects of generalization.
Rajesh Ranganath is an assistant professor at NYU’s Courant Institute
of Mathematical Sciences and the Center for Data Science. He is also
affiliate faculty at the Department of Population Health at NYUMC. His
research focuses on approximate inference, causal inference,
probabilistic models, and machine learning for healthcare. Rajesh
completed his PhD at Princeton University and his BS and MS at Stanford
University. Rajesh has won several awards, including the NDSEG graduate
fellowship; the Porter Ogden Jacobus Fellowship, given to the top four
doctoral students at Princeton University; the Savage Award in Theory
and Methods; and an NSF CAREER Award.
Date: August 25, 2023 (Fri), 11:00 - 12:30