Speaker: Zack Barnett-Howell (PhD student at University of Wisconsin – Madison)
Title: Approximate Contextual Bandits
Contextual bandits provide a powerful method both for describing strategic behavior within a system and for estimating the parameters that shape that behavior. The limiting factor in applying these algorithms to real-world problems has been their reliance on prior knowledge of a reward function. I propose using likelihood-free estimation methods, namely approximate Bayesian computation (ABC), to bypass this reliance and recover the parameters underlying bandit decisions.
Using ABC, I match the decisions of bandit algorithms run under randomly drawn reward parameters to existing discrete-choice data sets. For Bayesian bandits such as OFUL and Thompson Sampling, this yields a reasonable analogue of human decision-making. Importantly, it provides insight into the underlying parameters guiding discrete choices, which are of considerable interest to researchers in the social sciences. Once these parameters are recovered, the approach offers a unique way of simulating counterfactual scenarios with decision-makers who learn and adapt over time.
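The idea of matching simulated bandit decisions to observed choices can be sketched with rejection ABC. The sketch below is illustrative only and not the speaker's implementation: it assumes a two-armed Bernoulli bandit, a Thompson Sampling decision-maker, a uniform prior over the arms' mean rewards, and the per-arm choice frequency as the summary statistic; parameter draws are kept when their simulated choice frequencies fall close to the observed ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_choices(theta, n_rounds=200):
    """Run Thompson Sampling on a 2-armed Bernoulli bandit with
    mean rewards `theta`; return the sequence of chosen arms."""
    a = np.ones(2)  # Beta-posterior success counts
    b = np.ones(2)  # Beta-posterior failure counts
    choices = np.empty(n_rounds, dtype=int)
    for t in range(n_rounds):
        arm = int(np.argmax(rng.beta(a, b)))   # posterior-sampling step
        reward = rng.random() < theta[arm]     # Bernoulli reward
        a[arm] += reward
        b[arm] += 1 - reward
        choices[t] = arm
    return choices

def summary(choices):
    # Summary statistic: fraction of pulls of each arm.
    return np.bincount(choices, minlength=2) / len(choices)

# "Observed" discrete-choice data, generated under an unknown theta.
true_theta = np.array([0.7, 0.4])
observed = summary(simulate_choices(true_theta))

# Rejection ABC: draw theta from a uniform prior, simulate the bandit,
# and keep draws whose choice frequencies are close to the observed ones.
accepted = []
for _ in range(1000):
    theta = rng.uniform(0.0, 1.0, size=2)
    if np.abs(summary(simulate_choices(theta)) - observed).max() < 0.1:
        accepted.append(theta)
accepted = np.array(accepted)
print(f"accepted {len(accepted)} draws; posterior mean {accepted.mean(axis=0)}")
```

The accepted draws form an approximate posterior over the reward parameters; in practice one would use richer summary statistics (e.g. the full choice sequence or switching behavior) and a tolerance schedule rather than this fixed threshold.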
Date: May 19, 2017 (Fri) 10:30 - 12:00