Special Seminar with Jonathan Huggins

Thursday March 9, 2017
12:30-1:30
Building 2, Room 426

Jonathan Huggins
PhD Candidate in Computer Science
Computer Science & Artificial Intelligence Laboratory
Massachusetts Institute of Technology

Scaling up Bayesian inference via principled model approximation

The use of Bayesian methods in large-scale data settings is attractive because of the rich hierarchical models, uncertainty quantification, and prior specification they provide. However, standard Bayesian inference algorithms can be computationally expensive, or even infeasible to run exactly, in the settings of modern interest: large data sets and complex models. I propose to scale Bayesian inference by approximating the underlying model in a principled way.

In finite-dimensional models, sufficient statistics can enable easy, large-scale computation, but not all models have sufficient statistics readily available. I propose instead to discover and compute fast summaries of the data, which I call approximate sufficient statistics, as a pre-processing step. I demonstrate the efficacy of this approach via theory and experiments.

Not all models, however, are finite-dimensional. In applications ranging from information retrieval to social network analysis to healthcare, we would like to make more complex and detailed inferences as we collect more data. Bayesian nonparametric (BNP) models accomplish this goal by employing an infinite-dimensional parameter. Here I demonstrate how the careful choice of a practical, finite approximation can retain the flexibility and breadth of BNP models.
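To make the pre-processing idea concrete, here is a minimal sketch (my illustration, not the speaker's code) for Bayesian logistic regression: replace the log-likelihood's nonlinearity with a quadratic polynomial, so the data enter the posterior only through a fixed-size vector and matrix computed in one pass. The least-squares polynomial fit and the interval [-4, 4] are assumptions made for illustration; the construction in the talk may differ.

```python
import numpy as np

def quadratic_coeffs():
    # Fit a quadratic a + b*s + c*s**2 to phi(s) = -log(1 + exp(-s))
    # by least squares on a grid over [-4, 4]. Both the interval and
    # the fitting method are illustrative assumptions.
    s = np.linspace(-4.0, 4.0, 200)
    phi = -np.log1p(np.exp(-s))
    A = np.stack([np.ones_like(s), s, s**2], axis=1)
    a, b, c = np.linalg.lstsq(A, phi, rcond=None)[0]
    return a, b, c

def approximate_sufficient_statistics(X, y):
    # One pass over the data. With a quadratic approximation and
    # labels y in {-1, +1}, the log-likelihood depends on the data
    # only through these fixed-size summaries.
    t1 = X.T @ y   # sum_n y_n x_n       (d,)
    T2 = X.T @ X   # sum_n x_n x_n^T     (d, d)
    return len(y), t1, T2

def approx_log_likelihood(theta, stats, coeffs):
    # Evaluate the approximate log-likelihood in O(d^2) time,
    # independent of the number of data points N.
    N, t1, T2 = stats
    a, b, c = coeffs
    return a * N + b * (t1 @ theta) + c * (theta @ T2 @ theta)

# Tiny demo on synthetic data (hypothetical sizes and parameters).
rng = np.random.default_rng(0)
X = rng.normal(scale=0.5, size=(10000, 3))
theta = np.array([0.5, -1.0, 0.25])
y = np.where(rng.random(10000) < 1.0 / (1.0 + np.exp(-X @ theta)), 1.0, -1.0)

stats = approximate_sufficient_statistics(X, y)
exact = np.sum(-np.log1p(np.exp(-y * (X @ theta))))
approx = approx_log_likelihood(theta, stats, quadratic_coeffs())
print(exact, approx)  # close when y_n * x_n^T theta stays within [-4, 4]
```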
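For the BNP part, one standard finite approximation is a truncated stick-breaking representation of a Dirichlet process. The sketch below (again my illustration, under assumed choices of the truncation level K and concentration parameter alpha) draws mixture weights from a K-component truncation; the particular finite approximations analyzed in the talk may be different.

```python
import numpy as np

def truncated_stick_breaking(alpha, K, rng=None):
    # Draw mixture weights from a K-component truncation of the
    # Dirichlet process stick-breaking construction. As K grows,
    # the truncation approaches the infinite-dimensional BNP prior.
    rng = np.random.default_rng() if rng is None else rng
    betas = rng.beta(1.0, alpha, size=K)
    betas[-1] = 1.0  # close the final stick so the weights sum to 1
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining  # w_k = beta_k * prod_{j<k} (1 - beta_j)

weights = truncated_stick_breaking(alpha=1.0, K=50)
print(weights.sum())  # 1.0 up to floating point
```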