INFERENCE IN OBSERVATIONAL STUDIES
Research activity in observational studies has seen a recent surge owing to multiple factors, including our ability and need to collect large amounts of data, along with the development of powerful machine learning algorithms. Consequently, researchers have focused on tailoring this powerful machinery to answer questions in studies with both a large number of observations and potentially many covariates, a scenario common in medical research. Typical examples of such problems include, but are not limited to, estimating causal effects of administered treatments and understanding relevant parameters in missing data studies. However, this large array of research activity often comes with the burden of assumptions, the heart of which lies in justifying low bias for the suggested statistical procedures. Driven by this need to reduce bias, in this talk I will discuss a unified framework for obtaining "optimal" inferential procedures for quantities of interest in common observational studies. The framework extends classical semiparametric theory to characterize necessary and sufficient conditions under which efficient inference is possible. Such an understanding is crucial for demystifying the assumptions strewn across the observational-studies literature. Moreover, I will demonstrate the power of the proposed methodology in producing valid inference (i.e., confidence intervals with correct coverage) under provably minimal conditions, where most other procedures in the literature may fail. Finally, through theoretical and numerical analyses, I will also discuss broader research questions attached to this paradigm and elaborate on its relevance in scientific research.
This talk is based on several projects with multiple collaborators, including James Robins, Eric Tchetgen Tchetgen, Whitney Newey, Subhabrata Sen, Lingling Li, and Aad Van der Vaart.