My research is focused on methodology for causal inference, including comparative effectiveness of policy and clinical interventions.
In an ideal world, all policy and clinical decisions would be based on the findings of randomized experiments. For example, public health recommendations to avoid saturated fat, or medical prescription of a particular painkiller, would be based on the findings of long-term studies that compared the effectiveness of several randomly assigned interventions in large groups of people from the target population who adhered to the study interventions. Unfortunately, such randomized experiments are often unethical, impractical, or simply too lengthy for timely decisions.
My collaborators and I combine observational data, largely untestable assumptions, and statistical methods to emulate hypothetical randomized experiments. We emphasize the need to formulate well-defined causal questions and to use analytic approaches whose validity does not require assumptions that conflict with current subject-matter knowledge. For example, in settings in which experts suspect the presence of time-dependent confounders affected by prior treatment, we do not use adjustment methods (e.g., conventional regression analysis) that require the absence of such confounders.
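To illustrate the methodological point, the sketch below is a minimal, hypothetical example (not taken from our work) of inverse probability weighting, one of the g-methods suited to this setting. It simulates a two-time-point study in which a confounder L1 is affected by prior treatment A0 and also influences later treatment A1 and the outcome Y: conventional regression that adjusts for L1 would block part of A0's effect, whereas weighting removes the confounding by L1 without conditioning on it. All variable names and parameter values are invented for illustration.

```python
# Hypothetical simulation: IP weighting with a time-dependent confounder (L1)
# that is affected by prior treatment (A0). Invented parameters throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200_000

# Data-generating process: A0 randomized; L1 affected by A0;
# A1 depends on L1; Y depends on A0, A1, and L1.
A0 = rng.binomial(1, 0.5, n)
L1 = 0.5 * A0 + rng.normal(0.0, 1.0, n)
A1 = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * L1))))
Y = 1.0 * A0 + 1.0 * A1 + 1.0 * L1 + rng.normal(0.0, 1.0, n)
# True marginal structural model: E[Y^(a0, a1)] = 1.5*a0 + 1.0*a1
# (A0's total effect includes its pathway through L1).

# Stabilized weights for A1: P(A1 | A0) / P(A1 | A0, L1).
# Large C ~ effectively unpenalized logistic regression.
num = LogisticRegression(C=1e6).fit(A0.reshape(-1, 1), A1)
den = LogisticRegression(C=1e6).fit(np.column_stack([A0, L1]), A1)
p_num = num.predict_proba(A0.reshape(-1, 1))[:, 1]
p_den = den.predict_proba(np.column_stack([A0, L1]))[:, 1]
sw = np.where(A1 == 1, p_num / p_den, (1 - p_num) / (1 - p_den))

# Weighted least squares fit of the marginal structural model.
X = np.column_stack([np.ones(n), A0, A1])
w = np.sqrt(sw)
beta, *_ = np.linalg.lstsq(X * w[:, None], Y * w, rcond=None)
print(beta[1], beta[2])  # close to the true effects 1.5 and 1.0
```

In the weighted pseudo-population, A1 is no longer associated with L1, so an unadjusted (marginal) model recovers the joint treatment effects; a regression of Y on A0, A1, and L1 in the original data would instead attenuate the estimated effect of A0.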
While causal inferences from observational data are always risky, an appropriate analysis of observational studies often yields the best available evidence for policy or clinical decision-making. At the very least, the findings from well-designed and properly analyzed observational studies may guide the design of future randomized experiments.
Our applied work is focused on optimal use of antiretroviral therapy in persons infected with HIV, lifestyle and pharmacological interventions to reduce the incidence of cardiovascular disease, and the effects of erythropoiesis-stimulating agents among dialysis patients.