The 6th Cutter Symposium: Epidemiology and Causes
May 6th @ 3:00 pm - 5:00 pm
George Davey Smith
Ubiquitous Causes: Can They Be Identified?
The substantial mismatch between the proportion of cancer cases that are preventable in theory and preventable in practice has been recognised for many decades and has changed little. This could reflect difficulties in identifying and establishing the causal nature of exposures that are ubiquitous in the environment. The engagement of epidemiologists in formulating notions of ubiquitous causes, and the resonance of these notions with recent developments in cancer biology, will be introduced, and a speculative framework for identifying (and thus potentially remediating) the effects of such causes will be outlined.
Testing Causal Claims: We Can Do Better
The revolution in causal inference in the past 40 years has moved us past the era when causal inference was treated as a near-impossible task, beyond the purview of most epidemiologic research. Improved understanding of the link between statistical associations and causal structures enables us to go beyond the simple aphorism “Correlation does not equal causation” and opens up new approaches to evaluating causal claims. But we have not yet realized the full potential of those advances in methodology. A major source of delay has been data: many of the most rigorous methods rely on types of data that were, until recently, rarely available. Emerging data sources give new relevance to these rigorous methods. Mendelian Randomization methods are but one special case of a broader collection of methods sometimes conceptualized as instrumental variables approaches. Recognizing the power of these methods enables us to use ‘found’ experiments and may facilitate more innovative and widespread use of randomized studies to deliver better evidence on what works to improve public health.
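As a rough illustration of the instrumental-variables idea mentioned above, the sketch below simulates data in the pattern a Mendelian randomization study exploits: an instrument (such as a genetic variant) that affects the exposure but is independent of the confounders. All variable names and numeric values are illustrative assumptions, not from any study discussed at the symposium.

```python
import numpy as np

# Hypothetical simulation: instrument z, exposure x, outcome y,
# with an unobserved confounder u distorting the naive estimate.
rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                     # instrument (e.g., genetic variant)
u = rng.normal(size=n)                     # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)       # exposure, confounded by u
y = 0.5 * x + u + rng.normal(size=n)       # outcome; true causal effect = 0.5

slope = lambda a, b: np.polyfit(a, b, 1)[0]

# Naive regression of y on x is biased because u affects both.
ols = slope(x, y)

# Wald (ratio) IV estimator: reduced-form slope / first-stage slope.
iv = slope(z, y) / slope(z, x)

print(f"naive OLS estimate: {ols:.2f}")    # biased away from 0.5
print(f"IV estimate:        {iv:.2f}")     # close to the true 0.5
```

Because the instrument is independent of the confounder, the ratio of slopes recovers the causal effect that the naive regression misses; this is the logic that 'found' experiments and Mendelian randomization share.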
Make Up Your Mind: Is Causal Inference from Observational Data a Legitimate Scientific Task?
Randomized experiments cannot answer many causal questions related to human health. Therefore, the options are 1) stop asking those causal questions or 2) use non-experimental data to try to answer them. A surprisingly large proportion of researchers, editors, and funders have yet to make up their minds about which of these two options they prefer. As a result, scientific progress is slowed by ambiguous terminology, confusing data analyses, erroneous publications, and editorial double standards. This talk reviews a framework for causal inference from non-experimental (i.e., observational) data as a legitimate scientific activity and proposes quality standards for observational analyses.