DMBI Open Insights
Wednesday, January 29
12:00pm in Room 403 (4FL, Countway Library)
PhD Student, Clinical Machine Learning Group
Computer Science, MIT
Fairness and Robustness in Healthcare Algorithms
As machine learning models become more powerful and ubiquitous, researchers have raised concerns about bias and robustness. In sensitive applications like criminal justice or healthcare, we seek to quantify abstract concepts like fairness and robustness and to improve flawed models. Often, researchers must think beyond the algorithm and consider the data collection process as well. In this talk, I will present two projects aimed at improving healthcare algorithms. First, I describe how we can diagnose sources of unfairness in an algorithm by decomposing cost-based metrics of discrimination into bias, variance, and noise, and I propose solutions for estimating and reducing each component. Second, I present a health knowledge graph for diagnostic purposes and describe how to verify that medical knowledge is extracted robustly from large datasets.
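To make the decomposition idea concrete, here is a minimal sketch (not the speaker's actual method) of disaggregating zero-one loss by group and splitting it into bias and variance via bootstrap retraining. All data and names here are synthetic and illustrative; noise is not estimated, since each example carries only a single observed label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative only): scalar feature x, group g, label y.
n = 2000
g = rng.integers(0, 2, n)
x = rng.normal(loc=0.3 * g, scale=1.0, size=n)
y = (x + 0.4 * rng.normal(size=n) > 0.2).astype(int)

# Split into train / test halves.
tr, te = np.arange(n) < n // 2, np.arange(n) >= n // 2

# A toy "learner": threshold at the training-set mean of x.
def fit_predict(x_train, x_eval):
    return (x_eval > x_train.mean()).astype(int)

# Retraining on bootstrap resamples exposes the variance of the
# learning procedure itself.
B = 50
preds = np.stack([
    fit_predict(x[tr][rng.integers(0, tr.sum(), tr.sum())], x[te])
    for _ in range(B)
])
main = (preds.mean(axis=0) > 0.5).astype(int)  # majority-vote prediction

for grp in (0, 1):
    m = g[te] == grp
    bias = np.mean(main[m] != y[te][m])      # systematic error of main prediction
    var = np.mean(preds[:, m] != main[m])    # disagreement across bootstrap models
    loss = np.mean(preds[:, m] != y[te][m])  # average zero-one loss for the group
    print(f"group {grp}: loss={loss:.3f}, bias={bias:.3f}, variance={var:.3f}")
```

Comparing the per-group bias and variance terms suggests different remedies: high bias points to model capacity or measurement issues, while high variance can often be reduced by collecting more data for the disadvantaged group.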