Research

Interests

Bayesian methodology.

Causal inference, missing data, selection.

Public health and medicine.

Philosophy

Bayesian statistics provides a unified and coherent framework for estimation and inference in probabilistic models. The Bayesian update is optimal from a decision-theoretic perspective and conforms to the maximum entropy and likelihood principles. Bayesian modeling easily accommodates combining data from multiple sources, pooling evidence, and quantifying uncertainty, even in non-standard models for which frequentist inference would require laborious ad hoc calculation or bootstrapping (the validity of which is not guaranteed and can be difficult to verify). Bayesian inference does not rely on asymptotics, and Bayesian models often exhibit superior predictive performance (e.g., BART in nonparametric settings). While turning the Bayesian crank has never been easier, computational burden remains a key challenge to the competitiveness of Bayesian methods.
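As a minimal concrete illustration of the Bayesian update (not taken from the text above, just a standard conjugate example): a Beta prior on a binomial success probability yields a closed-form posterior.

```python
# Beta-Binomial conjugate update: prior Beta(a, b) on a success
# probability, observing `heads` successes and `tails` failures,
# gives posterior Beta(a + heads, b + tails).

def beta_binomial_update(a, b, heads, tails):
    """Return posterior Beta parameters after the observed data."""
    return a + heads, b + tails

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Uniform prior Beta(1, 1); observe 7 successes in 10 trials.
a_post, b_post = beta_binomial_update(1, 1, heads=7, tails=3)
print(a_post, b_post)                        # 8 4
print(round(beta_mean(a_post, b_post), 3))   # 0.667
```

Conjugacy sidesteps the computational burden entirely; the challenge referenced above arises precisely when no such closed form exists.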

From a methodological perspective, I am interested in expanding Bayesian statistics into new frontiers (e.g., in causal inference and nonparametrics) both through principled modeling and by improving the efficiency and scalability of posterior inference algorithms (e.g., by incorporating machine learning methods into Bayesian workflow, finding useful parametrizations, or developing sampling algorithms tailored to specific models).
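One example of a "useful parametrization" in the sense above is the non-centered reparametrization common in hierarchical models (a standard technique, sketched here with assumed values rather than anything from a specific project): instead of drawing a group-level parameter theta ~ Normal(mu, tau) directly, one draws an auxiliary z ~ Normal(0, 1) and sets theta = mu + tau * z, which defines the same distribution but a friendlier geometry for the sampler.

```python
import random

def centered_draw(mu, tau, rng):
    # Centered form: theta sampled directly, coupled to (mu, tau),
    # which can create "funnel" geometry for MCMC.
    return rng.gauss(mu, tau)

def non_centered_draw(mu, tau, rng):
    # Non-centered form: standard normal auxiliary variable plus a
    # deterministic shift-and-scale transform recovers theta.
    z = rng.gauss(0.0, 1.0)
    return mu + tau * z

# Both parametrizations imply the same marginal law for theta;
# a quick Monte Carlo check with mu = 0, tau = 2 (so variance 4):
rng = random.Random(0)
draws = [non_centered_draw(0.0, 2.0, rng) for _ in range(50_000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

The two forms differ only in which variables the sampler moves through, not in the model they encode.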

From a modeling perspective, I enjoy drawing on my training in physics to build scientifically informed (and often mechanistic) models of complex data. I have extensive experience with statistical modeling of data described by differential equations, whether the SIR equations or those of Ornstein-Uhlenbeck, Hamilton, Euler-Lagrange, and Schrödinger. In applied work, I endeavor to provide decision-makers with statistical tools and actionable information by which to make informed choices.
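For concreteness, the SIR equations mentioned above are dS/dt = -beta S I / N, dI/dt = beta S I / N - gamma I, dR/dt = gamma I. A small forward-Euler sketch (with illustrative parameter values, not drawn from any particular study):

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of the SIR compartmental equations."""
    n = s + i + r
    ds = -beta * s * i / n
    di = beta * s * i / n - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

def simulate_sir(s0, i0, r0, beta, gamma, dt, steps):
    """Integrate forward, returning the full (S, I, R) trajectory."""
    s, i, r = s0, i0, r0
    traj = [(s, i, r)]
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        traj.append((s, i, r))
    return traj

# Population of 1000 with 10 initially infected; beta = 0.3, gamma = 0.1
# gives a basic reproduction number R0 = 3, so the epidemic takes off.
traj = simulate_sir(990.0, 10.0, 0.0, beta=0.3, gamma=0.1, dt=0.1, steps=1000)
```

In a Bayesian treatment, beta and gamma would be latent parameters given priors, with the ODE solution entering the likelihood for observed case counts.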

Publications

Preprints

Software