Current Research

My recent work applies Statistical Decision Theory to study how best to allocate scarce experimental resources when screening potential innovations in online experiments (A/B tests). See this and this. This problem connects naturally with a classical literature on the "value of information" in decision problems, and also with Empirical Bayes methods (as we explain here).
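To give a flavor of the Empirical Bayes idea mentioned above, here is a toy sketch (entirely illustrative; all numbers are made up and this is not code from the papers): noisy treatment-effect estimates from many experiments are shrunk toward an estimated grand mean, which typically reduces total estimation error.

```python
import numpy as np

# Toy data: noisy treatment-effect estimates from K experiments,
# each with known sampling variance sigma2 (all values made up).
rng = np.random.default_rng(0)
K = 50
true_effects = rng.normal(0.0, 0.5, K)              # latent effects
sigma2 = 1.0
estimates = true_effects + rng.normal(0.0, np.sqrt(sigma2), K)

# Parametric Empirical Bayes: estimate a normal prior from the data
# itself, then shrink each estimate toward the estimated grand mean.
mu_hat = estimates.mean()
tau2_hat = max(estimates.var() - sigma2, 0.0)       # method of moments
shrink = tau2_hat / (tau2_hat + sigma2)             # shrinkage factor in [0, 1]
posterior_means = mu_hat + shrink * (estimates - mu_hat)

# Compare squared error of raw vs. shrunken estimates.
mse_raw = np.mean((estimates - true_effects) ** 2)
mse_eb = np.mean((posterior_means - true_effects) ** 2)
print(mse_raw, mse_eb)
```

In this simulation the shrunken estimates have markedly lower error than the raw ones, which is the basic reason Empirical Bayes methods are attractive for screening many experiments at once.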

I have also studied several tools that have received considerable attention in the machine learning literature: Variational Inference (a popular algorithm for approximate Bayesian inference in large-scale, parametric models); Dropout Training (a popular method for estimating the parameters of neural networks); the analysis of text data (in particular, the identification of the parameters of the popular Latent Dirichlet Allocation model); and Statistical Learning Theory (with the hope of using this framework to provide computationally feasible approximations to identified sets in parametric models).
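As a textbook-style sketch of what Variational Inference does (a generic illustration I chose for exposition, not an example from the work above): a mean-field coordinate-ascent scheme approximates a correlated bivariate normal posterior by a product of independent Gaussians, recovering the correct mean but understating the marginal variances.

```python
import numpy as np

# Target "posterior" (made up): zero-mean bivariate normal, correlation rho.
rho = 0.9
Sigma = np.array([[1.0, rho], [rho, 1.0]])
Lambda = np.linalg.inv(Sigma)        # precision matrix

# Mean-field approximation q(x1) q(x2), each Gaussian.
# Coordinate-ascent (CAVI) updates for the variational means:
m = np.array([1.0, -1.0])            # deliberately bad initialization
for _ in range(100):
    m[0] = -Lambda[0, 1] / Lambda[0, 0] * m[1]
    m[1] = -Lambda[1, 0] / Lambda[1, 1] * m[0]
v = 1.0 / np.diag(Lambda)            # variational variances (fixed under CAVI)

# The means converge to the true mean (0, 0), but the variational
# variances fall short of the true marginal variances when rho != 0.
print(m, v, np.diag(Sigma))
```

The variance understatement visible here is a well-known property of mean-field approximations, and is one reason the statistical behavior of these algorithms is worth studying formally.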

Click here to see a full list of my papers. Click here for a summary of my earlier work.