I initially became an economist to better understand how to evaluate and recommend policy actions, particularly at the macroeconomic level. Macroeconomic policy evaluation, however, turns out to be a quite difficult problem: macro data are not a random sample; sample sizes are small; policy-relevant parameters are typically only weakly identified or set-identified; and macro models are misspecified. I think Econometrics is a great framework for thinking about these issues. Part of my earlier work focused on developing methods that help applied researchers handle some of the complications I encountered in macroeconomic policy evaluation (complications that I believe are quite typical of empirical work in economics, and not exclusive to macroeconometrics).

My very first papers are about how to deal with weak instrumental variables, in both micro- and macro-econometric applications. This paper proposes a heteroskedasticity-and-autocorrelation-robust test for weak instruments in linear instrumental variable regression. This paper proposes weak-instrument-robust confidence intervals for impulse responses in Structural Vector Autoregressions identified with external instruments; and this paper applies the suggested procedures to evaluate the effects of changes in marginal tax rates associated with postwar tax reforms in the United States. This paper uses statistical decision theory to better understand what makes a good hypothesis test in the linear instrumental variables model. Most of these papers were written between my second year of graduate school and my first years at New York University.
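To give a flavor of the kind of statistic a weak-instrument pre-test builds on, here is a minimal sketch of a heteroskedasticity-robust first-stage F statistic for a single endogenous regressor. This is an illustration only, not the exact procedure in the paper (which also handles autocorrelation); the function name and setup are mine.

```python
import numpy as np

def robust_first_stage_f(z, x):
    """Heteroskedasticity-robust first-stage F statistic for one
    endogenous regressor x instrumented by z (both assumed demeaned).
    Large values suggest the instruments are not weak.
    Sketch only: a HAR version would replace the White 'meat'
    with a long-run variance estimator."""
    z = np.atleast_2d(z)
    if z.shape[0] < z.shape[1]:
        z = z.T                      # ensure shape (n, k)
    n, k = z.shape
    # first-stage OLS: x = z @ pi + v
    pi, *_ = np.linalg.lstsq(z, x, rcond=None)
    v = x - z @ pi
    # Eicker-Huber-White sandwich variance of pi-hat
    zz_inv = np.linalg.inv(z.T @ z)
    meat = (z * (v ** 2)[:, None]).T @ z
    V = zz_inv @ meat @ zz_inv
    # Wald statistic for pi = 0, divided by the number of instruments
    W = pi @ np.linalg.solve(V, pi)
    return W / k

# toy example: two instruments, only the first one is relevant
rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=(n, 2))
x = 0.5 * z[:, 0] + rng.normal(size=n)
f_stat = robust_first_stage_f(z, x)
```

With a strong instrument, as in the toy example, the statistic is far above conventional weak-instrument cutoffs.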

After graduate school, I started working on set-identified models. At the time, and due to the fragility of identifying assumptions in macroeconometric models, it was quite common to set-identify these models' parameters using minimal restrictions (such as sign restrictions). This and this paper propose simple approaches (based on the delta method and the projection method) to conduct inference about impulse responses. It was a bit surprising to see that even inference about reduced-form impulse responses can be quite challenging, because of the persistence of the data and the relevance of long horizons. This paper explains how local projections are robust to these issues. And this paper presents some well-known results about simultaneous inference in linear models in the context of Structural Vector Autoregressions, and explains why using a (frequentist or Bayesian) sup-t band is a good idea. My work on set-identified models has found applications beyond macroeconometrics. This paper combines axiomatic decision theory and data obtained from multiple-price-list questionnaires to estimate bounds for the distributions of long-run and short-run discount factors in the subject pool. This paper shows that the parameters of the Latent Dirichlet Allocation model (a popular model for text analysis) are set-identified.
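The sup-t construction is simple enough to sketch in a few lines. Assuming one already has bootstrap or posterior draws of a vector parameter (say, an impulse response at several horizons), the band centers each coordinate and scales the critical value by the quantile of the maximum studentized deviation across coordinates; function and variable names here are mine, not from the paper.

```python
import numpy as np

def sup_t_band(draws, level=0.95):
    """Simultaneous sup-t band from parameter draws.

    draws: (n_draws, H) array of bootstrap/posterior draws of a
    vector parameter (e.g., an impulse response at H horizons).
    Returns (lower, upper), each of length H, covering all H
    coordinates jointly with approximate probability `level`."""
    center = draws.mean(axis=0)
    scale = draws.std(axis=0, ddof=1)
    # max absolute studentized deviation across coordinates, per draw
    max_t = np.max(np.abs(draws - center) / scale, axis=1)
    c = np.quantile(max_t, level)  # simultaneous critical value
    return center - c * scale, center + c * scale

# toy example: draws of a 12-horizon "impulse response"
rng = np.random.default_rng(0)
draws = rng.normal(size=(5000, 12)) + np.linspace(1.0, 0.0, 12)
lo, hi = sup_t_band(draws)
```

Because the critical value is the quantile of a maximum, the band is wider than a pointwise band, which is exactly what buys simultaneous coverage across horizons.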

My work on weak- and set-identified models, and my interest in macroeconometrics, have been really important in shaping my toolkit as an econometrician.

The work I did on weak instruments was very useful for understanding the role of asymptotic approximations in econometric analysis. I have used nonstandard asymptotics in other recent projects. For example, here I use nonstandard asymptotics to approximate the value of information in randomized experiments with small samples. Here, to capture the relevance of prior information in a potentially misspecified Bayesian Linear Regression. And here, to analyze the difference between reported and true posteriors in Bayesian models with a misspecified likelihood.

I first thought about text analysis after constructing external instruments to identify the effects of marginal tax rates. A lot of information about monetary and tax policy is in the form of narrative records (the FOMC transcripts, the Economic Report of the President, etc.). I think there should be a better way to process all this text to find instrumental variables for macro policy evaluation.

My current work has been more focused on Machine Learning. I am still hoping to write a nice paper on how a policy maker (trained as an econometrician) uses observational data to decide macroeconomic policy. I will call it ''Mostly Harmless Macroeconometrics''.