Sunday, January 31, 2016

Shrinking VAR's Toward Theory: Supplanting the Minnesota Prior?


A recent post, On Bayesian DSGE Modeling with Hard and Soft Restrictions, ended with: "A related issue is whether 'theory priors' will supplant others, like the 'Minnesota prior'. I'll save that for a later post." This is that later post. Its title refers to Ingram and Whiteman's 1994 classic, entitled "Supplanting the 'Minnesota' Prior: Forecasting Macroeconomic Time Series Using Real Business Cycle Model Priors."

So, shrinking VAR's using DSGE theory priors improves VAR forecasts. Sounds like a victory for economics, with the headline "Using Economic Theory Improves Economic Forecasts!" We'd all like that.

But the "victory" is misleading, and more than a little hollow. Lots of shrinkage directions improve forecasts -- indeed, almost all shrinkage directions do. Real victory would require theory-inspired priors to deliver clear extra improvement relative to other shrinkage directions, but they usually don't. In particular, the Minnesota prior, centered on a simple vector random walk, remains competitive. (See Del Negro and Schorfheide (2004) and Del Negro and Schorfheide (2007).) Sometimes theory priors beat the Minnesota prior by a little, sometimes they lose by a little. It depends on the dataset, the variable, the forecast horizon, etc.
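To make "shrinkage direction" concrete, here is a minimal sketch of the Minnesota prior's core idea for a single VAR equation: center the coefficient on the first own lag at one, center everything else at zero, tighten the prior as lags grow distant, and blend prior and data via a conjugate-normal posterior mean. The hyperparameter names (lam1, lam2) and the known-residual-variance setup are illustrative assumptions, not anyone's production code.

```python
# A minimal sketch of Minnesota-prior shrinkage for one VAR equation,
# estimated equation-by-equation.  Hyperparameters and the conjugate-normal
# posterior with known residual variance are illustrative assumptions.
import numpy as np

def minnesota_posterior(y, X, own_idx, n_vars, n_lags, sigmas,
                        lam1=0.2, lam2=0.5):
    """Posterior mean for one VAR equation under a Minnesota prior.

    y:        (T,) left-hand-side variable
    X:        (T, n_vars*n_lags) lagged regressors, ordered lag block by lag block
    own_idx:  index of the equation's own variable within each lag block
    sigmas:   (n_vars,) residual std. devs. from univariate AR fits
    """
    k = n_vars * n_lags
    prior_mean = np.zeros(k)
    prior_var = np.empty(k)
    for lag in range(1, n_lags + 1):
        for j in range(n_vars):
            pos = (lag - 1) * n_vars + j
            if j == own_idx:
                prior_var[pos] = (lam1 / lag) ** 2      # own lags: looser prior
                if lag == 1:
                    prior_mean[pos] = 1.0               # center on a random walk
            else:                                       # other variables: tighter
                prior_var[pos] = (lam1 * lam2 / lag
                                  * sigmas[own_idx] / sigmas[j]) ** 2
    # Conjugate-normal posterior mean: precision-weighted blend of OLS and prior.
    s2 = sigmas[own_idx] ** 2
    P = np.diag(1.0 / prior_var)
    return np.linalg.solve(X.T @ X / s2 + P, X.T @ y / s2 + P @ prior_mean)
```

Swapping the prior mean and variances for ones implied by a DSGE model changes the shrinkage direction; the mechanics stay the same.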

The bottom line: Theory priors seem to be roughly as good as anything else, including the Minnesota prior, but certainly they've not led us to anything resembling wonderful new forecasting success. This seems at best a small forecasting victory for theory priors, but perhaps a victory nonetheless, particularly given the obvious appeal of using a theory prior for Bayesian VAR forecasting that coheres with the theory model used for policy analysis.

Saturday, January 23, 2016

Strippers, JFK, Stalin, and the Oxford Comma

Maybe everyone already knows about the Oxford comma and the crazy stripper thing. I just learned about them. Anyway, here goes.

Consider (1) "x, y and z" vs. (2) "x, y, and z". The difference is that (2) has an extra comma before "and". I always thought that (1) vs. (2) doesn't matter, so long as you pick one and stick with it, maintaining consistency. But some authorities feel strongly that (2) should always be used. Indeed the extra comma in (2) is called an "Oxford comma", because the venerable Oxford University Press has insisted on its use for as long as anyone can remember.

Oxford has a point. It turns out that use of the Oxford comma eliminates the possibility of confusion that can arise otherwise. For example, consider the sentence, "We invited two strippers, JFK and Stalin." It's not clear whether that means two strippers plus JFK and Stalin, for a total of four people, as in the left panel below, or whether the strippers are JFK and Stalin, as in the right panel.

[Images: left panel, two strippers plus JFK and Stalin (four people); right panel, JFK and Stalin as the strippers.]
In contrast, inclusion of an Oxford comma renders the meaning unambiguous: "We invited two strippers, JFK, and Stalin" clearly corresponds to the left panel. 


The wacky example and pictures were created by a Dallas high school teacher and used in class a few months ago. Local parents were suitably outraged. Read about it here.

(The pictures are CBS Dallas screenshots.  Thanks to Hannah Diebold for bringing them to my attention!)

Wednesday, January 20, 2016

Time-Varying Dynamic Factor Loadings

Check out Mikkelsen et al. (2015).  I've always wanted to try high-dimensional dynamic factor models (DFM's) with time-varying loadings as an approach to network connectedness measurement (e.g., increasing connectedness would correspond to increasing factor loadings...).  The problem for me was how to do time-varying parameter DFM's in (ultra) high dimensions.  Enter Mikkelsen et al.  I also like that it's MLE -- I'm still an MLE fan, per Doz, Giannone and Reichlin.  It might be cool and appropriate to endow the time-varying factor loadings with factor structure themselves, which might be a straightforward extension (application?) of Stevanovic (2015).  (Stevanovic paper here; supplementary material here.)

Maximum Likelihood Estimation of Time-Varying Loadings in High-Dimensional Factor Models

Jakob Guldbæk Mikkelsen (Aarhus University and CREATES); Eric Hillebrand (Aarhus University and CREATES); Giovanni Urga (Cass Business School)

2015

In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite-sample properties of the maximum likelihood estimator through an extensive simulation study and illustrate the empirical relevance of the time-varying loadings structure using a large quarterly dataset for the US economy.
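For intuition, here is a minimal sketch of the two-step idea: step one extracts principal-components factors; step two runs, series by series, a Kalman filter with an AR(1) law of motion for the loadings. The transition and noise parameters (phi, q, h) are fixed illustrative assumptions; the paper's actual second step estimates the loadings-VAR parameters by maximum likelihood rather than fixing them as I do here.

```python
# A minimal sketch (illustrative assumptions, not the authors' code) of the
# two-step idea: (1) principal-components factor extraction, (2) per-series
# Kalman filtering of time-varying loadings under an AR(1) law of motion.
import numpy as np

def two_step_tv_loadings(X, r, phi=0.95, q=0.01, h=1.0):
    """X: (T, N) data panel; r: number of factors."""
    T, N = X.shape
    Xc = X - X.mean(axis=0)
    # Step 1: principal-components factor estimates (largest-r eigenvectors).
    _, eigvec = np.linalg.eigh(Xc @ Xc.T / (T * N))
    F = np.sqrt(T) * eigvec[:, -r:]                  # (T, r) estimated factors
    # Step 2: per-series Kalman filter for loadings lambda_t, with
    # state eq.: lambda_t = phi * lambda_{t-1} + eta_t,  eta_t ~ N(0, q*I)
    # obs. eq.:  x_t = F_t' lambda_t + e_t,              e_t   ~ N(0, h)
    loadings = np.zeros((T, N, r))
    for i in range(N):
        lam = np.zeros(r)                            # state mean
        P = np.eye(r)                                # state covariance
        for t in range(T):
            lam, P = phi * lam, phi**2 * P + q * np.eye(r)   # predict
            Ft = F[t]
            S = Ft @ P @ Ft + h                      # innovation variance
            K = P @ Ft / S                           # Kalman gain
            lam = lam + K * (Xc[t, i] - Ft @ lam)    # update state mean
            P = P - np.outer(K, Ft @ P)              # update state covariance
            loadings[t, i] = lam
    return F, loadings
```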

Sunday, January 17, 2016

Measuring Policy Uncertainty and its Effects

Fascinating work like Baker, Bloom and Davis (2015) has for some time had me interested in defining and measuring policy uncertainty and its effects. 

A plausible hypothesis is that policy uncertainty, like inflation uncertainty, reduces aggregate welfare by throwing sand in the Walrasian gears. An interesting new paper by Erzo Luttmer and Andrew Samwick, "The Welfare Cost of Policy Uncertainty: Evidence from Social Security," drills down to the micro decision-making level and shows how it reduces individual welfare by directly eroding the intended policy benefits. Nice work.

Thursday, January 14, 2016

"Secret Data" and Differential Privacy

See Cochrane's original post and followup.  The followup mentions the idea of differential privacy as a "technology" that may help when data confidentiality is an issue.  Actually, differential privacy is much more than an off-the-shelf technology.  Or much less, depending on your view, as it's very much an active and fascinating research area in algorithmic game theory.  Dwork and Roth (2014) provide a fine overview.  (Aaron Roth is my Penn colleague.)
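For readers new to the area, the canonical starting point in Dwork and Roth is the Laplace mechanism: add Laplace noise, scaled to the query's sensitivity and the privacy parameter epsilon, before releasing a statistic. A minimal sketch (the function name and toy data are mine, for illustration only):

```python
# A minimal sketch of the Laplace mechanism: release a count (sensitivity 1)
# perturbed by Laplace(1/epsilon) noise, which gives epsilon-differential privacy.
import numpy as np

rng = np.random.default_rng(0)

def private_count(data, predicate, epsilon):
    """Differentially private count of records satisfying `predicate`.

    Adding or removing one record changes the true count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(predicate(x) for x in data)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: private count of incomes above 100k, with epsilon = 0.5.
incomes = [45_000, 120_000, 88_000, 150_000, 99_000]
print(private_count(incomes, lambda x: x > 100_000, epsilon=0.5))
```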

Monday, January 11, 2016

Brilliant Strategy and Stunning Results

Check out Justin Wolfers' latest:

From The New York Times:  “When Teamwork Doesn’t Work for Women”
A study finds that female economists get far less credit for collaborative work than their male colleagues do, often harming their career prospects.
http://www.nytimes.com/2016/01/10/upshot/when-teamwork-doesnt-work-for-women.html

I am simply blown away.

I haven't yet read the underlying new paper by Heather Sarsons, but I look forward to it. 

Saturday, January 2, 2016

Endogeneity-Robust OLS Estimation (?)

Imagine White-style robust OLS inference, but with robustness to endogeneity as opposed to heteroskedasticity/autocorrelation (or maybe even robustness to all three).  It sounds too good to be true.  Actually, it sounds impossible, and even if somehow possible, it would of course require not only post-OLS tweaking of standard errors but also post-OLS tweaking of coefficient estimates.  But there are some emerging results -- embryonic and requiring strict conditions, but results nonetheless.  Quite intriguing.  Check out Jan Kiviet's latest paper, "When is it Really Justifiable to Ignore Explanatory Variable Endogeneity in a Regression Model?"
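To see why the coefficient tweaking is unavoidable, here is a tiny simulation (illustrative only -- this is the textbook endogeneity-bias formula with an assumed known correlation, not Kiviet's estimator). With corr(x, u) = rho, the OLS probability limit is beta + rho*sigma_u/sigma_x, so the point estimate itself, not just its standard error, must be adjusted:

```python
# A minimal illustrative simulation: OLS bias under endogeneity, and a
# correction that assumes rho is known -- the kind of strict condition
# such results require.  Not Kiviet's procedure.
import numpy as np

rng = np.random.default_rng(42)
beta, rho, n = 1.0, 0.5, 100_000

x = rng.standard_normal(n)
u = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # corr(x, u) = rho
y = beta * x + u

b_ols = (x @ y) / (x @ x)        # biased: plim = beta + rho * sigma_u / sigma_x
sigma_x = x.std()
# OLS residual variance understates sigma_u^2 under endogeneity:
# var(resid) -> sigma_u^2 * (1 - rho^2), so back out sigma_u given rho.
resid = y - b_ols * x
sigma_u = resid.std() / np.sqrt(1 - rho**2)
b_adj = b_ols - rho * sigma_u / sigma_x   # endogeneity-corrected estimate

print(f"OLS: {b_ols:.3f}, adjusted: {b_adj:.3f}, truth: {beta}")
```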