Sunday, September 17, 2017

Machine Learning Meets Central Banking

Here's a nice new working paper from the Bank of England.  There's nothing new methodologically, but there are three fascinating and detailed applications / case studies (banking supervision under imperfect information, UK CPI inflation forecasting, unicorns in financial technology).  For your visual enjoyment I include their Figure 19 below.  (It's the network graph for global technology start-ups in 2014, not spin-art...)

Monday, September 11, 2017

2017 NBER-NSF Time Series Meeting

Just back from the 2017 NBER-NSF Time Series Meeting at Northwestern.  Quite a feast -- my head is spinning.  Program dumped below; formatted version here.  Many thanks to the program committee for producing this event, and more generally for keeping the series going, year after year, stronger than ever.  (See here for some history and links to past locations, programs, etc.)

The papers were very strong.  Among those that I found particularly interesting are:

-- Moon.  Forecasting in short panels.  You'd think it would be impossible since you need the individual effects.  But it's not.

“Forecasting with Dynamic Panel Data Models”, Hyungsik Roger Moon (University of Southern California), Laura Liu, and Frank Schorfheide

-- Shephard.  Causal estimation meets time series.

“Time series experiments, causal estimands and exact p-values”, Neil Shephard (Harvard University) and Iavor Bojinov

-- The entire (and marvelously-coherent) "Lumsdaine Session" (Pruitt, Pelger, Giglio).  Real progress on econometric methods for identifying financial-market risk factors, with sharp empirical results.

“Instrumented Principal Component Analysis”, Seth Pruitt (Arizona State University), Bryan Kelly, and Yinan Su
“Estimating Latent Asset-Pricing Factors”, Markus Pelger (Stanford University) and Martin Lettau
“Inference on Risk Premia in the Presence of Omitted Factors”, Stefano Giglio (University of Chicago) and Dacheng Xiu



------------------

2017 NBER-NSF Time Series Conference
Friday, September 8 – Saturday, September 9
Kellogg School of Management
Kellogg Global Hub
2211 N Campus Drive; Evanston, IL 60208
Friday, September 8
Registration begins 10:20am (White Auditorium)
Welcome and opening remarks: 10:50am
Session 1: 11:00am – 12:30pm
Chair: Ruey S. Tsay (University of Chicago)
 “Egalitarian Lasso for Shrinkage and Selection in Forecast Combination” Francis X. Diebold (University of Pennsylvania) and Minchul Shin
 “Forecasting with Dynamic Panel Data Models” Hyungsik Roger Moon (University of Southern California), Laura Liu, and Frank Schorfheide
 “Large Vector Autoregressions with Stochastic Volatility and Flexible Priors” Andrea Carriero (Queen Mary University of London), Todd E. Clark, and Massimiliano Marcellino
12:30pm - 2:00pm: Lunch and Poster Session 1 (Faculty Summit, 4th Floor)
 “The Dynamics of Expected Returns: Evidence from Multi-Scale Time Series Modeling” Daniele Bianchi (University of Warwick)
 “Testing for Unit-root Non-stationarity against Threshold Stationarity” Kung-Sik Chan (University of Iowa)
 “Group Orthogonal Greedy Algorithm for Change-point Estimation of Multivariate Time Series” Ngai Hang Chan (The Chinese University of Hong Kong)
 “The Impact of Waiting Times on Volatility Filtering and Dynamic Portfolio Allocation” Dobrislav Dobrev (Federal Reserve Board of Governors)
 “Testing for Mutually Exciting Jumps and Financial Flights in High Frequency Data” Mardi Dungey (University of Tasmania), Xiye Yang (Rutgers University) presenting
 “Pockets of Predictability” Leland E. Farmer (University of California, San Diego)
 “Factor Models of Arbitrary Strength” Simon Freyaldenhoven (Brown University)
 “Inference for VARs Identified with Sign Restrictions” Eleonora Granziera (Bank of Finland)
 “The Time-Varying Effects of Conventional and Unconventional Monetary Policy: Results from a New Identification Procedure” Atsushi Inoue (Vanderbilt University)
 “On spectral density estimation via nonlinear wavelet methods for non-Gaussian linear processes” Linyuan Li (University of New Hampshire)
 “Multivariate Bayesian Predictive Synthesis in Macroeconomic Forecasting” Kenichiro McAlinn (Duke University)
 “Periodic dynamic factor models: Estimation approaches and applications” Vladas Pipiras (University of North Carolina)
 “Canonical stochastic cycles and band-pass filters for multivariate time series” Thomas M. Trimbur (U. S. Census Bureau)
Session 2: 2:00pm - 3:30pm
Chair: Giorgio Primiceri (Northwestern University)
 “Understanding the Sources of Macroeconomic Uncertainty” Tatevik Sekhposyan (Texas A&M University), Barbara Rossi, and Matthieu Soupre
 “Safety, Liquidity, and the Natural Rate of Interest” Marco Del Negro (Federal Reserve Bank of New York), Domenico Giannone, Marc P. Giannoni, and Andrea Tambalotti
 “Structural Interpretation of Vector Autoregressions with Incomplete Identification: Revisiting the Role of Oil Supply and Demand Shocks” Christiane Baumeister (University of Notre Dame) and James D. Hamilton
Afternoon Break: 3:30pm-4:00pm
Session 3: 4:00pm – 5:30pm
Chair: Serena Ng (Columbia University)
 “Controlling the Size of Autocorrelation Robust Tests” Benedikt M. Pötscher (University of Vienna) and David Preinerstorfer
 “Heteroskedasticity Autocorrelation Robust Inference in Time Series Regressions with Missing Data” Timothy J. Vogelsang (Michigan State University) and Seung-Hwa Rho
 “Time series experiments, causal estimands and exact p-values” Neil Shephard (Harvard University) and Iavor Bojinov
5:30pm – 7pm: Cocktail Reception and Poster Session 2 (Faculty Summit, 4th Floor)
 “Macro Risks and the Term Structure of Interest Rates” Andrey Ermolov (Fordham University)
 “Holdings-based Fund Performance Measures: Estimation and Inference” Wayne E. Ferson (University of Southern California), Junbo L. Wang (Louisiana State University) presenting
 “Economic Predictions with Big Data: The Illusion of Sparsity” Domenico Giannone (Federal Reserve Bank of New York)
 “Estimation and Inference of Dynamic Structural Factor Models with Over-identifying Restrictions” Xu Han (City University of Hong Kong)
 “Bayesian Predictive Synthesis: Forecast Calibration and Combination” Matthew C. Johnson (Duke University)
 “Time Series Modeling on Dynamic Networks” Jonas Krampe (TU Braunschweig)
 “The Complexity of Bank Holding Companies: A Topological Approach” Robin L. Lumsdaine (American University)
 “Sieve Estimation of Option Implied State Price Density” Zhongjun Qu (Boston University) - Junwen Lu (Boston University) presenting
 “Linear Factor Models and the Estimation of Expected Returns” Cisil Sarisoy (Northwestern University)
 “Efficient Parameter Estimation for Multivariate Jump-Diffusions” Gustavo Schwenkler (Boston University)
 “News-Driven Uncertainty Fluctuations” Dongho Song (Boston College)
 “Contagion, Systemic Risk and Diagnostic Tests in Large Mixed Panels” Cindy S.H. Wang (National Tsing Hua University and CORE, University Catholique de Louvain)
7-10pm: Dinner (White Auditorium)
 Dinner speaker: Nobel Laureate Robert F. Engle
Saturday, September 9
Continental Breakfast: 8:00am – 8:30am
Registration begins 8:30am (White Auditorium)
Session 4: 9:00am – 10:30am
Chair: Thomas Severini (Northwestern University)
 “Estimation of time varying covariance matrices for large datasets” Liudas Giraitis (Queen Mary University of London), Y. Dendramis, and G. Kapetanios
 “Indirect Inference With(Out) Constraints” Eric Renault (Brown University) and David T. Frazier
 “Edgeworth expansions for a class of spectral density estimators and their applications to interval estimation” S.N. Lahiri (North Carolina State University) and A. Chatterjee
Morning Break: 10:30am-11:00am
Session 5: 11:00am-12:30pm
Chair: Robin L. Lumsdaine (American University)
 “Instrumented Principal Component Analysis” Seth Pruitt (Arizona State University), Bryan Kelly, and Yinan Su
 “Estimating Latent Asset-Pricing Factors” Markus Pelger (Stanford University) and Martin Lettau
 “Inference on Risk Premia in the Presence of Omitted Factors” Stefano Giglio (University of Chicago) and Dacheng Xiu
12:30pm-2pm: Lunch and Poster Session 3 (Faculty Summit, 4th Floor)
 “Regularizing Bayesian Predictive Regressions” Guanhao Feng (City University of Hong Kong)
 “Good Jumps, Bad Jumps, and Conditional Equity Premium” Hui Guo (University of Cincinnati)
 “High-dimensional Linear Regression for Dependent Observations with Application to Nowcasting” Yuefeng Han (The University of Chicago)
 “Maximum Likelihood Estimation for Integer-valued Asymmetric GARCH (INAGARCH) Models” Xiaofei Hu (BMO Harris Bank, N.A.)
 “Tail Risk in Momentum Strategy Returns” Soohun Kim (Georgia Institute of Technology)
 “The Perils of Counterfactual Analysis with Integrated Processes” Marcelo C. Medeiros (Pontifical Catholic University of Rio de Janeiro) and Ricardo Masini (Pontifical Catholic University of Rio de Janeiro)
 “Anxious unit root processes” Jon Michel (The Ohio State University)
 “Limiting Local Powers and Power Envelopes of Panel AR and MA Unit Root Tests” Katsuto Tanaka (Gakushuin University)
 “High-Frequency Cross-Market Trading: Model Free Measurement and Applications” Ernst Schaumburg (AQR Capital Management, LLC) – Dobrislav Dobrev (Federal Reserve Board of Governors) presenting
 “A persistence-based Wold-type decomposition for stationary time series” Claudio Tebaldi (Bocconi University)
 “Necessary and Sufficient Conditions for Solving Multivariate Linear Rational Expectations Models and Factoring Matrix Polynomials” Peter A. Zadrozny (Bureau of Labor Statistics)
Session 6: 2:00pm – 3:30pm
Chair: Beth Andrews (Northwestern University)
 “Models for Time Series of Counts with Shape Constraints” Richard A. Davis (Columbia University) and Jing Zhang
 “Computationally Efficient Distribution Theory for Bayesian Inference of High-Dimensional Dependent Count-Valued Data” Scott H. Holan (University of Missouri, U.S. Census Bureau), Jonathan R. Bradley, and Christopher K. Wikle
 “Functional Autoregression for Sparsely Sampled Data” Daniel R. Kowal (Cornell University, Rice University)

Monday, September 4, 2017

More on New p-Value Thresholds

I recently blogged on a new proposal heavily backed by elite statisticians to "redefine statistical significance", forthcoming in the elite journal Nature Human Behavior. (A link to the proposal appears at the end of this post.) 

I have a bit more to say. It's not just that I find the proposal counterproductive; I have to admit that I also find it annoying, bordering on offensive.

I find it inconceivable that the authors' p<.005 recommendation will affect their own behavior, or that of others like them. They're all skilled statisticians, hardly so naive as to declare a "discovery" simply because a p-value does or doesn't cross a magic threshold, whether .05 or .005. Serious evaluations and interpretations of statistical analyses by serious statisticians are much more nuanced and rich -- witness the extended and often-heated discussion in any good applied statistics seminar.

If the p<.005 threshold won't change the behavior of skilled statisticians like the proposal's authors, then whose behavior MIGHT it change? That is, reading between the lines, to whom is the proposal REALLY addressed?  Evidently those much less skilled, the proverbial "practitioners", whom the authors hope might be kept from trouble by a rule of thumb that can at least be followed mechanically.

How patronizing.


------


Redefine Statistical Significance

Date: 2017
By:
Daniel Benjamin ; James Berger ; Magnus Johannesson ; Brian Nosek ; E. Wagenmakers ; Richard Berk ; Kenneth Bollen ; Bjorn Brembs ; Lawrence Brown ; Colin Camerer ; David Cesarini ; Christopher Chambers ; Merlise Clyde ; Thomas Cook ; Paul De Boeck ; Zoltan Dienes ; Anna Dreber ; Kenny Easwaran ; Charles Efferson ; Ernst Fehr ; Fiona Fidler ; Andy Field ; Malcolm Forster ; Edward George ; Tarun Ramadorai ; Richard Gonzalez ; Steven Goodman ; Edwin Green ; Donald Green ; Anthony Greenwald ; Jarrod Hadfield ; Larry Hedges ; Leonhard Held ; Teck Hau Ho ; Herbert Hoijtink ; James Jones ; Daniel Hruschka ; Kosuke Imai ; Guido Imbens ; John Ioannidis ; Minjeong Jeon ; Michael Kirchler ; David Laibson ; John List ; Roderick Little ; Arthur Lupia ; Edouard Machery ; Scott Maxwell ; Michael McCarthy ; Don Moore ; Stephen Morgan ; Marcus Munafo ; Shinichi Nakagawa ; Brendan Nyhan ; Timothy Parker ; Luis Pericchi ; Marco Perugini ; Jeff Rouder ; Judith Rousseau ; Victoria Savalei ; Felix Schonbrodt ; Thomas Sellke ; Betsy Sinclair ; Dustin Tingley ; Trisha Zandt ; Simine Vazire ; Duncan Watts ; Christopher Winship ; Robert Wolpert ; Yu Xie ; Cristobal Young ; Jonathan Zinman ; Valen Johnson

Abstract: We propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.
http://d.repec.org/n?u=RePEc:feb:artefa:00612&r=ecm 

Sunday, August 27, 2017

New p-Value Thresholds for Statistical Significance

This is presently among the hottest topics / discussions / developments in statistics.  Seriously.  Just look at the abstract and dozens of distinguished authors of the paper below, which is forthcoming in one of the world's leading science outlets, Nature Human Behavior.

Of course data mining, or overfitting, or whatever you want to call it, has always been a problem, warranting strong and healthy skepticism regarding alleged "new discoveries".  But the whole point of examining p-values is to AVOID anchoring on arbitrary significance thresholds, whether the old magic .05 or the newly-proposed magic .005.  Just report the p-value, and let people decide for themselves how they feel.  Why obsess over asterisks, and whether/when to put them next to things?

Postscript:

Reading the paper, which I had not done before writing the paragraph above (there's largely no need, as the wonderfully concise abstract says it all), I see that it anticipates my objection at the end of a section entitled "potential objections":
Changing the significance threshold is a distraction from the real solution, which is to replace null hypothesis significance testing (and bright-line thresholds) with more focus on effect sizes and confidence intervals, treating the P-value as a continuous measure, and/or a Bayesian method.
Hear, hear! Marvelously well put.

The paper offers only a feeble refutation of that "potential" objection:
Many of us agree that there are better approaches to statistical analyses than null hypothesis significance testing, but as yet there is no consensus regarding the appropriate choice of replacement. ... Even after the significance threshold is changed, many of us will continue to advocate for alternatives to null hypothesis significance testing. 
I'm all for advocating alternatives to significance testing.  That's important and helpful.  As for continuing to promulgate significance testing with magic significance thresholds, whether .05 or .005, well, you can decide for yourself.

Redefine Statistical Significance
Date: 2017
By: Daniel Benjamin ; James Berger ; Magnus Johannesson ; Brian Nosek ; E. Wagenmakers ; Richard Berk ; Kenneth Bollen ; Bjorn Brembs ; Lawrence Brown ; Colin Camerer ; David Cesarini ; Christopher Chambers ; Merlise Clyde ; Thomas Cook ; Paul De Boeck ; Zoltan Dienes ; Anna Dreber ; Kenny Easwaran ; Charles Efferson ; Ernst Fehr ; Fiona Fidler ; Andy Field ; Malcolm Forster ; Edward George ; Tarun Ramadorai ; Richard Gonzalez ; Steven Goodman ; Edwin Green ; Donald Green ; Anthony Greenwald ; Jarrod Hadfield ; Larry Hedges ; Leonhard Held ; Teck Hau Ho ; Herbert Hoijtink ; James Jones ; Daniel Hruschka ; Kosuke Imai ; Guido Imbens ; John Ioannidis ; Minjeong Jeon ; Michael Kirchler ; David Laibson ; John List ; Roderick Little ; Arthur Lupia ; Edouard Machery ; Scott Maxwell ; Michael McCarthy ; Don Moore ; Stephen Morgan ; Marcus Munafo ; Shinichi Nakagawa ; Brendan Nyhan ; Timothy Parker ; Luis Pericchi ; Marco Perugini ; Jeff Rouder ; Judith Rousseau ; Victoria Savalei ; Felix Schonbrodt ; Thomas Sellke ; Betsy Sinclair ; Dustin Tingley ; Trisha Zandt ; Simine Vazire ; Duncan Watts ; Christopher Winship ; Robert Wolpert ; Yu Xie ; Cristobal Young ; Jonathan Zinman ; Valen Johnson

Abstract:  
We propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.



http://d.repec.org/n?u=RePEc:feb:artefa:00612&r=ecm


Friday, August 25, 2017

Flipping the https Switch

I just flipped a switch to convert No Hesitations from http to https, which should be totally inconsequential to you -- you should not need to do anything, but obviously let me know if your browser chokes.  The switch will definitely solve one problem:  Chrome has announced that it will soon REQUIRE https.  Moreover, the switch may help with another problem.  There have been issues over the years with certain antivirus software blocking No Hesitations without a manual override.  The main culprit seems to be Kaspersky Antivirus.  Maybe that will now stop.

Sunday, August 20, 2017

Bayesian Random Projection (More on Terabytes of Economic Data)

Some additional thoughts related to Serena Ng's World Congress piece (earlier post here, with a link to her paper):

The key newish dimensionality-reduction strategies that Serena emphasizes are random projection and leverage score sampling.  In a regression context both are methods for optimally approximating an NxK "X matrix" with an Nxk X matrix, where k<<K.  They are very different, and each raises many issues.  Random projection delivers a smaller X matrix with columns that are linear combinations of those of the original X matrix, as for example with principal-component regression, which can sometimes make for difficult interpretation.  Leverage score sampling, in contrast, delivers a smaller X matrix with columns that are simply a subset of those of the original X matrix, which feels cleaner but has issues of its own.

Anyway, a crucial observation is that for successful predictive modeling we don't need deep interpretation, so random projection is potentially just fine -- if it works, it works, and that's an empirical matter.  Econometric extensions (e.g., to VAR's) and evidence (e.g., in macro forecasting) are just now emerging, and the results appear encouraging.  An important recent contribution in that regard is Koop, Korobilis, and Pettenuzzo (in press), which significantly extends and applies earlier work of Guhaniyogi and Dunson (2015) on Bayesian random projection ("compression").  Bayesian compression fits beautifully in an MCMC framework (again see Koop et al.), including model averaging across multiple random projections, attaching greater weight to projections that forecast well.  Very exciting!
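To fix ideas, here's a minimal sketch of the basic random-projection mechanics in a regression context (not the Koop et al. Bayesian machinery), using simulated data and a single Gaussian projection matrix; all names and dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated regression with many predictors: y = X beta + noise
N, K, k = 500, 200, 20                      # k << K: compressed dimension
X = rng.standard_normal((N, K))
beta = rng.standard_normal(K) / np.sqrt(K)
y = X @ beta + 0.1 * rng.standard_normal(N)

# Random projection: each compressed regressor is a random linear
# combination of the original K columns.
Phi = rng.standard_normal((K, k)) / np.sqrt(k)
Xc = X @ Phi                                # N x k

# OLS on an intercept plus the k compressed regressors
Z = np.column_stack([np.ones(N), Xc])
b_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ b_hat
r2 = 1 - resid @ resid / np.sum((y - y.mean()) ** 2)
print(f"In-sample R^2 using {k} of {K} dimensions: {r2:.3f}")
```

In the Bayesian compression approach one would draw many such Phi matrices and average across the implied models, weighting projections by how well they fit or forecast.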

Monday, August 14, 2017

Analyzing Terabytes of Economic Data

Serena Ng's World Congress piece is out as an NBER w.p.  It's been floating around for a long time, but just in case you missed it, it's a fun and insightful read:

Opportunities and Challenges: Lessons from Analyzing Terabytes of Scanner Data
by Serena Ng  -  NBER Working Paper #23673.
http://papers.nber.org/papers/w23673


(Ungated copy at http://www.columbia.edu/~sn2294/papers/sng-worldcongress.pdf)

Abstract:

This paper seeks to better understand what makes big data analysis different, what we can and cannot do with existing econometric tools, and what issues need to be dealt with in order to work with the data efficiently.  As a case study, I set out to extract any business cycle information that might exist in four terabytes of weekly scanner data.  The main challenge is to handle the volume, variety, and characteristics of the data within the constraints of our computing environment. Scalable and efficient algorithms are available to ease the computation burden, but they often have unknown statistical properties and are not designed for the purpose of efficient estimation or optimal inference.  As well, economic data have unique characteristics that generic algorithms may not accommodate.  There is a need for computationally efficient econometric methods as big data is likely here to stay.

Saturday, August 12, 2017

On Theory, Measurement, and Lewbel's Assertion

Arthur Lewbel, insightful as always, asserts in a recent post that:
The people who argue that machine learning, natural experiments, and randomized controlled trials are replacing structural economic modeling and theory are wronger than wrong.
As ML and experiments uncover ever more previously unknown correlations and connections, the desire to understand these newfound relationships will rise, thereby increasing, not decreasing, the demand for structural economic theory and models.
I agree.  New measurement produces new theory, and new theory produces new measurement -- it's hard to imagine stronger complements.  And as I said in an earlier post,
Measurement and theory are rarely advanced at the same time, by the same team, in the same work. And they don't need to be. Instead we exploit the division of labor, as we should. Measurement can advance significantly with little theory, and theory can advance significantly with little measurement. Still each disciplines the other in the long run, and science advances.
The theory/measurement pendulum tends to swing widely.  If the 1970's and 1980's were a golden age of economic theory, recent decades have witnessed explosive advances in economic measurement linked to the explosion of Big Data.  But Big Data presents both measurement opportunities and pitfalls -- dense fogs of "digital exhaust" -- which fresh theory will help us penetrate.  Theory will be back.

[Related earlier posts:  "Big Data the Big Hassle" and "Theory gets too Much Respect, and Measurement Doesn't get Enough"]

Saturday, August 5, 2017

Commodity Connectedness


Forthcoming paper here:
We study connectedness among the major commodity markets, summarizing and visualizing the results using tools from network science.

Among other things, the results reveal clear clustering of commodities into groups closely related to the traditional industry taxonomy, but with some notable differences.


Many thanks to Central Bank of Chile for encouraging and supporting the effort via its 2017 Annual Research Conference.

Sunday, July 30, 2017

Regression Discontinuity and Event Studies in Time Series

Check out the new paper, "Regression Discontinuity in Time [RDiT]: Considerations for Empirical Applications", by Catherine Hausman and David S. Rapson.  (NBER Working Paper No. 23602, July 2017.  Ungated copy here.)

It's interesting in part because it documents and contributes to the largely cross-sectional regression discontinuity design literature's awakening to time series.  But the elephant in the room is the large time-series "event study" (ES) literature, mentioned but not emphasized by Hausman and Rapson.  [In a one-sentence nutshell, here's how an ES works: model the pre-event period, use the fitted pre-event model to predict the post-event period, and ascribe any systematic forecast error to the causal impact of the event.]  ES's trace to the classic Fama et al. (1969).  Among many others, MacKinlay's 1997 overview is still fresh, and Gürkaynak and Wright (2013) provide additional perspective.
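The one-sentence ES nutshell can be sketched in a few lines.  This is a stylized simulation, not any particular paper's specification: fit an AR(1) to the pre-event period, forecast the post-event period dynamically, and read off the average forecast error as the estimated causal effect.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate: stationary AR(1) noise around a mean that jumps by `effect` at t0
T, t0, phi, effect = 300, 200, 0.5, 2.0
u = np.zeros(T)
for t in range(1, T):
    u[t] = phi * u[t - 1] + rng.standard_normal()
y = u + np.where(np.arange(T) >= t0, effect, 0.0)

# Step 1: model the pre-event period (AR(1) with intercept, fit by OLS)
Y, X = y[1:t0], np.column_stack([np.ones(t0 - 1), y[:t0 - 1]])
c_hat, phi_hat = np.linalg.lstsq(X, Y, rcond=None)[0]

# Step 2: dynamic (multi-step) forecast of the post-event period
y_fore = np.empty(T - t0)
prev = y[t0 - 1]
for h in range(T - t0):
    prev = c_hat + phi_hat * prev
    y_fore[h] = prev

# Step 3: ascribe the average forecast error to the event
event_effect = np.mean(y[t0:] - y_fore)
print(f"Estimated event effect: {event_effect:.2f} (true: {effect})")
```

The wide-window worry discussed below is visible even here: the credibility of step 3 rests entirely on the pre-event AR(1) remaining the right model throughout the post-event window.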

One question is what the RDiT approach adds to the ES approach and, relatedly, what it adds to the well-developed time-series toolkit of other methods for assessing structural change.  At present, and notwithstanding the Hausman-Rapson paper, my view is "little or nothing".  Indeed in most respects it would seem that an RDiT study *is* an ES, and conversely.  So call it what you will, "ES" or "RDiT".

But there are important open issues in ES / RDiT, and Hausman-Rapson correctly emphasize one of them, namely the issues and difficulties associated with "wide" pre- and post-event windows, which are often the relevant case in time series.

Things are generally "easy" in cross sections, where we can usually take narrow windows (e.g., in the classic scholarship exam example, we use only test scores very close to the scholarship threshold).  Things are similarly "easy" in time series *IF* we can take similarly narrow windows (e.g., high-frequency asset return data facilitate taking narrow pre- and post-event windows in financial applications).  In such cases it's comparatively easy to credibly ascribe a post-event break to the causal impact of the event.

But in other time-series areas like macro and environmental, we might want (or need) to use wide pre- and post-event windows.  Then the trick becomes modeling the pre- and post-event periods successfully enough so that we can credibly assert that any structural change is due exclusively to the event -- very challenging, but not hopeless.

Hats off to Hausman and Rapson for beginning to bridge the ES and regression discontinuity literatures, and for implicitly helping to push the ES literature forward.

Tuesday, July 25, 2017

Time-Series Regression Discontinuity

I'll have something to say in next week's post.  Meanwhile check out the interesting new paper, "Regression Discontinuity in Time: Considerations for Empirical Applications", by Catherine Hausman and David S. Rapson, NBER Working Paper No. 23602, July 2017.  (Ungated version here.)

Sunday, July 23, 2017

On the Origin of "Frequentist" Statistics

Efron and Hastie note that the "frequentist" term "seems to have been suggested by Neyman as a statistical analogue of Richard von Mises' frequentist theory of probability, the connection being made explicit in his 1977 paper, 'Frequentist Probability and Frequentist Statistics'".  It strikes me that I may have always subconsciously assumed that the term originated with one or another Bayesian, in an attempt to steer toward something more neutral than "classical", which could be interpreted as "canonical" or "foundational" or "the first and best".  Quite fascinating that the ultimate "classical" statistician, Neyman, seems to have initiated the switch to "frequentist".

Sunday, July 9, 2017

On the Identification of Network Connectedness

I want to clarify an aspect of the Diebold-Yilmaz framework (e.g., here or here).  It is simply a method for summarizing and visualizing dynamic network connectedness, based on a variance decomposition matrix.  The variance decomposition is not a part of our technology; rather, it is the key input to our technology.  Calculation of a variance decomposition of course requires an identified model.  We have nothing new to say about that; numerous models/identifications have appeared over the years, and it's your choice (but you will of course have to defend your choice). 

For certain reasons (e.g., comparatively easy extension to high dimensions) Yilmaz and I generally use a vector-autoregressive model and Koop-Pesaran-Shin "generalized identification".  Again, however, if you don't find that appealing, you can use whatever model and identification scheme you want.  As long as you can supply a credible / defensible variance decomposition matrix, the network summarization / visualization technology can then take over.
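To make the division of labor concrete, here's a toy sketch of the summarization step, starting from a hypothetical, already-identified variance decomposition matrix (the numbers are invented).  The connectedness measures are just row, column, and grand sums of the off-diagonal variance shares:

```python
import numpy as np

# Hypothetical 3-variable variance decomposition matrix: entry D[i, j] is
# the share of variable i's forecast error variance due to shocks to j.
# Rows sum to one (normalize first if your identification doesn't guarantee it).
D = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.05, 0.25, 0.70]])

off = D - np.diag(np.diag(D))          # cross-variable (off-diagonal) shares
from_others = off.sum(axis=1)          # directional: what i receives from others
to_others = off.sum(axis=0)            # directional: what j transmits to others
total = off.sum() / D.shape[0]         # total connectedness index

print("from others:", from_others)
print("to others:  ", to_others)
print("total connectedness:", round(total, 3))
```

Everything upstream of D -- the model, the identification -- is the user's responsibility; everything downstream is mechanical summarization (and, in higher dimensions, network visualization).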


Monday, July 3, 2017

Bayes, Jeffreys, MCMC, Statistics, and Econometrics

In Ch. 3 of their brilliant book, Efron and Hastie (EH) assert that:
Jeffreys’ brand of Bayesianism [i.e., "uninformative" Jeffreys priors] had a dubious reputation among Bayesians in the period 1950-1990, with preference going to subjective analysis of the type advocated by Savage and de Finetti. The introduction of Markov chain Monte Carlo methodology was the kind of technological innovation that changes philosophies. MCMC ... being very well suited to Jeffreys-style analysis of Big Data problems, moved Bayesian statistics out of the textbooks and into the world of computer-age applications.
Interestingly, the situation in econometrics strikes me as rather the opposite.  Pre-MCMC, much of the leading work emphasized Jeffreys priors (RIP Arnold Zellner), whereas post-MCMC I see uniform at best (still hardly uninformative, as is well known and as noted by EH), and often Gaussian or Wishart or whatever.  MCMC of course still came to dominate modern Bayesian econometrics, but for a different reason: it facilitates calculation of the marginal posteriors of interest, in contrast to the conditional posteriors of old-style analytical calculations.  (In an obvious notation and for an obvious normal-gamma regression problem, for example, one wants posterior(beta), not posterior(beta | sigma).)  So MCMC has moved us toward marginal posteriors, but moved us away from uninformative priors.
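For the normal regression example, a minimal Gibbs sketch (flat prior on beta, Jeffreys-type prior on sigma^2, simulated data -- all choices mine, for illustration) shows the point: one only ever draws from the two conditionals, yet the retained beta draws approximate the marginal posterior, with sigma^2 integrated out numerically rather than analytically.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated regression data with true coefficients (1, 2)
n = 200
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)

XtX_inv = np.linalg.inv(X.T @ X)
b_ols = XtX_inv @ X.T @ y

draws, sigma2 = [], 1.0
for it in range(2000):
    # beta | sigma^2, y  ~  N(b_ols, sigma^2 (X'X)^{-1})
    beta = rng.multivariate_normal(b_ols, sigma2 * XtX_inv)
    # sigma^2 | beta, y  ~  Inverse-Gamma(n/2, SSR/2)
    ssr = np.sum((y - X @ beta) ** 2)
    sigma2 = 1.0 / rng.gamma(n / 2, 2.0 / ssr)
    if it >= 500:                       # discard burn-in
        draws.append(beta)

draws = np.array(draws)
# The retained beta draws approximate the *marginal* posterior of beta.
print("posterior means:", draws.mean(axis=0).round(2))
```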

Thursday, June 29, 2017

More Slides: Forecast Evaluation, DSGE Modeling, and Connectedness

The last post (slides from a recent conference discussion) reminded me of some slide decks that go along with some forthcoming papers.  I hope they're useful.

Diebold, F.X. and Shin, M. (in press), "Assessing Point Forecast Accuracy by Stochastic Error Distance," Econometric Reviews.  Slides here.

Diebold, F.X., Schorfheide, F. and Shin, M. (in press)"Real-Time Forecast Evaluation of DSGE Models with Stochastic Volatility," Journal of Econometrics.  Slides here.

Demirer, M., Diebold, F.X., Liu, L. and Yilmaz, K. (in press), "Estimating Global Bank Network Connectedness", Journal of Applied Econometrics.  Slides here.

Monday, June 26, 2017

Slides from SoFiE NYU Discussion

Here are the slides from my pre-conference discussion of Yang Liu's interesting paper, "Government Debt and Risk Premia", at the NYU SoFiE meeting. The key will be to see whether his result (that debt/GDP is a key driver of the equity premium) remains when he controls for expected future real activity. (See Campbell and Diebold, "Stock Returns and Expected Business Conditions: Half a Century of Direct Evidence," Journal of Business and Economic Statistics, 27, 266-278, 2009.)

Wednesday, June 7, 2017

Structural Change and Big Data

Recall the tall-wide-dense (T, K, m) Big Data taxonomy.  One might naively assert that tall data (big time dimension, T) are not really a part of the Big Data phenomenon, insofar as T has not started growing more quickly in recent years.  But a more sophisticated perspective on the "size" of T is whether it is big enough to make structural change a potentially serious concern.  And structural change is a serious concern, routinely, in time-series econometrics.  Hence structural change, in a sense, produces Big Data through the T channel.

Saturday, May 27, 2017

SoFiE 2017 New York

If you haven't yet been to the Society for Financial Econometrics (SoFiE) annual meeting, now's the time.  They're pulling out all the stops for the 10th anniversary at NYU Stern, June 21-23, 2017.  There will be a good mix of financial econometrics and empirical finance (invited speakers here; full program here). The "pre-conference" will also continue, this year June 20, with presentations by junior scholars (new/recent Ph.D.'s) and discussions by senior scholars. Lots of information here. See you there!

Monday, May 22, 2017

Big Data in Econometric Modeling

Here's a speakers' photo from last week's Penn conference, Big Data in Dynamic Predictive Econometric Modeling.  Click through to find the program, copies of papers and slides, a participant list, and a few more photos.  A good and productive time was had by all!


Monday, May 15, 2017

Statistics in the Computer Age

Efron and Hastie's Computer Age Statistical Inference (CASI) is about as good as it gets. Just read it. (Yes, I generally gush about most work in the Efron, Hastie, Tibshirani, Breiman, Friedman, et al. tradition.  But there's good reason for that.)  As with the earlier Hastie-Tibshirani Springer-published blockbusters (e.g., here), the CASI publisher (Cambridge) has allowed ungated posting of the pdf (here).  Hats off to Efron, Hastie, Springer, and Cambridge.

Monday, May 8, 2017

Replicating Anomalies

I blogged a few weeks ago on "the file drawer problem".  In that vein, check out the interesting new paper below. I like their term "p-hacking". 

Random thought 1:  
Note that reverse p-hacking can also occur, when an author wants high p-values (i.e., insignificance).  In the study below, for example, the deck could be stacked with all sorts of dubious/spurious "anomaly variables" that no one ever took seriously, and of course a very large number of those would then wind up with high p-values.  I am not suggesting that the study below is guilty of this; rather, I had simply never thought about reverse p-hacking before, and this paper led me to think of the possibility, so I'm relaying the thought.
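The multiple-testing arithmetic behind both flavors of p-hacking is easy to simulate. A minimal sketch, with all numbers hypothetical (447 pure-noise "anomaly" return series, matching the paper's variable count): roughly 5% clear the conventional 5% bar by luck alone, while a deck stacked with null variables mechanically delivers a mountain of insignificant results.

```python
import math

import numpy as np

rng = np.random.default_rng(1)

# 447 "anomaly variables" that are pure noise: each a series of 500
# monthly long-short returns with true mean zero.
n_anomalies, n_months = 447, 500
returns = rng.normal(0.0, 1.0, size=(n_anomalies, n_months))

# t-statistic for each anomaly's mean return, two-sided normal p-value.
t_stats = returns.mean(axis=1) / (returns.std(axis=1, ddof=1) / math.sqrt(n_months))
p_values = np.array([1 - math.erf(abs(t) / math.sqrt(2)) for t in t_stats])

# Classic p-hacking: ~5% of pure-noise variables look "significant" by luck.
n_lucky = int((p_values < 0.05).sum())

# Reverse p-hacking: the overwhelming majority are insignificant by
# construction, inflating the count of "failed" anomalies.  The |t| > 3
# cutoff screens out nearly all of the lucky ones.
n_t3 = int((np.abs(t_stats) > 3).sum())
print(n_lucky, n_anomalies - n_lucky, n_t3)
```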

Related random thought 2:  
It would be interesting to compare anomalies published in "top journals" and "non-top journals" to see whether the top journals are more guilty or less guilty of p-hacking.  I can think of competing factors that could tip it either way!

Replicating Anomalies
by Kewei Hou, Chen Xue, Lu Zhang - NBER Working Paper #23394
Abstract:
The anomalies literature is infested with widespread p-hacking. We replicate the entire anomalies literature in finance and accounting by compiling a largest-to-date data library that contains 447 anomaly variables. With microcaps alleviated via New York Stock Exchange breakpoints and value-weighted returns, 286 anomalies (64%) including 95 out of 102 liquidity variables (93%) are insignificant at the conventional 5% level. Imposing the cutoff t-value of three raises the number of insignificance to 380 (85%). Even for the 161 significant anomalies, their magnitudes are often much lower than originally reported. Out of the 161, the q-factor model leaves 115 alphas insignificant (150 with t < 3). In all, capital markets are more efficient than previously recognized.  


Thursday, May 4, 2017

Sunday, April 30, 2017

One Millionth Birthday...

 ...in event time.  It's true, yesterday No Hesitations passed 1,000,000 page views.  Totally humbling.  I am grateful for your interest and support.

Thursday, April 20, 2017

Automated Time-Series Forecasting at Google

Check out this piece on automated time-series forecasting at Google.  It's a fun and quick read. Several aspects are noteworthy.  

On the upside:

-- Forecast combination features prominently -- they combine forecasts from an ensemble of models.  

-- Uncertainty is acknowledged -- they produce interval forecasts, not just point forecasts.

On the downside:

-- There's little to their approach that wasn't well known and widely used in econometrics a quarter century ago (or more).  Might not something like Autobox, which has been around and evolving since the 1970s, do as well or better?
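The two upside points are easy to illustrate. Here's a hedged sketch, emphatically not Google's method (all models, weights, and numbers are invented): inverse-MSE forecast combination across an ensemble, plus a crude interval forecast from training-sample error quantiles.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical target series (AR(1)) and two rival forecasters whose
# errors have different variances.
n = 300
actual = np.zeros(n)
for t in range(1, n):
    actual[t] = 0.7 * actual[t - 1] + rng.normal()

forecast_a = actual + rng.normal(0.0, 0.5, n)   # accurate forecaster
forecast_b = actual + rng.normal(0.0, 1.5, n)   # noisy forecaster

# Inverse-MSE combining weights, estimated on a training window.
train, oos = slice(0, 200), slice(200, n)
mse_a = np.mean((actual[train] - forecast_a[train]) ** 2)
mse_b = np.mean((actual[train] - forecast_b[train]) ** 2)
w_a = (1 / mse_a) / (1 / mse_a + 1 / mse_b)
combined = w_a * forecast_a + (1 - w_a) * forecast_b

# A crude interval forecast: center on the combination, half-width from
# the 90th percentile of absolute training-sample combination errors.
halfwidth = np.quantile(np.abs(actual[train] - combined[train]), 0.9)

mse_comb = np.mean((actual[oos] - combined[oos]) ** 2)
coverage = np.mean(np.abs(actual[oos] - combined[oos]) < halfwidth)
print(mse_comb, coverage)
```

Out of sample the combination handily beats the noisy model, and the interval's coverage sits near its nominal 90%, which is the whole point of acknowledging uncertainty rather than reporting bare point forecasts.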

Friday, April 14, 2017

On Pseudo Out-of-Sample Model Selection

Great to see that Hirano and Wright (HW), "Forecasting with Model Uncertainty", finally came out in Econometrica. (Ungated working paper version here.)

HW make two key contributions. First, they characterize rigorously the source of the inefficiency in forecast model selection by pseudo out-of-sample methods (expanding-sample, split-sample, ...), adding invaluable precision to more intuitive discussions like Diebold (2015). (Ungated working paper version here.) Second, and very constructively, they show that certain simulation-based estimators (including bagging) can considerably reduce, if not completely eliminate, the inefficiency.


Abstract: We consider forecasting with uncertainty about the choice of predictor variables. The researcher wants to select a model, estimate the parameters, and use the parameter estimates for forecasting. We investigate the distributional properties of a number of different schemes for model choice and parameter estimation, including: in‐sample model selection using the Akaike information criterion; out‐of‐sample model selection; and splitting the data into subsamples for model selection and parameter estimation. Using a weak‐predictor local asymptotic scheme, we provide a representation result that facilitates comparison of the distributional properties of the procedures and their associated forecast risks. This representation isolates the source of inefficiency in some of these procedures. We develop a simulation procedure that improves the accuracy of the out‐of‐sample and split‐sample methods uniformly over the local parameter space. We also examine how bootstrap aggregation (bagging) affects the local asymptotic risk of the estimators and their associated forecasts. Numerically, we find that for many values of the local parameter, the out‐of‐sample and split‐sample schemes perform poorly if implemented in the conventional way. But they perform well, if implemented in conjunction with our risk‐reduction method or bagging.
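The setting HW study is easy to mimic in a toy Monte Carlo: a weak predictor, with model selection by full-sample AIC versus a conventional split-sample scheme. This is my own invented design (parameter values included), not HW's simulation; it merely tabulates forecast risk under the two schemes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Forecast risk under two selection schemes in a weak-predictor setting:
# y_t = b * x_t + e_t with small b.
def forecast_risk(scheme, b=0.15, T=100, reps=2000):
    errs = np.empty(reps)
    for r in range(reps):
        x = rng.normal(size=T + 1)          # x[T] is next period's predictor
        y = b * x[:T] + rng.normal(size=T)
        if scheme == "aic":
            # Full-sample AIC choice: intercept-only vs. intercept + slope.
            xc = x[:T] - x[:T].mean()
            beta = xc @ (y - y.mean()) / (xc @ xc)
            rss1 = np.sum((y - y.mean() - beta * xc) ** 2)
            rss0 = np.sum((y - y.mean()) ** 2)
            use_x = T * np.log(rss1 / T) + 4 < T * np.log(rss0 / T) + 2
            fc = y.mean() + beta * (x[T] - x[:T].mean()) if use_x else y.mean()
        else:
            # Split-sample: estimate on the first half, select by hold-out
            # MSE on the second half, forecast with first-half estimates.
            h = T // 2
            xc = x[:h] - x[:h].mean()
            a = y[:h].mean()
            beta = xc @ (y[:h] - a) / (xc @ xc)
            pred1 = a + beta * (x[h:T] - x[:h].mean())
            mse1 = np.mean((y[h:] - pred1) ** 2)
            mse0 = np.mean((y[h:] - a) ** 2)
            fc = a + beta * (x[T] - x[:h].mean()) if mse1 < mse0 else a
        errs[r] = (b * x[T] + rng.normal() - fc) ** 2
    return errs.mean()

risk_aic = forecast_risk("aic")
risk_split = forecast_risk("split")
print(risk_aic, risk_split)
```

The split-sample scheme wastes data twice, selecting on half the sample and estimating on the other half, which is precisely the inefficiency HW characterize and then repair via simulation-based estimators and bagging.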

Monday, April 10, 2017

Big Data, Machine Learning, and the Macroeconomy

Coming soon at Bank of Norway:

CALL FOR PAPERS 
Big data, machine learning and the macroeconomy 
Norges Bank, Oslo, 2-3 October 2017 

Data, in both structured and unstructured form, are becoming easily available on an ever increasing scale. To find patterns and make predictions using such big data, machine learning techniques have proven to be extremely valuable in a wide variety of fields. This conference aims to gather researchers using machine learning and big data to answer challenges relevant for central banking. 

Examples of questions and topics of interest are: 

Forecasting applications and methods
- Can better predictive performance for key economic aggregates (GDP, inflation, etc.) be achieved by using alternative data sources? 
- Does the machine-learning toolkit add value to the well-established forecasting frameworks already used at central banks? 

Causal effects
- How can new sources of data and new methods be used to learn about the causal mechanisms underlying economic fluctuations? 

Text as data
- Communication is at the heart of modern central banking. How does it affect markets? 
- How can textual data be linked to economic concepts like uncertainty, news, and sentiment? 

Confirmed keynote speakers are: 
- Victor Chernozhukov (MIT) 
- Matt Taddy (Microsoft, Chicago Booth) 

The conference will feature 10-12 papers. If you would like to present a paper, please send a draft or an extended abstract to mlconference@norges-bank.no by 31 July 2017. Authors of accepted papers will be notified by 15 August. For other questions regarding this conference, please send an e-mail to mlconference@norges-bank.no. Conference organizers are Vegard H. Larsen and Leif Anders Thorsrud.

13th Annual Real-Time Conference

Great news: The Bank of Spain will sponsor the 13th annual conference on real-time data analysis, methods, and applications in macroeconomics and finance, October 19-20, 2017, at its central headquarters in Madrid, c/ Alcalá, 48. 

The real-time conference has always been unique and valuable. I'm very happy to see the Bank of Spain confirming and promoting its continued vitality.

More information and call for papers here.

Topics include:

• Nowcasting, forecasting and real-time monitoring of macroeconomic and financial conditions.
• The use of real-time data in policy formulation and analysis.
• New real-time macroeconomic and financial databases.
• Real-time modeling and forecasting aspects of high-frequency financial data.
• Survey data, and its use in macro model analysis and evaluation.
• Evaluation of data revision and real-time forecasts, including point forecasts, probability forecasts, density forecasts, risk assessments and decompositions.

Monday, April 3, 2017

The Latest on the "File Drawer Problem"

The term "file drawer problem" was coined long ago. It refers to the bias in published empirical studies toward "large", or "significant", or "good" estimates. That is, "small"/"insignificant"/"bad" estimates remain unpublished, in file drawers (or, in modern times, on hard drives). Correcting the bias is a tough nut to crack, since little is known about the nature or number of unpublished studies. For the latest, together with references to the relevant earlier literature, see the interesting new NBER working paper, "Identification of and Correction for Publication Bias", by Isaiah Andrews and Maximilian Kasy. There's an ungated version and appendix here, and a nice set of slides here.

Abstract: Some empirical results are more likely to be published than others. Such selective publication leads to biased estimators and distorted inference. This paper proposes two approaches for identifying the conditional probability of publication as a function of a study's results, the first based on systematic replication studies and the second based on meta-studies. For known conditional publication probabilities, we propose median-unbiased estimators and associated confidence sets that correct for selective publication. We apply our methods to recent large-scale replication studies in experimental economics and psychology, and to meta-studies of the effects of minimum wages and de-worming programs.

Tuesday, March 28, 2017

Text as Data

"Text as data" is a vibrant and by now well-established field. (Just Google "text as data".)

For an informative overview geared toward econometricians, see the new paper, "Text as Data" by Matthew Gentzkow, Bryan T. Kelly, and Matt Taddy (GKT). (Ungated version here.)

"Text as data" has wide applications in economics. As GKT note:

... in finance, text from financial news, social media, and company filings is used to predict asset price movements and study the causal impact of new information. In macroeconomics, text is used to forecast variation in inflation and unemployment, and estimate the effects of policy uncertainty. In media economics, text from news and social media is used to study the drivers and effects of political slant. In industrial organization and marketing, text from advertisements and product reviews is used to study the drivers of consumer decision making. In political economy, text from politicians’ speeches is used to study the dynamics of political agendas and debate.

There are three key steps:

1. Represent the raw text D as a numerical array x

2. Map x into predicted values yhat of outcomes y

3. Use yhat in subsequent descriptive or causal analysis.
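A toy end-to-end version of those three steps, with documents, labels, and the ridge penalty all invented for illustration (the penalty matters because the vocabulary is larger than the number of documents):

```python
import numpy as np

# Step 1: represent the raw text D as a numerical array x (bag-of-words counts).
docs = ["rates rise on inflation fears",
        "inflation eases as rates fall",
        "strong earnings lift stocks",
        "stocks rally on strong earnings"]
y = np.array([1.0, 1.0, 0.0, 0.0])   # e.g., 1 = macro story, 0 = equity story

vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for w in vocab] for d in docs], dtype=float)

# Step 2: map x into fitted values yhat via a ridge-penalized linear model.
lam = 0.1
beta = np.linalg.solve(X.T @ X + lam * np.eye(len(vocab)), X.T @ y)
yhat = X @ beta

# Step 3: yhat would then feed a downstream descriptive or causal analysis.
print(np.round(yhat, 2))
```

Real applications swap in vastly larger corpora and richer representations, which is exactly where the high-dimensional machine-learning machinery earns its keep.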

GKT emphasize the ultra-high dimensionality inherent in statistical text analyses, with connections to machine learning, etc.

Tuesday, March 21, 2017

Forecasting and "As-If" Discounting

Check out the fascinating and creative new paper, "Myopia and Discounting", by Xavier Gabaix and David Laibson.

From their abstract (slightly edited):
We assume that perfectly patient agents estimate the value of future events by generating noisy, unbiased simulations and combining those signals with priors to form posteriors. These posterior expectations exhibit as-if discounting: agents make choices as if they were maximizing a stream of known utils weighted by a discount function. This as-if discount function reflects the fact that estimated utils are a combination of signals and priors, so average expectations are optimally shaded toward the mean of the prior distribution, generating behavior that partially mimics the properties of classical time preferences. When the simulation noise has variance that is linear in the event's horizon, the as-if discount function is hyperbolic.
Among other things, then, they provide a rational foundation for the "myopia" associated with hyperbolic discounting.

Note that in the Gabaix-Laibson environment everything depends on how forecast error variance behaves as a function of forecast horizon \(h\). But we know a lot about that. For example, in linear covariance-stationary \(I(0)\) environments, optimal forecast error variance grows with \(h\) at a decreasing rate, approaching the unconditional variance from below. Hence it cannot grow linearly with \(h\), which is what produces hyperbolic as-if discounting. In contrast, in non-stationary \(I(1)\) environments, optimal forecast error variance does eventually grow linearly with \(h\). In a random walk, for example, \(h\)-step-ahead optimal forecast error variance is just \(h \sigma^2\), where \( \sigma^2\) is the innovation variance. It would be fascinating to put people in \(I(1)\) vs. \(I(0)\) laboratory environments and see if hyperbolic as-if discounting arises in \(I(1)\) cases but not in \(I(0)\) cases.
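The variance arithmetic above is easy to verify numerically. For a stationary AR(1) with parameter \( \phi \), the \(h\)-step forecast-error variance is \( \sigma^2 (1 - \phi^{2h})/(1 - \phi^2) \), approaching the unconditional variance \( \sigma^2/(1 - \phi^2) \) from below, while for a random walk it is \( h \sigma^2 \) (the parameter values here are arbitrary):

```python
import numpy as np

# h-step-ahead forecast-error variances, innovation variance sigma^2 = 1.
phi, sigma2 = 0.8, 1.0
h = np.arange(1, 21)

# Stationary AR(1): variance grows at a decreasing rate, approaching the
# unconditional variance sigma^2 / (1 - phi^2) from below.
var_ar1 = sigma2 * (1 - phi ** (2 * h)) / (1 - phi ** 2)

# Random walk: variance grows linearly in h -- the case that delivers
# hyperbolic as-if discounting in the Gabaix-Laibson environment.
var_rw = sigma2 * h

print(var_ar1[-1], sigma2 / (1 - phi ** 2))   # AR(1) variance near its ceiling
print(var_rw[-1])                             # random-walk variance = h = 20
```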