The Future of Macro
August 26, 2015
There is an interesting set of recent blogs---Paul Romer 1, Paul Romer 2, Brad DeLong, Paul Krugman, Simon Wren-Lewis, and Robert Waldmann---on the history of macro beginning with the 1978 Boston Fed conference, with Lucas and Sargent versus Solow. As Romer notes, I was at this conference and presented a 97-equation model. This model was in the Cowles Commission (CC) tradition, which, as the blogs note, quickly went out of fashion after 1978. (In the blogs, models in the CC tradition are variously called simulation models, structural econometric models, or old-fashioned models. Below I will call them CC models.)
I will not weigh in on who was responsible for what. Instead, I want to focus on what direction future macro research might take. There is unhappiness in the blogs, to varying degrees, with all three types of models: DSGE, VAR, and CC. Also, Wren-Lewis points out that while other areas of economics have become more empirical over time, macroeconomics has become less so. The aim has been internal theoretical consistency rather than the ability to track the data.
I am one of the few academics who have continued to work with CC models. They were rejected for basically three reasons: they do not assume rational expectations (RE), they are not identified, and the theory behind them is ad hoc. These sound like serious objections, but I think in fact they are not.
Expectations: The main expectational assumption that CC models make is that expectations are adaptive: they depend on current and past values. It is hard to test this assumption versus the RE assumption, but from the tests I have done and from others I have seen, there is no strong support for the RE assumption. The one case in which the RE assumption seems good is the effect of surprise announcements on asset prices. If, for example, there is a surprise positive payroll announcement, this results in an immediate decrease in long-term bond prices because the market expects that the Fed will tighten more in the future than it expected before the announcement. The effect on stock prices is ambiguous because there might also be a positive effect on expected future dividends. These, however, are very short-run effects, and changes in asset prices can't be modeled anyway because they are largely unpredictable. For dealing with, say, aggregate quarterly data, assuming adaptive expectations does not seem terrible. Keep in mind that the RE assumption is quite extreme in that it requires agents to know a lot.
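The adaptive-expectations assumption described above can be written as a simple recursion in which the current expectation is a weighted average of the latest observation and the previous expectation. The following sketch is purely illustrative (the series and the smoothing weight are made up, not taken from any actual CC model):

```python
# Illustrative adaptive-expectations recursion: E_t = lam*x_t + (1-lam)*E_{t-1}.
# The weight lam and the data are hypothetical, chosen only to show the mechanics.

def adaptive_expectation(observations, lam=0.5, initial=0.0):
    """Return the sequence of adaptive expectations for a series of observations."""
    expectations = []
    e_prev = initial
    for x in observations:
        e = lam * x + (1 - lam) * e_prev  # update toward the new observation
        expectations.append(e)
        e_prev = e
    return expectations

# A permanent jump in the observed series from 0 to 1 is absorbed only gradually:
path = adaptive_expectation([1.0] * 5, lam=0.5, initial=0.0)
# path approaches 1.0 geometrically: 0.5, 0.75, 0.875, 0.9375, 0.96875
```

The geometric adjustment is the point of contrast with RE: under adaptive expectations, agents need only current and past values, not knowledge of the model itself.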
Identification: Take a typical consumption function where consumption depends on current income and other things. Income is endogenous. In CC models using 2SLS, first-stage regressors might include variables like government spending and tax rates, possibly lagged one quarter. Lagged endogenous variables, such as lagged investment, might also be used. If the error term in the consumption equation is serially correlated, it is easy to get rid of the serial correlation by estimating the serial correlation coefficients along with the structural coefficients in the equation. So assume that the remaining error term is iid. This error term is correlated with current income, but not with the first-stage regressors, so consistent estimates can be obtained. This would not work, and the equation would not be identified, if all the first-stage regressors were also explanatory variables in the equation, which is the identification criticism. However, it seems unlikely that all these variables are in the equation. Given that income is in the equation, why would government spending or tax rates or lagged investment also be in? In the CC framework, there are many zero restrictions for each structural equation, and so identification is rarely a problem. Theory rules out many variables per equation.
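The 2SLS logic above can be sketched numerically. Everything below is simulated for illustration: the "government spending" and "lagged investment" instruments, the endogeneity (a common shock entering both income and consumption), and the marginal propensity to consume of 0.8 are all made up, not estimates from any actual model:

```python
# Minimal two-stage least squares (2SLS) sketch for a consumption function with
# endogenous income. All data are simulated; in a real CC model the instruments
# would be actual exogenous and lagged variables.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical instruments: exogenous government spending and lagged investment.
g = rng.normal(size=n)
lag_inv = rng.normal(size=n)

# Common shock u makes income endogenous in the consumption equation.
u = rng.normal(size=n)
income = 1.0 + 2.0 * g + 1.5 * lag_inv + u
consumption = 0.5 + 0.8 * income + u  # assumed true marginal propensity: 0.8

def ols(y, X):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress income on the instruments, keep the fitted values.
Z = np.column_stack([np.ones(n), g, lag_inv])
income_hat = Z @ ols(income, Z)

# Stage 2: regress consumption on fitted income -> consistent estimate near 0.8.
beta_2sls = ols(consumption, np.column_stack([np.ones(n), income_hat]))

# Naive OLS on actual income is biased upward, since income is correlated with u.
beta_ols = ols(consumption, np.column_stack([np.ones(n), income]))
```

Identification fails only if the instruments (g, lag_inv) also belong in the consumption equation itself, which is the criticism addressed above; with the zero restrictions imposed, the fitted income from stage 1 is purged of its correlation with the error.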
Theory: I think it is misleading to say that CC models are ad hoc. Theory is used to choose the left-hand-side and right-hand-side variables in the stochastic equations. Theory is not used in as restricted a way as it is for DSGE models, but it does guide the specification of the equations, and the specifications can be microfounded. Also, regarding Wren-Lewis's observation about macro becoming less empirical: for CC models all the stochastic equations are estimated, usually by 2SLS. There is no calibration, and the data are allowed to speak.
What does this imply about the best course for future research? I don't get a sense from the blog discussions that either the DSGE methodology or the VAR methodology is the way to go. Of course, no one seems to like the CC methodology either, but, as I argue above, I think it has been dismissed too easily. I have three recent methodological papers arguing for its use: Has Macro Progressed?, Reflections on Macroeconometric Modeling, and Information Limits of Aggregate Data. I also show in Household Wealth and Macroeconomic Activity: 2008--2013 that CC models can be used to examine a number of important questions about the 2008--2009 recession, questions that are hard to answer using DSGE or VAR models.
So my suggestion for future macro research is not more bells and whistles on DSGE models, but work specifying and estimating stochastic equations in the CC tradition. Alternative theories can be tested, and hopefully progress can be made on building models that explain the data well. We have much more data now and better techniques than we did in 1978, and we should be able to make progress and bring macroeconomics back to its empirical roots.
For those who want more detail, I have gathered all of my research in macro in one place: Macroeconometric Modeling, November 11, 2013.