1. Introduction
This chapter gives a general view of where I think the type of work on this site fits into the literature. It is a rallying cry for the Cowles Commission approach, an approach I feel too many academic researchers abandoned in the 1970s. The chapter is reproduced almost exactly from the 1994 book. A few changes have been made to correspond to the MC3 version of the MC model. If you would like to read the original chapter in pdf format, the link is 1994 book.

1.1 Background
1.2 The Cowles Commission Approach
1.3 The Real Business Cycle Approach
1.4 The New Keynesian Economics
1.5 Looking Ahead

1.1 Background
[Footnote 1: The discussion in this section and some of the discussion in the rest of this chapter is taken from Fair (1993d), which has the same title as this book.]

Interest in research topics in different fields fluctuates over time, and the field of macroeconomics is no exception. From Tinbergen's (1939) model building in the late 1930s through the 1960s, there was considerable interest in the construction of structural macroeconomic models. The dominant methodology of this period was what I will call the "Cowles Commission" approach. [Footnote 2: See Arrow (1991) and Malinvaud (1991) for interesting historical discussions of econometric research at the Cowles Commission (later Cowles Foundation) and its antecedents.] Structural econometric models were specified, estimated, and then analyzed and tested in various ways. One of the major macroeconometric efforts of the 1960s, building on the earlier work of Klein (1950) and Klein and Goldberger (1955), was the Brookings model [Duesenberry, Fromm, Klein, and Kuh (1965, 1969)]. This model was a joint effort of many individuals, and at its peak it contained nearly 400 equations. Although much was learned from this exercise, the model never achieved the success that was initially expected, and it was laid to rest around 1972.

Two important events in the 1970s contributed to the decline in popularity of the Cowles Commission approach. The first was the commercialization of macroeconometric models. This changed the focus of research on the models. Basic research gave way to the day to day needs of keeping the models up to date, of subjectively adjusting the forecasts to make them "reasonable," and of meeting the special needs of clients. [Footnote 3: The commercialization of models has been less of a problem in the United Kingdom than in the United States. In 1983 the Macroeconomic Modelling Bureau of the Economic and Social Research Council was established at the University of Warwick under the direction of Kenneth F. Wallis. Various U.K. models and their associated databases are made available to academic researchers through the Bureau.] The second event was Lucas's (1976) critique, which argued that the models are not likely to be useful for policy purposes. The Lucas critique led to a line of research that culminated in real business cycle (RBC) theories, which in turn generated a counter response in the form of new Keynesian economics. More will be said about these latter two areas later in this chapter.

My interest in structural macroeconomic model building began as a graduate student at M.I.T. in the mid 1960s. This was a period in which there was still interest in the Brookings model project and in which intensive work was being carried out on the MPS (M.I.T.-Penn-SSRC) model. Many hours were spent by many students in the basement of the Sloan building at M.I.T. working on various macroeconometric equations using an IBM 1620 computer (punch cards and all). This was also the beginning of the development of TSP (Time Series Processor), a computer program that provided an easy way of using various econometric techniques. The program was initiated by Robert Hall, and it soon attracted many others to help in its development. I played a minor role in this development.

Perhaps because of fond memories of my time in the basement of Sloan, I have never lost interest in structural models. I continue to believe that the Cowles Commission approach is the best way of trying to learn how the macroeconomy works, and I have continued to try to make progress using this approach. This book brings together my macroeconometric research of roughly the last decade. It presents the current version of my multicountry econometric model, including my U.S. model, and it discusses various econometric techniques. The book is a sequel to Fair (1984), which brought together my macroeconometric research through the early 1980s.

The theory behind the econometric model has changed very little from that described in the earlier book, and so the theory is only briefly reviewed in the present book. On the other hand, all the empirical work is new (because there is nearly a decade's worth of new data), and all of this work is discussed. In the choice of econometric techniques to discuss, I have been idiosyncratic in the present book, as I was in the earlier book. I have chosen techniques that I think are important for macroeconometric work, but these by no means exhaust all relevant techniques. Most of the techniques that are discussed are new since the earlier book was written.

Advances in computer hardware have considerably lessened the computational burden of working with large scale models. In particular, the availability of fast, inexpensive computers has made stochastic simulation routine, and this has greatly expanded the ways in which models can be tested and analyzed. Many of the techniques discussed in this book require the use of stochastic simulation.

All the techniques discussed in the earlier book and in the present book are programmed into the Fair-Parke (FP) program. This program is joint work with William R. Parke. The FP program expands on TSP in an important way. Whereas TSP was designed with single equation estimation in mind, FP was designed to treat all equations of a model at the same time. System wide techniques, such as FIML estimation, 3SLS estimation, deterministic and stochastic simulation, optimal control techniques, and techniques for rational expectations models, are much more straightforward to use in FP than they are in programs like TSP. The FP program is discussed in Fair and Parke (1993), and this discussion is not repeated in the present book.

There is considerable stress in this book on testing (hence the title of the book), both the testing of single equations and the testing of overall models. Much of my work in macroeconomics has been concerned with testing, and this is reflected in the current book. My primary aim in macroeconomics is to develop a model that is a good approximation of how the macroeconomy works, and testing is clearly an essential ingredient in this process.

The complete multicountry econometric model will be called the "MC" model. This model consists of estimated structural equations for 38 countries. There are also estimated trade share equations for 58 countries plus an "all other" category, labelled "AO." The trade share matrix is thus 59 x 59. The United States part of the MC model will be called the "US" model. It consists of estimated equations for the United States only, and it does not include the trade share equations. The non United States part of the MC model will be called the "ROW" (rest of world) model. Some of the more advanced techniques are applied only to the US model.

The rest of this chapter is a discussion and defense of the Cowles Commission approach and a criticism of the alternative approaches of real business cycle theorists and new Keynesian economists. It also partly serves as an outline of the book.

1.2 The Cowles Commission Approach
[Footnote 4: Part of the material in this section and in Sections 1.3-1.5 is taken from Fair (1992). It should be noted that I am using the phrase "Cowles Commission approach" in a much broader way than it is sometimes used. Heckman (1992), for example, uses the phrase to mean the procedure of forming a hypothesis (from some theory), testing it, and then stopping. Heckman argues (correctly in my view) that this is a very rigid way of doing empirical work. I am using the phrase to mean the actual approach used by structural macro model builders, where there is much back and forth movement between specification and empirical results. Perhaps a better phrase would have been "traditional model building approach," but this is awkward. I will thus use "Cowles Commission approach" in a general way, but it should be kept in mind that there are narrower definitions in use.]

Specification

Some of the early macroeconometric models were linear, but this soon gave way to the specification of nonlinear models. Consequently, only the nonlinear case will be considered here. The model will be written as

(1.1) fi(yt, xt, ai) = uit, (i=1,...,n), (t=1,...,T)

where yt is an n-dimensional vector of endogenous variables, xt is a vector of predetermined variables (including lagged endogenous variables), ai is a vector of unknown coefficients, and uit is the error term for equation i for observation t. For equations that are identities, uit is identically zero for all t.

Specification consists of choosing 1) the variables that appear in each equation with nonzero coefficients, 2) the functional form of each equation, and 3) the probability structure for uit. [Footnote 5: In modern times one has to make sufficient stationarity assumptions about the variables to make time series econometricians happy. The assumption, either explicit or implicit, of most macroeconometric model building work is that the variables are trend stationary. If in fact some variables are not stationary, this may make the asymptotic distributions that are used for hypothesis testing inaccurate. Fortunately, the accuracy of the asymptotic distributions that are used in macroeconometric work can be examined, and this is done in Section 7.5. It will be seen that the asymptotic distributions appear fairly accurate.] Economic theory is used to guide the choice of variables. In most cases there is an obvious left hand side variable for the equation, where the normalization used is to set the coefficient of this variable equal to minus one. This is the variable considered to be "explained" by the equation.
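As a small hypothetical illustration of this form, consider a two equation model with one stochastic consumption equation and one income identity (the variables and coefficients here are purely illustrative). With the normalization coefficient of minus one on the left hand side variable, the model can be written in the form of 1.1 as

\[ f_1(y_t, x_t, a_1) = a_{10} + a_{11} Y_t + a_{12} C_{t-1} - C_t = u_{1t} \]
\[ f_2(y_t, x_t, a_2) = C_t + G_t - Y_t = 0 \]

where yt consists of Ct and Yt, xt contains the lagged endogenous variable Ct-1 and the exogenous variable Gt, and the second equation is an identity, so u2t is identically zero.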

Chapters 2, 5, and 6 form an example of the use of theory in the specification of an econometric model. The theory is discussed in Chapter 2, and the specification of the stochastic equations is discussed in Chapters 5 and 6. Before moving to the theory in Chapter 2, however, it will be useful to consider a simpler example.

Consider the following maximization problem for a representative household. Maximize

(1.2) E0U(C1, ... , CT, L1, ... , LT)

subject to

(1.3)

St = Wt(H - Lt) + rtAt-1 - PtCt
At = At-1 + St
AT = A'

where C is consumption, L is leisure, S is saving, W is the wage rate, H is the total number of hours in the period, r is the one period interest rate, A is the level of assets, P is the price level, A' is the terminal value of assets, and t = 1,...,T. E0 is the expectations operator conditional on information available through time 0. Given A0 and the conditional distributions of the future values of W, P, and r, it is possible in principle to solve for the optimal values of C and L for period 1, denoted C1* and L1*. In general, however, this problem is not analytically tractable. In other words, it is not generally possible to find analytic expressions for C1* and L1*.
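Although analytic expressions are unavailable, the decisions can be computed numerically once a functional form and numbers are assumed. The following sketch is a purely illustrative two period version of 1.2 and 1.3 with time separable log utility, with W, P, and r fixed at assumed expected values (certainty equivalence); the parameter values and the use of scipy are assumptions made only for this example, not part of the model.

# A minimal numerical sketch of the problem in equations 1.2-1.3, assuming a
# two-period horizon (T = 2), log utility, and certainty equivalence: W, P,
# and r are fixed at assumed expected values. All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

T = 2
H = 100.0                       # total hours available in each period
A0, A_term = 10.0, 5.0          # initial assets A0 and terminal requirement A'
W = np.array([1.0, 1.1])        # assumed E0W1, E0W2
P = np.array([1.0, 1.05])       # assumed E0P1, E0P2
r = np.array([0.03, 0.03])      # assumed E0r1, E0r2
alpha, beta = 0.5, 0.97         # utility parameters (the vector b)

def terminal_assets(x):
    """Roll the budget constraints in 1.3 forward and return the terminal assets."""
    C, L = x[:T], x[T:]
    A = A0
    for t in range(T):
        S = W[t] * (H - L[t]) + r[t] * A - P[t] * C[t]   # saving in period t
        A = A + S                                         # asset accumulation
    return A

def neg_utility(x):
    C, L = x[:T], x[T:]
    disc = beta ** np.arange(T)
    return -np.sum(disc * (np.log(C) + alpha * np.log(L)))

constraint = {"type": "eq", "fun": lambda x: terminal_assets(x) - A_term}
bounds = [(1e-6, None)] * T + [(1e-6, H - 1e-6)] * T
x0 = np.array([10.0, 10.0, 50.0, 50.0])                  # starting guess
res = minimize(neg_utility, x0, bounds=bounds, constraints=[constraint])
C_star, L_star = res.x[:T], res.x[T:]
print("C1*, L1* =", C_star[0], L_star[0])                 # period 1 decision values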

The approach that I am calling the Cowles Commission approach can be thought of as specifying and estimating approximations of the decision equations. This approach in the context of the present example is the following. First, the random variables, Wt, Pt, and rt, t = 1,...,T, are replaced by their expected values, E0Wt, E0Pt, and E0rt, t = 1,...,T. Given this replacement, one can write the expressions for C1* and L1* as

(1.4) C1* = g1(A0, A', E0W1, ... , E0WT, E0P1, ... , E0PT, E0r1, ... , E0rT, b)

(1.5) L1* = g2(A0, A', E0W1, ... , E0WT, E0P1, ... , E0PT, E0r1, ... , E0rT, b)

where b is the vector of parameters of the utility function. Equations 1.4 and 1.5 simply state that the optimal values for the first period are a function of 1) the initial and terminal values of assets, 2) the expected future values of the wage rate, the price level, and the interest rate, and 3) the parameters of the utility function. [Footnote 6: If information for period 1 is available at the time the decisions are made, then E0W1, E0P1, and E0r1 should be replaced by their actual values in equations 1.4 and 1.5.]

The functional forms of equations 1.4 and 1.5 are not in general known. The aim of the empirical work is to try to estimate equations that are approximations of equations 1.4 and 1.5. Experimentation consists in trying different functional forms and in trying different assumptions about how expectations are formed. Because of the large number of expected values in equations 1.4 and 1.5, the expectational assumptions usually restrict the number of free parameters to be estimated. For example, the parameters for E0W1, ... , E0WT might be assumed to lie on a low order polynomial or to be geometrically declining. The error terms are usually assumed to be additive, as specified in equation 1.1, and they can be interpreted as approximation errors.

It is often the case when equations like 1.4 and 1.5 are estimated that lagged dependent variables are used as explanatory variables. Since C0 and L0 do not appear in 1.4 and 1.5, how can one justify the use of lagged dependent variables? A common procedure is to assume that C1* in 1.4 and L1* in 1.5 are long run "desired" values. It is then assumed that because of adjustment costs, there is only a partial adjustment of actual to desired values. The usual adjustment equation for consumption would be

(1.6) C1 - C0 = q(C1* - C0), 0 < q < 1

which adds C0 to the estimated equation. This procedure is ad hoc in the sense that the adjustment equation is not explicitly derived from utility maximization. One can, however, assume that there are utility costs to large changes in consumption and leisure and thus put terms like (C1 - C0)^2, (C2 - C1)^2, (L1 - L0)^2, (L2 - L1)^2, ... in the utility function 1.2. This would add the variables C0 and L0 to the right hand side of equations 1.4 and 1.5, which would justify the use of lagged dependent variables in the empirical approximating equations for 1.4 and 1.5.

This setup can handle the assumption of rational expectations in the following sense. Let Et-1y2t+1 denote the expected value of y2t+1, where the expectation is based on information through period t-1, and assume that Et-1y2t+1 appears as an explanatory variable in equation i in 1.1. (This equation might be an equation explaining consumption, and y2 might be the wage rate.) If expectations are assumed to be rational, this equation and equations like it can be estimated by either a limited information or a full information technique. In the limited information case, Et-1y2t+1 is replaced by y2t+1, and the equation is estimated by Hansen's (1982) generalized method of moments (GMM) procedure. In the full information case, the entire model is estimated at the same time by full information maximum likelihood, where the restriction is imposed that the expectations of future values of variables are equal to the model's predictions of the future values. Again, the parameters of the expected future values might be restricted in order to lessen the number of free parameters to be estimated.

The specification that has just been outlined does not allow the estimation of "deep structural parameters," such as the parameters of utility functions, even under the assumption of rational expectations. Only approximations of the decision equations are being estimated. The specification is thus subject to the Lucas (1976) critique. More will be said about this below. The specification also uses the certainty equivalence procedure, which is strictly valid only in the linear quadratic setup.

Estimation

A typical macroeconometric model is dynamic, nonlinear, simultaneous, and has error terms that may be correlated across equations and with their lagged values. A number of techniques have been developed for the estimation of such models. Techniques that do not take account of the correlation of the error terms across equations (limited information techniques) include two stage least squares (2SLS) and two stage least absolute deviations (2SLAD). Techniques that do account for this correlation (full information techniques) include full information maximum likelihood (FIML) and three stage least squares (3SLS). These techniques are discussed in Fair (1984), including their modifications to handle the case in which the error terms follow autoregressive processes. They are used in the current book, although they are only briefly discussed here. 2SLS is discussed in Section 4.2, 2SLAD in Section 4.4, and 3SLS and FIML in Section 7.2.
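The mechanics of 2SLS are simple enough to sketch in a few lines. The following generic illustration (it is not the FP program's implementation) takes y as the left hand side variable, X as the matrix of right hand side variables, some of which may be endogenous, and Z as the matrix of first stage regressors.

# A generic 2SLS sketch, shown only for illustration.
import numpy as np

def tsls(y, X, Z):
    """Two stage least squares: regress each column of X on Z, then regress y
    on the fitted values of X."""
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)             # projection onto the columns of Z
    X_hat = PZ @ X                                     # first stage fitted values
    return np.linalg.solve(X_hat.T @ X, X_hat.T @ y)   # second stage coefficient estimates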

As noted above, estimation techniques are available that handle the assumption of rational expectations. Hansen's method is discussed in Section 4.3, and FIML is discussed in Section 7.10. It will be seen in Section 7.10 that computational advances have made even the estimation of models with rational expectations by FIML computationally feasible. It is also possible, as discussed in Section 7.4, to obtain median unbiased (MU) estimates of the coefficients of macroeconometric models, and these estimates are also computed in this book.

Finally, it is now possible using stochastic simulation and reestimation to compute "exact" distributions of estimators that are used for macroeconometric models. These distributions can then be compared to the asymptotic distributions that are typically used for hypothesis testing. If some variables are not stationary, the asymptotic distributions may not be good approximations. The procedure for computing exact distributions is explained in Section 7.5 and applied to the 2SLS estimates of the US model in Section 8.4.

Testing

Testing has always played a major role in applied econometrics. When an equation is estimated, one examines how well it fits the data, if its coefficient estimates are significant and of the expected sign, if the properties of the estimated residuals are as expected, and so on. Equations are discarded or modified if they do not seem to approximate the process that generated the data very well. Sections 4.5-4.7 discuss the methods used in this book to test the individual equations, and Chapters 5 and 6 present the results of the tests.

Complete models can also be tested, but here things are more complicated. Before a complete model is tested, it must be solved. Given 1) a set of coefficient estimates, 2) values of the exogenous variables, 3) values of the error terms, and 4) lagged values of the endogenous variables, a model can be solved for the endogenous variables. If the solution (simulation) is "static," the actual values of the lagged endogenous variables are used for each period solved, and if the solution is "dynamic," the values of the lagged endogenous variables are taken to be the predicted values of the endogenous variables from the previous periods. If one set of values of the error terms is used, the simulation is said to be "deterministic." The expected values of the error terms are usually assumed to be zero, and so in most cases the error terms are set to zero for a deterministic solution. A "stochastic" simulation is one in which 1) the error terms are drawn from an estimated distribution, 2) the model is solved for each set of draws, and 3) the predicted value of each endogenous variable is taken to be the average of the solution values.
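To make these definitions concrete, the following toy example (a two equation model invented for illustration, not the US or MC model) performs a dynamic deterministic simulation and then a stochastic simulation; the coefficient values and error distribution are assumed.

# A toy two-equation model used only to illustrate the solution options:
#   C_t = a0 + a1*Y_t + a2*C_{t-1} + u_t    (stochastic equation)
#   Y_t = C_t + G_t                         (identity)
import numpy as np

rng = np.random.default_rng(0)
a0, a1, a2 = 1.0, 0.4, 0.5        # assumed coefficient estimates
sigma = 0.2                       # assumed standard error of u
T = 40
G = 2.0 + 0.1 * np.arange(T)      # exogenous variable
C_initial = 5.0                   # lagged value of C before the first period

def solve_period(C_lag, G_t, u_t):
    # Solve the two equations jointly for C_t and Y_t (here analytically).
    C = (a0 + a1 * G_t + a2 * C_lag + u_t) / (1.0 - a1)
    return C, C + G_t

# Dynamic deterministic simulation: lagged C comes from the model's own
# predictions, and the error term is set to its expected value of zero.
C_det = np.empty(T)
C_lag = C_initial
for t in range(T):
    C_det[t], _ = solve_period(C_lag, G[t], 0.0)
    C_lag = C_det[t]

# Stochastic simulation: draw the error terms, solve the model for each set of
# draws, and average the solution values across draws.
n_draws = 1000
paths = np.empty((n_draws, T))
for j in range(n_draws):
    C_lag = C_initial
    for t in range(T):
        paths[j, t], _ = solve_period(C_lag, G[t], rng.normal(0.0, sigma))
        C_lag = paths[j, t]
C_stoch = paths.mean(axis=0)      # predicted values from the stochastic simulation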

A standard procedure for evaluating how well a model fits the data is to solve the model by performing a dynamic, deterministic simulation and then to compare the predicted values of the endogenous variables with the actual values using the root mean squared error (RMSE) criterion. Other criteria include mean absolute error and Theil's inequality coefficient. If two models are being compared and model A has lower RMSEs for most of the variables than model B, this is evidence in favor of model A over model B.
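For reference, the RMSE for one variable over a prediction period is computed as follows (a standard formula, shown only for completeness).

# Root mean squared error for one endogenous variable over a prediction period.
import numpy as np

def rmse(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))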

There is always a danger in this business of "data mining," which means specifying and estimating different versions of a model until a good fit has been achieved (say in terms of the RMSE criterion). The danger with this type of searching is that one finds a model that fits well within the estimation period that is in fact a poor approximation of the economy. To guard against this, predictions are many times taken to be outside of the estimation period. If a model is poorly specified, it should not predict well outside the period for which it was estimated, even though it may fit well within the period. [Footnote 7: This is assuming that one does not search by 1) estimating a model up to a certain point, 2) solving the model for a period beyond this point, and 3) choosing the version that best fits the period beyond the point. This type of searching may lead to a model that predicts well outside the estimation period even though it is in fact a poor approximation. If this type of searching is done, then one has to wait for more observations to provide a good test of the model. Even if this type of searching is not formally done, it may be that information beyond the estimation period has been implicitly used in specifying a model. This might then lead to a better fitting model beyond the estimation period than is warranted. In this case, one would also have to wait for more observations to see how accurate the model is.]

One problem with the RMSE criterion (even if the predictions are outside of the estimation period) is that it does not take account of the fact that forecast error variances vary across time. Forecast error variances vary across time because of nonlinearities in the model, because of variation in the exogenous variables, and because of variation in the initial conditions. Although RMSEs are in some loose sense estimates of the averages of the variances across time, no rigorous statistical interpretation can be placed on them: they are not estimates of any parameters of a model.

A more serious problem with the RMSE criterion as a means of comparing models is that models may be based on different sets of exogenous variables. If, for example, one model takes investment as exogenous and a second does not, the first model has an unfair advantage when computing RMSEs.

I have developed a method, which uses stochastic simulation, that accounts for these RMSE difficulties. The method accounts for the four main sources of uncertainty of a forecast from a model: uncertainty due to 1) the error terms, 2) the coefficient estimates, 3) the exogenous variables, and 4) the possible misspecification of the model. The forecast error variance for each variable and each period that is estimated by the method accounts for all four sources of uncertainty, and so it can be compared across models. The estimated variances from different structural models can be compared, or the estimated variances from one structural model can be compared to those from an autoregressive or vector autoregressive model. If a particular model's estimated variances are in general smaller than estimated variances from other models, this is evidence in favor of the particular model.

A by-product of the method is an estimate of the degree of misspecification of a model for each endogenous variable. Any model is likely to be somewhat misspecified, and the method can estimate the quantitative importance of the misspecification.

The method can handle a variety of assumptions about exogenous variable uncertainty. One polar assumption is that there is no uncertainty attached to the exogenous variables. This might be true, for example, of some policy variables. The other polar assumption is that the exogenous variables are in some sense as uncertain as the endogenous variables. One can, for example, estimate autoregressive equations for each exogenous variable and add these equations to the model. This would produce a model with no exogenous variables, which could then be tested. An in between case is to estimate the variance of an exogenous variable forecast error from actual forecasting errors made by a forecasting service---say the errors made by a commercial forecasting service in forecasting defense spending.

This method was developed in Fair (1980), and it is also discussed in Fair (1984). It is briefly reviewed in Section 7.7 of the current book and then used in Section 8.6 to compare the US model to other models.
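The following is a deliberately simplified sketch of the idea, applied to the toy model above: both the coefficients and the error terms are drawn, and the spread of the solution values across draws is measured period by period. The exogenous variable draws and the misspecification adjustment of the actual method are omitted, and the coefficient covariance matrix shown is assumed.

# A simplified sketch of estimating forecast error variances by stochastic
# simulation for the toy model above. Coefficients and error terms are drawn;
# exogenous variable draws and the misspecification adjustment are omitted.
import numpy as np

rng = np.random.default_rng(1)
a_hat = np.array([1.0, 0.4, 0.5])            # estimated (a0, a1, a2)
V_hat = np.diag([0.010, 0.001, 0.002])       # assumed coefficient covariance matrix
sigma = 0.2                                  # assumed standard error of u
T_forecast, n_draws = 8, 2000
G = 2.0 + 0.1 * np.arange(T_forecast)        # exogenous variable (taken as known)
C_initial = 5.0

paths = np.empty((n_draws, T_forecast))
for j in range(n_draws):
    a0, a1, a2 = rng.multivariate_normal(a_hat, V_hat)   # coefficient draw
    C_lag = C_initial
    for t in range(T_forecast):
        u = rng.normal(0.0, sigma)                        # error term draw
        C_lag = (a0 + a1 * G[t] + a2 * C_lag + u) / (1.0 - a1)
        paths[j, t] = C_lag

# Variance of the solution values across draws for each period: an estimate of
# the forecast error variance due to the error terms and the coefficients.
variance_by_period = paths.var(axis=0)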

Another method of comparing complete models is to regress the actual value of an endogenous variable on a constant and forecasts of the variable from two or more models. This method, developed in Fair and Shiller (1990), is discussed in Section 7.8 and applied in Section 8.7. It is related to the literature on encompassing tests---see, for example, Davidson and MacKinnon (1981), Hendry and Richard (1982), and Chong and Hendry (1986).
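In its simplest form the comparison regression can be sketched as follows (the variable names are illustrative); if model A's forecasts contain all the information in model B's, the estimated weight on model B's forecasts should be close to zero.

# A sketch of the comparison regression: regress the actual values on a
# constant and the forecasts from two models.
import numpy as np

def comparison_regression(actual, forecast_a, forecast_b):
    X = np.column_stack([np.ones(len(actual)), forecast_a, forecast_b])
    coefs, *_ = np.linalg.lstsq(X, np.asarray(actual, float), rcond=None)
    return coefs   # [constant, weight on model A's forecast, weight on model B's]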

Another test, developed in Fair (1993c), is discussed in Section 7.9 and applied in Section 8.8. It examines how well a model predicts various economic events, such as a recession or severe inflation. This test uses stochastic simulation to estimate event probabilities from macroeconometric models, where the estimated probabilities are then compared to the actual outcomes.
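The idea can be sketched as follows: given an array of stochastically simulated output paths, count the fraction of paths in which the event occurs. The event used here, two consecutive declines in output, is a deliberately simple stand in chosen only for illustration.

# A sketch of estimating an event probability from stochastic simulation paths.
import numpy as np

def event_probability(output_paths):
    """output_paths: array of shape (n_draws, T) of simulated output levels."""
    growth = np.diff(output_paths, axis=1)
    two_declines = (growth[:, :-1] < 0.0) & (growth[:, 1:] < 0.0)
    return float(np.mean(two_declines.any(axis=1)))   # fraction of paths with the event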

Tests of the sort just described seem clearly in the spirit of the Cowles Commission approach. To the Cowles Commission, a model was a null hypothesis to be tested.

Analysis

Once a model has been estimated, there are a variety of ways in which it can be analyzed. Methods for analyzing the properties of models are discussed in Chapter 10. Again, stochastic simulation is used for many of these methods. The methods include computing multipliers and their standard errors, examining the sources of economic fluctuations, examining the optimal choice of monetary-policy instruments, and solving optimal control problems.

It is sometimes felt that analyzing the properties of a model is a way of testing it, but one must be very careful here. A model may be specified and constrained in ways that lead it to have "reasonable" properties from the point of view of the model builder, but this does not necessarily mean that it is a good approximation of the economy. Unless a model tests well, it is not likely to be a good approximation even if it has reasonable properties. If, on the other hand, a model has what seem to be bizarre properties, this may mean that the model is not a good approximation even if it has done well in the tests. This may indicate that the tests that were performed have low power.

In practice there is considerable movement back and forth from analysis to specification. If a model's properties do not seem reasonable, the model may be changed and then analyzed again. This procedure usually results in a model with "reasonable" properties, but again this is not a substitute for testing the model.

Use of the Cowles Commission Approach for the MC Model

To review, the use of the Cowles Commission approach for the MC model is as follows. The theory that has been used to guide the empirical specifications is discussed in Chapter 2. Chapter 3 presents the data to which the specifications are to be applied. It also briefly discusses the transition from theory to empirical specifications. Chapters 5 and 6 combine elements of specification, estimation, and testing. The individual stochastic equations are specified, estimated, and tested in these two chapters---Chapter 5 for the US model and Chapter 6 for the ROW model. Because specification, estimation, and testing are so closely linked, it is generally useful to discuss these together, and this is what is done in Chapters 5 and 6. The complete models are then tested in Chapters 8 and 9---the US model in Chapter 8 and the entire MC model in Chapter 9. There is no further specification in these two chapters. Finally, Chapters 11 and 12 examine the properties of the models. This is the analysis part of the Cowles Commission approach.

Before proceeding to a discussion of the theory, it will be useful to consider the real business cycle approach and the approach of new Keynesian economists from the perspective of the Cowles Commission approach, and this is the subject matter for the rest of this chapter.

1.3 The Real Business Cycle Approach
As noted in Section 1.1, the RBC approach is a culmination of a line of research that was motivated by the Lucas critique. In discussing this approach, it will be useful to begin with the utility maximization model in Section 1.2. The RBC approach to this model would be to specify a particular functional form for the utility function in equation 1.2. The parameters of this function would then be either estimated or simply chosen ("calibrated") to be in line with parameters estimated in the literature. Although there is some parameter estimation in the RBC literature, most of the studies calibrate rather than estimate, in the spirit of the seminal article by Kydland and Prescott (1982). If the parameters are estimated, they are estimated from the first order conditions. A recent example is Christiano and Eichenbaum (1990), where the parameters of their model are estimated using Hansen's (1982) GMM procedure. Altug (1989) estimates the parameters of her model using a likelihood procedure. Chow (1991) and Canova, Finn, and Pagan (1991) contain interesting discussions of the estimation of RBC models. There is also a slightly earlier literature in which the parameters of a utility function like the one in equation 1.2 are estimated from the first order conditions---see, for example, Hall (1978), Hansen and Singleton (1982), and Mankiw, Rotemberg, and Summers (1985).

The RBC approach meets the Lucas critique in the sense that, given the various assumptions, deep structural parameters are being estimated (or calibrated). It is hard to overestimate the appeal this has to many people. Anyone who doubts this appeal should read Lucas's 1985 Jahnsson lectures [Lucas (1987)], which are an elegant argument for dynamic economic theory. The tone of these lectures is that there is an exciting sense of progress in macroeconomics and that there is hope that in the end there will be essentially no distinction between microeconomics and macroeconomics. There will simply be economic theory applied to different problems.

Once the coefficients are chosen, by whatever means, the overall model is solved. In the example in Section 1.2, one would solve the utility maximization problem for the optimal consumption and leisure paths. The properties of the computed paths of the decision variables are then compared to the properties of the actual paths of the variables. If the computed paths have similar properties to the actual paths (e.g., similar variances, covariances, and autocovariances), this is judged to be a positive sign for the model. If the parameters are chosen by calibration, there is usually some searching over parameters to find that set that gives good results in matching the computed paths to the actual paths in terms of the particular criterion used. In this sense the calibrated parameters are also estimated.
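A minimal sketch of this kind of moment comparison is the following; the particular moments chosen (standard deviation and first order autocorrelation) are illustrative.

# A minimal sketch of the moment comparison described above: compute a few
# moments of a simulated series and of the actual series and compare them.
import numpy as np

def basic_moments(x):
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    autocov1 = np.mean(dev[1:] * dev[:-1])          # first order autocovariance
    return {"std": float(x.std()), "autocorr1": float(autocov1 / x.var())}

# Usage: compare basic_moments(simulated_output) with basic_moments(actual_output).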

Is the RBC approach a good way of testing models? At first glance it might seem so, since computed paths are being compared to actual paths. But the paths are being compared in a much more limited way than the Cowles Commission approach would compare them. Take the simple RMSE procedure. This procedure would compute a prediction error for a given variable for each period and then calculate the RMSE from these prediction errors. This RMSE might then be compared to the RMSE from another structural model or from an autoregressive or vector autoregressive model.
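As a concrete benchmark of the sort just described, the following sketch fits a first order autoregressive equation on all but the last part of a sample and computes the RMSE of its one step ahead predictions over the held out observations; the data and the size of the holdout are left to the user.

# A sketch of an out-of-sample RMSE benchmark from a first order autoregressive
# equation, against which a model's RMSEs could be compared.
import numpy as np

def ar1_out_of_sample_rmse(y, n_out):
    """Fit y_t = c + rho*y_{t-1} by OLS on all but the last n_out observations,
    then compute the RMSE of one-step-ahead predictions for those n_out points."""
    y = np.asarray(y, dtype=float)
    y_in, y_lag_in = y[1:-n_out], y[:-n_out - 1]
    X = np.column_stack([np.ones_like(y_lag_in), y_lag_in])
    c, rho = np.linalg.lstsq(X, y_in, rcond=None)[0]
    predictions = c + rho * y[-n_out - 1:-1]
    return float(np.sqrt(np.mean((y[-n_out:] - predictions) ** 2)))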

I have never seen this type of comparison done for an RBC model. How would, say, the currently best fitting RBC model compare to a simple first order autoregressive equation for real GDP in terms of the RMSE criterion? Probably very poorly. Having the computed path mimic the actual path for a few selected moments is a far cry from beating even a first order autoregressive equation (let alone a structural model) in terms of fitting the observations well according to the RMSE criterion. The disturbing feature of the RBC literature is that there seems to be no interest in computing RMSEs and the like. People generally seem to realize that the RBC models do not fit well in this sense, but they proceed anyway.

If this literature proceeds anyway, it has in my view dropped out of the race for the model that best approximates the economy. The literature may take a long time to play itself out, but it will eventually reach a dead end unless it comes around to developing models that can compete with other models in explaining the economy observation by observation.

One of the main reasons people proceed anyway is undoubtedly the Lucas critique, and the general excitement about deep structural parameters. Why waste one's time in working with models whose coefficients change over time as policy rules and other things change? The logic of the Lucas critique is certainly correct, but the key question for empirical work is the quantitative importance of this critique. Even the best econometric model is only an approximation of how the economy works. Another potential source of coefficient change is the use of aggregate data. As the age and income distributions of the population change, the coefficients in aggregate equations are likely to change, and this is a source of error in the estimated equations. This problem may be quantitatively much more important than the problem raised by Lucas. Put another way, the representative agent model that is used so much in macroeconomics has serious problems of its own, and these problems may swamp the problem of coefficients changing when policy rules change. The RBC literature has focused so much on solving one problem that it may have exacerbated the effects of a number of others. In what sense, for example, is the RBC literature estimating deep structural parameters if a representative agent utility function is postulated and used that is independent of demographic changes over time? (A way of examining the possible problem of coefficients in macroeconomic equations changing as the age distribution changes is discussed in Section 4.7 and applied in Chapter 5.)

When deep structural parameters have been estimated from the first order conditions, the results have not always been very good even when judged by themselves. The results in Mankiw, Rotemberg, and Summers (1985) for the utility parameters are not supportive of the approach. In a completely different literature---the estimation of production smoothing equations---Krane and Braun (1989), whose study uses quite good data, report that their attempts to estimate first order conditions were unsuccessful. It may simply not be sensible to use aggregate data to estimate utility function parameters and the like.

Finally, one encouraging feature regarding the Lucas critique is that it can be tested. Assume that for an equation or set of equations the parameters change considerably when a given policy variable changes. Assume also that the policy variable changes frequently. In this case the model is obviously misspecified, and so methods like those mentioned in Section 1.2 should be able to pick up this misspecification if the policy variable has changed frequently. If the policy variable has not changed or changed very little, then the model will be misspecified, but the misspecification will not have been given a chance to be picked up in the data. But otherwise, models that suffer in an important way from the Lucas critique ought to be weeded out by various tests.

1.4 The New Keynesian Economics
I come away from reading new Keynesian articles feeling uneasy. It's like coming out of a play that many of your friends liked and feeling that you did not really like it, but not knowing quite why. Given my views of how the economy works, many of the results of the new Keynesian literature seem reasonable, but something seems missing. One problem is that it is hard to get a big picture. There are many small stories, and it's hard to remember each one. In addition, many of the conclusions do not seem robust to small changes in the models.

Upon further reflection, however, I do not think this is my main source of uneasiness. The main problem is that this literature is not really empirical in the Cowles Commission sense. This literature has moved macroeconomics away from its econometric base. Consider, for example, the articles in the two volumes of New Keynesian Economics, edited by Mankiw and Romer (1991). By my count, of the 34 papers in these two volumes, only eight have anything to do with data. [Footnote 8: One might argue nine. Okun's article "Inflation: Its Mechanics and Welfare Costs," which I did not count in the eight, presents and briefly discusses data in one figure.] Of these eight, one (Carlton, "The Rigidity of Prices") is more industrial organization than macro and one (Krueger and Summers, "Efficiency Wages and the Interindustry Wage Structure") is more labor than macro. These two studies provide some interesting insights that might be of help to macroeconomists, but they are not really empirical macroeconomics.

It has been pointed out to me [Footnote 9: By Olivier Blanchard] that the Mankiw and Romer volumes may be biased against empirical papers because of space constraints imposed by the publisher. Nevertheless, it seems clear that there is very little in the new Keynesian literature similar to the structural modeling outlined in Section 1.2. As is also true of the RBC literature, one does not see, say, predictions of real GDP from some new Keynesian model compared to predictions of real GDP from an autoregressive equation using a criterion like the RMSE criterion. But here one does not see it because no econometric models of real GDP are constructed! So this literature is in danger of dropping out of the race not because it is necessarily uninterested in serious tests but because it is uninterested in constructing econometric models.

I should hasten to add that I do not mean by the above criticisms that there is no interesting empirical work going on in macroeconomics. For example, the literature on production smoothing, which is largely empirical, has produced some important results and insights. It is simply that literature of this type is not generally classified as new Keynesian. Even if one wanted to be generous and put some of this empirical work in the new Keynesian literature, it is surely not the essence of new Keynesian economics.

One might argue that new Keynesian economics is just getting started and that the big picture (model) will eventually emerge to rival existing models of the economy. This is probably an excessively generous interpretation, given the focus of this literature on small theoretical models, but unless the literature does move in a more econometric and larger model direction, it is not likely to have much long run impact.

1.5 Looking Ahead
So I see the RBC and new Keynesian literatures passing each other like two runners in the night, both having left the original path laid out by the Cowles Commission and its predecessors. The RBC literature is only interested in testing in a very limited way, and the new Keynesian literature is not econometric enough to even talk about serious testing.

But I argue there is hope. Models can be tested, and there are procedures for weeding out inferior models. Even the quantitative importance of the Lucas critique can be tested. The RBC literature should entertain the possibility of testing models based on estimating deep structural parameters against models based on estimating approximations of decision equations. Also, the tests should be more than just observing whether a computed path mimics the actual path in a few ways. The new Keynesian literature should entertain the possibility of putting its various ideas together to specify, estimate, and test structural macroeconometric models.

Finally, both literatures ought to consider bigger models. I have always thought it ironic that one of the consequences of the Lucas critique was to narrow the number of endogenous variables in a model from many (say a hundred or more) to generally no more than three or four. If one is worried about coefficients in structural equations changing, it seems unlikely that getting rid of the structural detail in large scale models is going to get one closer to deep structural parameters.

At any rate, what follows is an application of the Cowles Commission approach. A structural macroeconomic model is specified, estimated, tested, and analyzed.