The Forecasting Record of the US Model: January 28, 2021

As discussed on the site, I did not make forecasts after the NIPA data were released on April 29, 2020, July 30, 2020, October 29, 2020, and January 28, 2021. These would have been forecasts 148, 149, 150, and 151. The following discussion is thus based only on forecasts through 147.

Data revisions make it difficult to compare forecasted and actual values. Should the forecasted values be compared to the first release of the data, the second release, the first major revision release, the last major revision release, etc.? How should one treat conceptual changes? Since the first US model forecast analyzed here (September 23, 1983), there have been many revisions of the data in the national income and product accounts and two major conceptual changes. The first major conceptual change was the shift in focus from GNP to GDP, and the second was the move to a chain-type measure of real GDP. The latest major revision was released July 27, 2018, in which the data were revised back to 1929.

Regarding GNP versus GDP, the model treats the difference between the two as exogenous, and for forecasting purposes the difference is taken to be roughly unchanged over time. This means that the forecasted growth rates of GNP and GDP are essentially the same. The past forecasts of GNP growth rates (made before the shift in focus) can thus also be read as forecasts of GDP growth rates and compared to the actual GDP growth rates. This is what is done in the following tables. The actual values for all quarters are taken to be the actual GDP growth rates, but the forecast values before the change in focus are taken to be GNP growth rates. The same is true for the price deflator: the actual values are taken to be the actual growth rates of the GDP deflator, but the forecast values before the change in focus are taken to be the growth rates of the GNP deflator.
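As a rough numerical illustration of why a roughly constant GNP-GDP gap implies nearly identical growth rates, consider the following sketch (the levels and the gap are made-up numbers, not values from the model or the accounts):

    # Hypothetical levels: if the GNP-GDP gap is held roughly constant,
    # the implied GNP and GDP growth rates are nearly the same.
    gdp_prev, gdp_next = 18000.0, 18450.0   # hypothetical real GDP levels
    gap = 250.0                             # hypothetical gap, held fixed

    gdp_growth = 100 * (gdp_next / gdp_prev - 1)
    gnp_growth = 100 * ((gdp_next + gap) / (gdp_prev + gap) - 1)

    print(f"GDP growth: {gdp_growth:.2f}%")  # 2.50%
    print(f"GNP growth: {gnp_growth:.2f}%")  # about 2.47%

The two rates differ only slightly because the fixed gap is small relative to the level of GDP, which is why the earlier GNP growth forecasts can reasonably stand in for GDP growth forecasts.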

Regarding the move to a chain-type measure, the following tables use the new measure. In other words, the model is being judged on how well it has forecasted the current estimates of the actual values based on the chain-type measure. If one thinks of these actual values as being the best current estimates of what actually happened, then the model is being judged on how it predicted what actually happened. This seems to be an interesting comparison, although there is clearly no right or wrong choice. Other choices could be made.

The data that were released April 29, 2020, are used for the actual values in the tables below. Again, the model is being judged on how well it has forecasted the current estimates of what actually happened.

Tables 1 through 5 present the forecasting record of the model for real GDP, the GDP deflator, and the unemployment rate. The tables are fairly self-explanatory, and only Tables 4 and 5 will be discussed here. Tables 4 and 5 show how well the model forecasts one year ahead. Table 4 presents for each forecast the actual and predicted values of the growth rate of real GDP four quarters ahead, that is, the growth rate of real GDP over the first four quarters from the beginning of the forecast period. The results are as follows.
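For concreteness, the four-quarter-ahead growth rate in Table 4 can be thought of as the percentage change in real GDP over the four quarters following the start of the forecast period. The sketch below shows one way to compute such a rate from quarterly levels; the function name, the data, and the simple percentage-change convention are illustrative assumptions, not the model's actual code or the table's exact convention:

    def four_quarter_growth(levels, start):
        """Percentage growth over the four quarters following quarter `start`.

        `levels` is a sequence of quarterly real GDP levels; `start` indexes
        the quarter at the beginning of the forecast period.
        """
        return 100 * (levels[start + 4] / levels[start] - 1)

    # Hypothetical quarterly real GDP levels (billions of chained dollars)
    gdp = [16000.0, 16120.0, 16280.0, 16410.0, 16560.0]
    print(four_quarter_growth(gdp, 0))  # about 3.5 percent over the four quarters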

The model did extremely well in predicting the strong growth in the last half of 1983 and in 1984 and 1985. For example, the growth rate in the 1983:3-1984:2 period was 8.0 percent, and the model predicted 7.4 percent. The model slightly overpredicted growth in the period around 1986 (forecasts 10-13), although the largest error is only 1.4 percentage points. It was then reasonably accurate until the recession of 1990-1991, which it missed. As is well known, the recession of 1990-1991 was hard to predict, and the model certainly did not predict it. The model did well in predicting the size of the recovery from the recession (forecasts 32-40). Forecasts 41-44 underpredicted the growth rate by between 0.8 and 1.7 percentage points. Forecasts 45-48 were quite accurate.

Between forecasts 49 and 65 (July 29, 1995, through July 30, 1999), the model underpredicted the growth rate by an average of 2.1 percentage points. The largest error in this period is for forecast 62 (November 3, 1998), where the model underpredicted by 3.8 percentage points. Much of the strong growth between 1995 and 1999 was due to the wealth effect from the stock market boom, and the stock market boom was not predicted by the model. This is the main reason for the underprediction of the growth rate between 1995 and 1999.

Forecasts 66 and 67 were quite accurate, and forecasts 68-70 (April 28, 2000, July 31, 2000, and October 30, 2000) overpredicted by 1.2, 1.4, and 2.1 percentage points respectively. Forecasts 71-73 were quite accurate; forecast 74 underpredicted the growth rate by 1.7 percentage points; forecast 75 was quite accurate; forecast 76 overpredicted by 1.4 percentage points; forecasts 77 and 78 were quite accurate; and forecasts 79 and 80 underpredicted by 1.5 and 1.4 percentage points respectively. Forecasts 81 through 95 were fairly accurate except for forecast 92, which overpredicted by 1.4 percentage points.

Forecasts 96 through 102 all overpredicted, by, respectively, 1.8, 2.5, 3.1, 5.5, 6.4, 7.1, and 3.1 percentage points. The recession that began in 2008 was clearly not predicted. The next four errors, those for forecasts 103-106, are small. The errors for forecasts 107 through 134 are all positive except for one, which means that the recovery since 2010 has been slower than predicted. The last ten errors, for forecasts 134-143, are fairly small. The mean absolute error for the 143 forecasts in Table 4 is 1.27 percentage points.
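The mean absolute error reported here is simply the average of the absolute differences between the forecasted and actual four-quarter growth rates. A minimal sketch of the calculation, using made-up numbers rather than the actual Table 4 entries:

    # Mean absolute error of four-quarter-ahead growth forecasts.
    # The values below are placeholders, not the actual Table 4 entries.
    actual   = [8.0, 3.2, 2.5, -0.4, 4.1]
    forecast = [7.4, 2.8, 3.9,  1.1, 3.0]

    errors = [f - a for a, f in zip(actual, forecast)]
    mae = sum(abs(e) for e in errors) / len(errors)
    print(f"Mean absolute error: {mae:.2f} percentage points")

The same calculation, applied to the 143 errors in Table 4, underlies the 1.27 percentage point figure cited above and the 0.78 figure for Table 5 discussed below.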

Table 5 is the same as Table 4 except that it is for the growth rate of the GDP price deflator. The mean absolute error for the 143 inflation forecasts is only 0.78 percentage points, and so overall the inflation forecasts have been fairly accurate. Between forecasts 37 and 67 (August 5, 1992, through January 29, 2000) the error was never larger than 0.8 percentage points in absolute value. For forecasts 68-73 the model overpredicted inflation by between 0.8 and 1.4 percentage points. Forecasts 74-89 were fairly accurate, with no error larger than 0.87 percentage points in absolute value. On the other hand, forecasts 90-103 overpredicted inflation by 1.3, 1.5, 1.3, 1.6, 2.0, 1.6, 1.9, 2.5, 2.4, 2.0, 2.4, 4.3, 4.7, and 2.1 percentage points, respectively. The 4.7 percentage point error for forecast 102 is the largest of all the forecasts. The model clearly overpredicted inflation for these 14 quarters. The errors for forecasts 104-143 are smaller, although inflation was persistently overpredicted between 2011 and 2014.