At a given point in time, a forecaster will have access to data on economic variables of differing degrees of maturity. Data relating to the very recent past will be first-release data, or data which has as yet been revised only a few times. Data relating to a decade ago will typically have been subject to many rounds of revision. How should the forecaster use these data to generate forecasts of the future? The conventional approach is to estimate the forecasting model using the latest vintage of data available at the time, implicitly ignoring the fact that the data for different observations are of different maturities - that is, have been subject to different numbers of rounds of revision. In this article we draw on recent research on real-time forecasting to consider whether there are better ways of forecasting.
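The distinction between vintages and maturities can be made concrete with a real-time "data triangle". The sketch below uses entirely hypothetical values: rows are observation periods, columns are successive publication vintages, and the conventional approach corresponds to reading off the last column, which mixes heavily revised and barely revised observations.

```python
# Hypothetical real-time data triangle (illustrative numbers only).
# Entry [t][v] is the estimate of period t's value published in vintage v;
# None means period t had not yet been released in vintage v.
vintages = [
    # v0     v1     v2     v3
    [100.0, 100.4, 100.5, 100.5],  # period 0: revised several times, now mature
    [None,  101.2, 101.0, 101.1],  # period 1
    [None,  None,  102.3, 102.1],  # period 2
    [None,  None,  None,  103.0],  # period 3: first release only
]

def latest_vintage(tri):
    # The conventional approach: take the most recent column,
    # mixing observations of very different maturities.
    return [row[-1] for row in tri]

def first_release(tri):
    # An alternative of uniform maturity: the diagonal of first releases.
    return [next(x for x in row if x is not None) for row in tri]

print(latest_vintage(vintages))  # [100.5, 101.1, 102.1, 103.0]
print(first_release(vintages))   # [100.0, 101.2, 102.3, 103.0]
```

The first-release diagonal is one example of the "largely unrevised data" option discussed below: every observation has the same maturity, at the cost of discarding later, presumably more accurate, estimates.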
One can regard the conventional approach as treating the data as given - that is, as ignoring the fact that they will be revised. In some cases the cost of doing so is less accurate point predictions, and poorer assessments of uncertainty, than are achievable with one of a number of approaches to forecasting that explicitly allow for data revisions. These approaches include modelling the revisions process explicitly, adopting an agnostic or reduced-form approach, and using only largely unrevised data. We survey these methods, and provide guidance on choosing among them, depending on whether the aim is to forecast an earlier release or the fully-revised values.