Conditional Forecasts in DSGE models
Author: Junior Maih
Discussant: Alon Binyamini
Central Bank Macroeconomic Modeling Workshop 2009, Jerusalem
Outline
- Review:
  - Motivation
  - Technique
  - Main results
- Comments on:
  - Technique
  - Implementation
- Conclusion
Review
Review – motivation
- Background: improving DSGE forecasts by conditioning on information.
  - DSGE models offer interpretation.
  - DSGE models are forward-looking, so information about the future is relevant and useful.
  - Yet they may fail to forecast some (central) endogenous variables.
- Questions:
  - How to carry out a (soft) conditional forecast?
  - When is hard conditioning superior to soft conditioning?
Review – contributions: the Junior Smoother
- A technique for soft conditional forecasting with forward-looking DSGE models:
  - Nests the hard-conditional and unconditional forecasts as special cases.
  - Deals with two sources of uncertainty:
    - (present and future) state uncertainty;
    - structural uncertainty.
  - Extracts the most likely combination of shocks that satisfies the conditioning restrictions.
    - E.g., conditioning on a future interest-rate path…
    - In the spirit of the Kalman smoother.
- Extends an earlier contribution, Waggoner & Zha (1999).
- Utilized to show that relevant information can be too tight.
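The core extraction step can be sketched in a few lines. In a linearized model, the most likely shock combination satisfying a hard restriction is the least-norm solution of a linear system (the Waggoner–Zha logic the paper extends). Everything below (the matrices A and B, the horizon, the conditioned value) is an illustrative toy example, not the paper's model or code.

```python
import numpy as np

# Toy linear state-space: x_{t+1} = A x_t + B e_{t+1}, with e ~ N(0, I).
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # illustrative transition matrix
B = np.eye(2)                             # illustrative shock loading
x0 = np.array([1.0, 0.5])                 # current state (nowcast)
H = 3                                     # forecast horizon

# Unconditional forecast path
path = [x0]
for _ in range(H):
    path.append(A @ path[-1])

# Map the stacked future shocks [e_1; ...; e_H] to the state at horizon H:
# x_H = A^H x_0 + M @ eps
M = np.hstack([np.linalg.matrix_power(A, H - j) @ B for j in range(1, H + 1)])
R = M[0]                                  # restriction: variable 0 at horizon H
gap = 0.7 - path[-1][0]                   # distance to the conditioning value

# Least-norm (= most likely under N(0, I) shocks) combination that closes the gap
eps = R * (gap / (R @ R))

# Verify that the implied path satisfies the restriction
x = x0.copy()
for e in eps.reshape(H, 2):
    x = A @ x + B @ e
assert abs(x[0] - 0.7) < 1e-9
```

Soft conditioning replaces the point restriction with a range, so the hard and unconditional cases sit at the two limits, which is the nesting property the slide mentions.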
Comments
Comment on the methodology
- Comment: backcasting doesn't employ the conditioning information.
  - Backcasting is inefficient.
  - Starting values are inconsistent with the conditioning information.
  - Not an issue for unconditional forecasts.
- Suggestion: iterate until convergence.
  - Compute the conditional forecast using the Junior smoother.
  - Extend the sample by the forecast.
  - Update the backcast with the Kalman smoother. Now the starting values, and hence the forecast, may differ.
  - So repeat to convergence.
Comment on the implementation
- Verdict: conditioning doesn't necessarily improve the forecast.
  - Relevant information can be too tight.
  - The disappointing result is attributed to misspecification.
- But:
  - The unconditional forecast is also misspecified.
  - Ex-post realizations vis-à-vis ex-ante expectations.
- Two suggestions:
  - Repeat the analysis with ex-ante expectations.
  - Divide and conquer by Monte Carlo simulation:
    - Forecast with the "true" DGP (misspecification, or irrelevant conditioning?).
    - Distinguish between state and structural uncertainty.
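The "divide and conquer" Monte Carlo can be set up in a few lines: simulate from a known DGP, condition on the realized future value of one variable, and check whether conditioning helps when misspecification is absent by construction. The DGP, horizon, and replication count below are my own illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.5], [0.0, 0.8]])    # "true" DGP, known by construction
H, reps = 4, 2000
# Map stacked future shocks to the state at horizon H (unit shock loading)
M = np.hstack([np.linalg.matrix_power(A, H - j) for j in range(1, H + 1)])
R = M[0]                                   # maps shocks to variable 0 at H

err_u, err_c = [], []
for _ in range(reps):
    x = np.zeros(2)                        # start from the steady state
    for _ in range(H):
        x = A @ x + rng.standard_normal(2)
    truth = x
    f_u = np.zeros(2)                      # unconditional forecast
    # Condition on the realized value of variable 0 at horizon H
    # via the least-norm (most likely) shock combination.
    eps = R * ((truth[0] - f_u[0]) / (R @ R))
    f_c = f_u + M @ eps
    err_u.append((truth[1] - f_u[1]) ** 2)
    err_c.append((truth[1] - f_c[1]) ** 2)

# With the true DGP and relevant conditioning information, the conditional
# forecast of variable 1 should beat the unconditional one on average.
assert np.mean(err_c) < np.mean(err_u)
```

Distinguishing state from structural uncertainty would amount to repeating this with a filtered, rather than known, initial state, as the slide suggests.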
Comparison with the Kalman Smoother
- Is it equivalent to a KS with the conditioning restrictions as observables and the nowcast for initialization? If not equivalent, where does the difference come from?
- "…allows for the possibility of agents reacting to anticipated future events beyond one step ahead."
  - But this should be attributed to the state-space representation.
- "…it does not change the initial conditions of the state vector."
  - The KS can be restricted similarly. But is that efficient?
  - Keynes: "When the facts change, I change my mind. What do you do, sir?"
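The equivalence question on this slide can be checked numerically in a stripped-down case: for a scalar AR(1) with a known nowcast and unit shock variance, the most-likely-shocks forecast coincides with a Rauch-Tung-Striebel (Kalman) smoother that treats the conditioning restriction as a noiseless observation at the terminal horizon. The numbers are illustrative, and the anticipated-shock effects of a forward-looking model are of course absent from this backward-looking toy.

```python
import numpy as np

rho, x0, H, r = 0.7, 1.0, 4, 2.0          # AR(1): x_t = rho * x_{t-1} + e_t

# (a) Most likely shocks hitting x_H = r (least-norm, unit shock variance)
a = np.array([rho ** (H - j) for j in range(1, H + 1)])
eps = a * ((r - rho ** H * x0) / (a @ a))
path = [x0]
for e in eps:
    path.append(rho * path[-1] + e)

# (b) Kalman filter with the restriction as an exact observation at t = H
m_pred, P_pred = [x0], [0.0]
m_filt, P_filt = [x0], [0.0]
for t in range(1, H + 1):
    mp, Pp = rho * m_filt[-1], rho ** 2 * P_filt[-1] + 1.0
    m_pred.append(mp); P_pred.append(Pp)
    if t == H:                             # noiseless observation: x_H = r
        m_filt.append(r); P_filt.append(0.0)
    else:                                  # no observation: filtered = predicted
        m_filt.append(mp); P_filt.append(Pp)

# Rauch-Tung-Striebel backward (smoothing) pass
m_sm = list(m_filt)
for t in range(H - 1, 0, -1):
    G = P_filt[t] * rho / P_pred[t + 1]
    m_sm[t] = m_filt[t] + G * (m_sm[t + 1] - m_pred[t + 1])

assert np.allclose(path, m_sm)             # the two paths coincide
```

With Gaussian shocks and an identity shock covariance, both computations return the conditional mean of the path given the restriction, which is why they agree here; any difference in the general DSGE case would then have to come from the state-space representation or the treatment of initial conditions, as the slide argues.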
Comments on the text (cont'd)
- Under similar treatment, would the KS and the JS extract similar or different shocks? Why?
- Chapter 2 (the intuition-building example): why does tighter conditioning shift expectations without increasing certainty?
To conclude
- Two birds:
  - Interesting.
  - Useful.
- Appealing the disappointing verdict:
  - Relevant and reliable hard conditioning might turn out to be too tight.
  - But ex-post realizations may not be relevant.
  - Even if correct, misspecification might be the wrong guy to blame.
The End! Thank you


