TEU Research



Publications

  • Bayesian Fan Charts for U.K. Inflation: Forecasting and Sources of Uncertainty in an Evolving Monetary System (joint with Timothy Cogley and Thomas J. Sargent), Journal of Economic Dynamics and Control, Volume 29, Issue 11, November 2005, Pages 1893-1925
     
    We estimate a Bayesian vector autoregression for the U.K. with drifting coefficients and stochastic volatilities. We use it to characterize posterior densities for several objects that are useful for designing and evaluating monetary policy, including local approximations to the mean, persistence, and volatility of inflation. We present diverse sources of uncertainty that impinge on the posterior predictive density for inflation, including model uncertainty, policy drift, structural shifts, and other shocks. We use a recently developed minimum entropy method to bring outside information to bear on inflation forecasts. We compare our predictive densities with the Bank of England's fan charts. (A simulation sketch of the drifting-coefficient state equations appears after this list.) [Link to Journal Version] [Link to Original]
  • Massively Parallel Computation Using Graphics Processors with Application to Optimal Experimentation in Dynamic Control (joint with Sudhanshu Mathur), Computational Economics, Volume 40, Issue 2, August 2012, Pages 151-182
     
    The rapid growth in the performance of graphics hardware, coupled with recent improvements in its programmability, has led to its adoption in many non-graphics applications, including a wide variety of scientific computing fields. At the same time, a number of important dynamic optimal policy problems in economics are starved for computing power to help overcome the dual curses of complexity and dimensionality. We investigate whether computational economics may benefit from these new tools through a case study of an imperfect-information dynamic programming problem with a learning and experimentation trade-off, that is, a choice between controlling the policy target and learning the system parameters. Specifically, we use a model of active learning and control of a linear autoregression with an unknown slope, a setting that has appeared in a variety of macroeconomic policy and other contexts. The endogeneity of posterior beliefs makes the problem difficult in that the value function need not be convex and the policy function need not be continuous. This complication makes the problem a suitable target for massively parallel computation using graphics processors (GPUs). Our findings are cautiously optimistic in that the new tools let us easily achieve a factor of 15 performance gain relative to an implementation targeting single-core processors. Further gains up to a factor of 26 are also achievable but lie behind a learning and experimentation barrier of their own. Drawing upon our experience with the CUDA programming architecture and GPUs, we provide general lessons on how best to exploit future trends in parallel computation in economics. [Link to Original Paper] [Presentation slides] [Source Code] [Link to Journal Version]
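
    The following minimal sketch, in Python/numpy, simulates the kind of state equations behind the drifting-coefficient, stochastic-volatility VAR of the fan-chart paper above: the stacked coefficients and log volatilities follow random walks while the measurement equation is a standard VAR. The dimensions, parameter values, and variable names are illustrative assumptions; the paper's full posterior sampler is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      n, p, T = 2, 1, 200                        # 2 variables, 1 lag, 200 periods (assumed)
      k = n * (1 + n * p)                        # number of stacked VAR coefficients

      theta = np.zeros(k)                        # coefficients drift as a random walk
      log_vol = np.log(np.full(n, 0.1))          # log standard deviations, also random walks
      Q_sd, W_sd = 0.01, 0.02                    # assumed drift and volatility-of-volatility scales

      y = np.zeros((T, n))
      for t in range(1, T):
          theta = theta + Q_sd * rng.standard_normal(k)        # coefficient drift
          log_vol = log_vol + W_sd * rng.standard_normal(n)    # stochastic volatility
          X = np.concatenate(([1.0], y[t - 1]))                # intercept + one lag of each variable
          B = theta.reshape(n, 1 + n * p)                      # period-t coefficient matrix
          y[t] = B @ X + np.exp(log_vol) * rng.standard_normal(n)

      print(y[-5:])                              # last few simulated observations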
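
    The next sketch illustrates, on the CPU with numpy, the parallel structure that the GPU paper above exploits: in a Bellman sweep, the minimization over candidate controls is independent across belief-grid points, so each (mean, variance) pair can be assigned to its own GPU thread. The target-tracking model, grids, and parameter values below are illustrative assumptions rather than the paper's calibration, and the discounted continuation value of the full dynamic program is omitted.

      import numpy as np

      alpha, s2 = 1.0, 0.1                       # assumed known intercept and shock variance
      mu = np.linspace(-2.0, 2.0, 128)           # grid over the belief mean of the unknown slope
      nu = np.linspace(0.01, 4.0, 128)           # grid over the belief variance of the unknown slope
      u = np.linspace(-10.0, 10.0, 256)          # grid of candidate controls

      # Expected one-period loss E[(alpha + beta*u + eps)^2] with beta ~ N(mu, nu),
      # evaluated at every (mu, nu, u) triple via broadcasting; on a GPU each (mu, nu)
      # belief-grid point would be handled by its own thread during the Bellman sweep.
      loss = (alpha + mu[:, None, None] * u[None, None, :]) ** 2 \
             + nu[None, :, None] * u[None, None, :] ** 2 + s2

      # The minimization over the control axis is independent across belief-grid points:
      # this embarrassingly parallel structure is what the GPU implementation exploits.
      policy = u[loss.argmin(axis=2)]            # cautious myopic policy on the grid
      value = loss.min(axis=2)                   # one sweep of the Bellman operator
                                                 # (continuation value omitted in this sketch)
      print(policy.shape, value.shape)           # (128, 128) each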


Working Papers

  • Returns or Differences? Methods for Risk Functional Form Selection (May 2014):
    We describe and illustrate three categories of methods that help decide whether risk factors underlying a portfolio risk measurement framework, such as VaR, should be represented as returns or differences (or some hybrid form). Methods in the first category rank alternative functional forms by their performance on stationarity tests or by goodness of fit to a particular unconditional residual distribution (e.g., Student's t). These methods are largely informal and must be handled with care: the two representations are not nested; goodness-of-fit tests can be biased if their null distributions are themselves estimated; traditional unit root tests of stationarity are designed to address a somewhat different question; and homoscedasticity tests tend to have power only against specific alternatives, are sensitive to non-normality, and are often inconclusive. The second category of methods revolves around some form of elasticity-of-variance model, which conveniently nests both the return and difference representations; depending on the time series in question and one's willingness to make distributional or prior assumptions, GMM, maximum likelihood, or Bayesian estimation procedures can be called for (see the estimation sketch after this list). The third category nests the return and difference representations in a wider non-parametric class and includes methods that estimate volatility as a smooth function of the level, allowing the functional representation to switch depending on the level. The methods are illustrated using daily interest rate swap data, for which we find that the return formulation is preferable when rates are below roughly 2.8%, as is prevalent in the most recent four-year sub-sample, while the difference representation works better for rates above this cutoff. [PDF link] [Presentation] [Source code and data]
  • Stochastic Elasticity of Volatility Model (March 2010):
    We model the elasticity of volatility as a stochastic process with an eye to merging the popular constant elasticity of variance (CEV) and stochastic volatility (SV) models, in order to understand when it is appropriate to use absolute or relative changes (or some intermediate transformation) and to compare with more traditional autoregressive exponential stochastic volatility formulations. We describe a Markov chain Monte Carlo algorithm to sample efficiently from the posterior distribution, as well as an associated particle filter for likelihood analysis and model comparison. An application to short-term interest rate data over the 2006-2009 period indicates large swings in the elasticity of volatility, with tight posterior confidence bands. [link to paper forthcoming]
  • Limits of Passive Learning in the Bayesian Dual Control of Drifting Coefficient Regression (April 2009):
    We study the quality of passively adaptive approximations to both the passively adaptive optimal and the actively adaptive optimal solutions to the Bayesian dual control problem when the coefficients of the target state evolution drift continuously, as in Beck and Wieland (2002). Among the passive learning approaches, we compare the performance of certainty-equivalent control, the anticipated-utility policy, limited lookahead, and a Markov jump-linear-quadratic approximation. Solutions featuring active experimentation are of two kinds: the solution to the original infinite-horizon dual control problem found by a dynamic programming algorithm, and its one-period limited-lookahead version. Certainty-equivalent and actively optimal policies display the largest amount of experimentation, accidental for the former and intentional for the latter. While we find only modest differences in expectation between the more advanced passive policies on the one hand and either of the active policies on the other, the fully optimal active policy is the only one robust to unfortunate rare draws; it prevents the partitioning of the state space into two basins of attraction with escape-like dynamics between them. In addition, the anticipated-utility policy and the approximating Markov jump-linear-quadratic policy with a small number of regimes are hard to distinguish, upholding the computational advantages of anticipated utility. [PDF, part 1] [PDF, part 2]
  • Bayesian Active Learning and Control with Uncertain Two-Period Impulse Response (December 2008):
    Bayesian decision making under parameter uncertainty confronts a difficult tradeoff between stabilization and experimentation. This paper studies aspects of that tradeoff in an environment where the response to a policy action lasts for two periods but its magnitudes are unknown. We characterize policy activism over the space of inherited policies and beliefs, investigate limit beliefs and the time-series properties of outcomes, and quantify the value of intentional experimentation in comparison with an ensemble of alternative decision rules, including certainty-equivalent, cautionary myopic, and limited-lookahead policies.
  • Active Learning and Control of Stationary Autoregression with Unknown Slope and Persistence (June 2008):
    This paper studies control of a simple first-order autoregressive process where the policy impact and persistence parameters are both unknown, with prior beliefs restricting permissible parameter configurations to the region of stationarity. We achieve two objectives. First, we provide the first characterization of the actively optimal solution to the simultaneous problem of learning and control that optimally balances exploration (information acquisition) and exploitation (target stabilization) by means of dynamic programming in a state space extended to include both physical and informational state variables. Our numerical findings indicate a substantial degree of experimentation inherent in the optimal decision rule. The optimal policy is often discontinuous and takes on an irregular shape. We identify regions of the state space where the optimizing decision-maker is prompted to explore more actively while forgoing some stabilization performance, and regions where the stabilization motive dominates. For example, the degree of experimentation decreases as beliefs about persistence approach the unit root. In contrast, experimentation rises with the variance of the belief about persistence unless that variance is so large that the sign of the autoregressive coefficient becomes highly uncertain, in which case further increases reduce policy activism. Whether realized learning is noticeably altered depends on the frequency of visits to the experimentation-dominated regions, an issue we explore via simulations. We also explore the sensitivity of the solution to changes in the model parameters. For instance, we find that the variance of the state shock tends to make the optimal policy more cautious, except in some outlying regions of the belief space. Second, we contrast the features of the optimal control against an ensemble of suboptimal alternatives, in the hope of identifying key strategies that mimic the performance of the actively optimal solution without the mounting computational complexity of exhaustive elaboration. These could potentially form an arsenal of good rules of thumb for attacking problems of higher dimension where dynamic programming is not yet feasible. Our ensemble of approximate solutions includes myopic, certainty-equivalent, anticipated-utility, and limited-lookahead policies, along with assorted novel hybridizations and modifications, such as methods for actively adaptive prediction of the posterior variance. The comparison is conducted in terms of policy and value functions as well as simulated Monte Carlo outcomes (a simulation sketch contrasting two of the benchmark policies appears after this list). We conclude that aligning the degree of experimentation, whether intentional or not, with that of the optimal policy is essential for good performance of a suboptimal approximation. In this regard, the policy that incorporates an unscented Kalman filter to project future beliefs within the indirect approximate limited lookahead often performs closest to the optimum. For robustness, we study versions with continuous normal prior beliefs, with and without truncation to the stationary region, and also a version with a discrete prior over persistence. A macroeconomic application is discussed as well.
  • Bayesian Dual Control: Review of the Literature (April 2008):
    Imperfect information, in the form of model uncertainty in dynamic intertemporal choice problems, confronts the optimizing decision maker with a difficult tradeoff between simultaneously controlling the policy target and estimating the impact of the policy action. Optimal dual control balances the dual objectives of exploitation and exploration but is difficult to compute. The extant literature on Bayesian dual control is reviewed here. [PDF]
  • Empirical Beliefs (joint with Victor Chernozhukov, August 2000):
    The paper proposes a method for constructing, estimating, and testing Rational Beliefs (RB) models. RB models, due to Kurz (1994), allow agents' beliefs to differ from Rational Expectations (RE) but require that beliefs not be contradicted by past data. By implication, RB and RE must agree in strictly stationary worlds, while disagreement is allowed in non-stationary settings. The estimation method involves sample counterparts to the conditional and unconditional moment restrictions formed from the Euler equations and rationality conditions. In essence, the method deduces systems of conditional beliefs consistent with the conditional moment restrictions posed by the Euler equations. Consistent test statistics then discriminate rationality from non-rationality. The attractive features are that (i) the estimation and testing procedures are implemented without solving explicitly for RB equilibria, (ii) learning is permitted, and (iii) both the econometrician and the economic agents are put on an "equal footing" in the sense of Muth (1961) and kept "down to earth". Under flexible regularity conditions, the test statistics are shown to converge in distribution to continuous functionals of generalized Brownian bridges, whose coordinates are projections on the space of moment functions used to phrase the rationality conditions. As a result, the limit distributions are non-standard or standard, depending on whether the test statistic is a function of a finite-dimensional projection or a functional of the whole process, respectively. Resampling and simulation methods allow for valid approximation of either distribution. A simple estimated model of aggregate consumption and stock market behavior, populated by investors with rational beliefs, points to variation in agents' sentiments as a dominant source of asset price volatility. [January 2000 version distributed as No 1654, Econometric Society World Congress 2000 Contributed Papers]
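
    The following sketch, referenced in the "Returns or Differences?" abstract above, illustrates the second category of methods there: a constant-elasticity-of-variance specification that nests the difference (elasticity near 0) and return (elasticity near 1) representations, estimated by maximum likelihood. The model, parameter names, and simulated toy data are illustrative assumptions, not the paper's swap-rate dataset.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      # Toy positive rate series: dr_t = kappa*(theta - r_{t-1}) + sigma * r_{t-1}^gamma * eps_t
      def simulate(T=2000, r0=0.03, kappa=0.02, theta=0.03, sigma=0.02, gamma=1.0):
          r = np.empty(T)
          r[0] = r0
          for t in range(1, T):
              dr = kappa * (theta - r[t - 1]) + sigma * r[t - 1] ** gamma * rng.standard_normal()
              r[t] = max(r[t - 1] + dr, 1e-4)    # keep the toy series positive
          return r

      r = simulate()

      def neg_loglik(params, r):
          a, b, log_sigma, gamma = params
          dr, lag = np.diff(r), r[:-1]
          mean = a + b * lag                      # conditional mean of the first difference
          sd = np.exp(log_sigma) * lag ** gamma   # level-dependent conditional volatility
          return 0.5 * np.sum(np.log(2 * np.pi * sd ** 2) + ((dr - mean) / sd) ** 2)

      res = minimize(neg_loglik, x0=[0.0, 0.0, np.log(0.05), 0.5], args=(r,), method="Nelder-Mead")
      gamma_hat = res.x[3]
      print(f"estimated elasticity: {gamma_hat:.2f}  (near 0 ~ differences, near 1 ~ returns)")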
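
    The second sketch, referenced in the stationary-autoregression abstract above, contrasts two passive benchmark policies that recur in the dual-control papers: certainty-equivalent control versus the cautious myopic policy, with conjugate Bayesian updating of an unknown policy slope. For simplicity only the slope is unknown here (the papers also treat unknown persistence and drifting coefficients); all parameter values are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      rho, beta_true, s2 = 0.7, -0.5, 0.1        # assumed true dynamics: y' = rho*y + beta*u + eps
      T = 200

      def average_loss(policy, mu0=-0.1, nu0=1.0, y0=1.0):
          mu, nu, y, loss = mu0, nu0, y0, 0.0
          for _ in range(T):
              u = policy(y, mu, nu)
              y_next = rho * y + beta_true * u + np.sqrt(s2) * rng.standard_normal()
              # Conjugate normal update of the belief beta ~ N(mu, nu) from the observed outcome.
              resid = y_next - rho * y - mu * u
              gain = nu * u / (nu * u ** 2 + s2)
              mu, nu = mu + gain * resid, nu * s2 / (nu * u ** 2 + s2)
              y, loss = y_next, loss + y_next ** 2
          return loss / T

      # Certainty-equivalent policy: treat the point estimate mu as if it were the true slope.
      cert_equiv = lambda y, mu, nu: -rho * y / mu if abs(mu) > 1e-3 else 0.0
      # Cautious myopic policy: minimize expected next-period loss, shrinking the response by nu.
      cautious = lambda y, mu, nu: -rho * y * mu / (mu ** 2 + nu)

      print("certainty equivalent:", np.mean([average_loss(cert_equiv) for _ in range(200)]))
      print("cautious myopic     :", np.mean([average_loss(cautious) for _ in range(200)]))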

Presentations

  • Econometric Society World Congress, Seattle 2000
  • SCE conference on Computing in Economics and Finance, Paris 2008
  • SCE conference on Computing in Economics and Finance, Sydney 2009 [Presentation slides]
  • Far East and South Asia Meeting of Econometric Society, Tokyo 2009 [Presentation slides]
  • Econometric Society World Congress, Shanghai 2010 [Presentation slides]
  • SCE conference on Computing in Economics and Finance, Prague 2012

Research in Progress and Research Ideas

  • Learning
    • Active Learning and Macroeconomic Stabilization with Interest on Reserves
    • Optimal Actively Adaptive Monetary Policy with Forward Looking Variable
    • Bayesian Active Learning and Control with Two Policy Instruments
    • Active Learning in Dynamic Policy Games
    • Dimension of Uncertainty and Value of Experimentation
    • Learning Recurrent Hyperinflations
    • Bayesian Dual Control with Latent Volatility
    • Bayesian Dual Control with Data Missing at Random
  • Empirically Rational Beliefs
    • Beliefs as Factors
    • Empirically Rational Beliefs and the Cross-Section of Euler Equation Errors
    • Empirically Rational Beliefs with Active Experimentation
    • Good-deal Asset Price Bounds and Empirically Rational Aggregate Beliefs
  • Evolving Risk Premia
  • Dividend Yield Predictability
    • Bayesian Analysis of Dividend Yield Predictability: A Structural Drifting Parameter Approach
  • Computational Economics
    • Massively Parallel Computation using Graphics Processors
    • Parallel MCMC using Graphics Processors
    • Massively Parallel Complementary Multiply-With-Carry Random Number Generation
  • Elasticity of Volatility with Applications
    • Stochastic Elasticity of Variance
    • Autoregressive Conditional Elasticity of Volatility
  • Multivariate Stochastic Volatility
    • Multivariate Eigenfunction Stochastic Volatility
    • Restricted Stochastic Volatility for Evolving Term Structure of Interest Rates
    • WAR Processes and the Term Structure of Credit Spreads
    • Evolving Large-Scale Correlation Matrices
    • Multivariate Stochastic Elasticity of Volatility
  • Risk Measurement
    • Attribution of Portfolio Value-at-Risk: Nonparametric and Semiparametric Approaches
    • Relative vs. Absolute Change: Econometric Methods to Select Functional Form for Risk Measurement