Network Effects and Infrastructure Productivity in Developing Countries
By Bertrand Candelon, Gilbert Colletaz, and Christophe Hurlin
Maastricht University (2011)

Coders:

Bertrand Candelon, Maastricht University, Netherlands
Gilbert Colletaz, University of Orleans, France
Christophe Hurlin, University of Orleans, France

This code estimates the parameters of a Panel Threshold Regression (PTR) model with one, two, or three threshold parameters. The model does not exactly correspond to that of Hansen (1999): here, all the slope parameters are affected by the regime. Contemporaneous exogenous variables can be included. The code does not automatically introduce lags of the threshold variable or of the explicative variables; if you want such lags, you must supply the lagged series in the input form. The results display the estimated slope and threshold parameters. The F-tests on the number of regimes (F1 and/or F2) are also displayed, and the corresponding bootstrap p-values are reported if the chosen number of simulations is greater than 0.
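For reference, in the two-regime (one-threshold) case the estimated model can be written as follows (a sketch in Hansen (1999)-style notation, with all slopes switching as noted above):

\[
y_{it} = \mu_i + \beta_1' z_{it}\,\mathbf{1}(q_{it} \le c) + \beta_2' z_{it}\,\mathbf{1}(q_{it} > c) + \varepsilon_{it},
\]

where \mu_i is an individual fixed effect, z_{it} the vector of explicative variables, q_{it} the threshold variable, and c the threshold parameter. The F1 test compares this specification against the linear model \beta_1 = \beta_2, and F2 compares two thresholds against one.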
Created
March 04, 2012
Software:
Matlab
Visits
431
Last update
March 14, 2013
Ranking
35
Runs
414
Code downloads
182
Abstract
This paper investigates the threshold effects of the productivity of infrastructure investment in developing countries within a panel data framework. Various specifications of an augmented production function that allow for endogenous thresholds are considered. The overwhelming outcome is the presence of strong threshold effects in the relationship between output and private and public inputs. Whatever the transition mechanism used, the testing procedures lead to a strong rejection of the linearity of this relationship. In particular, the productivity of infrastructure investment generally exhibits network effects. When the available stock of infrastructure is very low, investment in this sector has the same productivity as non-infrastructure investment. By contrast, once a minimum network is available, the marginal productivity of infrastructure investment is generally much greater than that of other investments. Finally, once the main network is completed, its marginal productivity becomes similar to that of other investment.
Candelon, B., G. Colletaz, and C. Hurlin, "Network Effects and Infrastructure Productivity in Developing Countries", Maastricht University.
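Schematically, the augmented production function estimated in the paper can be sketched as follows (an illustrative rendering of the abstract, not the paper's exact notation), with the elasticity of infrastructure switching across regimes delimited by the available infrastructure stock:

\[
\log y_{it} = \mu_i + \alpha \log k_{it} + \beta \log h_{it} + \gamma_j \log z_{it} + \varepsilon_{it}, \qquad j = 1, 2, 3,
\]

where k, h, and z denote private capital, human capital, and infrastructure, and the regime j is selected by comparing the threshold variable (the infrastructure stock) with the estimated thresholds. The network effects described above amount to \gamma_2 exceeding \gamma_1 and \gamma_3, which are both close to the productivity of non-infrastructure investment.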
Input form parameters:
Number of thresholds
Dependent variable
Threshold variable
Explicative variables
Time dimension
Trimming parameter
Confidence level
Number of Bootstrap simulations

Please cite the publication as:

Candelon, B., G. Colletaz, and C. Hurlin, "Network Effects and Infrastructure Productivity in Developing Countries", Maastricht University.

Please cite the companion website as:

Candelon, B., G. Colletaz, and C. Hurlin, "Network Effects and Infrastructure Productivity in Developing Countries", RunMyCode companion website, http://www.execandshare.org/CompanionSite/Site65

Variable/Parameter descriptions and constraints

Number of thresholds
This choice list selects the number of thresholds (i.e. regimes). You can choose a model with one threshold (two regimes), two thresholds (three regimes), or three thresholds (four regimes).

Dependent variable
This (TN x 1) vector contains the values of the dependent variable y. The data are stacked unit by unit, i.e. y = (y1',..,yN')'.

Threshold variable
This (TN x 1) vector contains the values of the threshold variable q. The data are stacked unit by unit, i.e. q = (q1',..,qN')'.

Explicative variables
This (TN x K) matrix contains the values of the K regressors whose slope parameters are affected by the regimes. The data are stacked unit by unit, i.e. Z = (Z1',..,ZN')'. K must be larger than one.

Time dimension
T denotes the time dimension of the panel. Warning: this code only considers balanced panels, so the individual dimension N is computed by dividing the total length of the series by T. If this value is not an integer, the code will fail.

Trimming parameter
This percentage sets the trimming on the threshold variable. Since it is undesirable to have a small number of observations in any given regime, the search for the threshold c is restricted so that a minimum number of observations falls in each regime. The percentage must be strictly larger than 0 and less than 0.2.

Confidence level
Confidence level for the LR tests on the number of regimes.

Number of Bootstrap simulations
The number of bootstrap replications used to compute the p-values of the LR tests on the number of regimes (LR1, LR2, LR3). If this number is equal to 0, the test statistics are not displayed.
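To make the estimation logic concrete, here is a minimal Matlab sketch of the one-threshold case under the stacked data layout described above. The function name ptr_grid_search and its internals are illustrative assumptions in the spirit of Hansen (1999)-type grid searches, not the exact routine distributed on this site:

function [c_hat, F1] = ptr_grid_search(y, q, Z, T, trim)
% One-threshold PTR sketch: fixed effects are removed by the within
% transformation, and the threshold is located by grid search over q.
% y (T*N x 1), q (T*N x 1), Z (T*N x K), all stacked unit by unit.
N = numel(y) / T;                        % balanced panel: N must be an integer
W = kron(eye(N), eye(T) - ones(T)/T);    % within (demeaning) operator
yw = W * y;
S0 = ssr(W * Z, yw);                     % SSR of the linear (no-threshold) model
qs = sort(q);                            % candidate thresholds: values of q
cands = unique(qs(ceil(trim*end):floor((1-trim)*end)));   % with trimmed tails
best = Inf; c_hat = NaN;
for j = 1:numel(cands)
    d  = (q <= cands(j));                % regime indicator
    X  = W * [Z .* d, Z .* ~d];          % all K slopes switch with the regime
    S1 = ssr(X, yw);
    if S1 < best, best = S1; c_hat = cands(j); end
end
sigma2 = best / (N * (T - 1));           % residual variance under one threshold
F1 = (S0 - best) / sigma2;               % F1: linearity against two regimes
end

function s = ssr(X, y)
e = y - X * (X \ y);                     % OLS residuals
s = e' * e;                              % sum of squared residuals
end

The bootstrap p-value of F1 is then obtained by resampling residuals under the null of linearity, regenerating the dependent variable, re-running the grid search, and taking the share of simulated statistics that exceed the observed F1.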
Demo data: Variable/Parameter descriptions

Number of thresholds
The number of thresholds is fixed to one (i.e. two regimes).

Dependent variable
All data are taken from Candelon, Colletaz and Hurlin (2009), "Network Effects and Infrastructure Productivity in Developing Countries", Maastricht University Working Paper. The dependent variable is the purchasing power parity GDP per worker (chain index) for 76 developing countries between 1961 and 1995.

Threshold variable
The threshold variable is kilowatts of electricity generating capacity (Canning, 1999) for the same 76 developing countries between 1961 and 1995.

Explicative variables
There are three explicative variables: the private stock of capital, human capital, and the infrastructure stock (electricity capacity). The private physical capital stocks are constructed using a perpetual inventory method, with the initial stock obtained by assuming a capital-output ratio of 3 in the base year. The investment flows are taken from the Penn World Tables 6.0. Following Canning, we assume a constant depreciation rate of 7% for private capital. Human capital per worker is taken to be the average years of schooling of the workforce, approximated here, given data availability, by the average years of schooling of the population aged 15 and above, from Barro and Lee (2000). See Candelon et al. (2009) for more details.

Time dimension
The total number of countries is 76 and T is equal to 35 (1961-1995). The panel is balanced.

Trimming parameter
In this application, the smallest and largest 5% of values of the threshold variable are eliminated. The same kind of procedure is used for the models with three or four regimes, so that for each model at least 5% of the NT observations are available to estimate the elasticities in each regime.

Confidence level
The confidence level is set to 0.95 for the confidence interval on the threshold parameter.

Number of Bootstrap simulations
The number of simulations is fixed to 100.
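For the demo data described above, the inputs of the earlier sketch have the following shapes (random placeholders stand in here for the actual series, which can be loaded from the companion site):

T = 35; N = 76; K = 3;       % 76 countries, 1961-1995, three regressors
y = randn(T*N, 1);           % stand-in for GDP per worker
q = rand(T*N, 1);            % stand-in for electricity generating capacity
Z = randn(T*N, K);           % stand-in for private, human, and infrastructure capital
[c_hat, F1] = ptr_grid_search(y, q, Z, T, 0.05);   % 5% trimming, as in the demo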

                  Bertrand Candelon also created these companion sites

                  Backtesting Value-at-Risk: A GMM Duration-based Test
                  Abstract
This paper proposes a new duration-based backtesting procedure for VaR forecasts. The GMM test framework proposed by Bontemps (2006) to test for a distributional assumption (here, the geometric distribution) is applied to the validity of VaR forecasts. Using a simple J-statistic based on the moments defined by the orthonormal polynomials associated with the geometric distribution, this new approach tackles most of the drawbacks usually associated with duration-based backtesting procedures. First, its implementation is extremely easy. Second, it allows for separate tests of the unconditional coverage, independence, and conditional coverage hypotheses (Christoffersen, 1998). Third, Monte Carlo simulations show that for realistic sample sizes, our GMM test outperforms traditional duration-based tests. Besides, we study the consequences of estimation risk on duration-based backtesting tests and propose a sub-sampling approach for robust inference derived from Escanciano and Olmo (2009). An empirical application to Nasdaq returns confirms that using the GMM test has major consequences for the ex-post evaluation of risk by regulatory authorities.
Colletaz, G., B. Candelon, C. Hurlin, and S. Tokpavi, "Backtesting Value-at-Risk: A GMM Duration-based Test", Journal of Financial Econometrics, 9(2), 314-343.
                  Authors: Candelon
                  Colletaz
                  Hurlin
                  Tokpavi
                  Coders: Colletaz
                  Candelon
                  Hurlin
                  Tokpavi
                  Last update
                  06/28/2012
                  Ranking
                  55
                  Runs
                  23
                  Visits
                  291
                  Currency Crises Early Warning Systems: why they should be Dynamic
                  Abstract
This paper introduces a new generation of Early Warning Systems (EWS) which takes into account the dynamics, i.e. the persistence, in the binary crisis indicator. We elaborate on Kauppi and Saikkonen (2008), which allows several dynamic specifications to be considered by relying on an exact maximum likelihood estimation method. Applied to the prediction of currency crises for fifteen countries, this new EWS turns out to exhibit significantly better predictive abilities than existing models both within and out of sample, thus vindicating dynamic models in the quest for optimal EWS.
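As a rough illustration of the kind of dynamic specification meant here (a sketch in the Kauppi-Saikkonen style, e.g. a dynamic probit, not the paper's exact model), the crisis probability is driven by a latent index that feeds back on its own past and on past crisis outcomes:

\[
\Pr(y_t = 1 \mid \Omega_{t-1}) = \Phi(\pi_t), \qquad
\pi_t = \omega + \alpha\,\pi_{t-1} + \delta\, y_{t-1} + \beta' x_{t-1},
\]

where y_t is the binary crisis indicator, x_{t-1} the macro covariates, and \Phi the standard normal cdf.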
                  Candelon, B., E. Dumitrescu, and C. Hurlin, "Currency Crises Early Warning Systems: why they should be Dynamic", Maastricht University.
                  Authors: Candelon
                  Dumitrescu
                  Hurlin
                  Coders: Candelon
                  Dumitrescu
                  Hurlin
                  Last update
                  06/04/2012
                  Ranking
                  45
                  Runs
                  64
                  Visits
                  112
                  How To Evaluate an Early Warning System? Towards a unified Statistical Framework for Assessing Financial Crises Forecasting Methods
                  Abstract
This paper proposes an original and unified toolbox to evaluate financial crisis Early Warning Systems (EWS). It presents four main advantages. First, it is a model-free method which can be used to assess the forecasts issued from different EWS (probit, logit, Markov-switching models, or combinations of models). Second, this toolbox can be applied to any type of crisis EWS (currency, banking, sovereign debt, etc.). Third, it not only provides various criteria to evaluate the (absolute) validity of EWS forecasts but also proposes tests to compare the relative performance of alternative EWS. Fourth, our toolbox can be used to evaluate both in-sample and out-of-sample forecasts. Applied to a logit model for twelve emerging countries, we show that the yield spread is a key variable for predicting currency crises exclusively for South-Asian countries. Besides, the optimal cut-off now allows us to correctly identify on average more than two-thirds of the crisis and calm periods.
                  Candelon, B., E. Dumitrescu, and C. Hurlin, "How To Evaluate an Early Warning System? Towards a unified Statistical Framework for Assessing Financial Crises Forecasting Methods", IMF Economic Review, 60.
                  Authors: Candelon
                  Dumitrescu
                  Hurlin
                  Coders: Candelon
                  Dumitrescu
                  Hurlin
                  Last update
                  07/23/2012
                  Ranking
                  34
                  Runs
                  24
                  Visits
                  269

                  Gilbert Colletaz also created these companion sites

                  Backtesting Value-at-Risk: A GMM Duration-based Test
                  Abstract
This paper proposes a new duration-based backtesting procedure for VaR forecasts. The GMM test framework proposed by Bontemps (2006) to test for a distributional assumption (here, the geometric distribution) is applied to the validity of VaR forecasts. Using a simple J-statistic based on the moments defined by the orthonormal polynomials associated with the geometric distribution, this new approach tackles most of the drawbacks usually associated with duration-based backtesting procedures. First, its implementation is extremely easy. Second, it allows for separate tests of the unconditional coverage, independence, and conditional coverage hypotheses (Christoffersen, 1998). Third, Monte Carlo simulations show that for realistic sample sizes, our GMM test outperforms traditional duration-based tests. Besides, we study the consequences of estimation risk on duration-based backtesting tests and propose a sub-sampling approach for robust inference derived from Escanciano and Olmo (2009). An empirical application to Nasdaq returns confirms that using the GMM test has major consequences for the ex-post evaluation of risk by regulatory authorities.
Colletaz, G., B. Candelon, C. Hurlin, and S. Tokpavi, "Backtesting Value-at-Risk: A GMM Duration-based Test", Journal of Financial Econometrics, 9(2), 314-343.
                  Authors: Candelon
                  Colletaz
                  Hurlin
                  Tokpavi
                  Coders: Colletaz
                  Candelon
                  Hurlin
                  Tokpavi
                  Last update
                  06/28/2012
                  Ranking
                  55
                  Runs
                  23
                  Visits
                  291
                  Asymptotic Distribution-Free Diagnostic Tests For Heteroskedastic Time Series
                  Abstract
This article investigates model checks for a class of possibly nonlinear heteroskedastic time series models, including but not restricted to ARMA-GARCH models. We propose omnibus tests based on functionals of certain weighted standardized residual empirical processes. The new tests are asymptotically distribution-free, suitable when the conditioning set is infinite-dimensional, and consistent against a class of Pitman's local alternatives converging at the parametric rate n^{-1/2}, with n the sample size. A Monte Carlo study shows that the simulated level of the proposed tests is close to the asymptotic level already for moderate sample sizes, and that the tests have satisfactory power. Finally, we illustrate our methodology with an application to the well-known S&P 500 daily stock index. The paper also contains an asymptotic uniform expansion for weighted residual empirical processes when initial conditions are considered, a result of independent interest.
                  Colletaz, G., "Asymptotic Distribution-Free Diagnostic Tests For Heteroskedastic Time Series", Econometric Theory, 26(03), 744-773.
                  Authors: Escanciano
                  Coders: Colletaz
                  Last update
                  12/06/2013
                  Ranking
                  18
                  Runs
                  39
                  Visits
                  218
                  The Risk Map: A New Tool for Validating Risk Models
                  Abstract
This paper presents a new tool for validating risk models. This tool, called the Risk Map, jointly accounts for the number and the magnitude of extreme losses and graphically summarizes all information about the performance of a risk model. It relies on the concept of a Value-at-Risk (VaR) super exception, which is defined as a situation in which the loss exceeds both the standard VaR and a VaR defined at an extremely low coverage probability. We then formally test whether the sequence of exceptions and super exceptions is rejected by standard model validation tests. We show that the Risk Map can be used to validate market, credit, operational, or systemic (e.g. CoVaR) risk estimates or to assess the performance of the margin system of a clearing house.
                  Colletaz, G., C. Hurlin, and C. Perignon, "The Risk Map: A New Tool for Validating Risk Models", SSRN.
                  Authors: Colletaz
                  Hurlin
                  Perignon
                  Coders: Colletaz
                  Hurlin
                  Perignon
                  Last update
                  07/25/2013
                  Ranking
                  51
                  Runs
                  146
                  Visits
                  431
                  A Theoretical and Empirical Comparison of Systemic Risk Measures: MES versus CoVaR
                  Abstract
In this paper, we propose a theoretical and empirical comparison of two popular systemic risk measures - Marginal Expected Shortfall (MES) and Delta Conditional Value at Risk (ΔCoVaR) - that can be estimated using publicly available data. First, we assume that the time-varying correlation completely captures the dependence between firm and market returns. Under this assumption, we derive three analytical results: (i) we show that the MES corresponds to the product of the conditional ES of market returns and the time-varying beta of the institution, (ii) we give an analytical expression for the ΔCoVaR and show that the CoVaR corresponds to the product of the VaR of the firm's returns and the time-varying linear projection coefficient of the market returns on the firm's returns, and (iii) we derive the ratio of the MES to the ΔCoVaR. Second, we relax this assumption and propose an empirical comparison for a panel of 61 US financial institutions over the period from January 2000 to December 2010. For each measure, we propose a cross-sectional analysis, a time-series comparison, and a ranking analysis of these institutions based on the two measures.
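In symbols, the two analytical results stated in (i) and (ii) read as follows (a transcription of the abstract's claims, with α the coverage level, β_{it} the time-varying beta, and γ_{it} the linear projection coefficient of market on firm returns):

\[
MES_{it}(\alpha) = \beta_{it}\, ES_{m,t}(\alpha), \qquad
CoVaR_{it}(\alpha) = \gamma_{it}\, VaR_{it}(\alpha).
\]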
                  Benoit, S., G. Colletaz, C. Hurlin, and C. Perignon, "A Theoretical and Empirical Comparison of Systemic Risk Measures: MES versus CoVaR", SSRN.
                  Authors: Benoit
                  Colletaz
                  Hurlin
                  Perignon
                  Coders: Benoit
                  Colletaz
                  Hurlin
                  Perignon
                  Last update
                  10/25/2012
                  Ranking
                  53
                  Runs
                  181
                  Visits
                  398
                  A Generalized Asymmetric Student-t Distribution with Application to Financial Econometrics
                  Abstract
                  This paper proposes a new class of asymmetric Student-t (AST) distributions, and investigates its properties, gives procedures for estimation, and indicates applications in financial econometrics. We derive analytical expressions for the cdf, quantile function, moments, and quantities useful in financial econometric applications such as the Expected Shortfall. A stochastic representation of the distribution is also given. Although the AST density does not satisfy the usual regularity conditions for maximum likelihood estimation, we establish consistency, asymptotic normality and efficiency of ML estimators and derive an explicit analytical expression for the asymptotic covariance matrix. A Monte Carlo study indicates generally good finite-sample conformity with these asymptotic properties.
                  Colletaz, G., "A Generalized Asymmetric Student-t Distribution with Application to Financial Econometrics", Journal of Econometrics, 157, 297-305.
                  Authors: Zhu
                  Galbraith
                  Coders: Colletaz
                  Last update
                  05/05/2012
                  Ranking
                  38
                  Runs
                  6
                  Visits
                  95
                  Panel Smooth Transition Regression Models
                  Abstract
We develop a non-dynamic panel smooth transition regression model with fixed individual effects. The model is useful for describing heterogeneous panels, with regression coefficients that vary across individuals and over time. Heterogeneity is allowed for by assuming that these coefficients are continuous functions of an observable variable through a bounded function of this variable and fluctuate between a limited number (often two) of "extreme regimes". The model can be viewed as a generalization of the threshold panel model of Hansen (1999). We extend the modelling strategy for univariate smooth transition regression models to the panel context. This comprises model specification based on homogeneity tests, parameter estimation, and diagnostic checking, including tests for parameter constancy and no remaining nonlinearity. The new model is applied to describe firms' investment decisions in the presence of capital market imperfections.
Colletaz, G., "Panel Smooth Transition Regression Models", SSE/EFI Working Paper Series in Economics and Finance, No. 604.
                  Authors: Gonzalez
                  van Dijk
                  Terasvirta
                  Coders: Colletaz
                  Last update
                  07/16/2015
                  Ranking
                  9999
                  Runs
                  205
                  Visits
                  N.A.
                  Testing for Unit Roots in the Presence of Uncertainty Over Both the Trend and Initial Condition
                  Abstract
                  In this paper we provide a joint treatment of two major problems that surround testing for a unit root in practice: uncertainty as to whether or not a linear deterministic trend is present in the data, and uncertainty as to whether the initial condition of the process is (asymptotically) negligible or not. We suggest decision rules based on the union of rejections of four standard unit root tests (OLS and quasi-differenced demeaned and detrended ADF unit root tests), along with information regarding the magnitude of the trend and initial condition, to allow simultaneously for both trend and initial condition uncertainty.
                  Colletaz, G., "Testing for Unit Roots in the Presence of Uncertainty Over Both the Trend and Initial Condition", Journal of Econometrics, 169, 188-95.
                  Authors: Harvey
                  Leybourne
                  Taylor
                  Coders: Colletaz
                  Last update
                  10/08/2012
                  Ranking
                  48
                  Runs
                  20
                  Visits
                  44

                  Christophe Hurlin also created these companion sites

                  The Risk Map: A New Tool for Validating Risk Models
                  Abstract
This paper presents a new tool for validating risk models. This tool, called the Risk Map, jointly accounts for the number and the magnitude of extreme losses and graphically summarizes all information about the performance of a risk model. It relies on the concept of a Value-at-Risk (VaR) super exception, which is defined as a situation in which the loss exceeds both the standard VaR and a VaR defined at an extremely low coverage probability. We then formally test whether the sequence of exceptions and super exceptions is rejected by standard model validation tests. We show that the Risk Map can be used to validate market, credit, operational, or systemic (e.g. CoVaR) risk estimates or to assess the performance of the margin system of a clearing house.
                  Colletaz, G., C. Hurlin, and C. Perignon, "The Risk Map: A New Tool for Validating Risk Models", SSRN.
                  Authors: Colletaz
                  Hurlin
                  Perignon
                  Coders: Colletaz
                  Hurlin
                  Perignon
                  Last update
                  07/25/2013
                  Ranking
                  51
                  Runs
                  146
                  Visits
                  431
                  A New Approach to Comparing VaR Estimation Methods
                  Abstract
                  We develop a novel backtesting framework based on multidimensional Value-at-Risk (VaR) that focuses on the left tail of the distribution of the bank trading revenues. Our coverage test is a multivariate generalization of the unconditional test of Kupiec (Journal of Derivatives, 1995). Applying our method to actual daily bank trading revenues, we find that non-parametric VaR methods, such as GARCH-based methods or filtered Historical Simulation, work best for bank trading revenues.
Perignon, C., D. Smith, and C. Hurlin, "A New Approach to Comparing VaR Estimation Methods", Journal of Derivatives, 15, 54-66.
                  Authors: Perignon
                  Smith
                  Coders: Perignon
                  Smith
                  Hurlin
                  Last update
                  07/16/2012
                  Ranking
                  54
                  Runs
                  11
                  Visits
                  270
                  Backtesting Value-at-Risk: A GMM Duration-based Test
                  Abstract
This paper proposes a new duration-based backtesting procedure for VaR forecasts. The GMM test framework proposed by Bontemps (2006) to test for a distributional assumption (here, the geometric distribution) is applied to the validity of VaR forecasts. Using a simple J-statistic based on the moments defined by the orthonormal polynomials associated with the geometric distribution, this new approach tackles most of the drawbacks usually associated with duration-based backtesting procedures. First, its implementation is extremely easy. Second, it allows for separate tests of the unconditional coverage, independence, and conditional coverage hypotheses (Christoffersen, 1998). Third, Monte Carlo simulations show that for realistic sample sizes, our GMM test outperforms traditional duration-based tests. Besides, we study the consequences of estimation risk on duration-based backtesting tests and propose a sub-sampling approach for robust inference derived from Escanciano and Olmo (2009). An empirical application to Nasdaq returns confirms that using the GMM test has major consequences for the ex-post evaluation of risk by regulatory authorities.
Colletaz, G., B. Candelon, C. Hurlin, and S. Tokpavi, "Backtesting Value-at-Risk: A GMM Duration-based Test", Journal of Financial Econometrics, 9(2), 314-343.
                  Authors: Candelon
                  Colletaz
                  Hurlin
                  Tokpavi
                  Coders: Colletaz
                  Candelon
                  Hurlin
                  Tokpavi
                  Last update
                  06/28/2012
                  Ranking
                  55
                  Runs
                  23
                  Visits
                  291
                  Testing Interval Forecasts: A GMM-Based Approach
                  Abstract
This paper proposes a new evaluation framework for interval forecasts. Our model-free test can be used to evaluate interval forecasts and/or High Density Regions, potentially discontinuous and/or asymmetric. Using a simple J-statistic based on the moments defined by the orthonormal polynomials associated with the binomial distribution, this new approach presents many advantages. First, its implementation is extremely easy. Second, it allows for separate tests of the unconditional coverage, independence, and conditional coverage hypotheses. Third, Monte Carlo simulations show that for realistic sample sizes, our GMM test outperforms the traditional LR test. These results are corroborated by an empirical application to the S&P 500 and Nikkei stock market indexes, which confirms that using a GMM test has major consequences for the ex-post evaluation of interval forecasts produced by linear versus nonlinear models.
Dumitrescu, E., C. Hurlin, "Testing Interval Forecasts: A GMM-Based Approach", Journal of Forecasting.
                  Authors: Dumitrescu
                  Hurlin
                  Madkour
                  Coders: Dumitrescu
                  Hurlin
                  Last update
                  06/05/2012
                  Ranking
                  10
                  Runs
                  29
                  Visits
                  340
                  Backtesting Value-at-Risk: A Duration-Based Approach
                  Abstract
Financial risk model evaluation, or backtesting, is a key part of the internal models approach to market risk management as laid out by the Basle Committee on Banking Supervision. However, existing backtesting methods have relatively low power in realistic small sample settings. Our contribution is the exploration of new tools for backtesting based on the duration of days between violations of the Value-at-Risk. Our Monte Carlo results show that in realistic situations, the new duration-based tests have considerably better power properties than the previously suggested tests.
                  Hurlin, C., and C. Perignon, "Backtesting Value-at-Risk: A Duration-Based Approach", Journal of Financial Econometrics, 2, 84-108.
                  Authors: Pelletier
                  Christoffersen
                  Coders: Hurlin
                  Perignon
                  Last update
                  07/23/2012
                  Ranking
                  26
                  Runs
                  17
                  Visits
                  207
Threshold Effects of the Public Capital Productivity: An International Panel Smooth Transition Approach
                  Abstract
Using a nonlinear panel data model, we examine the threshold effects in the productivity of the public capital stocks for a panel of 21 OECD countries observed over 1965-2001. Using the so-called "augmented production function" approach, we estimate various specifications of a Panel Smooth Threshold Regression (PSTR) model recently developed by Gonzalez, Teräsvirta and van Dijk (2004). One of our main results is the existence of strong threshold effects in the relationship between output and private and public inputs: whatever the transition mechanism specified, tests strongly reject the linearity assumption. Moreover, this model allows for cross-country heterogeneity and time instability of the productivity without specifying an ex-ante classification of individuals. Consequently, it is possible to give estimates of the productivity coefficients of both private and public capital stocks at any time and for each country in the sample. Finally, we propose estimates of individual time-varying elasticities that are much more reasonable than those previously published.
Hurlin, C., "Threshold Effects of the Public Capital Productivity: An International Panel Smooth Transition Approach", University of Orléans.
                  Authors: Colletaz
                  Hurlin
                  Coders: Hurlin
                  Last update
                  07/22/2014
                  Ranking
                  31
                  Runs
                  2290
                  Visits
                  677
                  Evaluating Interval Forecasts
                  Abstract
                  A complete theory for evaluating interval forecasts has not been worked out to date. Most of the literature implicitly assumes homoskedastic errors even when this is clearly violated and proceed by merely testing for correct unconditional coverage. Consequently, the author sets out to build a consistent framework for conditional interval forecast evaluation, which is crucial when higher-order moment dynamics are present. The new methodology is demonstrated in an application to the exchange rate forecasting procedures advocated in risk management.
                  Hurlin, C., C. Perignon, "Evaluating Interval Forecasts", International Economic Review, 39, 841-862.
                  Authors: Christoffersen
                  Coders: Hurlin
                  Perignon
                  Last update
                  03/09/2012
                  Ranking
                  32
                  Runs
                  57
                  Visits
                  167
                  Are Public Investment Efficient in Creating Capital Stocks in Developing Countries? Estimates of Government Net Capital Stocks for 26 Developing Countries, 1970-2002
                  Abstract
                  We provide various estimates of the government net capital stocks for a panel of 26 developing countries over the period 1970-2001. These internationally comparable series of public capital are proposed as a complementary solution to the use of public investment flows and to the use of physical measures of infrastructure when one comes to evaluate the productivity of the public capital formation in developing countries. In these estimates based on various assumptions, we attempt to take into account the potential inefficiency of public investment in creating capital.
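The efficiency-adjusted accumulation rule underlying such estimates can be sketched as follows (an illustration consistent with the abstract, with θ the fraction of public investment effectively turned into capital and δ the depreciation rate):

\[
K^g_t = (1 - \delta)\, K^g_{t-1} + \theta\, I^g_t, \qquad 0 < \theta \le 1,
\]

where the standard perpetual inventory method corresponds to θ = 1.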
                  Arestoff, F., and C. Hurlin, "Are Public Investment Efficient in Creating Capital Stocks in Developing Countries? Estimates of Government Net Capital Stocks for 26 Developing Countries, 1970-2002", Economics Bulletin, 30, 1-11.
                  Authors: Arestoff
                  Hurlin
                  Coders: Arestoff
                  Hurlin
                  Last update
                  07/23/2012
                  Ranking
                  46
                  Runs
                  2
                  Visits
                  49
                  The Best of Both Worlds: A Hybrid Approach to Calculating Value at Risk
                  Abstract
                  The hybrid approach combines the two most popular approaches to VaR estimation: RiskMetrics and Historical Simulation. It estimates the VaR of a portfolio by applying exponentially declining weights to past returns and then finding the appropriate percentile of this time-weighted empirical distribution. This new approach is very simple to implement. Empirical tests show a significant improvement in the precision of VaR forecasts using the hybrid approach relative to these popular approaches.
                  Hurlin, C., C. Perignon, "The Best of Both Worlds: A Hybrid Approach to Calculating Value at Risk", Risk, 1, 64-67.
                  Authors: Boudoukh
                  Richardson
                  Whitelaw
                  Coders: Hurlin
                  Perignon
                  Last update
                  07/17/2012
                  Ranking
                  52
                  Runs
                  4
                  Visits
                  67
                  A Theoretical and Empirical Comparison of Systemic Risk Measures: MES versus CoVaR
                  Abstract
In this paper, we propose a theoretical and empirical comparison of two popular systemic risk measures - Marginal Expected Shortfall (MES) and Delta Conditional Value at Risk (ΔCoVaR) - that can be estimated using publicly available data. First, we assume that the time-varying correlation completely captures the dependence between firm and market returns. Under this assumption, we derive three analytical results: (i) we show that the MES corresponds to the product of the conditional ES of market returns and the time-varying beta of the institution, (ii) we give an analytical expression for the ΔCoVaR and show that the CoVaR corresponds to the product of the VaR of the firm's returns and the time-varying linear projection coefficient of the market returns on the firm's returns, and (iii) we derive the ratio of the MES to the ΔCoVaR. Second, we relax this assumption and propose an empirical comparison for a panel of 61 US financial institutions over the period from January 2000 to December 2010. For each measure, we propose a cross-sectional analysis, a time-series comparison, and a ranking analysis of these institutions based on the two measures.
                  Benoit, S., G. Colletaz, C. Hurlin, and C. Perignon, "A Theoretical and Empirical Comparison of Systemic Risk Measures: MES versus CoVaR", SSRN.
                  Authors: Benoit
                  Colletaz
                  Hurlin
                  Perignon
                  Coders: Benoit
                  Colletaz
                  Hurlin
                  Perignon
                  Last update
                  10/25/2012
                  Ranking
                  53
                  Runs
                  181
                  Visits
                  398
                  Value-at-Risk (Chapter 7: Portfolio Risk - Analytical Methods)
                  Abstract
                  Book description: To accommodate sweeping global economic changes, the risk management field has evolved substantially since the first edition of Value at Risk, making this revised edition a must. Updates include a new chapter on liquidity risk, information on the latest risk instruments and the expanded derivatives market, recent developments in Monte Carlo methods, and more. Value at Risk, Second Edition, will help professional risk managers understand, and operate within, today’s dynamic new risk environment.
                  Hurlin, C., C. Perignon, "Value-at-Risk (Chapter 7: Portfolio Risk - Analytical Methods)", McGraw-Hill, Second edition.
                  Authors: Jorion
                  Coders: Hurlin
                  Perignon
                  Last update
                  03/16/2012
                  Ranking
                  33
                  Runs
                  9
                  Visits
                  303
                  Currency Crises Early Warning Systems: why they should be Dynamic
                  Abstract
This paper introduces a new generation of Early Warning Systems (EWS) which takes into account the dynamics, i.e. the persistence, in the binary crisis indicator. We elaborate on Kauppi and Saikkonen (2008), which allows several dynamic specifications to be considered by relying on an exact maximum likelihood estimation method. Applied to the prediction of currency crises for fifteen countries, this new EWS turns out to exhibit significantly better predictive abilities than existing models both within and out of sample, thus vindicating dynamic models in the quest for optimal EWS.
                  Candelon, B., E. Dumitrescu, and C. Hurlin, "Currency Crises Early Warning Systems: why they should be Dynamic", Maastricht University.
                  Authors: Candelon
                  Dumitrescu
                  Hurlin
                  Coders: Candelon
                  Dumitrescu
                  Hurlin
                  Last update
                  06/04/2012
                  Ranking
                  45
                  Runs
                  64
                  Visits
                  112
                  Unit Root Tests in Panel Data: Asymptotic and Finite-Sample Properties
                  Abstract
We consider pooling cross-section time series data for testing the unit root hypothesis. The degree of persistence in the individual regression error, the intercept, and the trend coefficient are allowed to vary freely across individuals. As both the cross-section and time series dimensions of the panel grow large, the pooled t-statistic has a limiting normal distribution that depends on the regression specification but is free from nuisance parameters. Monte Carlo simulations indicate that the asymptotic results provide a good approximation to the test statistics in panels of moderate size, and that the power of the panel-based unit root test is dramatically higher compared to performing a separate unit root test for each individual time series.
                  Hurlin, C., "Unit Root Tests in Panel Data: Asymptotic and Finite-Sample Properties", Journal of Econometrics, 108, 1-24.
                  Authors: Levin
                  Lin
                  Chu
                  Coders: Hurlin
                  Last update
                  06/28/2012
                  Ranking
                  37
                  Runs
                  220
                  Visits
                  322
                  Backtesting Value-at-Risk: From Dynamic Quantile to Dynamic Binary Tests
                  Abstract
In this paper we propose a new tool for backtesting that examines the quality of Value-at-Risk (VaR) forecasts. To date, the most distinguished regression-based backtest, proposed by Engle and Manganelli (2004), relies on a linear model. However, in view of the dichotomic character of the series of violations, a non-linear model seems more appropriate. In this paper we thus propose a new tool for backtesting (denoted DB) based on a dynamic binary regression model. Our discrete-choice model, e.g. probit or logit, links the sequence of violations to a set of explanatory variables including, in particular, the lagged VaR and the lagged violations. It allows us to separately test the unconditional coverage, independence, and conditional coverage hypotheses, and it is easy to implement. Monte Carlo experiments show that the DB test exhibits good small sample properties in realistic sample settings (5% coverage rate with estimation risk). An application to a portfolio composed of three assets included in the CAC40 market index is finally proposed.
                  Hurlin, C., and E. Dumitrescu, "Backtesting Value-at-Risk: From Dynamic Quantile to Dynamic Binary Tests", Finance, 33.
                  Authors: Hurlin
                  Pham
                  Coders: Hurlin
                  Dumitrescu
                  Last update
                  07/05/2012
                  Ranking
                  20
                  Runs
                  46
                  Visits
                  168
                  Appendices for the article "Is Public Capital Really Productive? A Methodological Reappraisal"
                  Abstract
We present an evaluation of the main empirical approaches used in the literature to estimate the contribution of the public capital stock to growth and private factors' productivity. Based on a simple stochastic general equilibrium model, built so as to reproduce the main long-run relations observed in US post-war historical data, we show that the production function approach may not be reliable for estimating this contribution. Our analysis reveals that this approach largely overestimates the public capital elasticity, given the presence of a common stochastic trend shared by all non-stationary inputs.
Hurlin, C., and A. Minea, "Appendices for the article "Is Public Capital Really Productive? A Methodological Reappraisal"", Université d'Orléans.
                  Authors: Hurlin
                  Minea
                  Coders: Hurlin
                  Minea
                  Last update
                  09/26/2012
                  Ranking
                  21
                  Runs
                  N.A.
                  Visits
                  28
                  How To Evaluate an Early Warning System? Towards a unified Statistical Framework for Assessing Financial Crises Forecasting Methods
                  Abstract
This paper proposes an original and unified toolbox to evaluate financial crisis Early Warning Systems (EWS). It presents four main advantages. First, it is a model-free method which can be used to assess the forecasts issued from different EWS (probit, logit, Markov-switching models, or combinations of models). Second, this toolbox can be applied to any type of crisis EWS (currency, banking, sovereign debt, etc.). Third, it not only provides various criteria to evaluate the (absolute) validity of EWS forecasts but also proposes tests to compare the relative performance of alternative EWS. Fourth, our toolbox can be used to evaluate both in-sample and out-of-sample forecasts. Applied to a logit model for twelve emerging countries, we show that the yield spread is a key variable for predicting currency crises exclusively for South-Asian countries. Besides, the optimal cut-off now allows us to correctly identify on average more than two-thirds of the crisis and calm periods.
                  Candelon, B., E. Dumitrescu, and C. Hurlin, "How To Evaluate an Early Warning System? Towards a unified Statistical Framework for Assessing Financial Crises Forecasting Methods", IMF Economic Review, 60.
                  Authors: Candelon
                  Dumitrescu
                  Hurlin
                  Coders: Candelon
                  Dumitrescu
                  Hurlin
                  Last update
                  07/23/2012
                  Ranking
                  34
                  Runs
                  24
                  Visits
                  269
                  Techniques for Verifying the Accuracy of Risk Management Models
                  Abstract
                  Risk exposures are typically quantified in terms of a "Value at Risk" (VaR) estimate. A VaR estimate corresponds to a specific critical value of a portfolio's potential one-day profit and loss probability distribution. Given their function both as internal risk management tools and as potential regulatory measures of risk exposure, it is important to quantify the accuracy of an institution's VaR estimates. This study shows that the formal statistical procedures that would typically be used in performance-based VaR verification tests require large samples to produce a reliable assessment of a model's accuracy in predicting the size and likelihood of very low probability events. Verification test statistics based on historical trading profits and losses have very poor power in small samples, so it does not appear possible for a bank or its supervisor to verify the accuracy of a VaR estimate unless many years of performance data are available. Historical simulation-based verification test statistics also require long samples to generate accurate results: Estimates of 0.01 critical values exhibit substantial errors even in samples as large as ten years of daily data.
                  Hurlin, C., C. Perignon, "Techniques for Verifying the Accuracy of Risk Management Models", Journal of Derivatives, 3, 73-84.
                  Authors: Kupiec
                  Coders: Hurlin
                  Perignon
                  Last update
                  04/17/2012
                  Ranking
                  57
                  Runs
                  26
                  Visits
                  339
                  A Comparative Study of Unit Root Tests with Panel Data and a New Simple Test
                  Abstract
The panel data unit root test suggested by Levin and Lin (LL) has been widely used in several applications, notably in papers on tests of the purchasing power parity hypothesis. This test is based on a very restrictive hypothesis which is rarely of interest in practice. The Im-Pesaran-Shin (IPS) test relaxes the restrictive assumption of the LL test. This paper argues that although the IPS test has been offered as a generalization of the LL test, it is best viewed as a test for summarizing the evidence from a number of independent tests of the same hypothesis. This problem has a long statistical history going back to R. A. Fisher. This paper suggests the Fisher test as a panel data unit root test, compares it with the LL and IPS tests, and with the Bonferroni bounds test which is valid for correlated tests. Overall, the evidence points to the Fisher test with bootstrap-based critical values as the preferred choice. We also suggest the use of the Fisher test for testing stationarity as the null and for testing for cointegration in panel data.
                  Hurlin, C., "A Comparative Study of Unit Root Tests with Panel Data and a New Simple Test", Oxford Bulletin of Economics and Statistics, 61, 631-652.
                  Authors: Maddala
                  Wu
                  Coders: Hurlin
                  Last update
                  10/08/2012
                  Ranking
                  59
                  Runs
                  217
                  Visits
                  129
                  Is Public Capital Really Productive? A Methodological Reappraisal
                  Abstract
We present an evaluation of the main empirical approaches used in the literature to estimate the contribution of the public capital stock to growth and private factors' productivity. Based on a simple stochastic general equilibrium model, built so as to reproduce the main long-run relations observed in US post-war historical data, we show that the production function approach may not be reliable for estimating this contribution. Our analysis reveals that this approach largely overestimates the public capital elasticity, given the presence of a common stochastic trend shared by all non-stationary inputs.
                  Minea, A., and C. Hurlin, "Is Public Capital Really Productive? A Methodological Reappraisal", University of Orleans.
                  Authors: Minea
                  Hurlin
                  Coders: Minea
                  Hurlin
                  Last update
                  09/10/2012
                  Ranking
                  43
                  Runs
                  3
                  Visits
                  42
                  Margin Backtesting
                  Abstract
This paper presents a validation framework for collateral requirements or margins on a derivatives exchange. It can be used by investors, risk managers, and regulators to check the accuracy of a margining system. The statistical tests presented in this study are based either on the number, frequency, magnitude, or timing of margin exceedances, which are defined as situations in which the trading loss of a market participant exceeds his or her margin. We also propose an original way to globally validate the margining system by aggregating the individual backtesting statistics obtained for each market participant.
                  Hurlin, C., and C. Perignon, "Margin Backtesting", University of Orleans, HEC Paris.
                  Authors: Hurlin
                  Perignon
                  Coders: Hurlin
                  Perignon
                  Last update
                  07/23/2014
                  Ranking
                  36
                  Runs
                  377
                  Visits
                  433
                  Testing for Granger Non-causality in Heterogeneous Panels
                  Abstract
This paper proposes a very simple test of Granger (1969) non-causality for heterogeneous panel data models. Our test statistic is based on the individual Wald statistics of Granger non-causality averaged across the cross-section units. First, this statistic is shown to converge sequentially to a standard normal distribution. Second, the semi-asymptotic distribution of the average statistic is characterized for a fixed T sample. A standardized statistic based on an approximation of the moments of the Wald statistics is hence proposed. Third, Monte Carlo experiments show that our standardized panel statistics have very good small sample properties, even in the presence of cross-sectional dependence.
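In symbols, the averaged-Wald idea described above can be sketched as follows (with W_{i,T} the individual Wald statistic for unit i and K the number of tested restrictions):

\[
\overline{W}_{N,T} = \frac{1}{N}\sum_{i=1}^{N} W_{i,T}, \qquad
Z_{N,T} = \sqrt{\frac{N}{2K}}\,\bigl(\overline{W}_{N,T} - K\bigr) \xrightarrow{d} \mathcal{N}(0,1),
\]

where the convergence holds sequentially with T and then N tending to infinity.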
                  Dumitrescu, E., and C. Hurlin, "Testing for Granger Non-causality in Heterogeneous Panels", Economic Modelling, Forthcoming.
                  Authors: Dumitrescu
                  Hurlin
                  Coders: Dumitrescu
                  Hurlin
                  Last update
                  07/12/2017
                  Ranking
                  40
                  Runs
                  451
                  Visits
                  502
                  Value-at-Risk (Chapter 5: Computing VaR)
                  Abstract
                  Book description: To accommodate sweeping global economic changes, the risk management field has evolved substantially since the first edition of Value at Risk, making this revised edition a must. Updates include a new chapter on liquidity risk, information on the latest risk instruments and the expanded derivatives market, recent developments in Monte Carlo methods, and more. Value at Risk will help professional risk managers understand, and operate within, today’s dynamic new risk environment.
                  Hurlin, C., C. Perignon, "Value-at-Risk (Chapter 5: Computing VaR)", MacGraw-Hill, Third Edition.
                  Authors: Jorion
                  Coders: Hurlin
                  Perignon
                  Last update
                  03/19/2012
                  Ranking
                  44
                  Runs
                  63
                  Visits
                  328
                  Backtesting Value-at-Risk Accuracy: A Simple New Test
                  Abstract
This paper proposes a new test of Value-at-Risk (VaR) validation. Our test exploits the idea that the sequence of VaR violations (the hit function) – taking value 1 - α if there is a violation and -α otherwise – for a nominal coverage rate α verifies the properties of a martingale difference if the model used to quantify risk is adequate (Berkowitz et al., 2005). More precisely, we use the multivariate portmanteau statistic of Li and McLeod (1981), an extension to the multivariate framework of the test of Box and Pierce (1970), to jointly test the absence of autocorrelation in the vector of hit sequences for various coverage rates considered relevant for the management of extreme risks. We show that this shift to a multivariate dimension appreciably improves the power properties of the VaR validation test for reasonable sample sizes.
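Written out, the hit-function definition above reads (a transcription, with r_t the return and VaR_{t|t-1}(α) the ex-ante VaR forecast):

\[
Hit_t(\alpha) = \mathbf{1}\bigl(r_t < -VaR_{t|t-1}(\alpha)\bigr) - \alpha,
\qquad
\mathbb{E}\bigl[Hit_t(\alpha) \mid \Omega_{t-1}\bigr] = 0
\]

under a correctly specified risk model, which is precisely the martingale difference property tested here.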
                  Hurlin, C., and S. Tokpavi, "Backtesting Value-at-Risk Accuracy: A Simple New Test", Journal of Risk, 9, 19-37.
                  Authors: Hurlin
                  Tokpavi
                  Coders: Hurlin
                  Tokpavi
                  Last update
                  03/13/2012
                  Ranking
                  49
                  Runs
                  2
                  Visits
                  240
                  Unit Root Tests for Panel Data
                  Abstract
                  This paper develops unit root tests for panel data. These tests are devised under more general assumptions than the tests previously proposed. First, the number of groups in the panel data is assumed to be either finite or infinite. Second, each group is assumed to have different types of nonstochastic and stochastic components. Third, the time series spans for the groups are assumed to be all different. Fourth, the alternative where some groups have a unit root and others do not can be dealt with by the tests. The tests can also be used for the null of stationarity and for cointegration, once relevant changes are made in the model, hypotheses, assumptions and underlying tests. The main idea for our unit root tests is to combine p-values from a unit root test applied to each group in the panel data. Combining p-values to formulate tests is a common practice in meta-analysis. This paper also reports the finite sample performance of our combination unit root tests and Im et al.'s [Mimeo (1995)] t-bar test. The results show that most of the combination tests are more powerful than the t-bar test in finite samples. Application of the combination unit root tests to the post-Bretton Woods US real exchange rate data provides some evidence in favor of the PPP hypothesis.
                  Hurlin, C., "Unit Root Tests for Panel Data", Journal of International Money and Finance, 20, 249-272.
                  Authors: Choi
                  Coders: Hurlin
                  Last update
                  10/08/2012
                  Ranking
                  60
                  Runs
                  62
                  Visits
                  261
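Since the key step in the paper above is combining group-by-group p-values, a minimal Matlab sketch is easy to give (variable names are assumed; the Fisher and inverse-normal combinations shown are the classical meta-analytic forms the tests build on):

    % p: N x 1 vector of p-values from a unit root test applied to each group
    N = numel(p);
    P = -2 * sum(log(p));                   % Fisher-type combination statistic
    pval_P = gammainc(P / 2, N, 'upper');   % chi-square with 2N degrees of freedom
    z = -sqrt(2) * erfcinv(2 * p);          % inverse-normal transforms Phi^{-1}(p_i)
    Z = sum(z) / sqrt(N);                   % Z statistic, standard normal under the null
    pval_Z = 0.5 * erfc(-Z / sqrt(2));      % left-tail p-value (reject for small Z)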
                  Testing for Unit Roots in Heterogeneous Panels
                  Abstract
This paper proposes unit root tests for dynamic heterogeneous panels based on the mean of individual unit root statistics. In particular it proposes a standardized t-bar test statistic based on the (augmented) Dickey–Fuller statistics averaged across the groups. Under a general setting this statistic is shown to converge in probability to a standard normal variate sequentially with T (the time series dimension) →∞, followed by N (the cross sectional dimension) →∞. A diagonal convergence result with T and N→∞ while N/T→k, k being a finite non-negative constant, is also conjectured. In the special case where errors in individual Dickey–Fuller (DF) regressions are serially uncorrelated, a modified version of the standardized t-bar statistic is shown to be distributed as standard normal as N→∞ for a fixed T, so long as T>5 in the case of DF regressions with intercepts and T>6 in the case of DF regressions with intercepts and linear time trends. An exact fixed N and T test is also developed using the simple average of the DF statistics. Monte Carlo results show that if a large enough lag order is selected for the underlying ADF regressions, then the small sample performance of the t-bar test is reasonably satisfactory and generally better than the test proposed by Levin and Lin (Unpublished manuscript, University of California, San Diego, 1993).
                  Hurlin, C., "Testing for Unit Roots in Heterogeneous Panels ", Journal of Econometrics, 115, 53-74.
                  Authors: Im
                  Pesaran
                  Shin
                  Coders: Hurlin
                  Last update
                  10/08/2012
                  Ranking
                  61
                  Runs
                  57
                  Visits
                  106
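The standardization step in the paper above is mechanical once the individual ADF statistics and the tabulated moments are available; a minimal Matlab sketch (the moments Et and Vt must be taken from the paper's tables, so they enter as inputs here):

    % t: N x 1 individual (augmented) Dickey-Fuller t-statistics
    % Et, Vt: tabulated mean and variance of the statistic under the null
    N = numel(t);
    Z = sqrt(N) * (mean(t) - Et) / sqrt(Vt);   % standardized t-bar, N(0,1) under the null
    pval = 0.5 * erfc(-Z / sqrt(2));           % reject the unit root for large negative Z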
                  Testing for a Unit Root in Panels with Dynamic Factors
                  Abstract
                  This paper studies testing for a unit root for large n and T panels in which the cross-sectional units are correlated. To model this cross-sectional correlation, we assume that the data are generated by an unknown number of unobservable common factors. We propose unit root tests in this environment and derive their (Gaussian) asymptotic distribution under the null hypothesis of a unit root and local alternatives. We show that these tests have significant asymptotic power when the model has no incidental trends. However, when there are incidental trends in the model and it is necessary to remove heterogeneous deterministic components, we show that these tests have no power against the same local alternatives. Through Monte Carlo simulations, we provide evidence on the finite sample properties of these new tests.
                  Hurlin, C., "Testing for a Unit Root in Panels with Dynamic Factors", Journal of Econometrics, 122, 81-126.
                  Authors: Moon
                  Perron
                  Coders: Hurlin
                  Last update
                  10/08/2012
                  Ranking
                  62
                  Runs
                  399
                  Visits
                  124
                  Determining the Number of Factors in Approximate Factors Models
                  Abstract
                  In this paper we develop some econometric theory for factor models of large dimensions. The focus is the determination of the number of factors (r), which is an unresolved issue in the rapidly growing literature on multifactor models. We first establish the convergence rate for the factor estimates that will allow for consistent estimation of r. We then propose some panel criteria and show that the number of factors can be consistently estimated using the criteria. The theory is developed under the framework of large cross-sections (N) and large time dimensions (T). No restriction is imposed on the relation between N and T. Simulations show that the proposed criteria have good finite sample properties in many configurations of the panel data encountered in practice.
                  Hurlin, C., "Determining the Number of Factors in Approximate Factors Models", Econometrica, 70, 191-221.
                  Authors: Bai
                  Ng
                  Coders: Hurlin
                  Last update
                  01/29/2013
                  Ranking
                  39
                  Runs
                  66
                  Visits
                  230
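As an illustration of the panel criteria above, here is a hedged Matlab sketch of one of them (the IC_p1 form), with factors estimated by principal components; the data matrix X (T x N, suitably standardized) and the bound kmax are assumed:

    % X: T x N panel; kmax: maximal number of factors considered
    [T, N] = size(X);
    [U, ~, ~] = svd(X, 'econ');                % principal components of X
    IC = zeros(kmax + 1, 1);
    for k = 0:kmax
        if k == 0
            E = X;                             % no factors: residual is X itself
        else
            F = sqrt(T) * U(:, 1:k);           % estimated factors (T x k)
            L = (X' * F) / T;                  % estimated loadings (N x k)
            E = X - F * L';                    % idiosyncratic residuals
        end
        V = sum(E(:).^2) / (N * T);            % average squared residual V(k)
        IC(k + 1) = log(V) + k * ((N + T) / (N * T)) * log(N * T / (N + T));
    end
    [~, i] = min(IC);
    khat = i - 1;                              % estimated number of factors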
                  Maximum Likelihood Methods for Models of Markets in Disequilibrium
                  Abstract
                  For the abstract, please click on: http://www.jstor.org/discover/10.2307/1914215?uid=3738016&uid=2&uid=4&sid=56146953873
                  Hurlin, C., "Maximum Likelihood Methods for Models of Markets in Disequilibrium", Econometrica, 42, 1013-1030.
                  Authors: Maddala
                  Nelson
                  Coders: Hurlin
                  Last update
                  02/15/2013
                  Ranking
                  50
                  Runs
                  85
                  Visits
                  391
                  Why don’t Banks Lend to the Private Sector in Egypt?
                  Abstract
Bank credit to the private sector fell as a share of GDP during the last decade, in spite of a successful bank recapitalization in the mid-2000s and high, stable growth before the recent macroeconomic turmoil. This paper explains this trend through both bank supply factors and private-sector demand for credit. The paper first describes the evolution of the banks' sources and uses of funds over 2005-2011, a period characterized by two different cycles of external capital flows, and then estimates supply and demand equations for credit to the private sector using quarterly data for 1999-2011. The system of simultaneous equations is estimated first under continuous market clearing, and then allowing for transitory disequilibrium. In general, the main results are robust to the market-clearing assumption. Our main findings show that, while real industrial production and the stock market have a significant impact on credit demand, deposits and claims on the government affected the supply of credit in Egypt. Finally, both models yield similar results for the most recent period of private credit contraction: the single most important factor explaining the largest share of the decline is the expansion of banking credit to the public sector. The slowdown in economic activity and the contraction of bank deposits explain the remainder of the predicted contraction in bank credit to the private sector.
                  Herrera, S., C. Hurlin, and C. Zaki, "Why don’t Banks Lend to the Private Sector in Egypt? ", World Bank Working Paper Series.
                  Authors: Herrera
                  Hurlin
                  Zaki
                  Coders: Herrera
                  Hurlin
                  Zaki
                  Last update
                  10/17/2013
                  Ranking
                  56
                  Runs
                  252
                  Visits
                  112
                  Are Public Investment Efficient in Creating Capital Stocks in Developing Countries? Estimates of Government Net Capital Stocks for 26 Developing Countries, 1970-2002
                  Abstract
In many poor countries, the problem is not that governments do not invest, but that these investments do not create productive capital. So the cost of public investments does not correspond to the value of the capital stocks. In this paper, we propose an original nonparametric approach to evaluate the efficiency function that links variations (net of depreciation) of stocks to public investments. We consider four sectors (electricity, telecommunications, roads and railways) of two Latin American countries (Mexico and Colombia). We show that there is a large discrepancy between the amount of investments and the value of increases in stocks.
                  Arestoff, F., and C. Hurlin, "Are Public Investment Efficient in Creating Capital Stocks in Developing Countries? Estimates of Government Net Capital Stocks for 26 Developing Countries, 1970-2002 ", Economics Bulletin, 30, 1515-1531.
                  Authors: Arestoff
                  Hurlin
                  Coders: Arestoff
                  Hurlin
                  Last update
                  10/08/2012
                  Ranking
                  11
                  Runs
                  N.A.
                  Visits
                  26

                  Other Companion Sites on same paper

                  Network Effects and Infrastructure Productivity in Developing Countries

                  Other Companion Sites relative to similar papers

                  Backtesting Value-at-Risk: From Dynamic Quantile to Dynamic Binary Tests
                  Abstract
In this paper we propose a new tool for backtesting that examines the quality of Value-at-Risk (VaR) forecasts. To date, the most distinguished regression-based backtest, proposed by Engle and Manganelli (2004), relies on a linear model. However, in view of the dichotomic character of the series of violations, a non-linear model seems more appropriate. In this paper we thus propose a new tool for backtesting (denoted DB) based on a dynamic binary regression model. Our discrete-choice model (e.g. probit or logit) links the sequence of violations to a set of explanatory variables including, in particular, the lagged VaR and the lagged violations. It allows us to separately test the unconditional coverage, the independence and the conditional coverage hypotheses, and it is easy to implement. Monte Carlo experiments show that the DB test exhibits good small sample properties in realistic sample settings (5% coverage rate with estimation risk). An application to a portfolio composed of three assets included in the CAC40 market index is finally proposed.
                  Hurlin, C., and E. Dumitrescu, "Backtesting Value-at-Risk: From Dynamic Quantile to Dynamic Binary Tests", Finance, 33.
                  Authors: Hurlin
                  Pham
                  Coders: Hurlin
                  Dumitrescu
                  Last update
                  07/05/2012
                  Ranking
                  20
                  Runs
                  46
                  Visits
                  168
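The core regression of the DB test above is easy to sketch. A toy Matlab illustration using glmfit from the Statistics Toolbox (the single lag and the choice of regressors are assumptions; the paper's specification is richer):

    % I: T x 1 binary VaR violation indicator; VaR: T x 1 VaR forecasts
    y = I(2:end);
    X = [I(1:end-1), VaR(1:end-1)];                 % lagged violation and lagged VaR
    b = glmfit(X, y, 'binomial', 'link', 'logit');  % dynamic binary (logit) model
    % Testing that the coefficients on the dynamic regressors are jointly zero
    % (Wald or LR test) then delivers the independence and coverage checks.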
                  The Risk Map: A New Tool for Validating Risk Models
                  Abstract
This paper presents a new tool for validating risk models. This tool, called the Risk Map, jointly accounts for the number and the magnitude of extreme losses and graphically summarizes all information about the performance of a risk model. It relies on the concept of a Value-at-Risk (VaR) super exception, which is defined as a situation in which the loss exceeds both the standard VaR and a VaR defined at an extremely low coverage probability. We then formally test whether the sequences of exceptions and super exceptions are rejected by standard model validation tests. We show that the Risk Map can be used to validate market, credit, operational, or systemic (e.g. CoVaR) risk estimates or to assess the performance of the margin system of a clearing house.
                  Colletaz, G., C. Hurlin, and C. Perignon, "The Risk Map: A New Tool for Validating Risk Models", SSRN.
                  Authors: Colletaz
                  Hurlin
                  Perignon
                  Coders: Colletaz
                  Hurlin
                  Perignon
                  Last update
                  07/25/2013
                  Ranking
                  51
                  Runs
                  146
                  Visits
                  431
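A minimal sketch of the two exception sequences on which the Risk Map above is built (names and the two coverage levels are assumptions):

    % loss: T x 1 realized losses; VaR1: VaR at the standard coverage (e.g. 1%);
    % VaR2: VaR at a much lower coverage (e.g. 0.2%), so that VaR2 > VaR1
    exc = loss > VaR1;              % standard VaR exceptions
    sup = loss > VaR2;              % super exceptions
    N1 = sum(exc); N2 = sum(sup);
    % The Risk Map plots (N1, N2) against the joint acceptance region of a
    % frequency test applied to both sequences.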
                  Why Simple Shrinkage is Still Relevant for Redundant Representations?
                  Abstract
                  Shrinkage is a well known and appealing denoising technique, introduced originally by Donoho and Johnstone in 1994. The use of shrinkage for denoising is known to be optimal for Gaussian white noise, provided that the sparsity on the signal’s representation is enforced using a unitary transform. Still, shrinkage is also practiced with non-unitary, and even redundant representations, typically leading to very satisfactory results. In this paper we shed some light on this behavior. The main argument in this paper is that such simple shrinkage could be interpreted as the first iteration of an algorithm that solves the basis pursuit denoising (BPDN) problem. While the desired solution of BPDN is hard to obtain in general, we develop in this paper a simple iterative procedure for the BPDN minimization that amounts to step-wise shrinkage. We demonstrate how the simple shrinkage emerges as the first iteration of this novel algorithm. Furthermore, we show how shrinkage can be iterated, turning into an effective algorithm that minimizes the BPDN via simple shrinkage steps, in order to further strengthen the denoising effect.
                  Elad, M., "Why Simple Shrinkage is Still Relevant for Redundant Representations?", IEEE Transactions on Information Theory , 52, 5559-5569.
                  Authors: Elad
                  Coders: Elad
                  Last update
                  07/06/2012
                  Ranking
                  28
                  Runs
                  4
                  Visits
                  30
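The iterative procedure alluded to in the abstract above can be sketched as iterated soft-thresholding. This toy Matlab version (dictionary D, signal y, penalty lambda and the iteration count are all assumed) makes the "first iteration equals simple shrinkage" point concrete: starting from x = 0, the first update is exactly one shrinkage of D'y.

    % D: n x m (possibly redundant) dictionary; y: n x 1 signal; lambda: penalty
    % Objective: min over x of 0.5*||y - D*x||^2 + lambda*||x||_1
    x = zeros(size(D, 2), 1);
    mu = 1 / norm(D)^2;                            % step size below 1/||D||^2
    soft = @(z, t) sign(z) .* max(abs(z) - t, 0);  % soft-thresholding (shrinkage)
    for it = 1:200
        x = soft(x + mu * D' * (y - D * x), mu * lambda);
    end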
                  Backtesting Value-at-Risk: A GMM Duration-based Test
                  Abstract
This paper proposes a new duration-based backtesting procedure for VaR forecasts. The GMM test framework proposed by Bontemps (2006) to test for a distributional assumption (here, the geometric distribution) is applied to the case of VaR forecast validity. Using a simple J-statistic based on the moments defined by the orthonormal polynomials associated with the geometric distribution, this new approach tackles most of the drawbacks usually associated with duration-based backtesting procedures. First, its implementation is extremely easy. Second, it allows for separate tests of the unconditional coverage, independence and conditional coverage hypotheses (Christoffersen, 1998). Third, Monte Carlo simulations show that for realistic sample sizes, our GMM test outperforms traditional duration-based tests. In addition, we study the consequences of estimation risk on duration-based backtesting tests and propose a sub-sampling approach for robust inference derived from Escanciano and Olmo (2009). An empirical application to Nasdaq returns confirms that using the GMM test has major consequences for the ex-post evaluation of risk by regulatory authorities.
                  Colletaz, G., B. Candelon, C. Hurlin, and S. Tokpavi, "Backtesting Value-at-Risk: A GMM Duration-based Test", Journal of Financial Econometrics, 9(2), 314-343 .
                  Authors: Candelon
                  Colletaz
                  Hurlin
                  Tokpavi
                  Coders: Colletaz
                  Candelon
                  Hurlin
                  Tokpavi
                  Last update
                  06/28/2012
                  Ranking
                  55
                  Runs
                  23
                  Visits
                  291
                  A New Approach to Comparing VaR Estimation Methods
                  Abstract
                  We develop a novel backtesting framework based on multidimensional Value-at-Risk (VaR) that focuses on the left tail of the distribution of the bank trading revenues. Our coverage test is a multivariate generalization of the unconditional test of Kupiec (Journal of Derivatives, 1995). Applying our method to actual daily bank trading revenues, we find that non-parametric VaR methods, such as GARCH-based methods or filtered Historical Simulation, work best for bank trading revenues.
Perignon, C., D. Smith, and C. Hurlin, "A New Approach to Comparing VaR Estimation Methods", Journal of Derivatives, 15, 54-66.
                  Authors: Perignon
                  Smith
                  Coders: Perignon
                  Smith
                  Hurlin
                  Last update
                  07/16/2012
                  Ranking
                  54
                  Runs
                  11
                  Visits
                  270
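The univariate building block generalized in the paper above is Kupiec's unconditional coverage LR test; a short Matlab sketch (variable names assumed, degenerate counts ignored):

    % hit: T x 1 binary violation indicator; alpha: nominal coverage rate
    T = numel(hit); N = sum(hit); pihat = N / T;
    LRuc = -2 * (N * log(alpha) + (T - N) * log(1 - alpha) ...
               - N * log(pihat) - (T - N) * log(1 - pihat));
    pval = gammainc(LRuc / 2, 1/2, 'upper');   % chi-square(1) upper tail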
                  Structural Sign Patterns and Reduced Form Restrictions
                  Abstract
This paper reconsiders the degree to which the sign patterns of hypothesized structural arrays limit the possible outcomes for the sign pattern of the corresponding estimated reduced form. The conditions under which any structural restrictions would apply were believed to be very narrow, rarely found to apply, and virtually never investigated. As a result, current practice does not test the structural hypothesis in terms of the comparison of the estimated reduced form and the permissible reduced form sign patterns. This paper shows that such tests are always possible. Namely, the sign patterns of the hypothesized structural arrays always limit the sign patterns that can be taken on by the estimated reduced form. Given this, it is always possible to falsify a structural hypothesis based only upon the sign pattern proposed. Necessary conditions, algorithmic principles, and examples are provided to illustrate the analytic principle and the means of its application.
                  Buck, J. A., and G. M. Lady, "Structural Sign Patterns and Reduced Form Restrictions", Economic Modelling, 29, 462-470.
                  Authors: Buck
                  Lady
                  Coders: Buck
                  Lady
                  Last update
                  07/18/2012
                  Ranking
                  23
                  Runs
                  N.A.
                  Visits
                  20
                  Structural Models, Information and Inherited Restrictions
                  Abstract
The derived structural estimates of the system βY = γZ + δU impose identifying restrictions on the reduced form estimates ex post. Some or all of the derived structural estimates are presented as evidence of the model's efficacy. In fact, the reduced form inherits a great deal of information from the structure's restrictions and hypothesized sign patterns, limiting the allowable signs for the reduced form. A method for measuring a structural model's statistical information content is proposed. Further, the paper develops a method for enumerating the allowable reduced form outcomes which can be used to falsify the proposed model independently of significant coefficients found for the structural relations.
                  Buck, J. A., and G. M. Lady, "Structural Models, Information and Inherited Restrictions", Economic Modelling, 28, 2820-2831.
                  Authors: Buck
                  Lady
                  Coders: Buck
                  Lady
                  Last update
                  07/18/2012
                  Ranking
                  24
                  Runs
                  N.A.
                  Visits
                  28
                  Asymptotic Distribution-Free Diagnostic Tests For Heteroskedastic Time Series
                  Abstract
This article investigates model checks for a class of possibly nonlinear heteroskedastic time series models, including but not restricted to ARMA-GARCH models. We propose omnibus tests based on functionals of certain weighted standardized residual empirical processes. The new tests are asymptotically distribution-free, suitable when the conditioning set is infinite-dimensional, and consistent against a class of Pitman's local alternatives converging at the parametric rate n^(-1/2), with n the sample size. A Monte Carlo study shows that the simulated level of the proposed tests is close to the asymptotic level already for moderate sample sizes and that the tests have a satisfactory power performance. Finally, we illustrate our methodology with an application to the well-known S&P 500 daily stock index. The paper also contains an asymptotic uniform expansion for weighted residual empirical processes when initial conditions are considered, a result of independent interest.
                  Colletaz, G., "Asymptotic Distribution-Free Diagnostic Tests For Heteroskedastic Time Series", Econometric Theory, 26(03), 744-773.
                  Authors: Escanciano
                  Coders: Colletaz
                  Last update
                  12/06/2013
                  Ranking
                  18
                  Runs
                  39
                  Visits
                  218
                  Volatility Forecast Comparison Using Imperfect Volatility Proxies
                  Abstract
                  The use of a conditionally unbiased, but imperfect, volatility proxy can lead to undesirable outcomes in standard methods for comparing conditional variance forecasts. We motivate our study with analytical results on the distortions caused by some widely used loss functions, when used with standard volatility proxies such as squared returns, the intra-daily range or realised volatility. We then derive necessary and sufficient conditions on the functional form of the loss function for the ranking of competing volatility forecasts to be robust to the presence of noise in the volatility proxy, and derive some useful special cases of this class of “robust” loss functions. The methods are illustrated with an application to the volatility of returns on IBM over the period 1993 to 2003.
                  Patton, J. A., "Volatility Forecast Comparison Using Imperfect Volatility Proxies", Journal of Econometrics, 160, 246-256.
                  Authors: Patton
                  Coders: Patton
                  Last update
                  11/17/2012
                  Ranking
                  1
                  Runs
                  90
                  Visits
                  993
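Two members of the robust class characterized in the paper above are the familiar MSE and QLIKE losses; a two-line Matlab sketch, with sigma2hat a conditionally unbiased proxy (e.g. squared returns) and h the variance forecast (names assumed, QLIKE in one common normalization):

    % sigma2hat: T x 1 volatility proxy; h: T x 1 variance forecasts
    mse   = mean((sigma2hat - h).^2);                        % MSE loss
    qlike = mean(sigma2hat ./ h - log(sigma2hat ./ h) - 1);  % QLIKE loss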
                  Are Public Investment Efficient in Creating Capital Stocks in Developing Countries? Estimates of Government Net Capital Stocks for 26 Developing Countries, 1970-2002
                  Abstract
                  We provide various estimates of the government net capital stocks for a panel of 26 developing countries over the period 1970-2001. These internationally comparable series of public capital are proposed as a complementary solution to the use of public investment flows and to the use of physical measures of infrastructure when one comes to evaluate the productivity of the public capital formation in developing countries. In these estimates based on various assumptions, we attempt to take into account the potential inefficiency of public investment in creating capital.
                  Arestoff, F., and C. Hurlin, "Are Public Investment Efficient in Creating Capital Stocks in Developing Countries? Estimates of Government Net Capital Stocks for 26 Developing Countries, 1970-2002", Economics Bulletin, 30, 1-11.
                  Authors: Arestoff
                  Hurlin
                  Coders: Arestoff
                  Hurlin
                  Last update
                  07/23/2012
                  Ranking
                  46
                  Runs
                  2
                  Visits
                  49
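The construction behind such capital stock series is a perpetual inventory recursion in which only a fraction of investment spending becomes productive capital; a minimal Matlab sketch (initial stock K0, depreciation rate delta and efficiency parameter theta are assumed inputs):

    % I: T x 1 public investment flows; K0: initial stock; delta: depreciation;
    % theta in [0,1]: fraction of investment actually turned into capital
    T = numel(I);
    K = zeros(T, 1); K(1) = K0;
    for t = 2:T
        K(t) = (1 - delta) * K(t-1) + theta * I(t);
    end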
                  The Best of Both Worlds: A Hybrid Approach to Calculating Value at Risk
                  Abstract
                  The hybrid approach combines the two most popular approaches to VaR estimation: RiskMetrics and Historical Simulation. It estimates the VaR of a portfolio by applying exponentially declining weights to past returns and then finding the appropriate percentile of this time-weighted empirical distribution. This new approach is very simple to implement. Empirical tests show a significant improvement in the precision of VaR forecasts using the hybrid approach relative to these popular approaches.
                  Hurlin, C., C. Perignon, "The Best of Both Worlds: A Hybrid Approach to Calculating Value at Risk", Risk, 1, 64-67.
                  Authors: Boudoukh
                  Richardson
                  Whitelaw
                  Coders: Hurlin
                  Perignon
                  Last update
                  07/17/2012
                  Ranking
                  52
                  Runs
                  4
                  Visits
                  67
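The hybrid estimator above is short enough to sketch in full; a toy Matlab version (decay parameter lambda and coverage rate alpha assumed):

    % r: T x 1 past returns (oldest first); lambda in (0,1): decay; alpha: coverage
    T = numel(r);
    w = lambda .^ ((T-1):-1:0)';     % exponentially declining weights, newest largest
    w = w / sum(w);                  % normalize to sum to one
    [rs, ix] = sort(r);              % sort returns in ascending order
    cw = cumsum(w(ix));              % cumulative weight of the sorted returns
    VaR = -rs(find(cw >= alpha, 1, 'first'));   % alpha-quantile of the weighted distribution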
                  Testing for Granger Causality in Heterogeneous Mixed Panels
                  Abstract
In this paper, we propose a simple Granger causality procedure based on meta-analysis in heterogeneous mixed panels. First, we examine the finite sample properties of the causality test through Monte Carlo experiments for panels characterized by both cross-section independence and cross-section dependence. Then, we apply the procedure to investigate the export-led growth hypothesis in a panel of twenty OECD countries.
                  Emirmahmutoglu, F., "Testing for Granger Causality in Heterogeneous Mixed Panels ", Economic Modelling, 28, 870-876.
                  Authors: Emirmahmutoglu
                  Kose
                  Coders: Emirmahmutoglu
                  Last update
                  03/19/2013
                  Ranking
                  9999
                  Runs
                  N.A.
                  Visits
                  N.A.
                  How to Forecast Long-Run Volatility: Regime Switching and the Estimation of Multifractal Processes
                  Abstract
                  We propose a discrete-time stochastic volatility model in which regime switching serves three purposes. First, changes in regimes capture low-frequency variations. Second, they specify intermediate-frequency dynamics usually assigned to smooth autoregressive transitions. Finally, high-frequency switches generate substantial outliers. Thus a single mechanism captures three features that are typically viewed as distinct in the literature. Maximum-likelihood estimation is developed and performs well in finite samples. Using exchange rates, we estimate a version of the process with four parameters and more than a thousand states. The multifractal outperforms GARCH, MS-GARCH, and FIGARCH in- and out-of-sample. Considerable gains in forecasting accuracy are obtained at horizons of 10 to 50 days.
                  Calvet, E. L., and A. J. Fisher, "How to Forecast Long-Run Volatility: Regime Switching and the Estimation of Multifractal Processes", Journal of Financial Econometrics, 2, 49-83.
                  Authors: Calvet
                  Fisher
                  Coders: Calvet
                  Fisher
                  Last update
                  07/23/2012
                  Ranking
                  6
                  Runs
                  118
                  Visits
                  470
                  Testing for Unit Roots in the Presence of Uncertainty Over Both the Trend and Initial Condition
                  Abstract
                  In this paper we provide a joint treatment of two major problems that surround testing for a unit root in practice: uncertainty as to whether or not a linear deterministic trend is present in the data, and uncertainty as to whether the initial condition of the process is (asymptotically) negligible or not. We suggest decision rules based on the union of rejections of four standard unit root tests (OLS and quasi-differenced demeaned and detrended ADF unit root tests), along with information regarding the magnitude of the trend and initial condition, to allow simultaneously for both trend and initial condition uncertainty.
                  Colletaz, G., "Testing for Unit Roots in the Presence of Uncertainty Over Both the Trend and Initial Condition", Journal of Econometrics, 169, 188-95.
                  Authors: Harvey
                  Leybourne
                  Taylor
                  Coders: Colletaz
                  Last update
                  10/08/2012
                  Ranking
                  48
                  Runs
                  20
                  Visits
                  44
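The decision rule above is essentially one line; a schematic Matlab rendering (the four statistics, their left-tail critical values and the size-controlling scaling constant psi are assumed to be supplied, e.g. from the paper's tables):

    % t4: 1 x 4 ADF statistics (OLS and quasi-differenced, demeaned and detrended)
    % cv: 1 x 4 matching left-tail critical values; psi: scaling constant
    reject = any(t4 < psi * cv);   % union of rejections: reject if any test rejects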
                  Bartlett's Formula for a General Class of Non Linear Processes
                  Abstract
                  A Bartlett-type formula is proposed for the asymptotic distribution of the sample autocorrelations of nonlinear processes. The asymptotic covariances between sample autocorrelations are expressed as the sum of two terms. The first term corresponds to the standard Bartlett's formula for linear processes, involving only the autocorrelation function of the observed process. The second term, which is specific to nonlinear processes, involves the autocorrelation function of the observed process, the kurtosis of the linear innovation process and the autocorrelation function of its square. This formula is obtained under a symmetry assumption on the linear innovation process. It is illustrated on ARMA–GARCH models and compared to the standard formula. An empirical application on financial time series is proposed.
                  Francq, C., and J. Zakoian, "Bartlett's Formula for a General Class of Non Linear Processes", Journal of Time Series Analysis, 30, 449-465.
                  Authors: Francq
                  Zakoian
                  Coders: Francq
                  Zakoian
                  Last update
                  07/23/2012
                  Ranking
                  8
                  Runs
                  65
                  Visits
                  522
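For reference, the standard linear-process Bartlett formula, which forms the first term of the paper's decomposition, can be written as follows (LaTeX; this is the textbook expression, not the paper's nonlinear extension):

    \[
    \lim_{n\to\infty} n\,\mathrm{Cov}\big(\hat\rho(h),\hat\rho(k)\big)
    = \sum_{j=-\infty}^{+\infty}\Big[\rho(j+h)\rho(j+k) + \rho(j-h)\rho(j+k)
      + 2\rho(h)\rho(k)\rho(j)^2 - 2\rho(h)\rho(j)\rho(j+k) - 2\rho(k)\rho(j)\rho(j+h)\Big].
    \]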
Forecasting Expected Shortfall with a Generalized Asymmetric Student-t Distribution
                  Abstract
Financial returns typically display heavy tails and some skewness, and conditional variance models with these features often outperform more limited models. The difference in performance may be especially important in estimating quantities that depend on tail features, including risk measures such as the expected shortfall. Here, using a recent generalization of the asymmetric Student-t distribution that allows separate parameters to control skewness and the thickness of each tail, we fit daily financial returns and forecast expected shortfall for the S&P 500 index and a number of individual company stocks; the generalized distribution is used for the standardized innovations in a nonlinear, asymmetric GARCH-type model. The results provide empirical evidence for the usefulness of the generalized distribution in improving prediction of downside market risk of financial assets.
Galbraith, W. J., and D. Zhu, "Forecasting Expected Shortfall with a Generalized Asymmetric Student-t Distribution", Centre interuniversitaire de recherche en analyse des organisations.
                  Authors: Galbraith
                  Zhu
                  Coders: Galbraith
                  Zhu
                  Last update
                  07/27/2012
                  Ranking
                  41
                  Runs
                  9
                  Visits
                  143
                  Margin Backtesting
                  Abstract
This paper presents a validation framework for collateral requirements or margins on a derivatives exchange. It can be used by investors, risk managers, and regulators to check the accuracy of a margining system. The statistical tests presented in this study are based either on the number, frequency, magnitude, or timing of margin exceedances, which are defined as situations in which the trading loss of a market participant exceeds his or her margin. We also propose an original way to globally validate the margining system by aggregating individual backtesting statistics obtained for each market participant.
                  Hurlin, C., and C. Perignon, "Margin Backtesting", University of Orleans, HEC Paris.
                  Authors: Hurlin
                  Perignon
                  Coders: Hurlin
                  Perignon
                  Last update
                  07/23/2014
                  Ranking
                  36
                  Runs
                  377
                  Visits
                  433
                  Testing Interval Forecasts: A GMM-Based Approach
                  Abstract
This paper proposes a new evaluation framework for interval forecasts. Our model-free test can be used to evaluate interval forecasts and/or high-density regions, which may be discontinuous and/or asymmetric. Using a simple J-statistic based on the moments defined by the orthonormal polynomials associated with the binomial distribution, this new approach presents many advantages. First, its implementation is extremely easy. Second, it allows for separate tests of the unconditional coverage, independence and conditional coverage hypotheses. Third, Monte Carlo simulations show that for realistic sample sizes, our GMM test outperforms the traditional LR test. These results are corroborated by an empirical application on the S&P 500 and Nikkei stock market indexes, which confirms that using a GMM test has major consequences for the ex-post evaluation of interval forecasts produced by linear versus nonlinear models.
                  Dumitrescu, E., C. Hurlin, "Testing Interval Forecasts: A GMM-Based Approach", Journal of Forecasting, -.
                  Authors: Dumitrescu
                  Hurlin
                  Madkour
                  Coders: Dumitrescu
                  Hurlin
                  Last update
                  06/05/2012
                  Ranking
                  10
                  Runs
                  29
                  Visits
                  340
                  A Theoretical and Empirical Comparison of Systemic Risk Measures: MES versus CoVaR
                  Abstract
                  In this paper, we propose a theoretical and empirical comparison of two popular systemic risk measures - Marginal Expected Shortfall (MES) and Delta Conditional Value at Risk (ΔCoVaR) - that can be estimated using publicly available data. First, we assume that the time-varying correlation completely captures the dependence between firm and market returns. Under this assumption, we derive three analytical results: (i) we show that the MES corresponds to the product of the conditional ES of market returns and the time-varying beta of this institution, (ii) we give an analytical expression of the ΔCoVaR and show that the CoVaR corresponds to the product of the VaR of the firm's returns and the time-varying linear projection coefficient of the market returns on the firm's returns and (iii) we derive the ratio of the MES to the ΔCoVaR. Second, we relax this assumption and propose an empirical comparison for a panel of 61 US financial institutions over the period from January 2000 to December 2010. For each measure, we propose a cross-sectional analysis, a time-series comparison and rankings analysis of these institutions based on the two measures.
                  Benoit, S., G. Colletaz, C. Hurlin, and C. Perignon, "A Theoretical and Empirical Comparison of Systemic Risk Measures: MES versus CoVaR", SSRN.
                  Authors: Benoit
                  Colletaz
                  Hurlin
                  Perignon
                  Coders: Benoit
                  Colletaz
                  Hurlin
                  Perignon
                  Last update
                  10/25/2012
                  Ranking
                  53
                  Runs
                  181
                  Visits
                  398
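A schematic LaTeX rendering of analytical results (i) and (ii) as stated in the abstract above (subscripts and quantile arguments are shorthand assumptions; see the paper for the exact expressions):

    \[
    \mathrm{MES}_{it}(\alpha) = \beta_{it}\,\mathrm{ES}_{mt}(\alpha),
    \qquad
    \Delta\mathrm{CoVaR}_{it}(\alpha) = \gamma_{it}\,\mathrm{VaR}_{it}(\alpha),
    \]

where beta_it is the institution's time-varying beta and gamma_it the time-varying coefficient of the linear projection of market returns on the firm's returns.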
                  Is Public Capital Really Productive? A Methodological Reappraisal
                  Abstract
                  We present an evaluation of the main empirical approaches used in the literature to estimate the contribution of public capital stock to growth and private factors' productivity. Based on a simple stochastic general equilibrium model, built as to reproduce the main long-run relations observed in US post-war historical data, we show that the production function approach may not be reliable to estimate this contribution. Our analysis reveals that this approach largely overestimates the public capital elasticity, given the presence of a common stochastic trend shared by all non-stationary inputs.
                  Minea, A., and C. Hurlin, "Is Public Capital Really Productive? A Methodological Reappraisal", University of Orleans.
                  Authors: Minea
                  Hurlin
                  Coders: Minea
                  Hurlin
                  Last update
                  09/10/2012
                  Ranking
                  43
                  Runs
                  3
                  Visits
                  42
                  Appendices for the article "Is Public Capital Really Productive? A Methodological Reappraisal"
                  Abstract
                  We present an evaluation of the main empirical approaches used in the literature to estimate the contribution of public capital stock to growth and private factors’ productivity. Based on a simple stochastic general equilibrium model, built as to reproduce the main long-run relations observed in US post-war historical data, we show that the production function approach may not be reliable to estimate this contribution. Our analysis reveals that this approach largely overestimates the public capital elasticity, given the presence of a common stochastic trend shared by all non-stationary inputs.
                  Hurlin, C., and A. Minea, "Appendices for the article "Is Public Capital Really Productive? A Methodological Reappraisal" ", Université d'Orléans.
                  Authors: Hurlin
                  Minea
                  Coders: Hurlin
                  Minea
                  Last update
                  09/26/2012
                  Ranking
                  21
                  Runs
                  N.A.
                  Visits
                  28
                  Testing for Granger Non-causality in Heterogeneous Panels
                  Abstract
This paper proposes a very simple test of Granger (1969) non-causality for heterogeneous panel data models. Our test statistic is based on the individual Wald statistics of Granger non-causality averaged across the cross-section units. First, this statistic is shown to converge sequentially to a standard normal distribution. Second, the semi-asymptotic distribution of the average statistic is characterized for a fixed T sample. A standardized statistic based on an approximation of the moments of Wald statistics is hence proposed. Third, Monte Carlo experiments show that our standardized panel statistics have very good small sample properties, even in the presence of cross-sectional dependence.
                  Dumitrescu, E., and C. Hurlin, "Testing for Granger Non-causality in Heterogeneous Panels", Economic Modelling, Forthcoming.
                  Authors: Dumitrescu
                  Hurlin
                  Coders: Dumitrescu
                  Hurlin
                  Last update
                  07/12/2017
                  Ranking
                  40
                  Runs
                  451
                  Visits
                  502
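The asymptotic version of the standardized statistic above is simple enough to sketch in Matlab (names assumed; the fixed-T version replaces K and 2K by the approximated moments of the individual Wald statistics):

    % W: N x 1 individual Wald statistics of Granger non-causality; K: lag order
    N = numel(W);
    Z = sqrt(N / (2 * K)) * (mean(W) - K);   % standardized average Wald statistic
    pval = erfc(abs(Z) / sqrt(2));           % two-sided N(0,1) p-value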
                  Monotonicity in Asset Returns: New Tests with Applications to the Term Structure, the CAPM, and Portfolio Sorts
                  Abstract
Many theories in finance imply monotonic patterns in expected returns and other financial variables. The liquidity preference hypothesis predicts higher expected returns for bonds with longer times to maturity; the Capital Asset Pricing Model (CAPM) implies higher expected returns for stocks with higher betas; and standard asset pricing models imply that the pricing kernel is declining in market returns. The full set of implications of monotonicity is generally not exploited in empirical work, however. This paper proposes new and simple ways to test for monotonicity in financial variables and compares the proposed tests with extant alternatives such as t-tests, Bonferroni bounds, and multivariate inequality tests through empirical applications and simulations.
                  Patton, J. A., and A. Timmermann, "Monotonicity in Asset Returns: New Tests with Applications to the Term Structure, the CAPM, and Portfolio Sorts", Journal of Financial Economics, 98, 605-625.
                  Authors: Patton
                  Timmermann
                  Coders: Patton
                  Timmermann
                  Last update
                  11/17/2012
                  Ranking
                  63
                  Runs
                  19
                  Visits
                  116
                  Maximum Likelihood Estimation of Discretely Sampled Diffusions: A Closed-Form Approximation Approach
                  Abstract
When a continuous-time diffusion is observed only at discrete dates, in most cases the transition distribution and hence the likelihood function of the observations is not explicitly computable. Using Hermite polynomials, I construct an explicit sequence of closed-form functions and show that it converges to the true (but unknown) likelihood function. I document that the approximation is very accurate and prove that maximizing the sequence results in an estimator that converges to the true maximum likelihood estimator and shares its asymptotic properties. Monte Carlo evidence reveals that this method outperforms other approximation schemes in situations relevant for financial models.
                  Aït-Sahalia, Y., "Maximum Likelihood Estimation of Discretely Sampled Diffusions: A Closed-Form Approximation Approach", Econometrica, 70, 223-262.
                  Authors: Aït-Sahalia
                  Coders: Aït-Sahalia
                  Last update
                  10/29/2014
                  Ranking
                  2
                  Runs
                  119
                  Visits
                  621
                  Adaptive Estimation of Vector Autoregressive Models with Time-Varying Variance: Application to Testing Linear Causality in Mean
                  Abstract
Linear Vector AutoRegressive (VAR) models where the innovations could be unconditionally heteroscedastic and serially dependent are considered. The volatility structure is deterministic and quite general, including breaks or trending variances as special cases. In this framework we propose Ordinary Least Squares (OLS), Generalized Least Squares (GLS) and Adaptive Least Squares (ALS) procedures. The GLS estimator requires the knowledge of the time-varying variance structure, while in the ALS approach the unknown variance is estimated by kernel smoothing with the outer product of the OLS residual vectors. Different bandwidths for the different cells of the time-varying variance matrix are also allowed. We derive the asymptotic distribution of the proposed estimators for the VAR model coefficients and compare their properties. In particular, we show that the ALS estimator is asymptotically equivalent to the infeasible GLS estimator. This asymptotic equivalence is obtained uniformly with respect to the bandwidth(s) in a given range and hence justifies data-driven bandwidth rules. Using these results, we build Wald tests for linear Granger causality in mean which are adapted to VAR processes driven by errors with a nonstationary volatility. It is also shown that the commonly used standard Wald test for linear Granger causality in mean is potentially unreliable in our framework (incorrect level and lower asymptotic power). Monte Carlo and real-data experiments illustrate the use of the different estimation approaches for the analysis of VAR models with time-varying variance innovations.
                  Raïssi, H., "Adaptive Estimation of Vector Autoregressive Models with Time-Varying Variance: Application to Testing Linear Causality in Mean", IRMAR-INSA and CREST ENSAI.
                  Authors: Patilea
                  Coders: Raïssi
                  Last update
                  10/08/2012
                  Ranking
                  58
                  Runs
                  9
                  Visits
                  169
                  Copula-Based Models for Financial Time Series
                  Abstract
This paper presents an overview of the literature on applications of copulas in the modelling of financial time series. Copulas have been used both in multivariate time series analysis, where they are used to characterise the (conditional) cross-sectional dependence between individual time series, and in univariate time series analysis, where they are used to characterise the dependence between a sequence of observations of a scalar time series process. The paper includes a broad but brief review of the many applications of copulas in finance and economics.
                  Patton, J. A., "Copula-Based Models for Financial Time Series", Handbook of Financial Time Series, Springer Verlag, -.
                  Authors: Patton
                  Coders: Patton
                  Last update
                  10/08/2012
                  Ranking
                  3
                  Runs
                  38
                  Visits
                  572
                  Mixed Logit with Repeated Choices: Households' Choices of Appliance Efficiency Level
                  Abstract
                  Mixed logit models, also called random-parameters or error-components logit, are a generalization of standard logit that do not exhibit the restrictive "independence from irrelevant alternatives" property and explicitly account for correlations in unobserved utility over repeated choices by each customer. Mixed logits are estimated for households’ choices of appliances under utility-sponsored programs that offer rebates or loans on high-efficiency appliances.
                  Train, K., "Mixed Logit with Repeated Choices: Households' Choices of Appliance Efficiency Level", The Review of Economics and Statistics, 80, 647-657.
                  Authors: Revelt
                  Train
                  Coders: Train
                  Last update
                  06/05/2012
                  Ranking
                  7
                  Runs
                  6
                  Visits
                  243
                  Threshold Effects of the Public Capital Productivity : An International Panel Smooth Transition Approach
                  Abstract
Using a nonlinear panel data model, we examine the threshold effects in the productivity of the public capital stocks for a panel of 21 OECD countries observed over 1965-2001. Using the so-called "augmented production function" approach, we estimate various specifications of a Panel Smooth Threshold Regression (PSTR) model recently developed by Gonzalez, Teräsvirta and Van Dijk (2004). One of our main results is the existence of strong threshold effects in the relationship between output and private and public inputs: whatever the transition mechanism specified, tests strongly reject the linearity assumption. Moreover, this model allows for cross-country heterogeneity and time instability of the productivity without specifying an ex-ante classification over individuals. Consequently, it is possible to give estimates of productivity coefficients for both private and public capital stocks at any time and for each country in the sample. Finally, we propose estimates of individual time-varying elasticities that are much more reasonable than those previously published.
                  Hurlin, C., "Threshold Effects of the Public Capital Productivity : An International Panel Smooth Transition Approach", University of Orléans.
                  Authors: Colletaz
                  Hurlin
                  Coders: Hurlin
                  Last update
                  07/22/2014
                  Ranking
                  31
                  Runs
                  2290
                  Visits
                  677
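The transition mechanism of the PSTR model above is a logistic function of the threshold variable; a minimal Matlab sketch of the one-threshold case (names assumed):

    % Logistic transition function g(q; gamma, c), slope gamma and location c
    g = @(q, gamma, c) 1 ./ (1 + exp(-gamma * (q - c)));
    % Two-regime PSTR: y_it = mu_i + beta0'*x_it + beta1'*x_it * g(q_it; gamma, c) + e_it,
    % so the input elasticities move smoothly between beta0 and beta0 + beta1.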
                  Evaluating Interval Forecasts
                  Abstract
A complete theory for evaluating interval forecasts has not been worked out to date. Most of the literature implicitly assumes homoskedastic errors even when this is clearly violated and proceeds by merely testing for correct unconditional coverage. Consequently, the author sets out to build a consistent framework for conditional interval forecast evaluation, which is crucial when higher-order moment dynamics are present. The new methodology is demonstrated in an application to the exchange rate forecasting procedures advocated in risk management.
                  Hurlin, C., C. Perignon, "Evaluating Interval Forecasts", International Economic Review, 39, 841-862.
                  Authors: Christoffersen
                  Coders: Hurlin
                  Perignon
                  Last update
                  03/09/2012
                  Ranking
                  32
                  Runs
                  57
                  Visits
                  167
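The independence component of this conditional coverage framework reduces to a likelihood ratio test on the first-order Markov transition counts of the violation sequence; a hedged Matlab sketch (binary indicator I assumed, degenerate counts ignored):

    % I: T x 1 binary violation indicator
    n00 = sum(I(1:end-1)==0 & I(2:end)==0);  n01 = sum(I(1:end-1)==0 & I(2:end)==1);
    n10 = sum(I(1:end-1)==1 & I(2:end)==0);  n11 = sum(I(1:end-1)==1 & I(2:end)==1);
    p01 = n01/(n00+n01);  p11 = n11/(n10+n11);  p = (n01+n11)/(n00+n01+n10+n11);
    LRind = -2 * ((n00+n10)*log(1-p) + (n01+n11)*log(p) ...
                - n00*log(1-p01) - n01*log(p01) - n10*log(1-p11) - n11*log(p11));
    pval = gammainc(LRind/2, 1/2, 'upper');   % chi-square(1) upper tail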
                  Backtesting Value-at-Risk: A Duration-Based Approach
                  Abstract
                  Financial risk model evaluation or backtesting is a key part of the internal model’s approach to market risk management as laid out by the Basle Committee on Banking Supervision. However, existing backtesting methods have relatively low power in realistic small sample settings. Our contribution is the exploration of new tools for backtesting based on the duration of days between the violations of the Value-at-Risk. Our Monte Carlo results show that in realistic situations, the new duration-based tests have considerably better power properties than the previously suggested tests.
                  Hurlin, C., and C. Perignon, "Backtesting Value-at-Risk: A Duration-Based Approach", Journal of Financial Econometrics, 2, 84-108.
                  Authors: Pelletier
                  Christoffersen
                  Coders: Hurlin
                  Perignon
                  Last update
                  07/23/2012
                  Ranking
                  26
                  Runs
                  17
                  Visits
                  207
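The duration construction used above is two lines: under correct coverage and independence, durations between violations are geometric with mean 1/alpha (memoryless), which is exactly what these tests exploit. A minimal Matlab sketch (indicator name assumed):

    % I: T x 1 binary violation indicator; alpha: nominal coverage rate
    d = diff(find(I == 1));   % durations between consecutive violations
    % Duration-based tests compare d with the geometric (mean 1/alpha) benchmark,
    % e.g. via a likelihood ratio against a Weibull alternative.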
                  How To Evaluate an Early Warning System? Towards a unified Statistical Framework for Assessing Financial Crises Forecasting Methods
                  Abstract
This paper proposes an original and unified toolbox to evaluate financial crisis Early Warning Systems (EWS). It presents four main advantages. First, it is a model-free method which can be used to assess the forecasts issued from different EWS (probit, logit, Markov-switching models, or combinations of models). Second, this toolbox can be applied to any type of crisis EWS (currency, banking, sovereign debt, etc.). Third, it not only provides various criteria to evaluate the (absolute) validity of EWS forecasts but also proposes some tests to compare the relative performance of alternative EWS. Fourth, our toolbox can be used to evaluate both in-sample and out-of-sample forecasts. Applied to a logit model for twelve emerging countries, we show that the yield spread is a key variable for predicting currency crises exclusively for South-Asian countries. Moreover, the optimal cut-off allows us to correctly identify, on average, more than two-thirds of the crisis and calm periods.
                  Candelon, B., E. Dumitrescu, and C. Hurlin, "How To Evaluate an Early Warning System? Towards a unified Statistical Framework for Assessing Financial Crises Forecasting Methods", IMF Economic Review, 60.
                  Authors: Candelon
                  Dumitrescu
                  Hurlin
                  Coders: Candelon
                  Dumitrescu
                  Hurlin
                  Last update
                  07/23/2012
                  Ranking
                  34
                  Runs
                  24
                  Visits
                  269
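One ingredient of such a toolbox, the choice of an optimal cut-off on the predicted crisis probabilities, can be sketched in Matlab (the criterion shown, maximizing correctly called crises minus false alarms, is one of several the paper considers; names assumed):

    % p: T x 1 predicted crisis probabilities; y: T x 1 binary crisis indicator
    grid = (0:0.01:1)';
    hitrate = zeros(size(grid)); far = zeros(size(grid));
    for i = 1:numel(grid)
        pred = p >= grid(i);
        hitrate(i) = sum(pred & y == 1) / sum(y == 1);  % correctly called crises
        far(i)     = sum(pred & y == 0) / sum(y == 0);  % false alarms in calm periods
    end
    [~, i] = max(hitrate - far);
    cstar = grid(i);                                    % selected cut-off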
                  Forecasting Realized Volatility Using a Nonnegative Semiparametric Model
                  Abstract
This paper introduces a parsimonious and yet flexible nonnegative semiparametric model to forecast financial volatility. The new model extends the linear nonnegative autoregressive model of Barndorff-Nielsen & Shephard (2001) and Nielsen & Shephard (2003) by way of a power transformation. It is semiparametric in the sense that the distributional form of its error component is left unspecified. The statistical properties of the model are discussed and a novel estimation method is proposed. Asymptotic properties are established for the new estimation method. Simulation studies validate the new estimation method. The out-of-sample performance of the proposed model is evaluated against a number of standard methods, using data on S&P 500 monthly realized volatilities. The competing models include the exponential smoothing method, a linear AR(1) model, a log-linear AR(1) model, and two long-memory ARFIMA models. Various loss functions are utilized to evaluate the predictive accuracy of the alternative methods. It is found that the new model generally produces highly competitive forecasts.
                  Preve, D., J. Yu, "Forecasting Realized Volatility Using a Nonnegative Semiparametric Model", Uppsala University.
                  Authors: Eriksson
                  Preve
                  Yu
                  Coders: Preve
                  Yu
                  Last update
                  06/06/2012
                  Ranking
                  64
                  Runs
                  19
                  Visits
                  114
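
To make the benchmark comparison concrete, here is a toy Python sketch of three of the competing forecasts named in the abstract (exponential smoothing, a linear AR(1), a log-linear AR(1)) scored by squared-error loss. The data file name is hypothetical, estimation is full-sample OLS, and the log-linear forecast ignores the Jensen bias term; none of this reproduces the paper's semiparametric model or its estimation method.

    import numpy as np

    def ar1_forecasts(x):
        # OLS fit of x_t = a + b * x_{t-1} + e_t; one-step fitted forecasts.
        X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
        a, b = np.linalg.lstsq(X, x[1:], rcond=None)[0]
        return a + b * x[:-1]

    def exp_smooth(x, lam=0.9):
        # Exponential smoothing: f_t = lam * f_{t-1} + (1 - lam) * x_{t-1}.
        f = np.empty(len(x)); f[0] = x[0]
        for t in range(1, len(x)):
            f[t] = lam * f[t - 1] + (1 - lam) * x[t - 1]
        return f[1:]

    rv = np.loadtxt("sp500_monthly_rv.csv")     # hypothetical realized-volatility file
    for name, fc in [("AR(1)", ar1_forecasts(rv)),
                     ("log-AR(1)", np.exp(ar1_forecasts(np.log(rv)))),  # no Jensen term
                     ("ExpSmooth", exp_smooth(rv))]:
        print(name, "MSE:", np.mean((rv[1:] - fc) ** 2))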
                  Mixed Logit with Bounded Distributions of Correlated Partworths
                  Abstract
                  The use of a joint normal distribution for partworths is computationally attractive, particularly with Bayesian MCMC procedures, and yet is unrealistic for any attribute whose partworth is logically bounded (e.g., is necessarily positive or cannot be unboundedly large). A mixed logit is specified with partworths that are transformations of normally distributed terms, where the transformation induces bounds; examples include censored normals, log-normals, and SB distributions which are bounded on both sides. The model retains the computational advantages of joint normals while providing greater flexibility for the distributions of correlated partworths. The method is applied to data on customers’ choice among vehicles in stated choice experiments. The flexibility that the transformations allow is found to greatly improve the model, both in terms of fit and plausibility, without appreciably increasing the computational burden.
Train, K., "Mixed Logit with Bounded Distributions of Correlated Partworths", Applications of Simulation Methods in Environmental and Resource Economics, Chapter 7.
                  Authors: Sonnier
                  Train
                  Coders: Train
                  Last update
                  07/10/2012
                  Ranking
                  4
                  Runs
                  9
                  Visits
                  268
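
The three transformations named in the abstract are easy to illustrate. The sketch below draws correlated normal terms and maps them to censored-normal, log-normal, and Johnson S_B partworths; the means, covariance, and bounds are made-up illustrative values, not the paper's estimates.

    import numpy as np

    rng = np.random.default_rng(0)
    beta = rng.multivariate_normal(mean=[0.5, -1.0, 0.0],
                                   cov=np.eye(3), size=10_000)  # correlated in general

    censored  = np.maximum(beta[:, 0], 0.0)   # censored normal: mass at 0, else positive
    lognormal = np.exp(beta[:, 1])            # log-normal: strictly positive partworth
    lo, hi = 0.0, 2.0
    sb = lo + (hi - lo) / (1.0 + np.exp(-beta[:, 2]))  # S_B: bounded on both sides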
                  Value-at-Risk (Chapter 5: Computing VaR)
                  Abstract
                  Book description: To accommodate sweeping global economic changes, the risk management field has evolved substantially since the first edition of Value at Risk, making this revised edition a must. Updates include a new chapter on liquidity risk, information on the latest risk instruments and the expanded derivatives market, recent developments in Monte Carlo methods, and more. Value at Risk will help professional risk managers understand, and operate within, today’s dynamic new risk environment.
Hurlin, C., C. Perignon, "Value-at-Risk (Chapter 5: Computing VaR)", McGraw-Hill, Third Edition.
                  Authors: Jorion
                  Coders: Hurlin
                  Perignon
                  Last update
                  03/19/2012
                  Ranking
                  44
                  Runs
                  63
                  Visits
                  328
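
For readers who want the chapter's topic in code form, here are two textbook ways of computing VaR: historical simulation and the delta-normal method. This is a generic sketch, not the chapter's companion Matlab code; the function names are ours.

    import numpy as np
    from scipy import stats

    def var_historical(returns, alpha=0.01):
        # Historical-simulation VaR: empirical alpha-quantile of past returns.
        return -np.quantile(returns, alpha)

    def var_normal(returns, alpha=0.01):
        # Delta-normal VaR: sample mean and volatility plus a normal quantile.
        mu, sigma = returns.mean(), returns.std(ddof=1)
        return -(mu + sigma * stats.norm.ppf(alpha))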
                  The pernicious effects of contaminated data in risk management
                  Abstract
                  Banks hold capital to guard against unexpected surges in losses and long freezes in financial markets. The minimum level of capital is set by banking regulators as a function of the banks’ own estimates of their risk exposures. As a result, a great challenge for both banks and regulators is to validate internal risk models. We show that a large fraction of US and international banks uses contaminated data when testing their models. In particular, most banks validate their market risk model using profit-and-loss (P/L) data that include fees and commissions and intraday trading revenues. This practice is inconsistent with the definition of the employed market risk measure. Using both bank data and simulations, we find that data contamination has dramatic implications for model validation and can lead to the acceptance of misspecified risk models. Moreover, our estimates suggest that the use of contaminated data can significantly reduce (market-risk induced) regulatory capital.
                  Fresard, L., C. Perignon, and A. Wilhelmsson, "The pernicious effects of contaminated data in risk management", Journal of Banking and Finance, 35.
                  Authors: Fresard
                  Perignon
                  Wilhelmsson
                  Coders: Fresard
                  Perignon
                  Wilhelmsson
                  Last update
                  11/23/2012
                  Ranking
                  9999
                  Runs
                  N.A.
                  Visits
                  42
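
The contamination mechanism described above can be mimicked in a few lines: adding a steady fee income to the clean P/L lowers the observed violation rate, flattering the risk model in a backtest. All numbers below are invented for illustration and are not the paper's data or calibrations.

    import numpy as np

    rng = np.random.default_rng(2)
    T, fee = 1000, 0.05
    pnl_clean = rng.standard_normal(T)        # hypothetical trading P/L
    pnl_reported = pnl_clean + fee            # fees/commissions shift reported P/L up
    var_1pct = 2.326                          # 1% VaR of the clean P/L (standard normal)

    for name, pnl in [("clean", pnl_clean), ("contaminated", pnl_reported)]:
        print(name, "violation rate:", np.mean(pnl < -var_1pct))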
                  Techniques for Verifying the Accuracy of Risk Management Models
                  Abstract
                  Risk exposures are typically quantified in terms of a "Value at Risk" (VaR) estimate. A VaR estimate corresponds to a specific critical value of a portfolio's potential one-day profit and loss probability distribution. Given their function both as internal risk management tools and as potential regulatory measures of risk exposure, it is important to quantify the accuracy of an institution's VaR estimates. This study shows that the formal statistical procedures that would typically be used in performance-based VaR verification tests require large samples to produce a reliable assessment of a model's accuracy in predicting the size and likelihood of very low probability events. Verification test statistics based on historical trading profits and losses have very poor power in small samples, so it does not appear possible for a bank or its supervisor to verify the accuracy of a VaR estimate unless many years of performance data are available. Historical simulation-based verification test statistics also require long samples to generate accurate results: Estimates of 0.01 critical values exhibit substantial errors even in samples as large as ten years of daily data.
                  Hurlin, C., C. Perignon, "Techniques for Verifying the Accuracy of Risk Management Models", Journal of Derivatives, 3, 73-84.
                  Authors: Kupiec
                  Coders: Hurlin
                  Perignon
                  Last update
                  04/17/2012
                  Ranking
                  57
                  Runs
                  26
                  Visits
                  339
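
The standard performance-based verification test the abstract refers to is the unconditional coverage (proportion-of-failures) likelihood-ratio test, a minimal version of which is sketched below; the function name is ours. With T = 250 daily observations and p = 0.01, this test rarely rejects even badly misspecified models, which is the paper's point about small-sample power.

    import numpy as np
    from scipy import stats

    def kupiec_pof(x, T, p):
        # LR test of H0: true violation rate equals p, given x violations in T days.
        def loglik(q):      # Bernoulli log-likelihood, with 0 * log(0) treated as 0
            out = 0.0
            if x < T: out += (T - x) * np.log(1 - q)
            if x > 0: out += x * np.log(q)
            return out
        lr = -2 * (loglik(p) - loglik(x / T))
        return lr, 1 - stats.chi2.cdf(lr, df=1)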
                  Outliers and GARCH Models in Financial Data
                  Abstract
We propose to extend the additive outlier (AO) identification procedure developed by Franses and Ghijsels (Franses, P.H., Ghijsels, H., 1999. Additive outliers, GARCH and forecasting volatility. International Journal of Forecasting, 15, 1–9) to take into account the innovative outliers (IOs) in a GARCH model. We apply it to three daily stock market indexes and examine the effects of outliers on the diagnostics of normality. (A crude outlier-flagging sketch follows this entry.)
Charles, A., O. Darné, D. Banulescu, and E. Dumitrescu, "Outliers and GARCH Models in Financial Data", Economics Letters, 86, 347-352.
                  Authors: Charles
                  Darné
                  Coders: Charles
                  Darné
                  Banulescu
                  Dumitrescu
                  Last update
                  06/22/2012
                  Ranking
                  15
                  Runs
                  61
                  Visits
                  238
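
As a rough illustration only: the sketch below filters returns through a GARCH(1,1) with fixed, made-up parameters and flags observations whose standardized magnitude exceeds a critical value as outlier candidates. The actual Franses-Ghijsels procedure (and the authors' IO extension) uses a formal test statistic with simulated critical values and iterative correction of the series, none of which is reproduced here.

    import numpy as np

    def flag_outlier_candidates(r, omega=0.05, alpha=0.08, beta=0.90, C=4.0):
        # GARCH(1,1) variance filter with fixed illustrative parameters,
        # then flag |r_t| / sigma_t > C as additive-outlier candidates.
        s2 = np.empty(len(r)); s2[0] = r.var()
        for t in range(1, len(r)):
            s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
        return np.where(np.abs(r) / np.sqrt(s2) > C)[0]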
                  Extracting Factors from Heteroskedastic Asset Returns
                  Abstract
                  This paper proposes an alternative to the asymptotic principal components procedure of Connor and Korajczyk (Journal of Financial Economics, 1986) that is robust to time series heteroskedasticity in the factor model residuals. The new method is simple to use and requires no assumptions stronger than those made by Connor and Korajczyk. It is demonstrated through simulations and analysis of actual stock market data that allowing heteroskedasticity sometimes improves the quality of the extracted factors quite dramatically. Over the period from 1989 to 1993, for example, a single factor extracted using the Connor and Korajczyk method explains only 8.2% of the variation of the CRSP value-weighted index, while the factor extracted allowing heteroskedasticity explains 57.3%. Accounting for heteroskedasticity is also important for tests of the APT, with p-values sometimes depending strongly on the factor extraction method used.
Jones, C. S., "Extracting Factors from Heteroskedastic Asset Returns", Journal of Financial Economics, 62, 293-325.
                  Authors: Jones
                  Coders: Jones
                  Last update
                  11/17/2012
                  Ranking
                  30
                  Runs
                  17
                  Visits
                  81
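
For orientation, the baseline Connor-Korajczyk step that the paper makes robust is sketched below: with a T × N return matrix, the k factor estimates are the top eigenvectors of the T × T cross-product matrix. The heteroskedasticity-robust re-weighting that is the paper's contribution is not reproduced; the scaling convention and names are assumptions.

    import numpy as np

    def asymptotic_pc(R, k):
        # Connor-Korajczyk asymptotic principal components:
        # factors = top-k eigenvectors of the T x T matrix R R' / N.
        T, N = R.shape
        vals, vecs = np.linalg.eigh(R @ R.T / N)    # eigenvalues in ascending order
        return vecs[:, ::-1][:, :k] * np.sqrt(T)    # normalized so F'F / T = I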
                  Currency Crises Early Warning Systems: why they should be Dynamic
                  Abstract
This paper introduces a new generation of Early Warning Systems (EWS) which takes into account the dynamics, i.e. the persistence, in the binary crisis indicator. We elaborate on Kauppi and Saikkonen (2008), which allows us to consider several dynamic specifications by relying on an exact maximum likelihood estimation method. Applied to predict currency crises for fifteen countries, this new EWS turns out to exhibit significantly better predictive abilities than the existing models both within and out of the sample, thus vindicating dynamic models in the quest for optimal EWS. (A sketch of one dynamic specification follows this entry.)
                  Candelon, B., E. Dumitrescu, and C. Hurlin, "Currency Crises Early Warning Systems: why they should be Dynamic", Maastricht University.
                  Authors: Candelon
                  Dumitrescu
                  Hurlin
                  Coders: Candelon
                  Dumitrescu
                  Hurlin
                  Last update
                  06/04/2012
                  Ranking
                  45
                  Runs
                  64
                  Visits
                  112
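
A minimal member of this family of dynamic specifications is a probit whose index includes the lagged crisis indicator, estimated by exact maximum likelihood. The sketch below writes down the negative log-likelihood; the single-predictor layout and names are ours, and the paper considers several richer dynamic specifications.

    import numpy as np
    from scipy import optimize, stats

    def dynamic_probit_nll(theta, y, x):
        # P(y_t = 1 | past) = Phi(omega + delta * y_{t-1} + beta * x_{t-1})
        omega, delta, beta = theta
        p = stats.norm.cdf(omega + delta * y[:-1] + beta * x[:-1])
        p = np.clip(p, 1e-10, 1 - 1e-10)
        yt = y[1:]
        return -np.sum(yt * np.log(p) + (1 - yt) * np.log(1 - p))

    # y: 0/1 crisis indicator, x: a predictor (e.g. the yield spread), both (T,) arrays
    # res = optimize.minimize(dynamic_probit_nll, x0=np.zeros(3), args=(y, x))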
                  Modeling State Credit Risks in Illinois and Indiana
                  Abstract
I use an open-source budget-simulation model to evaluate Illinois's credit risk and to compare it to that of Indiana, a neighboring state generally believed to have better fiscal management. Based on a review of the history and theory of state credit performance, I assume that a state will default if the aggregate of its interest and pension costs reaches 30 percent of total revenues. In Illinois, this ratio is currently 10 percent, compared to 4 percent in Indiana. My analysis finds that neither state will reach the critical threshold in the next few years under any reasonable economic scenario, suggesting no material default risk. Over the longer term, Illinois has some chance of reaching the default threshold, but it would likely be able to take policy actions to lower the ratio before then. If market participants accept my finding that Illinois does not have material default risk, Illinois's bond yields will fall, yielding cost savings for taxpayers as the state rolls over its debt. (A toy version of the threshold simulation follows this entry.)
Joffe, M. D., "Modeling State Credit Risks in Illinois and Indiana", Mercatus Center.
                  Authors: Joffe
                  Coders: Joffe
                  Last update
                  08/01/2013
                  Ranking
                  9999
                  Runs
                  N.A.
                  Visits
                  N.A.
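
The default rule is simple enough to mock up: simulate the cost-to-revenue ratio forward and record how often it crosses the 30 percent trigger. The starting ratios and the threshold come from the abstract; the growth-rate assumptions below are invented for illustration and are not the paper's calibrations.

    import numpy as np

    rng = np.random.default_rng(1)
    THRESHOLD = 0.30                         # assumed default trigger from the paper
    start = {"Illinois": 0.10, "Indiana": 0.04}

    for state, r0 in start.items():
        # Toy projection: 4% cost growth vs. stochastic revenue growth, 10 years.
        growth = 1 + rng.normal(0.03, 0.04, size=(5000, 10))
        paths = r0 * np.cumprod(1.04 / growth, axis=1)
        print(state, "P(ratio hits 30% within 10y):",
              (paths.max(axis=1) >= THRESHOLD).mean())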
                  Pitfalls in backtesting Historical Simulation VaR models
                  Abstract
Historical Simulation (HS) and its variant, the Filtered Historical Simulation (FHS), are the most popular Value-at-Risk forecast methods at commercial banks. These forecast methods are traditionally evaluated by means of the unconditional backtest. This paper formally shows that the unconditional backtest is always inconsistent for backtesting HS and FHS models, with a power function that can be even smaller than the nominal level in large samples. Our findings have fundamental implications in the determination of market risk capital requirements, and also explain Monte Carlo and empirical findings in previous studies. We also propose a data-driven weighted backtest with good power properties to evaluate HS and FHS forecasts. A Monte Carlo study and an empirical application with three US stocks confirm our theoretical findings. The empirical application shows that multiplication factors computed under the current regulatory framework are downward biased, as they inherit the inconsistency of the unconditional backtest. (A sketch of the HS and FHS forecasts themselves follows this entry.)
                  Escanciano, J., and P. Pei, "Pitfalls in backtesting Historical Simulation VaR models", Journal of Banking and Finance, 36, 2233-2244.
                  Authors: Escanciano
                  Pei
                  Coders: Escanciano
                  Pei
                  Last update
                  02/22/2013
                  Ranking
                  9999
                  Runs
                  N.A.
                  Visits
                  32
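
For context, the two forecast methods under scrutiny are easy to state. Below, a rolling-window HS VaR and a filtered variant in which an EWMA volatility filter stands in for the GARCH-type filter usually used in FHS; window length, decay, and names are illustrative assumptions.

    import numpy as np

    def hs_var(returns, window=250, alpha=0.01):
        # HS: empirical alpha-quantile of the previous `window` returns.
        return np.array([-np.quantile(returns[t - window:t], alpha)
                         for t in range(window, len(returns))])

    def fhs_var(returns, window=250, alpha=0.01, lam=0.94):
        # FHS (EWMA variant): quantile of standardized returns, re-scaled
        # by the current volatility forecast.
        T = len(returns)
        s2 = np.empty(T); s2[0] = returns.var()
        for t in range(1, T):
            s2[t] = lam * s2[t - 1] + (1 - lam) * returns[t - 1] ** 2
        z = returns / np.sqrt(s2)
        return np.array([-np.quantile(z[t - window:t], alpha) * np.sqrt(s2[t])
                         for t in range(window, T)])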
                  A Generalized Asymmetric Student-t Distribution with Application to Financial Econometrics
                  Abstract
                  This paper proposes a new class of asymmetric Student-t (AST) distributions, and investigates its properties, gives procedures for estimation, and indicates applications in financial econometrics. We derive analytical expressions for the cdf, quantile function, moments, and quantities useful in financial econometric applications such as the Expected Shortfall. A stochastic representation of the distribution is also given. Although the AST density does not satisfy the usual regularity conditions for maximum likelihood estimation, we establish consistency, asymptotic normality and efficiency of ML estimators and derive an explicit analytical expression for the asymptotic covariance matrix. A Monte Carlo study indicates generally good finite-sample conformity with these asymptotic properties.
                  Colletaz, G., "A Generalized Asymmetric Student-t Distribution with Application to Financial Econometrics", Journal of Econometrics, 157, 297-305.
                  Authors: Zhu
                  Galbraith
                  Coders: Colletaz
                  Last update
                  05/05/2012
                  Ranking
                  38
                  Runs
                  6
                  Visits
                  95
Determining the Number of Factors in Approximate Factor Models
                  Abstract
                  In this paper we develop some econometric theory for factor models of large dimensions. The focus is the determination of the number of factors (r), which is an unresolved issue in the rapidly growing literature on multifactor models. We first establish the convergence rate for the factor estimates that will allow for consistent estimation of r. We then propose some panel criteria and show that the number of factors can be consistently estimated using the criteria. The theory is developed under the framework of large cross-sections (N) and large time dimensions (T). No restriction is imposed on the relation between N and T. Simulations show that the proposed criteria have good finite sample properties in many configurations of the panel data encountered in practice.
                  Hurlin, C., "Determining the Number of Factors in Approximate Factors Models", Econometrica, 70, 191-221.
                  Authors: Bai
                  Ng
                  Coders: Hurlin
                  Last update
                  01/29/2013
                  Ranking
                  39
                  Runs
                  66
                  Visits
                  230
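
The panel information criteria the abstract refers to have a compact form; the sketch below implements the IC_p1 variant from Bai and Ng (2002): estimate factors by principal components for each candidate k and minimize ln V(k) + k·((N+T)/NT)·ln(NT/(N+T)). The k = 0 case and the other penalty variants are omitted, and the function name is ours.

    import numpy as np

    def bai_ng_icp1(X, kmax):
        # X: T x N data matrix; returns the k <= kmax minimizing IC_p1(k).
        T, N = X.shape
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        penalty = (N + T) / (N * T) * np.log(N * T / (N + T))
        ic = []
        for k in range(1, kmax + 1):
            fit = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k factor approximation
            V = np.sum((Xc - fit) ** 2) / (N * T)      # average squared residual
            ic.append(np.log(V) + k * penalty)
        return int(np.argmin(ic)) + 1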
                  Unit Root Tests in Panel Data: Asymptotic and Finite-Sample Properties
                  Abstract
We consider pooling cross-section time series data for testing the unit root hypothesis. The degree of persistence in individual regression error, the intercept and trend coefficient are allowed to vary freely across individuals. As both the cross-section and time series dimensions of the panel grow large, the pooled t-statistic has a limiting normal distribution that depends on the regression specification but is free from nuisance parameters. Monte Carlo simulations indicate that the asymptotic results provide a good approximation to the test statistics in panels of moderate size, and that the power of the panel-based unit root test is dramatically higher, compared to performing a separate unit root test for each individual time series.
                  Hurlin, C., "Unit Root Tests in Panel Data: Asymptotic and Finite-Sample Properties", Journal of Econometrics, 108, 1-24.
                  Authors: Levin
                  Lin
                  Chu
                  Coders: Hurlin
                  Last update
                  06/28/2012
                  Ranking
                  37
                  Runs
                  220
                  Visits
                  322
                  Maximum Likelihood Methods for Models of Markets in Disequilibrium
                  Abstract
                  For the abstract, please click on: http://www.jstor.org/discover/10.2307/1914215?uid=3738016&uid=2&uid=4&sid=56146953873
                  Hurlin, C., "Maximum Likelihood Methods for Models of Markets in Disequilibrium", Econometrica, 42, 1013-1030.
                  Authors: Maddala
                  Nelson
                  Coders: Hurlin
                  Last update
                  02/15/2013
                  Ranking
                  50
                  Runs
                  85
                  Visits
                  391
                  A Comparative Study of Unit Root Tests with Panel Data and a New Simple Test
                  Abstract
The panel data unit root test suggested by Levin and Lin (LL) has been widely used in several applications, notably in papers on tests of the purchasing power parity hypothesis. This test is based on a very restrictive hypothesis which is rarely ever of interest in practice. The Im–Pesaran–Shin (IPS) test relaxes the restrictive assumption of the LL test. This paper argues that although the IPS test has been offered as a generalization of the LL test, it is best viewed as a test for summarizing the evidence from a number of independent tests of the same hypothesis. This problem has a long statistical history going back to R. A. Fisher. This paper suggests the Fisher test as a panel data unit root test, compares it with the LL and IPS tests, and the Bonferroni bounds test which is valid for correlated tests. Overall, the evidence points to the Fisher test with bootstrap-based critical values as the preferred choice. We also suggest the use of the Fisher test for testing stationarity as the null and also in testing for cointegration in panel data. (A minimal version of the Fisher test follows this entry.)
                  Hurlin, C., "A Comparative Study of Unit Root Tests with Panel Data and a New Simple Test", Oxford Bulletin of Economics and Statistics, 61, 631-652.
                  Authors: Maddala
                  Wu
                  Coders: Hurlin
                  Last update
                  10/08/2012
                  Ranking
                  59
                  Runs
                  217
                  Visits
                  129
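
The Fisher test itself is a one-liner once individual p-values are in hand. The sketch below combines ADF p-values across the panel's units; it uses asymptotic rather than the bootstrap critical values the paper recommends for correlated panels, and the ADF options shown are defaults, not the paper's choices.

    import numpy as np
    from scipy import stats
    from statsmodels.tsa.stattools import adfuller

    def fisher_panel_test(panel):
        # panel: iterable of 1-D series. P = -2 * sum(ln p_i) ~ chi2(2N) under H0.
        pvals = [adfuller(series, regression="c", autolag="AIC")[1]
                 for series in panel]
        P = -2.0 * np.sum(np.log(pvals))
        return P, 1 - stats.chi2.cdf(P, df=2 * len(pvals))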
                  Unit Root Tests for Panel Data
                  Abstract
                  This paper develops unit root tests for panel data. These tests are devised under more general assumptions than the tests previously proposed. First, the number of groups in the panel data is assumed to be either finite or infinite. Second, each group is assumed to have different types of nonstochastic and stochastic components. Third, the time series spans for the groups are assumed to be all different. Fourth, the alternative where some groups have a unit root and others do not can be dealt with by the tests. The tests can also be used for the null of stationarity and for cointegration, once relevant changes are made in the model, hypotheses, assumptions and underlying tests. The main idea for our unit root tests is to combine p-values from a unit root test applied to each group in the panel data. Combining p-values to formulate tests is a common practice in meta-analysis. This paper also reports the finite sample performance of our combination unit root tests and Im et al.'s [Mimeo (1995)] t-bar test. The results show that most of the combination tests are more powerful than the t-bar test in finite samples. Application of the combination unit root tests to the post-Bretton Woods US real exchange rate data provides some evidence in favor of the PPP hypothesis.
                  Hurlin, C., "Unit Root Tests for Panel Data", Journal of International Money and Finance, 20, 249-272.
                  Authors: Choi
                  Coders: Hurlin
                  Last update
                  10/08/2012
                  Ranking
                  60
                  Runs
                  62
                  Visits
                  261
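
Of the combination tests this paper studies, the inverse-normal version is the easiest to sketch: map each unit's p-value through the standard normal quantile function and average. Again, the ADF options are defaults and the function name is ours.

    import numpy as np
    from scipy import stats
    from statsmodels.tsa.stattools import adfuller

    def inverse_normal_test(panel):
        # Z = N^(-1/2) * sum(Phi^{-1}(p_i)); standard normal under the null,
        # with rejection for large negative values.
        pvals = np.array([adfuller(s, regression="c", autolag="AIC")[1]
                          for s in panel])
        Z = stats.norm.ppf(pvals).sum() / np.sqrt(len(pvals))
        return Z, stats.norm.cdf(Z)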
                  Testing for Unit Roots in Heterogeneous Panels
                  Abstract
This paper proposes unit root tests for dynamic heterogeneous panels based on the mean of individual unit root statistics. In particular it proposes a standardized t-bar test statistic based on the (augmented) Dickey–Fuller statistics averaged across the groups. Under a general setting this statistic is shown to converge in probability to a standard normal variate sequentially with T (the time series dimension) → ∞, followed by N (the cross sectional dimension) → ∞. A diagonal convergence result with T and N → ∞ while N/T → k, k being a finite non-negative constant, is also conjectured. In the special case where errors in individual Dickey–Fuller (DF) regressions are serially uncorrelated, a modified version of the standardized t-bar statistic is shown to be distributed as standard normal as N → ∞ for a fixed T, so long as T > 5 in the case of DF regressions with intercepts and T > 6 in the case of DF regressions with intercepts and linear time trends. An exact fixed N and T test is also developed using the simple average of the DF statistics. Monte Carlo results show that if a large enough lag order is selected for the underlying ADF regressions, then the small sample performance of the t-bar test is reasonably satisfactory and generally better than the test proposed by Levin and Lin (Unpublished manuscript, University of California, San Diego, 1993). (A sketch of the t-bar average follows this entry.)
Hurlin, C., "Testing for Unit Roots in Heterogeneous Panels", Journal of Econometrics, 115, 53-74.
                  Authors: Im
                  Pesaran
                  Shin
                  Coders: Hurlin
                  Last update
                  10/08/2012
                  Ranking
                  61
                  Runs
                  57
                  Visits
                  106
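
The raw ingredient of the IPS statistic is just the cross-sectional average of individual ADF t-statistics, sketched below. Standardizing t-bar to N(0,1) requires the simulated moments E[t] and Var[t] tabulated in the paper, which are deliberately not reproduced here.

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    def t_bar(panel, maxlag=None):
        # Average of individual ADF t-statistics across the panel's units.
        tstats = [adfuller(s, maxlag=maxlag, regression="c")[0] for s in panel]
        return np.mean(tstats)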
                  Testing for a Unit Root in Panels with Dynamic Factors
                  Abstract
                  This paper studies testing for a unit root for large n and T panels in which the cross-sectional units are correlated. To model this cross-sectional correlation, we assume that the data are generated by an unknown number of unobservable common factors. We propose unit root tests in this environment and derive their (Gaussian) asymptotic distribution under the null hypothesis of a unit root and local alternatives. We show that these tests have significant asymptotic power when the model has no incidental trends. However, when there are incidental trends in the model and it is necessary to remove heterogeneous deterministic components, we show that these tests have no power against the same local alternatives. Through Monte Carlo simulations, we provide evidence on the finite sample properties of these new tests.
                  Hurlin, C., "Testing for a Unit Root in Panels with Dynamic Factors", Journal of Econometrics, 122, 81-126.
                  Authors: Moon
                  Perron
                  Coders: Hurlin
                  Last update
                  10/08/2012
                  Ranking
                  62
                  Runs
                  399
                  Visits
                  124
                  Tests of Conditional Predictive Ability
                  Abstract
                  We propose a framework for out-of-sample predictive ability testing and forecast selection designed for use in the realistic situation in which the forecasting model is possibly misspecified, due to unmodeled dynamics, unmodeled heterogeneity, incorrect functional form, or any combination of these. Relative to the existing literature (Diebold and Mariano (1995) and West (1996)), we introduce two main innovations: (i) We derive our tests in an environment where the finite sample properties of the estimators on which the forecasts may depend are preserved asymptotically. (ii) We accommodate conditional evaluation objectives (can we predict which forecast will be more accurate at a future date?), which nest unconditional objectives (which forecast was more accurate on average?), that have been the sole focus of previous literature. As a result of (i), our tests have several advantages: they capture the effect of estimation uncertainty on relative forecast performance, they can handle forecasts based on both nested and nonnested models, they allow the forecasts to be produced by general estimation methods, and they are easy to compute. Although both unconditional and conditional approaches are informative, conditioning can help fine-tune the forecast selection to current economic conditions. To this end, we propose a two-step decision rule that uses current information to select the best forecast for the future date of interest. We illustrate the usefulness of our approach by comparing forecasts from leading parameter-reduction methods for macroeconomic forecasting using a large number of predictors.
Giacomini, R., "Tests of Conditional Predictive Ability", Econometrica, 74, 1545-1578.
                  Authors: White
                  Giacomini
                  Coders: Giacomini
                  Last update
                  07/04/2012
                  Ranking
                  25
                  Runs
                  17
                  Visits
                  67
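
For one-step forecasts, the conditional test has a simple Wald form: with z_t = h_t · dL_{t+1}, where h_t is a vector of test functions (instruments) and dL the loss differential, the statistic T · zbar' Omega^{-1} zbar is asymptotically chi-square(q) under the null of equal conditional predictive ability. The sketch below assumes a single instrument plus a constant and no HAC correction (valid in the one-step case); the names are ours.

    import numpy as np
    from scipy import stats

    def gw_conditional_test(loss_diff, instrument):
        # loss_diff: (T,) loss differentials; instrument: (T,) conditioning variable.
        dL = loss_diff[1:]                                        # dL_{t+1}
        h = np.column_stack([np.ones(len(dL)), instrument[:-1]])  # h_t with constant
        z = h * dL[:, None]
        zbar = z.mean(axis=0)
        omega = z.T @ z / len(z)            # one-step case: no HAC correction needed
        stat = len(z) * zbar @ np.linalg.solve(omega, zbar)
        return stat, 1 - stats.chi2.cdf(stat, df=h.shape[1])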