Modeling State Credit Risks in Illinois and Indiana
By Marc D. Joffe
Mercatus Center (2013)
Abstract Paper

Marc  D. Joffe

Public Sector Credit Solutions

United States

Coder Page  

Source code for the Public Sector Credit Framework, which generates prog.c, can be found at http://www.github.com/joffemd/pscf. A Windows installation is available at http://www.publicsectorcredit.org/pscf.html
Created
July 30, 2013
Software:
C (GNU C)
Visits
N.A.
Last update
August 01, 2013
Ranking
9999
Code downloads
N.A.
Data downloads
N.A.
Abstract
I use an open-source budget-simulation model to evaluate Illinois’s credit risk and to compare it to that of Indiana, a neighboring state generally believed to have better fiscal management. Based on a review of the history and theory of state credit performance, I assume that a state will default if the aggregate of its interest and pension costs reaches 30 percent of total revenues. In Illinois, this ratio is currently 10 percent, compared to 4 percent in Indiana. My analysis finds that neither state will reach the critical threshold in the next few years under any reasonable economic scenario, suggesting no material default risk. Over the longer term, Illinois has some chance of reaching the default threshold, but it would likely be able to take policy actions to lower the ratio before then. If market participants accept my finding that Illinois does not have material default risk, Illinois’s bond yields will fall, yielding cost savings for taxpayers as the state rolls over its debt.
Joffe, M. D., "Modeling State Credit Risks in Illinois and Indiana", Mercatus Center.
Coder:
  • Marc D. Joffe

    Public Sector Credit Solutions

    United States


Other Companion Sites on similar papers

The Risk Map: A New Tool for Validating Risk Models
Abstract
This paper presents a new tool for validating risk models. This tool, called the Risk Map, jointly accounts for the number and the magnitude of extreme losses and graphically summarizes all information about the performance of a risk model. It relies on the concept of Value-at-Risk (VaR) super exception, which is defined as a situation in which the loss exceeds both the standard VaR and a VaR defined at an extremely low coverage probability. We then formally test whether the sequence of exceptions and super exceptions is rejected by standard model-validation tests. We show that the Risk Map can be used to validate market, credit, operational, or systemic (e.g. CoVaR) risk estimates or to assess the performance of the margin system of a clearing house.
Colletaz, G., C. Hurlin, and C. Perignon, "The Risk Map: A New Tool for Validating Risk Models", SSRN.
Authors: Colletaz
Hurlin
Perignon
Coders: Colletaz
Hurlin
Perignon
Last update
07/25/2013
Ranking
51
Runs
146
Visits
431
Value-at-Risk (Chapter 5: Computing VaR)
Abstract
Book description: To accommodate sweeping global economic changes, the risk management field has evolved substantially since the first edition of Value at Risk, making this revised edition a must. Updates include a new chapter on liquidity risk, information on the latest risk instruments and the expanded derivatives market, recent developments in Monte Carlo methods, and more. Value at Risk will help professional risk managers understand, and operate within, today’s dynamic new risk environment.
Jorion, P., "Value-at-Risk (Chapter 5: Computing VaR)", McGraw-Hill, Third Edition.
Authors: Jorion
Coders: Hurlin
Perignon
Last update
03/19/2012
Ranking
44
Runs
63
Visits
328
The pernicious effects of contaminated data in risk management
Abstract
Banks hold capital to guard against unexpected surges in losses and long freezes in financial markets. The minimum level of capital is set by banking regulators as a function of the banks’ own estimates of their risk exposures. As a result, a great challenge for both banks and regulators is to validate internal risk models. We show that a large fraction of US and international banks uses contaminated data when testing their models. In particular, most banks validate their market risk model using profit-and-loss (P/L) data that include fees and commissions and intraday trading revenues. This practice is inconsistent with the definition of the employed market risk measure. Using both bank data and simulations, we find that data contamination has dramatic implications for model validation and can lead to the acceptance of misspecified risk models. Moreover, our estimates suggest that the use of contaminated data can significantly reduce (market-risk induced) regulatory capital.
Fresard, L., C. Perignon, and A. Wilhelmsson, "The pernicious effects of contaminated data in risk management", Journal of Banking and Finance, 35.
Authors: Fresard
Perignon
Wilhelmsson
Coders: Fresard
Perignon
Wilhelmsson
Last update
11/23/2012
Ranking
9999
Runs
N.A.
Visits
42
Code for kMajority Cost Paper
Abstract
Several authors have examined the optimal k-majority rule using a variety of criteria. We formalize and extend the original argument laid out by Buchanan and Tullock (1965) using the expected costs of a rational voter.
Ragan, R., "Code for kMajority Cost Paper", University of Georgia.
Authors: Ragan
Coders: Ragan
Last update
07/30/2013
Ranking
9999
Runs
N.A.
Visits
N.A.
A Simple Empirical Measure of Central Banks' Conservatism
Abstract
In this paper we suggest a simple empirical and model-independent measure of Central Banks' Conservatism, based on the Taylor curve. This new indicator can easily be extended in time and space, whatever the underlying monetary regime of the considered countries. We demonstrate that it evolves in accordance with the monetary experiences of 32 OECD member countries from 1980, and is largely equivalent to the model-based measure provided by Krause & Méndez [Southern Economic Journal, 2005]. We finally highlight the usefulness of such an indicator for further empirical analysis of Central Banks' preferences.
Levieuge, G., "A Simple Empirical Measure of Central Banks' Conservatism", SSRN.
Authors: Levieuge
Lucotte
Coders: Levieuge
Last update
07/23/2012
Ranking
47
Runs
6
Visits
76
Pitfalls in backtesting Historical Simulation VaR models
Abstract
Historical Simulation (HS) and its variant, the Filtered Historical Simulation (FHS), are the most popular Value-at-Risk forecast methods at commercial banks. These forecast methods are traditionally evaluated by means of the unconditional backtest. This paper formally shows that the unconditional backtest is always inconsistent for backtesting HS and FHS models, with a power function that can be even smaller than the nominal level in large samples. Our findings have fundamental implications in the determination of market risk capital requirements, and also explain Monte Carlo and empirical findings in previous studies. We also propose a data-driven weighted backtest with good power properties to evaluate HS and FHS forecasts. A Monte Carlo study and an empirical application with three US stocks confirm our theoretical findings. The empirical application shows that multiplication factors computed under the current regulatory framework are downward biased, as they inherit the inconsistency of the unconditional backtest.
Escanciano, J., and P. Pei, "Pitfalls in backtesting Historical Simulation VaR models", Journal of Banking and Finance, 36, 2233-2244.
Authors: Escanciano
Pei
Coders: Escanciano
Pei
Last update
02/22/2013
Ranking
9999
Runs
N.A.
Visits
32
Why don’t Banks Lend to the Private Sector in Egypt?
Abstract
Bank credit to the private sector fell as a share of GDP during the last decade, in spite of a successful bank recapitalization in the middle of the 2000s and high and stable growth before the recent macroeconomic turmoil. This paper explains this trend based on both bank supply factors and demand for credit from the private sector. First, the paper describes the evolution of the banks’ sources and uses of funds in the period 2005-2011, characterized by two different cycles of external capital flows. Then it estimates supply and demand equations of credit to the private sector, using quarterly data for the period 1999-2011. First, the system of simultaneous equations is estimated assuming continuous market clearing. Then the system is estimated allowing for transitory disequilibrium. In general, the main results are robust to the market clearing assumption. Our main findings show that, while real industrial production and the stock market have a significant impact on credit demand, deposits and claims on government affected the supply of credit in Egypt. Finally, both models yield similar results for the most recent period of private credit contraction: the single most important factor explaining the largest share of the decline is the expansion of banking credit to the public sector. The slowdown in economic activity and the contraction of bank deposits explain the remainder of the predicted contraction in bank credit to the private sector.
Herrera, S., C. Hurlin, and C. Zaki, "Why don’t Banks Lend to the Private Sector in Egypt?", World Bank Working Paper Series.
Authors: Herrera
Hurlin
Zaki
Coders: Herrera
Hurlin
Zaki
Last update
10/17/2013
Ranking
56
Runs
252
Visits
112
Using a Modified Taylor Cell to Validate Simulation and Measurement of Field-to-Shorted-Trace Coupling
Abstract
Predicting the immunity of electronic boards to radiated electromagnetic interference requires the computation of the coupling efficiency of an electromagnetic field to PCB traces. In the case of complex PCBs, full-wave electromagnetic solvers are convenient, yet at the expense of simulation time. Therefore, this paper introduces the extension of a modified Taylor-based analytical model to the case of traces terminated at one end by a non-characteristic impedance. This model makes it possible to determine the far-field-to-trace coupling using only a sum of closed-form equations. When applied to a shorted, meandered PCB trace, it was found to be accurate to within 2.2 dB compared with GTEM measurements, which demonstrates its relevance for immunity prediction. Moreover, the full-wave simulation of this case study was validated using the extended model and found to be accurate to within 1.4 dB.
Op 't Land, S., "Using a Modified Taylor Cell to Validate Simulation and Measurement of Field-to-Shorted-Trace Coupling", INSA Rennes.
Authors: Op 't Land
Ramdani
Perdriau
Braux
Drissi
Coders: Op 't Land
Last update
07/22/2014
Ranking
9999
Runs
63
Visits
N.A.