Tuesday, March 31, 2015

Examining the Taylor Rule(s)

A while back I made a calculator that compared estimates of Taylor Rules from several economists. You can find a version of the calculator, with live data from FRED, in the sidebar on the right of this page.

When I first posted the calculator, there was a clear partisan divide, with the liberal economists Paul Krugman and Jared Bernstein calling for negative rates, and the conservative economists Greg Mankiw and John Taylor (inventor of the Taylor Rule concept) calling for interest rate hikes. Glenn Rudebusch--a Fed economist whose politics I know little about, but who published careful empirical estimates of the Taylor Rule--actually predicted the lowest interest rate of the bunch, at -0.43 percent. Recently, however, that dichotomy has been replaced by a different one: Krugman's and Mankiw's rules are now calling for pretty large rate hikes--all the way to above 3 percent--while Bernstein's, Rudebusch's, and Taylor's versions call for much more modest hikes to about 1 percent.

The change has to do with some fairly important differences among the versions of the Taylor Rule. I never really explained where my inspiration for the Taylor Rule calculator came from--one commenter even accused me of making the Taylor Rules up--so here it is: The Mankiw rule comes from a version that Mankiw published in a paper a while back, which he mentioned on his blog here. The Krugman version comes from Krugman's rebuttal to Mankiw's version here, to which Mankiw further responded here. The Jared Bernstein version comes from his blog post here, in which he argues that the Taylor Rule says monetary policy was too tight rather than too loose. Of all the versions I find Bernstein's the most incomprehensible, for reasons that John Taylor explained here--Bernstein neither uses the original version Taylor proposed, nor did he mention using any econometrics or published estimates to arrive at different parameters. Glenn Rudebusch's version was published in his 2009 FRBSF Economic Letter (see attached spreadsheet). John Taylor's version, of course, is the original Taylor Rule.

In general, a Taylor Rule is a function of the form:
interest rate=inflation+α(inflation-target)+β(NAIRU-unemployment)+γ
where α, β, and γ are just three policy parameters chosen by the Fed--α can be thought of as the weight given to inflation, β is the weight given to the output gap, and γ is sometimes interpreted as a "natural rate of interest." Krugman and Mankiw actually estimated something similar but slightly different:
interest rate=β(inflation-unemployment)+γ
However, given that NAIRU and the inflation target don't really change, their version is essentially the same as the one above but constrained to put equal weights on inflation and output. By contrast, Rudebusch and Bernstein put substantially more weight on output than inflation, while Taylor put far more weight on inflation than output.
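To make the comparison concrete, here's a minimal sketch in javascript of the general rule and the constrained Mankiw/Krugman form. This is my own illustration; the parameter values below are placeholders, not any economist's published estimates.

```javascript
// General form: r = pi + alpha*(pi - piTarget) + beta*(uStar - u) + gamma
function taylorRate(pi, u, params) {
  return pi + params.alpha * (pi - params.piTarget) +
    params.beta * (params.uStar - u) + params.gamma;
}

// Constrained Mankiw/Krugman-style form: r = beta*(pi - u) + gamma
function mankiwRate(pi, u, params) {
  return params.beta * (pi - u) + params.gamma;
}

// Placeholder parameters -- NOT published estimates:
var general = {alpha: 0.5, beta: 0.5, piTarget: 2, uStar: 5, gamma: 2};
var constrained = {beta: 1.4, gamma: 8.5};

taylorRate(1.5, 5.5, general);     // predicted fed funds rate, in percent
mankiwRate(1.5, 5.5, constrained); // ditto, from the constrained form
```

Because π* and u* barely move over time, the constrained form differs from the general one only by the constants absorbed into γ plus the restriction that inflation and the unemployment gap get equal weight.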

Right now, the differences in model specification don't actually account for much of the divergence among those five models--the recent split between Krugman/Mankiw and Taylor/Bernstein/Rudebusch is almost entirely due to the fact that the two groups use different measures of inflation: Mankiw used core CPI instead of the Fed's preferred PCE deflator, and since Krugman's point was about Mankiw's version, he also used core CPI. The Fed actually targets the PCE deflator, not core CPI, though the latter is often better at predicting the future path of the PCE deflator. The Fed, for its part, prefers to use the trimmed-mean PCE deflator over core CPI to predict the future path of the overall PCE deflator, though the two are pretty similar right now. So this is all a big lesson that it matters whether you believe recent low overall inflation is a blip or a trend.

Anyway, I've compared the historical performance of each of the Taylor Rules along with my own new estimate using all the data from 1960 to present:
Difference between actual Fed Funds rates and those predicted by various versions of the Taylor Rule.
The horizontal line at zero shows what we would see--zero deviation--if the Taylor Rules perfectly predicted the actual Fed Funds interest rate. What you can see is that all versions have been quite terrible at this purported goal.

But there are some general points which all the estimates agreed on:
  • monetary policy was "too loose" during the 1970s
  • monetary policy tightened way "too much" during the 1982 recession
  • we hit the zero lower bound, at least briefly, in 2009
  • the rate "should" be above zero by now
I used scare quotes on that list because I want to warn readers against reading too much into Taylor Rule predictions: a Taylor Rule merely aims to predict what the Fed would do, but is not itself the result of any kind of optimal control procedure and shouldn't be interpreted as a prediction of the optimal interest rate target.

Of the rules I looked at, mine actually fits the historical data the best:
Standard deviations of the actual fed funds rate from predicted rates for various Taylor Rules over the 1960 to 2015 period.
We shouldn't make too much of these standard deviations--they're different models estimated over different time periods with different data series--but it is a little surprising that Rudebusch's estimates, from the 1988 to 2008 sample, have the highest error rate over the full dataset. In fact, of all the models, Rudebusch's version is the most aggressively dovish, reflecting, essentially, an uncontrolled slide into disinflation after the 1982 tightening cycle.

When I estimate the Taylor Rule over the entire sample, I get the following weights on inflation and output, respectively:
Estimated Taylor Rule over the full sample is
r=π+0.09(π-π*)+0.42(u*-u)+1.99
which places more weight on inflation than output, and satisfies the Taylor Principle.
Yet here's what I get over the 1988 to 2008 period:
Estimated post-Volcker Taylor rule is
r=π-0.02(π-π*)+1.59(u*-u)+2.44
which violates the Taylor principle!
One notable thing in comparing my two estimates is that the Fed appears to put far more weight on the output gap in the later period--and indeed, the volatility of the unemployment gap has been much lower over the 1988 to 2008 period than before. But two very strange features emerged:
  1. the Fed apparently no longer follows the Taylor principle, and
  2. the Fed apparently thinks the natural rate of real interest has risen, even though nominal interest rates have fallen, over the post-Volcker period.
On the first point, despite using the same series over the same time period, my estimates differ from Rudebusch's for one simple reason (as far as I can tell): I constrained the inflation target to 2 percent, while Rudebusch allowed it to float. Maybe this also explains why I found that the implied natural rate of real interest has risen in the post-Volcker period.
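For the mechanics of such an estimate, here is a rough sketch--my own illustration with made-up arrays, not the code or data behind the numbers above--of regressing r − π on (π − 2) and (u* − u) with an intercept, which constrains the inflation target to 2 percent as described:

```javascript
// Sketch: estimate alpha, beta, gamma in
//   r - pi = alpha*(pi - 2) + beta*(uStar - u) + gamma
// by ordinary least squares, treating the NAIRU series as known.

// Solve a 3x3 linear system by Gaussian elimination with partial pivoting.
function solve3x3(A, b) {
  var n = 3;
  for (var k = 0; k < n; k++) {
    var p = k;
    for (var i = k + 1; i < n; i++) {
      if (Math.abs(A[i][k]) > Math.abs(A[p][k])) p = i;
    }
    var rowTmp = A[k]; A[k] = A[p]; A[p] = rowTmp;
    var bTmp = b[k]; b[k] = b[p]; b[p] = bTmp;
    for (var r = k + 1; r < n; r++) {
      var m = A[r][k] / A[k][k];
      for (var c = k; c < n; c++) A[r][c] -= m * A[k][c];
      b[r] -= m * b[k];
    }
  }
  var x = [0, 0, 0];
  for (var i2 = n - 1; i2 >= 0; i2--) {
    var s = b[i2];
    for (var c2 = i2 + 1; c2 < n; c2++) s -= A[i2][c2] * x[c2];
    x[i2] = s / A[i2][i2];
  }
  return x;
}

// Build the normal equations X'X b = X'y and solve for the coefficients.
function estimateTaylorRule(r, pi, u, uStar) {
  var XtX = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
  var Xty = [0, 0, 0];
  for (var t = 0; t < r.length; t++) {
    var x = [pi[t] - 2, uStar - u[t], 1]; // regressors, target fixed at 2
    var y = r[t] - pi[t];
    for (var i = 0; i < 3; i++) {
      Xty[i] += x[i] * y;
      for (var j = 0; j < 3; j++) XtX[i][j] += x[i] * x[j];
    }
  }
  var coef = solve3x3(XtX, Xty);
  return {alpha: coef[0], beta: coef[1], gamma: coef[2]};
}
```

With real data, the arrays would hold the FRED series the calculator uses (the fed funds rate, inflation, and unemployment) plus a NAIRU estimate for u*.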

Perhaps, then, we can conclude that the Fed has not actually been targeting 2 percent inflation--it has been targeting something less than 2 percent for quite some time. Since the Great Recession and the formal adoption of the inflation target, critics have charged that the Fed treats the target as a ceiling rather than a symmetric target. My estimates over the 1988 to 2008 era are also consistent with this behavior.

Thursday, March 26, 2015

Cost-shifting vs cross-subsidization

Medicare pays less. If you have private insurance, that might be good for you.
It has been interesting to watch all the different reactions to Austin Frakt's piece on so-called "cost-shifting." For the uninitiated, cost-shifting refers to the hypothesis that low Medicare payment rates cause providers to shift the cost of treating Medicare patients onto privately insured patients through higher prices.

I think it might help here to distinguish two separate things: the "cost-shifting" hypothesis can be either a values claim or a causal claim. As Frakt repeatedly noted in his NYT post as well as on TIE, there's plenty of evidence that providers do cross-subsidize by charging different prices to different types of customers. As a values statement, you could certainly interpret this as meaning that higher margin patients, such as those with private insurance, "bear" a larger share of the provider's fixed costs than lower margin Medicare patients, because they represent a larger share of the provider's revenue. But that's very different from the causal claim that lower payment rates for Medicare cause providers to increase prices to private insurers.

The causal claim version of the cost-shifting hypothesis is not supported by either economic theory or empirical evidence. Consider the theory: we'll let [$]P_m[$] denote the rates paid to Medicare and [$]P_i[$] the rates paid by private insurers, while [$]m[$] is the number of Medicare patients and [$]y[$] is the number of privately insured patients that the hospital treats. The function [$]c\left(m+y\right)[$] gives the hospital's total costs of treating all their patients. We'll assume that the government controls [$]P_m[$] and sets it below market rates so that [$]P_m \lt P_i[$]. Demand for healthcare depends in part on price, so [$]y[$] is actually a function of [$]P_i[$], and we'll denote this demand function as [$]y=f\left(P_i\right)[$]. We can, of course, write a similar function for Medicare, but since the government sets [$]P_m[$], we'll leave that implicit for our purposes. The hospital chooses [$]P_i[$] and [$]m[$]--the number of Medicare patients it will accept--to maximize profits [$]\pi[$] given by [$$]\max_{P_i,m}\pi=P_mm+P_if\left(P_i\right)-c\left(m+f\left(P_i\right)\right).[$$] The first order conditions are
\begin{align}
P_i:&~f\left(P_i\right)+P_if'\left(P_i\right) =c'\left(m+f\left(P_i\right)\right)f'\left(P_i\right) \\
m:&~P_m=c'\left(m+f\left(P_i\right)\right)
\end{align}
where [$]f'\left(P_i\right)[$] is the first derivative of [$]f\left(P_i\right)[$] while [$]c'\left(m+f\left(P_i\right)\right)[$], the first derivative of [$]c\left(m+f\left(P_i\right)\right)[$], is the marginal cost function. Combining the two FOCs we get
\begin{equation}
f\left(P_i\right)=\left(P_m-P_i\right)f'\left(P_i\right) \label{intuit}
\end{equation}
Totally differentiating, we get
\begin{equation}
\frac{\partial P_i}{\partial P_m}=\frac{f'\left(P_i\right)}{2f'\left(P_i\right)-f''\left(P_i\right)\left(P_m-P_i\right)}
\end{equation}
OK: for a standard demand curve, [$]f'\left(P_i\right)<0[$], and we postulated that [$]P_m-P_i<0[$]. Off the top of my head I don't recall any theorems about the second derivative of demand [$]f''\left(P_i\right)[$], but I will argue it is negative, because we typically think of people wanting infinite--or at least disproportionately large--quantities of something when its price is zero, which doesn't happen if the second derivative is positive (assuming twice continuous differentiability, etc.). Hence, [$]\frac{\partial P_i}{\partial P_m}>0[$], which means this theory predicts that a decrease in Medicare payment rates will actually cause hospitals to decrease prices to private insurers as well. (Did I do all the math right? Someone please check me.)

We can intuit this result by examining equation \eqref{intuit} and recalling that [$]y=f\left(P_i\right)[$]. A decrease in [$]P_m[$] increases the right-hand side of the equation, implying an increase in [$]y[$], and we know the only way to increase [$]y[$] is by decreasing the price [$]P_i[$]. That is, hospitals seek to offset lost revenues from Medicare patients by serving fewer Medicare patients and enticing more privately insured patients with lower prices.

Austin Frakt has already gone through the evidence, which in fact confirms exactly what this little theory predicted: lower Medicare rates decrease, rather than increase, prices to private insurers.

Tuesday, March 24, 2015

What is a "broad-based tax cut?"

In honor of tax season, I want to talk about an old pet peeve of mine: the phrases "broad-based tax cut" and "across-the-board tax cut." Pretty much every presidential election year, some GOP candidate proposes an "across-the-board" tax rate cut--last time it was Mitt Romney--by which they mean a reduction in the marginal tax rates for all tax brackets. The press usually reports this as "something for everyone," as though cuts in lower rates go to lower income groups while cuts in higher rates go to higher income groups. That presentation is only half true: while only the highest income earners benefit from a cut in the top marginal tax rate, cuts in rates for lower brackets are actually shared by both lower income earners and those in the highest income bracket.

In general, those in the highest tax bracket always benefit the most from tax cuts in any tax bracket.

I'll refer you to Chye-Ching Huang for an explainer on why exactly that is the case, but to illustrate the point, I've made an app which lets you see how the money from various kinds of tax cuts (or hikes) is distributed across income groups. The table below the graph gives all of the maximum income thresholds for each Federal Income Tax bracket, with the corresponding tax rate to the right. Using the up/down arrows next to each, you can change any of the income thresholds or tax rates, including the standard deduction. The graph then shows how much more or less each income group would pay in taxes as a result of your policy, compared to the actual 2014 rates. I have started you out with a $50 increase in the standard deduction versus actual 2014 rates:

Tax Rates

  Threshold        Rate
  $9,075           10%
  $36,900          15%
  $89,350          25%
  $186,350         28%
  $405,100         33%
  $406,750         35%
  over $406,750    39.6%

  Standard deduction: $6,200

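To see why the top bracket always shares in a lower-bracket cut, here's a sketch of the computation behind the app (my own illustration, not the app's actual source), using the actual 2014 single-filer brackets and standard deduction:

```javascript
// Marginal tax owed: each bracket's rate applies only to the slice of
// taxable income between the previous threshold and its own.
function taxOwed(income, brackets, standardDeduction) {
  var taxable = Math.max(0, income - standardDeduction);
  var tax = 0;
  var lower = 0;
  for (var i = 0; i < brackets.length; i++) {
    var upper = brackets[i].threshold; // Infinity for the top bracket
    if (taxable > lower) {
      tax += (Math.min(taxable, upper) - lower) * brackets[i].rate;
    }
    lower = upper;
  }
  return tax;
}

// Actual 2014 brackets (single filer); standard deduction was $6,200.
var brackets2014 = [
  {threshold: 9075, rate: 0.10},
  {threshold: 36900, rate: 0.15},
  {threshold: 89350, rate: 0.25},
  {threshold: 186350, rate: 0.28},
  {threshold: 405100, rate: 0.33},
  {threshold: 406750, rate: 0.35},
  {threshold: Infinity, rate: 0.396}
];

// Cut the bottom rate from 10% to 9%: every filer with taxable income at
// or above $9,075 saves the same $90.75, all the way up the distribution.
var cutBottom = brackets2014.map(function (bracket, i) {
  return {threshold: bracket.threshold, rate: i === 0 ? 0.09 : bracket.rate};
});
```

Running the numbers: filers earning $50,000 and $500,000 each save $90.75 from this "bottom-bracket" cut, while a filer earning $10,000 saves only $38, because only $3,800 of that income is taxable after the standard deduction.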

Wednesday, March 11, 2015

A fix for Google Chart API

I've been working with Google Chart recently, and while it makes some pretty cool graphics, there are some things that bug me about it. Here's one: there's no method for adding a series to the graph. Consider the following graph, which grabs and plots data from FRED based on user input:

If you press the button it adds the PCE Deflator to the series that was already there. This is pretty common functionality for a web chart. But Google Chart does it poorly.


There are two ways to plot series in Google Chart: either use addRows or setCell. Here's how it works with addRows: Suppose you've just done your ajax call or whatever, so you now have a javascript array of dates and an array of corresponding values for the series you want to graph. The Google chart only reads data from Google's DataTable class, so you need to reorganize your data from whatever came over via ajax into a DataTable using the DataTable's interface, consisting of the addRows and setCell methods. So for example we can do this:
var temp = [];
for (var i = 0; i < dates.length; i++) {
  temp.push([new Date(dates[i]), parseFloat(values[i])]);
}
var data = new google.visualization.DataTable();
data.addColumn('date', 'X');
data.addColumn('number', 'Series Name');
data.addRows(temp);
Importantly, addRows will only accept a vector of rows of data as an argument. If your ajax data source forces you to loop through the rows of data anyway (this is true of the FRED API, for example), and you want to graph all your series at once, then this is fine. But if you've already graphed one series and want to add a second, then you can't use addRows. So you have to use setCell instead, and consequently have to keep track of row and column numbers manually:
if (data.getNumberOfRows() != values.length) {
  data.addRows(values.length);
}
data.addColumn('number', 'Second Series');
for (var i = 0; i < values.length; i++) {
  data.setCell(i, 1, parseFloat(values[i]));
}
Either way, because Google's chart objects always interpret rows as observations and columns as series, you are forced to iterate through each row of the DataTable to add a series. I don't think this makes any sense--for a graphing application, you want to focus on adding and manipulating whole series--adding columns at a time, not rows. I'd count this oversight as a bug.

To fix this bug, I've made a small extension of the Google API: a method addColumnWithValues, which takes three arguments, data type, series name, and an array of all the values. So to add a series you simply do:
data.addColumnWithValues('number','Series 2',values);
No more typing out for loops or manually tracking all the row and column numbers. To use my extension, you'll need to drop this in your javascript code somewhere inside the callback for the google chart:
google.visualization.DataTable.prototype.addColumnWithValues = function (type, name, valuesarray) {
  var that = this;
  that.addColumn(type, name);
  var rowNum = valuesarray.length;
  var colNum = that.getNumberOfColumns() - 1;
  if (colNum === 0) {
    that.addRows(rowNum);
  }
  switch (type) {
    case 'number':
      for (var i = 0; i < rowNum; i++) {
        that.setCell(i, colNum, parseFloat(valuesarray[i]));
      }
      break;
    case 'date':
      for (var j = 0; j < rowNum; j++) {
        that.setCell(j, colNum, new Date(valuesarray[j]));
      }
      break;
    default:
      for (var k = 0; k < rowNum; k++) {
        that.setCell(k, colNum, valuesarray[k]);
      }
      break;
  }
};
Would it have been so hard for Google to include something like this in the original API? I think they should have.

Note that this is not necessarily a high-performance solution. It's something of a hack, really, which works because javascript is dynamically typed. While this solution works, I suspect Google could get much better performance by building the feature into the lower levels of the DataTable class, avoiding the need to call the setCell method for literally every row of data. In particular, I suspect that if they changed the DataTable class so that it stored data as an array of column arrays instead of an array of row arrays, almost everything would be faster, and most things would require less code.
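To illustrate that design point, here is a bare-bones sketch of a column-oriented table (my own toy class, not Google's code, and nowhere near a full DataTable replacement): when each series is stored as one array, adding a series is a single array append rather than a per-row loop.

```javascript
// Toy column-oriented table: each column holds a whole series.
function ColumnTable() {
  this.columns = []; // each entry is {type, name, values}
}

// Adding a series is a single push -- no per-row setCell calls needed.
ColumnTable.prototype.addColumnWithValues = function (type, name, values) {
  this.columns.push({type: type, name: name, values: values});
};

// Cell access indexes into the column's array.
ColumnTable.prototype.getValue = function (row, col) {
  return this.columns[col].values[row];
};

ColumnTable.prototype.getNumberOfColumns = function () {
  return this.columns.length;
};
```

Usage mirrors the extension above: `t.addColumnWithValues('number', 'CPI', values)` stores the whole series at once, and row-oriented consumers can still read cell by cell through getValue.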

I don't think there's much doubt that Google's charts are prettier, but if you want easy line graphs you are always welcome to use my own homegrown javascript graph API--a graphing engine for the HTML5 <canvas> element--which I've used on this blog previously.

Update: I did a speed test comparing Google's API with my own canvas graph API. Regardless of the format of the raw data (in column arrays vs FRED-like json objects), my API is about 2 orders of magnitude faster than Google Chart.

Hierarchy of American law

Several news stories have led me to believe Americans don't really understand that American law has a fairly strict order of precedence. A state can't overturn a federal law. Neither the president nor even Congress can overturn a ratified treaty.

As a refresher, this is the order of precedence:
  1. US Constitution
  2. Treaties
  3. Federal statute
  4. Federal regulations
  5. Federal caselaw
  6. State Constitutions
  7. other state laws
An authority lower on this list cannot overrule any authority higher on this list unless explicitly authorized to do so by something even higher on the list. Treaties are actually one of the highest forms of American law, and the only legal way out of a treaty obligation is to negotiate and ratify another treaty (well, I suppose a constitutional amendment is another option). Violations of treaties tend to have relatively minor consequences so we do it often, but make no mistake--that is totally illegal.

Also, you'll note I did not rank city, township, and county laws and regulations on the list. That's because, from the perspective of the US constitution, none of those exist. The US constitution creates states (and the District of Columbia), and all other local laws and regulations are expressions of the powers it delegates to the states.