Examining the Taylor Rule(s)

3/31/2015 02:48:00 PM
A while back I made a calculator that compared estimates of Taylor Rules from several economists. You can find a version of the calculator with live data from FRED in the sidebar on the right of this page.

When I first posted the calculator, there was a clear partisan divide, with the liberal economists Paul Krugman and Jared Bernstein calling for negative rates, and the conservative economists Greg Mankiw and John Taylor (inventor of the Taylor Rule concept) calling for interest rate hikes. Glenn Rudebusch--a Fed economist whose politics I know little about but who published careful empirical estimates of the Taylor Rule--actually predicted the lowest interest rate of the bunch, at -0.43 percent. Recently, however, that dichotomy has been replaced by a different one: Krugman's and Mankiw's rules are now calling for pretty large rate hikes--all the way to above 3 percent--while Bernstein's, Rudebusch's, and Taylor's versions call for much more modest hikes to about 1 percent.

The change has to do with what turn out to be fairly important differences in each version of the Taylor Rule. I never really explained where my inspiration for the Taylor Rule calculator came from--one commenter even accused me of making the Taylor Rules up--so here it is: The Mankiw rule comes from a version that Mankiw published in a paper a while back, which he mentioned on his blog here. The Krugman version comes from Krugman's rebuttal to Mankiw's version here, to which Mankiw further responded here. The Jared Bernstein version comes from his blog post here, in which he argues that the Taylor Rule says monetary policy was too tight rather than too loose. Of all the versions I find Bernstein's the most incomprehensible, for reasons that John Taylor explained here--Bernstein neither uses the original version Taylor proposed, nor did he mention using any econometrics or published estimates to arrive at different parameters. Glenn Rudebusch's version was published in his 2009 FRBSF Economic Letter (see attached spreadsheet). John Taylor's version, of course, is the original Taylor Rule.

In general, a Taylor Rule is a function of the form:
interest rate = inflation + α(inflation − target) + β(NAIRU − unemployment) + γ
where α, β, and γ are just three policy parameters chosen by the Fed--α can be thought of as the weight given to inflation, β is the weight given to the output gap, and γ is sometimes interpreted as a "natural rate of interest." Krugman and Mankiw actually estimated something similar but slightly different:
interest rate = β(inflation − unemployment) + γ
However, given that NAIRU and the inflation target don't really change, their version is essentially the same as the one above but constrained to put equal weights on inflation and output. By contrast, Rudebusch and Bernstein put substantially more weight on output than inflation, while Taylor put far more weight on inflation than output.
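Since the equivalence hinges on a fixed target and NAIRU, here's a minimal Python sketch of the two forms (all parameter values are illustrative, not any economist's actual estimates):

```python
# Both rule forms, side by side. Parameter values below are
# illustrative, not any economist's actual estimates.

def taylor_rule(inflation, unemployment, alpha, beta, gamma,
                target=2.0, nairu=5.0):
    """General form: i = pi + alpha*(pi - target) + beta*(nairu - u) + gamma."""
    return (inflation
            + alpha * (inflation - target)
            + beta * (nairu - unemployment)
            + gamma)

def mankiw_krugman_rule(inflation, unemployment, b, c):
    """Constrained form: i = b*(pi - u) + c."""
    return b * (inflation - unemployment) + c

# With the target and NAIRU held fixed, the constrained form is just the
# general form with equal total weights on inflation and unemployment:
# b = 1 + alpha = beta, with the constants folded into
# c = gamma + beta*nairu - alpha*target.
alpha, beta, gamma = 0.4, 1.4, 2.0          # illustrative parameters
b = 1 + alpha                               # equals beta by construction
c = gamma + beta * 5.0 - alpha * 2.0

pi, u = 1.5, 5.5
print(taylor_rule(pi, u, alpha, beta, gamma))    # same number...
print(mankiw_krugman_rule(pi, u, b, c))          # ...as this one
```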

Right now, the differences in model specification don't actually account for much of the gap between those five models--the recent divergence between Krugman/Mankiw and Taylor/Bernstein/Rudebusch is almost entirely due to the fact that the two groups use different measures of inflation: Mankiw used core CPI instead of the Fed's preferred PCE deflator, and since Krugman's point was about Mankiw's version, he also used core CPI. The Fed actually targets the PCE deflator, not core CPI, though the latter is often better at predicting the future path of the PCE deflator. The Fed, for its part, prefers the trimmed-mean PCE deflator over core CPI for predicting the future path of the overall PCE deflator, though the two are pretty similar right now. So this is all a big lesson that it matters whether you believe recent low overall inflation is a blip or a trend.
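A quick sketch of why the inflation measure matters so much: feed two hypothetical inflation readings into the same rule (the numbers below are made up, not actual core CPI or PCE data):

```python
# Same rule, two hypothetical inflation readings. The gap between the
# prescriptions is (1 + alpha) times the gap between the series.

def taylor_rule(inflation, unemployment, alpha=0.5, beta=0.5,
                gamma=2.0, target=2.0, nairu=5.0):
    return (inflation + alpha * (inflation - target)
            + beta * (nairu - unemployment) + gamma)

core_cpi = 1.8       # hypothetical core CPI reading
pce = 1.0            # hypothetical (lower) PCE deflator reading
u = 5.5

print(taylor_rule(core_cpi, u))   # prescription using core CPI
print(taylor_rule(pce, u))        # prescription using the PCE deflator
# With alpha = 0.5, the 0.8-point gap in measured inflation becomes a
# 1.2-point gap in the prescribed rate.
```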

Anyway, I've compared the historical performance of each of the Taylor Rules along with my own new estimate using all the data from 1960 to present:
Difference between actual Fed Funds rates and those predicted by various versions of the Taylor Rule.
The horizontal line at zero shows what we would see--zero deviation--if the Taylor Rules perfectly predicted the actual Fed Funds interest rate. What you can see is that all versions have been quite terrible at this purported goal.

But there are some general points which all the estimates agreed on:
  • monetary policy was "too loose" during the 1970s,
  • monetary policy tightened way "too much" during the 1982 recession,
  • we hit the zero lower bound, at least briefly, in 2009, and
  • the rate "should" be above zero by now.
I used scare quotes on that list because I want to warn readers against reading too much into Taylor Rule predictions: a Taylor Rule merely aims to predict what the Fed would do, but is not itself the result of any kind of optimal control procedure and shouldn't be interpreted as a prediction of the optimal interest rate target. (Update: Linking to this post explaining the derivation of the original Taylor Rule, John Taylor says that it was "designed as optimal." Even so, my point here was that several of these estimates--Mankiw, Krugman, Rudebusch, in particular--were picked because they empirically matched Fed behavior, and we shouldn't read too much into that.)

Of the rules I looked at, mine actually fits the historical data the best:
Standard deviations of the actual fed funds rate from predicted rates for various Taylor Rules over the 1960 to 2015 period.
We shouldn't make too much of these standard deviations--they're different models estimated with different time periods and different data series--but it is a little surprising that Rudebusch's estimates from the 1988 to 2008 sample have the highest error rate over the full dataset. In fact, of all the models, Rudebusch's version is the most aggressively dovish, reflecting, essentially, an uncontrolled slide into disinflation after the 1982 tightening cycle.
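For concreteness, a deviation measure like this can be computed as a root-mean-square deviation of actual from predicted rates; the two short series here are made up, not the FRED data:

```python
# The fit metric: root-mean-square deviation of the actual fed funds
# rate from a rule's prediction. Both series here are made up.
import math

actual    = [5.0, 4.5, 3.0, 6.0, 7.5]
predicted = [4.8, 4.0, 3.5, 6.5, 7.0]

deviations = [a - p for a, p in zip(actual, predicted)]
rms = math.sqrt(sum(d * d for d in deviations) / len(deviations))
print(rms)
```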

When I estimate the Taylor Rule over the entire sample I get the following weights on inflation and output, respectively:
Estimated Taylor Rule over the full sample.
This estimate places more weight on inflation than on output, and satisfies the Taylor Principle.
Yet here's what I get over the 1988 to 2008 period:
Estimated post-Volcker Taylor Rule.
This estimate violates the Taylor Principle!
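The Taylor Principle check itself is simple: in the reduced form i = coef·π + output terms + constant, the principle holds when the total inflation coefficient exceeds 1. A sketch, with placeholder coefficients rather than my actual point estimates:

```python
# Checking the Taylor Principle: the nominal rate must rise more than
# one-for-one with inflation, i.e. the TOTAL inflation coefficient in
# i = coef*pi + output_terms + constant must exceed 1. The two
# coefficients below are placeholders, not actual point estimates.

def satisfies_taylor_principle(inflation_coef):
    return inflation_coef > 1.0

print(satisfies_taylor_principle(1.5))   # full-sample-style estimate: True
print(satisfies_taylor_principle(0.8))   # post-Volcker-style estimate: False
```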
One notable thing in comparing the two estimates is that the Fed appears to put a lot more weight on the output gap in the post-Volcker period--and indeed, the volatility of the unemployment gap has been much lower over the 1988 to 2008 period than before. But two very strange features emerged:
  1. the Fed apparently no longer follows the Taylor principle, and
  2. the Fed apparently thinks the natural rate of real interest has risen, even though nominal interest rates have fallen, over the post-Volcker period.
On the first point, despite using the same series over the same time period, my estimates differ from Rudebusch's for one simple reason (as far as I can tell): I constrained the inflation target to 2 percent, while Rudebusch allowed it to float. Maybe this also explains why I found that the implied natural rate of real interest has risen in the post-Volcker period.
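The two estimation strategies can be sketched with ordinary least squares on synthetic data (everything below is made up for illustration; `numpy.linalg.lstsq` does the fitting):

```python
# Constrained vs. floating inflation target, on synthetic data.
# The "true" parameters and all series here are made up; the real
# exercise uses FRED data over 1988-2008.
import numpy as np

rng = np.random.default_rng(0)
n = 120
inflation = rng.uniform(0.5, 5.0, n)
unemp_gap = rng.uniform(-2.0, 2.0, n)     # NAIRU minus unemployment
fedfunds = (inflation + 0.5 * (inflation - 2.0) + 0.5 * unemp_gap
            + 2.0 + rng.normal(0, 0.25, n))

# Constrained (target fixed at 2%): regress (i - pi) on (pi - 2) and the gap.
X_c = np.column_stack([inflation - 2.0, unemp_gap, np.ones(n)])
alpha_c, beta_c, gamma_c = np.linalg.lstsq(X_c, fedfunds - inflation,
                                           rcond=None)[0]

# Floating target: regress on pi directly. The intercept now absorbs
# (gamma - alpha*target), so the target is only identified if gamma is
# pinned down some other way -- which is why the two approaches can
# imply different natural rates of interest.
X_u = np.column_stack([inflation, unemp_gap, np.ones(n)])
alpha_u, beta_u, const_u = np.linalg.lstsq(X_u, fedfunds - inflation,
                                           rcond=None)[0]

print(alpha_c, beta_c, gamma_c)   # close to the true 0.5, 0.5, 2.0
print(alpha_u, beta_u, const_u)   # intercept close to gamma - alpha*target
```

The point of the second regression is that with a floating target, γ and the target only enter through the intercept, so the implied natural rate depends on how you disentangle them.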

Perhaps, then, we can conclude that the Fed has not actually been targeting 2 percent inflation; it has been targeting something less than 2 percent for quite some time. After the Great Recession and the formal adoption of the inflation target, critics have charged that the Fed treats the target as a ceiling rather than a symmetric target. My estimates over the 1988 to 2008 era are also consistent with this behavior.
Harald Korneliussen 4/01/2015 03:21:00 AM
"Of the rules I looked at, mine actually fits the historical data the best"

Yes, but how did you arrive at your parameters? If you derived them from the historical data in the period in question, it's not exactly surprising that it fits them best.