Friday, April 17, 2015

The irrefutability of neo-fisherism

Here's a question I've been thinking about: how will we ever know if the neo-fisherite hypothesis is correct?

The neo-Fisherite hypothesis is one of the alternative macroeconomic models to emerge after the Great Recession. It holds that the conventional Taylor Rule logic is exactly backwards: lowering interest rates actually causes inflation to fall, and raising interest rates causes inflation to rise. John Cochrane has a nice exploration of the idea here. The idea itself isn't new--it has been in the back of macroeconomists' minds since the early days--though I'd credit Noah Smith with giving the hypothesis its name. The difference between neo-Fisherism and orthodoxy is surprisingly subtle: the two rely on the exact same equations and differ only in one particular assumption about the direction of causality. In the neo-Fisherite view, causality runs directly from interest rates to inflation, so that setting the interest rate is the same as setting the inflation rate. In the orthodox view, interest rates have only an indirect effect on inflation through monetary operations, while direct causality runs from inflation to interest rates as central banks react to changes in the economy. The difference is so subtle that many macroeconomists aren't even aware they are making these assumptions--a problem worsened by the unfortunate New Keynesian habit of leaving the money market implicit. (That's a topic for another post, but the point is that two equations in NK models short-circuit the money market, even though the model is equivalent to a money-in-utility model where the central bank uses open market operations to set interest rates.)

So the two models differ only by the assumption of the direction of causality between inflation and policy interest rates. And the policy implications of each are exact opposites. How will we know which is right? Reverse-causality is a classic empirical trap.

Here's the dilemma. Let's compare the New Keynesian and Neo-Fisherite predictions right now.
  • New Keynesian prediction:
    As the economy improves and inflation picks up, the Fed will raise interest rates
  • Neo-Fisherite prediction:
    as the Fed raises interest rates, the economy will improve and inflation will pick up
Despite having opposite recommendations for what the Fed should do, those are both the exact same prediction!
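To see how stark this observational equivalence is, here's a toy simulation (all numbers hypothetical) in which a Fisher relation i = r + π holds in both worlds and only the assumed direction of causality differs. The two causal stories generate statistically identical data:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
r = 1.0  # real interest rate, held fixed in both toy models

# New Keynesian story: inflation shocks come first, the Fed reacts.
pi_nk = 2.0 + np.cumsum(rng.normal(0, 0.1, T))  # exogenous inflation drift
i_nk = r + pi_nk                                # Fed sets i = r + pi (reaction)

# Neo-Fisherite story: the Fed moves rates first, inflation follows.
i_nf = r + 2.0 + np.cumsum(rng.normal(0, 0.1, T))  # exogenous policy drift
pi_nf = i_nf - r                                   # inflation adjusts to i - r

# Both causal structures produce the same observable relationship:
corr_nk = np.corrcoef(i_nk, pi_nk)[0, 1]
corr_nf = np.corrcoef(i_nf, pi_nf)[0, 1]
print(corr_nk, corr_nf)  # both 1.0: the data cannot tell the stories apart
```

In both worlds, rates and inflation move together one-for-one; no amount of contemporaneous data distinguishes which variable moved first.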

Thursday, April 16, 2015

Does welfare for workers subsidize Walmart?

This is a claim that has become a popular talking point on the left: by paying workers so little that many of them qualify for programs like Medicaid, TANF, food stamps, and the EITC, big employers like Walmart are essentially being subsidized by the welfare system.

At one level, I think most proponents of this argument are making a values claim and not a causal one. Compared to the counterfactual where Walmart and others paid enough that none of their employees qualified for public assistance, these companies are the beneficiaries of huge public subsidies. That's not the same as arguing that simply repealing welfare would be enough to cause these employers to pay that much. The general sense is that Walmart and others could pay all their workers enough:
At over $446 billion per year, Walmart is the third highest revenue grossing corporation in the world. Walmart earns over $15 billion per year in pure profit and pays its executives handsomely. In 2011, Walmart CEO Mike Duke – already a millionaire a dozen times over – received an $18.1 million compensation package. The Walton family controlling over 48 percent of the corporation through stock ownership does even better. Together, members of the Walton family are worth in excess of $102 billion – which makes them one of the richest families in the world.
not necessarily that they would in a world without public subsidies:
Meanwhile, Walmart routinely blocks any attempt by workers to organize, using anti-union propaganda and scare tactics, firing employees without just cause, failing to provide any form of decent healthcare coverage or a livable wage.

To make matters worse, these abusive Walmart policies have increased employee reliance on government assistance and the need for a government funded social safety net. In fact, Walmart has become the number one driver behind the growing use of food stamps in the United States with "as many as 80 percent of workers in Wal-Mart stores using food stamps."
See the difference?

To be sure, I don't buy the values argument here. Yes, society has a moral obligation both to ensure that everyone earns more than subsistence and to share prosperity as broadly as possible. Sometimes it may be more economically efficient to do this through policies that affect wages than through government-budgeted redistribution (though I will scrutinize your model if you make this claim), but I see no inherent moral reason to prefer redistribution through the market wage mechanism. Low wages supplemented by welfare is a perfectly moral policy for low-skill workers, and the argument that Walmart is shamefully exploiting the welfare system only plays into the right-wing narrative that welfare is shameful and low-skill workers are icky. The left-wing attempt to vilify Walmart for hiring low-productivity workers only further vilifies low-productivity workers.

But then, not all proponents of the welfare-for-Walmart theory are sticking to the values claim. Some go further and make the causal claim that reducing welfare benefits would cause employers to increase wages. Unlike the values claim before, this causal claim is within economists' domain.

So, if that's the hypothesis, what's the model? I think this is the model most proponents have in mind: in economic terms, welfare increases labor supply by literally keeping more workers alive, and because the labor demand curve is downward sloping, this implies lower market wages. Graphically:
Welfare-for-walmart theorists believe that by keeping more workers from starving to death, welfare programs increase labor supply and thereby decrease the wages firms have to pay.
Health economics research vaguely supports the broad proposition: social factors like income have big effects on health, and we have every reason to believe that welfare assistance reduces the overall mortality rate, so there are, in fact, more workers with a welfare system than without. But we're talking about effects of different orders of magnitude, and the welfare-for-Walmart claim--that the workers would starve to death without public assistance--is not consistent with the research finding that lower incomes decrease life expectancy somewhat. I have no doubt that some Walmart workers go hungry sometimes, but the idea that most of them would starve to death in the absence of public assistance is pretty hysterical. The American poor still rank among the wealthiest compared to the long history of human subsistence.

So, that's the theory anyway. I'm not sure it's even internally consistent--more workers alive would increase labor supply, that's true, but it also increases the demand for goods and services. In the standard DSGE model, for example, an increase in the labor supply will eventually (pretty rapidly, actually) cause the capital stock to increase by just enough to maintain the original wage rate. So without some behind-the-scenes theory for why public welfare decreases the capital/labor ratio (distortionary taxation?) this theory doesn't add up.1

There are variants of the labor supply story that don't assume mass starvation. Dube's argument here is that various welfare schemes each impose their own distortions on the labor supply decision--the EITC, for example, is essentially a tax on non-employment, since you can't qualify without a job, while SNAP (food stamp) benefits, which decrease as your market income rises, are essentially a tax on working more. Certain kinds of tax distortions can drive a wedge into the capital/labor ratio and reduce long-run wages, but that outcome isn't guaranteed, so this theory is still missing something. Moreover, while the exact effects depend on the design of each program, I don't think it's controversial to say that the combined net effect of all these welfare programs is to decrease aggregate labor supply, which would tend to increase, not suppress, market wages.
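The implicit-tax point can be made concrete with a stylized benefit schedule. The 30 percent benefit-reduction rate is roughly SNAP's statutory rate, but the benefit levels and EITC parameters below are round hypotheticals, not actual program rules:

```python
# Toy illustration of how benefit phase-outs act as implicit taxes on work.
# All dollar amounts and the EITC parameters are hypothetical round numbers.

def net_income(earnings: float) -> float:
    """Earnings plus benefits under a stylized SNAP + EITC schedule."""
    snap = max(0.0, 3000.0 - 0.30 * earnings)   # benefit falls 30 cents per $1 earned
    eitc = min(0.40 * earnings, 2000.0)         # phases in at 40 percent, then caps
    return earnings + snap + eitc

def effective_marginal_rate(earnings: float, step: float = 1.0) -> float:
    """Share of an extra dollar of earnings NOT kept after benefit adjustments."""
    gain = net_income(earnings + step) - net_income(earnings)
    return 1.0 - gain / step

# While the EITC is still phasing in, work is subsidized on net:
print(effective_marginal_rate(1000))   # roughly -0.10 (keep $1.10 per $1 earned)
# Once the EITC caps out, only the SNAP phase-out bites:
print(effective_marginal_rate(6000))   # roughly 0.30 (an implicit 30% tax on work)
```

The same worker faces a work subsidy at one income level and an implicit tax at another, which is why the sign of the aggregate labor supply effect depends on program design.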

1. To see why I make this claim, consider the DSGE model I posted previously. The consumption Euler equation without distortionary taxation pins down the steady-state rental rate via 1/β = 1 + r - δ, while the firm's profit-maximization problem gives r = αA(L/K)^(1-α). That means the steady-state labor/capital ratio, L/K, is fixed by technology, time preferences, and capital depreciation. Also from the firm's problem, w = (1-α)A(K/L)^α, which with fixed L/K implies that increasing labor supply does not decrease wages, except in the short-run transition phase. No doubt there are a variety of deviations from the standard model that would allow long-run wages to change, but, as always, if you don't present your model it didn't happen. We can't debate claims that aren't stated.
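As a sanity check on that footnote, here's a quick numerical version of the argument with illustrative parameter values: once capital adjusts so that L/K returns to its steady-state value, the wage is the same no matter how large the labor force is.

```python
# Numerical check of the footnote's claim: the steady-state L/K ratio is pinned
# down by beta, delta, alpha, and A alone, so the long-run wage is invariant to
# the size of the labor force. Parameter values are illustrative placeholders.
alpha, beta, delta, A = 0.33, 0.96, 0.08, 1.0

# Euler equation 1/beta = 1 + r - delta  =>  required rental rate r
r = 1.0 / beta - 1.0 + delta
# Firm FOC r = alpha * A * (L/K)**(1 - alpha)  =>  steady-state L/K
L_over_K = (r / (alpha * A)) ** (1.0 / (1.0 - alpha))

def steady_state_wage(L: float) -> float:
    """Wage once capital has fully adjusted to the labor supply L."""
    K = L / L_over_K                      # capital scales one-for-one with labor
    return (1.0 - alpha) * A * (K / L) ** alpha

print(steady_state_wage(100.0), steady_state_wage(150.0))  # identical wages
```

A 50 percent bigger labor force leaves the wage unchanged, because the capital stock grows proportionally; only the transition path differs.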

Tuesday, April 7, 2015

How many people would lose health insurance if King wins?

You've probably heard of King v Burwell, the lawsuit before the Supreme Court that could take ACA subsidies away from the 34 states on the federal exchange if the plaintiff, King, wins. You're probably also aware that in such an event there is no viable plan for a Congressional fix and, realistically, little chance that states will be able to establish their own exchanges in time to keep the subsidies.

It would be unusual if the Court stayed the ruling--though Justice Alito hinted at it--so if King wins that probably means 9,346,000 people lose subsidies for their health insurance immediately. How many will lose insurance completely?

That is a question with several dimensions. Without the subsidies, many people won't want to pay for coverage at the new, higher price. Then there is an additional effect caused by adverse selection: those who do still want insurance at the higher price will be the higher-cost, sicker patients, meaning the risk pool will become more expensive, driving up premiums for everyone above and beyond the current pre-tax prices. RAND says premiums would rise 47 percent above the current unsubsidized rates due to adverse selection. It doesn't stop there: those who do continue to buy insurance will downgrade to the cheapest plans offering the least coverage.
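The adverse-selection mechanism is easy to simulate. This is a toy unraveling model--not RAND's--with made-up cost and valuation assumptions: each person buys only if the premium is below their private valuation of coverage, and the insurer reprices at the average cost of whoever remains.

```python
import numpy as np

# Toy adverse-selection spiral. Assumptions (all hypothetical): each person
# knows their expected medical cost, values insurance at 1.2x that cost, and
# buys only if the premium is below that valuation. The insurer prices at the
# average cost of current enrollees, and we iterate toward equilibrium.
rng = np.random.default_rng(1)
costs = rng.lognormal(mean=7.5, sigma=1.0, size=100_000)  # skewed cost distribution
valuations = 1.2 * costs

premium = costs.mean()  # start by pricing at the full-population average cost
for _ in range(50):
    enrolled = valuations >= premium
    new_premium = costs[enrolled].mean()
    if abs(new_premium - premium) < 0.01:
        break
    premium = new_premium  # healthier people drop out, so the premium ratchets up

share_insured = enrolled.mean()
print(f"equilibrium premium: {premium:.0f}, share insured: {share_insured:.2f}")
```

The equilibrium premium ends up well above the full-population average cost, and a large share of the initially insured drop out: the price increase is driven by who leaves, not by any change in anyone's underlying health costs.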

RAND estimates that 9.6 million people would lose insurance entirely in this scenario, about 70 percent of the non-group market in the affected states. The Urban Institute's estimate doesn't look much better: 8.2 million lose insurance. And I'd add that much of the loss is likely to be permanent, as those who get burned by a sudden, massive, and totally unexpected (more than half of the public is totally unaware of King v Burwell, and even fewer know it affects their state) rate hike will never return to the exchanges again.

What got me thinking about this was this paper from 2009 that estimated the elasticities of demand for health insurance among the self-employed. The elasticities aren't necessarily generalizable to the ACA--theory tells us that insurance equilibria aren't very robust and depend heavily on market structure. But out of curiosity I did the calculation. The authors estimated an extensive-margin elasticity (the decision whether or not to buy insurance) of -0.3 and an intensive-margin elasticity (the decision of how much to buy) of -0.7, which is pretty moderate as far as published estimates go. Just counting the loss of the subsidies, that works out to 7,205,766 people losing insurance due directly to the loss of the subsidy. If we borrow RAND's estimate of the price effect from adverse selection, that's more than a million additional people losing insurance, for a total of 8,813,166 losing insurance (see update). So RAND's and the Urban Institute's estimates look pretty reasonable to me.
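For what it's worth, here's the shape of that back-of-the-envelope calculation. The premium and subsidy-share figures below are hypothetical round numbers, not data--they're chosen only to show how applying a -0.3 extensive-margin elasticity to the percent price change from losing a subsidy lands in the ballpark of the post's figure.

```python
# Back-of-the-envelope extensive-margin calculation. The premium level and the
# 72% subsidy share are hypothetical placeholders, not actual ACA figures.

def share_dropping_coverage(elasticity: float, old_price: float, new_price: float) -> float:
    """Fraction exiting the market: point elasticity times the % price change."""
    pct_change = (new_price - old_price) / old_price
    return min(1.0, -elasticity * pct_change)

subsidized = 9_346_000          # people who would lose subsidies (from the post)
avg_premium = 4000.0            # hypothetical unsubsidized annual premium
avg_subsidy_share = 0.72        # hypothetical: subsidies cover 72% of the premium

net_price_before = avg_premium * (1 - avg_subsidy_share)
drop_share = share_dropping_coverage(-0.3, net_price_before, avg_premium)
print(f"{drop_share:.0%} exit -> {drop_share * subsidized:,.0f} lose coverage")
```

With these placeholder numbers, roughly 77 percent of the subsidized population exits, or a bit over 7 million people--close to the post's direct-effect figure, which suggests the implied subsidy share in the original calculation was in this neighborhood.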

And what about the intensive margin? The ACA metal tiers didn't exist in the study sample, of course, so we have no idea what will happen. But if we apply the -0.7 estimate literally, that means a King victory would totally wipe out demand for all but the very cheapest plans on the Federal Exchanges.

Update: The 8,813,166 figure has been revised up because I originally forgot to include exchange plans that experience a price hike from adverse selection but were not receiving subsidies. I'm still probably missing people because the ACA also requires insurers to pool risks across all conforming plans, including both those sold on and off the exchanges. Thus most of the off-exchange non-group market would experience a similar 47 percent rate hike.

Friday, April 3, 2015

Bad jobs report, good wage report?

The BLS jobs report comes out on the first Friday of every month, and today's was a disappointment, showing a lower-than-expected increase of 128,000 jobs in March. In addition, the January and February figures were revised down, making for a disappointing first quarter of 2015. The first quarter of 2014 was also staggeringly weak--GDP actually shrank, making it the only decline in GDP on record that didn't turn into a recession.

On the other hand, wages rose. As Danny Vinik notes:
"What’s even more interesting in this report is that wage growth actually beat expectations, rising 0.3 percent in March versus the expected 0.2 percent. In the past year, wages have grown 2.0 percent. That’s only a slight beat and can be attributed to the noisy report. Yet it’s still tough to square a slight increase in wages with the ugly jobs numbers. It’s a complete reversal of the narrative that we’ve seen over the past year where the jobs numbers repeatedly came in above 200,000 yet wage growth was muted. Simply put, it doesn’t make any sense!"
According to Vinik, wage growth contradicts the bad jobs growth.

Theory predicts that wage growth should rise as labor markets tighten and decline as they slacken, and I don't dispute that. However, I'm on record criticizing people who made a big deal out of wage growth in past monthly reports because, in my view, wage growth is too persistent for that kind of analysis--any month-over-month change in wages is far more likely to be noise than signal.

So, here's a few simplistic regressions to help think about the correlations:
This is the unemployment rate, the percent change in employment, and the change in the unemployment rate, regressed on wages. The p-values give a sense of the strength of the correlations between these variables--contemporaneous wage and employment numbers are hardly correlated at all. In fact, wages and employment actually have a slightly negative correlation of -0.04, which I'd take to say that wage growth is about as likely to rise as to fall in the same month as a strong jobs report. If you're concerned about multicollinearity, here's the regression with just wages and employment:
In fact, the relationship looks even weaker, if that's possible. (Note: I ran the regressions with both the absolute and percent change in employment just to be sure--the results are essentially the same.)

The unemployment rate, on the other hand, is actually predictive of the changes in wages.
So we would expect to start seeing more wage growth in low unemployment environments. But my goodness that is a low R-squared. What this says is that low unemployment months have higher wage growth on average, but in any particular month we are still only slightly more likely to see the two go in the same direction as opposite directions.

And just to complete the thought, here's what we get if we look at the change in unemployment instead of the level:
Same thing, just weaker correlation than with the levels.

I'm not saying that strong labor markets don't lead to wage increases. But clearly we won't be able to see this relationship in real-time monthly datapoints. All of that commentary is just tilting at statistical noise, not actual economic indicators, in my opinion.
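To illustrate why monthly datapoints can't reveal the relationship, here's the same kind of regression run on synthetic data (the post's regressions used real BLS series). I build in a true but weak negative effect of unemployment on wage growth and then bury it in month-to-month noise--even knowing the true model, the monthly fit is nearly invisible:

```python
import numpy as np

# Synthetic monthly data with a TRUE unemployment -> wage-growth effect of
# -0.02, swamped by idiosyncratic monthly noise. All parameters are made up.
rng = np.random.default_rng(42)
n = 300                                        # months of data
unemployment = 4 + 3 * rng.random(n)           # unemployment rate between 4% and 7%
wage_growth = 0.25 - 0.02 * unemployment + rng.normal(0, 0.2, n)

# OLS by hand: wage_growth ~ const + unemployment
X = np.column_stack([np.ones(n), unemployment])
beta, *_ = np.linalg.lstsq(X, wage_growth, rcond=None)
resid = wage_growth - X @ beta
r_squared = 1 - resid.var() / wage_growth.var()

# The slope estimate hovers near the true -0.02, but the R-squared is tiny:
print(f"slope: {beta[1]:.3f}, R^2: {r_squared:.3f}")
```

Even with the relationship built in by construction, the regression explains almost none of the month-to-month variation--which is exactly the pattern in the real data above.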

Tuesday, March 31, 2015

Examining the Taylor Rule(s)

A while back I made a calculator that compared estimates of Taylor Rules from several economists. You can find a version of the calculator with live data from FRED at the side bar on the right of this page.

When I first posted the calculator, there was a clear partisan divide, with the liberal economists Paul Krugman and Jared Bernstein calling for negative rates, and the conservative economists Greg Mankiw and John Taylor (inventor of the Taylor Rule concept) calling for interest rate hikes. Glenn Rudebusch--a Fed economist whose politics I know little about, but who published careful empirical estimates of the Taylor Rule--actually predicted the lowest interest rate of the bunch, at -0.43 percent. Recently, however, that dichotomy has been replaced by a different one: Krugman's and Mankiw's rules are now calling for pretty large rate hikes--all the way above 3 percent--while Bernstein's, Rudebusch's, and Taylor's versions call for much more modest hikes to about 1 percent.

The change has to do with what turn out to be fairly important differences in each version of the Taylor Rule. I never really explained where my inspiration for the Taylor Rule calculator came from--one commenter even accused me of making the Taylor Rules up--so here it is: The Mankiw rule comes from a version that Mankiw published in a paper a while back, which he mentioned on his blog here. The Krugman version comes from Krugman's rebuttal to Mankiw's version here, to which Mankiw further responded here. The Jared Bernstein version comes from his blog post here, in which he argues that the Taylor Rule says monetary policy was too tight rather than too loose. Of all the versions I find Bernstein's the most incomprehensible, for reasons that John Taylor explained here--Bernstein neither uses the original version Taylor proposed, nor mentions using any econometrics or published estimates to arrive at different parameters. Glenn Rudebusch's version was published in his 2009 FRBSF economic letter (see attached spreadsheet). John Taylor's version, of course, is the original Taylor Rule.

In general, a Taylor Rule is a function of the form:
interest rate = inflation + α(inflation - target) + β(NAIRU - unemployment) + γ
where α, β, and γ are just three policy parameters chosen by the Fed--α can be thought of as the weight given to inflation, β is the weight given to the output gap, and γ is sometimes interpreted as a "natural rate of interest." Krugman and Mankiw actually estimated something similar but slightly different:
interest rate = β(inflation - unemployment) + γ
However, given that NAIRU and the inflation target don't really change, their version is essentially the same as the one above but constrained to put equal weights on inflation and output. By contrast, Rudebusch and Bernstein put substantially more weight on output than inflation, while Taylor put far more weight on inflation than output.
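Here are both functional forms as code. The parameter values are illustrative placeholders, not any particular economist's estimates:

```python
# The two Taylor Rule functional forms from the post. All default parameter
# values are placeholders for illustration only.

def taylor_rule(inflation, unemployment, target=2.0, nairu=5.0,
                alpha=0.5, beta=0.5, gamma=2.0):
    """General form: i = pi + alpha*(pi - target) + beta*(NAIRU - u) + gamma."""
    return inflation + alpha * (inflation - target) + beta * (nairu - unemployment) + gamma

def constrained_rule(inflation, unemployment, beta=1.5, gamma=0.0):
    """Krugman/Mankiw form: i = beta*(pi - u) + gamma (equal weights on both gaps)."""
    return beta * (inflation - unemployment) + gamma

# With inflation on target (2%) and unemployment at NAIRU (5%), the general
# rule returns its neutral rate, pi + gamma:
print(taylor_rule(2.0, 5.0))        # -> 4.0
print(constrained_rule(2.0, 5.0))   # -> -4.5
```

The constrained form has no separate notion of a target or a NAIRU, which is why it is only equivalent to the general form up to a re-interpretation of γ when those two quantities are constant.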

Right now, the differences in model specification don't actually make much difference among those five models--the recent divergence between Krugman/Mankiw and Taylor/Bernstein/Rudebusch is almost entirely due to the fact that the two groups use different measures of inflation: Mankiw used core CPI instead of the Fed's preferred PCE deflator, and since Krugman's point was about Mankiw's version, he also used core CPI. The Fed actually targets the PCE deflator, not core CPI, though the latter is often better at predicting the future path of the PCE deflator. The Fed, for its part, prefers the trimmed-mean PCE deflator over core CPI for predicting the future path of the overall PCE deflator, though the two are pretty similar right now. So this is all a big lesson that it matters whether you believe recent low overall inflation is a blip or a trend.

Anyway, I've compared the historical performance of each of the Taylor Rules along with my own new estimate using all the data from 1960 to present:
Difference between actual Fed Funds rates and those predicted by various versions of the Taylor Rule.
The horizontal line at zero shows what we would see--zero deviation--if the Taylor Rules perfectly predicted the actual Fed Funds interest rate. What you can see is that all versions have been quite terrible at this purported goal.

But there are some general points which all the estimates agreed on:
  • monetary policy was "too loose" during the 1970s,
  • monetary policy tightened way "too much" during the 1982 recession,
  • we hit the zero-lower-bound, at least briefly, in 2009
  • the rate "should" be above zero by now
I used scare quotes on that list because I want to warn readers against reading too much into Taylor Rule predictions: a Taylor Rule merely aims to predict what the Fed would do, but is not itself the result of any kind of optimal control procedure and shouldn't be interpreted as a prediction of the optimal interest rate target. (Update: Linking to this post explaining the derivation of the original Taylor Rule, John Taylor says that it was "designed as optimal." Even so, my point here was that several of these estimates--Mankiw, Krugman, Rudebusch, in particular--were picked because they empirically matched Fed behavior, and we shouldn't read too much into that.)

Of the rules I looked at, mine actually fits the historical data the best:
Standard deviations of the actual Fed Funds rate from predicted rates for various Taylor Rules over the 1960 to 2015 period.
We shouldn't make too much of these standard deviations--they're different models estimated with different time periods and different data series--but it is a little surprising that Rudebusch's estimates from the 1988 to 2008 sample have the highest error rate over the full dataset. In fact, of all the models, Rudebusch's version is the most aggressively dovish, reflecting, essentially, an uncontrolled slide into disinflation after the 1982 tightening cycle.

When I estimate the Taylor Rule over the entire sample I get the following weights on inflation and output, respectively:
Estimated Taylor Rule over the full sample is
which places more weight on inflation than output, and satisfies the Taylor Principle.
Yet here's what I get over the 1988 to 2008 period:
Estimated post-Volcker Taylor rule is
which violates the Taylor principle!
Comparing my two estimates, it appears that the Fed now puts a lot more weight on the output gap--and indeed, the volatility of the unemployment gap has been much lower over the 1988 to 2008 period than before. But two very strange features emerge:
  1. the Fed apparently no longer follows the Taylor principle, and
  2. the Fed apparently thinks the natural rate of real interest has risen, even though nominal interest rates have fallen, over the post-Volcker period.
On the first point: despite using the same series over the same time period, my estimates differ from Rudebusch's for one simple reason (as far as I can tell): I constrained the inflation target to 2 percent, while Rudebusch allowed it to float. Maybe this also explains why I found that the implied natural rate of real interest has risen in the post-Volcker period.
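On the Taylor principle specifically: in the parameterization above, i = π + α(π - target) + ..., the total response of the nominal rate to inflation is 1 + α, so the principle requires α > 0--equivalently, a total inflation coefficient above one. A trivial check, with placeholder coefficients standing in for actual estimates:

```python
# The Taylor principle: nominal rates must respond more than one-for-one to
# inflation, otherwise real rates FALL as inflation rises and the rule is
# destabilizing. In the form i = pi + alpha*(pi - target) + ..., the total
# inflation coefficient is (1 + alpha), so the principle is just alpha > 0.

def satisfies_taylor_principle(inflation_coef: float) -> bool:
    """inflation_coef is the TOTAL derivative d(i)/d(pi) of the estimated rule."""
    return inflation_coef > 1.0

# Placeholder coefficients (hypothetical, not my actual estimates):
full_sample_coef = 1.4      # rates respond 1.4-for-1 -> principle holds
post_volcker_coef = 0.8     # rates respond 0.8-for-1 -> principle violated
print(satisfies_taylor_principle(full_sample_coef))   # True
print(satisfies_taylor_principle(post_volcker_coef))  # False
```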

Perhaps, then, we can conclude that the Fed has not actually been targeting 2 percent inflation--it has been targeting something less than 2 percent for quite some time. After the Great Recession and the formal adoption of the inflation target, critics charged that the Fed treats the target as a ceiling rather than a symmetric target. My estimates over the 1988 to 2008 era are also consistent with this behavior.