# Separating Hyperplanes

A blog between the spheres of Economic Theory and Policy Analysis

## Monday, September 15, 2014

### Monday Morning Music

## Sunday, September 14, 2014

### Why we have no idea if ACA premiums are rising

The Kaiser study compared the second-lowest-cost silver plans on the exchanges for 16 cities from 2014 and 2015, finding an average decrease of 0.8 percent from 2014 to 2015. I do have doubts about the dataset itself--it's possible that states that offer more comprehensive data (the criteria for inclusion in this study) are not representative of most states. Time will tell.

But Avik Roy isn't impressed: he cites this McKinsey study finding an 8 percent *increase* on average between 2014 and 2015. Their methods are similar to Kaiser's, with (as far as I can tell) the only major difference^{[update]} being that McKinsey looked at the cheapest silver plan while Kaiser had looked at the second-cheapest plan (same caveats about incomplete data apply to both). That tells me that estimators of this type aren't very stable, and thus not very useful.

These estimators suffer from compositional fallacies. For example, on Twitter Austin Frakt makes an excellent observation:

Empl-sponsored health ins premium growth moderating but cost-sharing rising. Controlling for plan generosity, what's the real growth rate?

— Austin Frakt (@afrakt) September 10, 2014

That is, neither estimator is literally comparing the same plans year to year. Moreover, they are looking at only one plan in each area in each year, and therefore missing what's happening to most of the premiums that people are buying. To see why this isn't robust, just consider the case where the cheapest plan in 2014 simply isn't offered in 2015 because no one bought it, but no other premiums changed at all. This would show up in McKinsey's study as a large premium hike, even though no one is paying a higher premium! The Kaiser-McKinsey estimator has extremely poor theoretical validity, and is not the right way to look at how premiums are changing.

When I think about premium hikes, here's the "ideal" estimator I have in mind: take the premium for each specific plan in 2015, subtract the 2014 premium for the exact same plan, and then take the average of the differences weighted by the number of 2014 buyers for each plan. This truly tells us the average change in premiums—not the mostly meaningless "benchmark" change—while eliminating all confounding compositional effects. It's also mostly impossible, because the plans offered in 2015 are not the same as the ones in 2014. They never are. Still, I think this is the standard against which various metrics should be compared—a statistic is only valid insofar as it approximates this. Kaiser and McKinsey haven't got it.
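The "ideal" estimator is easy to state precisely even though it's infeasible in practice. Here is a minimal sketch; all plan names, premiums, and enrollment counts are hypothetical, and note how discontinued plans must simply be dropped:

```python
# Buyer-weighted average premium change over plans offered in both years.
# All plan names, premiums, and enrollment counts are made up for illustration.

plans_2014 = {"A": (200.0, 5000),   # plan -> (monthly premium, 2014 buyers)
              "B": (250.0, 3000),
              "C": (180.0, 1000)}
plans_2015 = {"A": 210.0, "B": 245.0}  # plan C discontinued in 2015

def weighted_premium_change(p14, p15):
    """Average (2015 - 2014) premium difference, weighted by 2014 enrollment,
    over plans offered in both years. Plans that disappear (like C) drop out,
    which is exactly why this ideal estimator is mostly impossible in practice."""
    common = [k for k in p14 if k in p15]
    total_buyers = sum(p14[k][1] for k in common)
    return sum((p15[k] - p14[k][0]) * p14[k][1] for k in common) / total_buyers

print(weighted_premium_change(plans_2014, plans_2015))
```

In this toy example the benchmark-style estimator would report a jump (the cheapest 2014 plan vanished), while the weighted estimator reports the modest average change actually experienced by continuing enrollees.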

*[update]* Avik Roy also pointed out this difference in methods between the two studies:

@hyperplanes Kaiser study only looked at cities. That’s useful, but not nationally representative. PwC/McKinsey better for that.

— Avik Roy (@Avik) September 14, 2014

## Wednesday, September 10, 2014

### Why might increased insurance coverage decrease Emergency Department use?

According to the study McIntyre cites, some individuals who would have chosen ED visits when they have no insurance would instead opt for an alternative with insurance (it helps to think of this as a choice between the ED and the doctor's office, but for our purposes "doctor's office" is just a stand-in for anything other than the ED, including no care at all). Let [$]B[$] represent their budget set without insurance, let [$]B'[$] be the budget set with insurance, and denote the choice of ED over doctor's office as [$]e[$] while [$]d[$] denotes the choice of doctor's office instead of ED.^{[1]}

The relation [$]C\left(\cdot \right)[$] tells us which element of a given budget set the individual will choose (note that the choice can be a set of alternatives that the individual is indifferent between). Since the study showed that gaining coverage--moving from budget set [$]B[$] to budget set [$]B'[$]--caused some individuals to switch from [$]e[$] to [$]d[$], this implies that we have [$$]e\in B~and~e\in C\left(B\right)[$$] as well as [$$]d\in B'~and~d\in C\left(B'\right).[$$]

Now, the fact that the study observed a *decrease* in ED visits means we have [$]d\in C\left(B'\right)[$] but [$]e\notin C\left(B'\right)[$] for at least some individuals (if not, then the study estimate isn't causal). What accounts for this difference in choice between the two budget sets?

Suppose that both the ED and the doctor's visit are affordable without insurance, so that [$]e,d\in B[$], and that they are both still affordable with insurance, [$]e,d\in B'.[$] This would mean we have a contradiction--it says that the consumer simultaneously considers the doctor's office a better option than the ED, while also considering the ED a better option than the doctor's office! If you are familiar with the theory, I'm invoking the Weak Axiom of Revealed Preference (WARP) here. The only way to avoid a contradiction is if either [$]d\notin B[$] or [$]e\notin B'[$]. Assuming WARP holds, whenever anyone says expanding coverage reduces emergency room usage, they are really making one (or both) of two possible empirical statements: either (1) an uninsured person can afford ED visits but not the doctor's office, or (2) an insured person can afford the doctor's office but not an ED visit. It would be helpful to know which.
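The WARP argument can be checked mechanically. A minimal sketch, using the hypothetical budget sets from the text rather than any real data:

```python
# A mechanical check of the Weak Axiom of Revealed Preference (WARP).
# Budget sets are sets of alternatives; choices are subsets of those sets.

def violates_warp(B1, C1, B2, C2):
    """True if choices C1 from budget set B1 and C2 from B2 contradict WARP:
    some x is chosen while y is available in one set, yet y is chosen while
    x is available-but-rejected in the other."""
    for B, C, B_other, C_other in ((B1, C1, B2, C2), (B2, C2, B1, C1)):
        for x in C:
            for y in B:
                if x in B_other and y in C_other and x not in C_other:
                    return True
    return False

# Both e (ED) and d (doctor) affordable with and without insurance -> contradiction:
print(violates_warp({"e", "d"}, {"e"}, {"e", "d"}, {"d"}))  # True
# Doctor's office unaffordable without insurance (d not in B) -> no contradiction:
print(violates_warp({"e"}, {"e"}, {"e", "d"}, {"d"}))  # False
```

The first call reproduces the contradiction in the text; dropping [$]d[$] from the uninsured budget set (case 1) resolves it, exactly as the argument requires.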

Most studies--including my own dataset comparing ED and clinic visits--suggest that the ED is somewhat more expensive than the alternatives, so it would be a bit weird if [$]d\notin B.[$] Weird, but not totally impossible if you consider, for example, that many doctor's offices will turn away patients who do not have insurance, regardless of whether they can pay in cash. If this is what's driving the result in this study, then we've uncovered a useful policy insight: we can reduce health costs by requiring doctor's offices to accept cash in lieu of insurance. Another possibility is search costs: maybe doctor's offices are actually more expensive than EDs once you include the costs of finding a doctor who will accept you--a process that takes time and effort on the part of patients. It may simply be that EDs are easy to locate, while doctors are not. That too would be useful to know.

On the other hand, maybe [$]e\notin B'.[$] It may sound weird at first that gaining insurance can mean you can no longer afford to go to the emergency room, but this is actually realistic for some people. Even though insurers pay a portion of the ED bill, consumers are in turn paying the premiums that pay for that portion of the bill, so having insurance does not expand one's budget set overall. What insurance does do, however, is substitute some elements inside the budget set for some elements outside of it, which would be the case if, for example, insurers require a higher co-pay for ED visits than for the alternatives. On Twitter, Seth Trueger offered this bit of evidence:

@hyperplanes most do, yes: usually higher ED copays vs others pic.twitter.com/OK9yVTJiLI

— Seth Trueger (@MDaware) September 10, 2014

Thus, having a clear theoretical model helps elucidate empirical study of the impact of insurance on ED use. Yes, gaining insurance can reduce ED use, and ascertaining exactly why can reveal important policy implications.

[1] Note, [$]e,d[$] are actually vectors in which just one of the elements represents the choice of healthcare, and the rest of the elements represent the choices of all other goods and services. We are really talking about choices between consumption bundles, which is why it is ultimately possible that getting insurance--whose premium is deducted out of the budget--can make bundles associated with ED use less affordable than without insurance.

## Tuesday, September 9, 2014

### How much healthcare fraud is there?

While the most infamous examples of health insurance fraud involve Medicare, private insurers like Blue Cross Blue Shield are also hit with tens of thousands of suspected fraud events every year. However, the evidence on the prevalence of fraud is stunningly weak given the apparently enormous scale of the problem. The most widely used estimates hail from the National Health Care Anti-Fraud Association, which suggests healthcare fraud accounts for between $68 billion and $280 billion a year--a range that, as Donald Simborg noted in JAMIA, is astonishing for both its enormity and its huge margin of error. Considering that the Coalition Against Insurance Fraud estimates total insurance fraud at $80 billion for all industries, this suggests almost all insurance fraud is health insurance fraud. To put that in perspective, the lower-bound estimate is higher than all healthcare research and development spending combined, which clocks in at less than $50 billion according to CMS.

No doubt the uncertainty about these estimates owes largely to the fact that fraud, by its nature, is hard to catch. But the inconsistency of the estimates may also stem from differing definitions of fraud. Although we typically think of medical fraud as involving identity theft or intentionally billing for drugs and treatments that were never ordered, medical billing errors may represent a much more common experience, as related by this Wall Street Journal coverage of the $10,000 mystery procedure that the insurer paid for even though it never happened. Billing errors, which may or may not be intentional, aren't always counted as fraud, and seldom lead to criminal prosecutions.

How common are billing errors? As with fraud in general, estimates are sketchy. The most widely cited figure in the media—cited by Consumer Reports—comes from the Medical Billing Advocates of America, who claim that an outlandish 80 percent of all medical bills contain errors. I've contacted MBAA about their methods and will update if they send a response. This Wall Street Journal piece cites Stephen Parente's much lower estimates of 30 to 40 percent, though I couldn't find this estimate in his extensive published papers (again, will update if he responds to my inquiry). The cleanest published research I found comes from JAMA, which used Medicare data from the 1980s to put the rate of billing errors much lower still, falling from 20 percent to 14 percent over time, with overbilling exceeding underbilling to the tune of roughly 2 percent of total spending.

This overbilling may not be accidental. A different study compared providers' medical databases to their billing databases to uncover discrepancies between the diagnoses made and the treatments billed for various conditions. It found low rates of patients who were prescribed but not billed for treatments—1.1 percent for heart failure and 12 percent for hypertension—but very high rates of patients being billed for things with no corresponding record in the medical database—29.6 percent for heart failure and 26.8 percent for hypertension. This method does not necessarily allow us to estimate the overall rate of billing errors, and there may be legitimate reasons for the discrepancy, but the asymmetry between the two directions of discrepancy suggests that many billing errors may be deliberate attempts to gouge insurers, which is a form of fraud often not classified as such.

Therefore, despite the poor quality of the data, we do have reason to suspect that healthcare fraud constitutes a significant cost for the typical health insurance policy holder.

## Thursday, September 4, 2014

### Why negative interest rates might not work

Miles Kimball wants to replace paper currency with an electronic currency^{1} so that we can eliminate liquidity traps by implementing negative interest rates. Here's why I'm uncertain whether negative interest rate policy is expansionary.

To start, let's go through the standard logic. The Fisher equation relates nominal interest, real interest, and inflation: [$$]r_t=\tilde{r}_t+\pi_t[$$] where [$]r_t[$] is the nominal interest rate controlled by the Fed, [$]\tilde{r}_t[$] is the real interest rate, and [$]\pi_t[$] is the inflation rate, in period [$]t[$]. The consumption/savings tradeoff depends on the returns to saving rather than consuming, so that consumption, and therefore output and employment, is decreasing in [$]\tilde{r}_t.[$] If prices are sticky, then [$]\pi_t[$] will respond to policy relatively slowly, so that in the short run reducing the nominal interest rate [$]r_t[$] leads to a reduction in [$]\tilde{r}_t[$] and thus an increase in consumption and reduction in saving. This is the liquidity effect, which is expansionary.

What this analysis has left out, however, are the feedback effects of inflation on output. This happens through two additional mechanisms: the New Keynesian Phillips Curve and the Euler consumption equation. We won't worry about exact analytical specifications for each, just let [$]f_t[$] describe how the household choice of [$]C_t[$] depends on the real interest rate and inflation (consumption Euler equation), and [$]g_t[$] describes the New Keynesian Phillips curve relationship between current and expected inflation, so that we have:

^{[update]}
\begin{align}
r_t&=\tilde{r}_t+\pi_t\\
C_t&=f_t\left(\tilde{r}_t,\pi_{t+1}\right)\\
\pi_t&=g_t\left(\pi_{t+1}\right)
\end{align}

where [$]g_t[$] is an increasing function and [$]f_t[$] is decreasing in [$]\tilde{r}_t[$] but increasing in [$]\pi_{t+1}[$].^{2}

It is now apparent that the liquidity analysis in the preceding paragraph was incomplete--inflation is actually a free variable! Conventional wisdom says that lowering the nominal interest rate causes inflation to increase, because we typically think of lowering the interest rate as being achieved by increasing the supply of money. But strictly from the mathematics, this is ambiguous--there are two paths to lower nominal interest rates: a monetary expansion that lowers real interest rates via sticky prices, or a monetary contraction that lowers expected inflation. Suppose we have a monetary expansion. Then expected inflation [$]\pi_{t+1}[$] rises via the money market (as I showed here), which induces more consumption via [$]f_t[$] and raises current inflation via the Phillips curve [$]g_t[$], which further reinforces the liquidity effect by lowering the real interest rate [$]\tilde{r}_t[$] for the given nominal rate target via the Fisher equation.

However, when we instead assume that lower nominal rates are achieved via monetary contraction, all of these reinforcing effects reverse signs: expected inflation falls, which reduces current inflation via [$]g_t[$], reduces current consumption via [$]f_t[$], and puts reverse pressure on the real interest rate in the Fisher equation since, holding [$]r_t[$] at the target, a decrease in [$]\pi_t[$] implies higher, not lower, real interest [$]\tilde{r}_t[$]. So despite the simplistic logic of the first paragraph, lowering the Fed funds rate can potentially be contractionary if it is associated with a decrease in monetary aggregates.
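The sign reversal can be seen in a toy linearization of the three equations above. The coefficients below are made up purely for illustration; this is a sketch of the intuition, not a calibrated or solved model:

```python
# Toy linearized versions of the three-equation system: Fisher, Euler, Phillips.
# beta, a, b, c0 are illustrative coefficients, not estimates from any model.
beta, a, b, c0 = 0.9, 1.5, 0.5, 1.0

def consumption(r_target, pi_expected):
    pi_now = beta * pi_expected               # Phillips curve: pi_t = g(pi_{t+1})
    r_real = r_target - pi_now                # Fisher equation: r~_t = r_t - pi_t
    return c0 - a * r_real + b * pi_expected  # Euler: C_t falls in r~_t, rises in pi_{t+1}

baseline = consumption(r_target=0.02, pi_expected=0.02)
# The same rate cut to 1%, under the two paths for expected inflation:
expansion   = consumption(r_target=0.01, pi_expected=0.03)  # money supply up
contraction = consumption(r_target=0.01, pi_expected=0.00)  # money supply down
print(expansion > baseline, contraction < baseline)
```

The identical nominal rate cut raises consumption when paired with rising expected inflation and lowers it when paired with falling expected inflation, which is the ambiguity the post is pointing at.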

For positive rates, the empirical evidence overwhelmingly suggests that lowering the Fed funds rate increases the money supply. This is not obvious from theory alone--lowering the nominal interest rate does reduce "money printing" in some respects, for example by reducing interest on reserves. But the net effect is pretty unambiguous, because the Fed typically engages in ample Open Market Operations that increase base money by far more than the reduction in interest payments. In this respect, what passes for conventional wisdom is quite wrong in saying that the Fed balance sheet doesn't matter--this neutrality is an illusion that arises from taking too many modelling shortcuts (like most New Keynesian papers, Christiano, Eichenbaum, and Rebelo (2011), for example, do not explicitly model the money market that drives their key assumptions; see footnote).

But under Miles Kimball's proposal, the Fed would lower interest rates to below zero by taxing away balances of e-currency. This is a reduction in monetary base, just like the case of IOR, and by itself would be contractionary, not expansionary. The expansionary effects of Kimball's policy depend on the assumption that households will increase consumption in response to the taxing of their cash savings, rather than letting their savings depreciate. That needn't be the case--it depends on the relative magnitudes of income and substitution effects for real money balances. The substitution effect is what Kimball has in mind--raising the price of real money balances will induce substitution out of money and into consumption. But there's also an income effect, whereby the loss of wealth induces less consumption and more savings. Thus, negative interest rate policy can be contractionary even though positive interest rate policy is expansionary.
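The income-versus-substitution ambiguity can be illustrated with a simple static money-in-utility example. Assume (purely for illustration, this is not Kimball's model) CES preferences over consumption and real money balances, so the tax on e-currency acts like a higher price of holding money:

```python
# Static CES money-in-utility sketch: u = (c^rho + gamma*m^rho)^(1/rho),
# with elasticity of substitution sigma = 1/(1-rho). Budget: c + p*m = W,
# where p = 1 + tau and tau is the tax rate on money balances.
# CES demand implies c = W / (1 + gamma^sigma * p^(1-sigma)).
# W and gamma are illustrative numbers, not calibrated values.
W, gamma = 100.0, 0.2

def consumption(sigma, tau):
    p = 1.0 + tau
    return W / (1.0 + gamma**sigma * p**(1.0 - sigma))

for sigma in (0.5, 1.0, 2.0):
    base, taxed = consumption(sigma, 0.0), consumption(sigma, 0.02)
    print(f"sigma={sigma}: consumption {base:.2f} -> {taxed:.2f} after the money tax")
```

When the elasticity of substitution exceeds one, the substitution effect dominates and taxing money raises consumption (Kimball's expansionary case); below one, the income effect dominates and consumption falls; log utility (sigma = 1) is the knife-edge where the two effects exactly cancel.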

Indeed, what Kimball has proposed amounts to a reverse Bernanke Helicopter--imagine a giant vacuum flying around the country sucking money out of people's pockets. Why would we assume that this would be inflationary?

[1] To be clear, this is something we should do regardless of whether we also enact Kimball's negative interest rate policy. Any business with an internet connection already has everything it needs to conduct payments electronically. Paper is costly and inefficient and should be killed.

[2] See Christiano, Eichenbaum, and Rebelo (2011). The consumption Euler is equation (11), while the New Keynesian Phillips Curve is equation (9). While Christiano et al do not explicitly model the money market, the NK model is equivalent to a money-in-utility model with nominal rigidities (as in this post but with monopolistic competition and Calvo pricing), where interest rate policy is enacted by targeting the money supply. This equivalence is invoked when Christiano et al assume the direction of causality from inflation to policy rate in equation (6).

*[update] What follows here is meant to provide intuition, not a formal proof. An earlier version omitted subscripts from [$]g_t, f_t[$], which was a bit misleading--these functions do have other time-dependent arguments that have been suppressed here for simplicity. For a proof that reversing the causal assumption embedded in the NK Taylor rule implies that lowering rates can be contractionary, see Schmitt-Grohé and Uribe (2012), which has also been covered by David Andolfatto here.*