More on Kindig and Cheng
Matthew Martin 4/07/2013 10:28:00 AM
- Yes, the authors did adjust for age, using the pretty simple method described by the CDC here.
- The authors do show that in the pooled sample, mortality rates decreased substantially less overall for women than men--the result is not due to the fallacy of composition.
- Yes, they control for changes in population (though they assume population is exogenous, which is questionable).
- Yes, it does make sense to speak in terms of "mortality rates"--even though it is true that 100% die eventually, it will always be a relatively low fraction who die within any particular year. Since the authors adjust for age, this is a very meaningful measure of public health, though there are plenty of other measures that would also be worth studying.
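For readers unfamiliar with the direct standardization method the CDC describes, the idea is just a weighted average of age-specific death rates, with weights taken from a fixed standard population, so that counties with different age structures become comparable. A minimal sketch (the rates and weights below are made up for illustration, not taken from the paper):

```python
# Direct age standardization: weight each age group's death rate by that
# group's share of a fixed "standard" population. Two counties then differ
# only in their rates, not in their age composition.
def age_adjusted_rate(age_specific_rates, standard_pop_shares):
    assert abs(sum(standard_pop_shares) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(age_specific_rates, standard_pop_shares))

# Hypothetical deaths per 100,000 in three age bands, plus standard weights
rates = [50.0, 400.0, 3000.0]   # young, middle-aged, old
weights = [0.40, 0.45, 0.15]    # standard population shares
adjusted = age_adjusted_rate(rates, weights)
```

The point of the weighting is that a county full of retirees no longer looks "unhealthy" merely because of its age mix.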
Turns out I have a few reservations about the study's methods. Here's a quick list:
- They used random effects instead of fixed effects. Do they have a random sample of US counties? No, they used all of them. Hence, the assumptions required for random effects are pretty dubious here. But...I will give them the benefit of the doubt, because biostatisticians have slightly different conventions on this than economists.
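For anyone unclear on the distinction: the fixed-effects (within) estimator demeans the data county by county, which sweeps out any time-invariant county-level confounder; random effects does not, and is only consistent if the county effects are uncorrelated with the regressors. A toy sketch of the within estimator, with one regressor and made-up data:

```python
import numpy as np

# Within (fixed-effects) estimator: demean y and x inside each county,
# then run OLS on the demeaned data. County-specific intercepts drop out.
def within_estimator(y, x, groups):
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    yd, xd = y.copy(), x.copy()
    for g in np.unique(groups):
        m = groups == g
        yd[m] -= y[m].mean()
        xd[m] -= x[m].mean()
    return (xd @ yd) / (xd @ xd)  # OLS slope on demeaned data

# Toy panel: two counties with very different intercepts, common slope = 2
groups = np.array([0, 0, 0, 1, 1, 1])
x = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
y = 2.0 * x + np.where(groups == 0, 10.0, -5.0)
slope = within_estimator(y, x, groups)  # → 2.0, despite the intercept gap
```

Pooled OLS on the raw data here would be pulled around by the county intercepts; the within transformation recovers the slope exactly.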
- There is no clean identification strategy, and as a result there is likely a lot of endogeneity biasing the coefficients--especially bias from the problem of "endogenous mobility." For example, I suspect the main reason they don't find any correlations between mortality and various measures of healthcare provision is because healthcare providers tend to move to where the sick people are, and the sick people move to where the doctors are, with the result that the model greatly understates the true causal effect of opening clinics and hiring doctors on the counties' mortality rates. Similarly, I suspect that endogenous mobility could be causing upward bias in the regional coefficients, since it could be that the healthiest individuals are the ones most likely to leave poorer regions.
- They used standardized coefficients, which was a really, really bad idea. The main feature that makes this study publishable is the fact that it shows such a stark contrast between male and female mortality rates across the country. Yet, by using standardized coefficients, the authors have prohibited us from directly comparing these two groups.
The reason someone would want to use standardized coefficients is to make it easier to compare the coefficients of variables that are measured very differently--for example, income is measured in dollars, while obesity is measured as a percentage of people. Standardizing supposedly helps you compare the effects of one-standard-deviation changes across all types of variables. But, in the case of this paper, the issue of whether standardizing actually helps with comparing coefficients within each model is a moot point--the paper has no clean identification strategy, so we can't claim that any of the coefficients are causal. Hence, we can't even say that one standardized coefficient is larger than another, because either has a good chance of being biased in either direction. Let me be clear: this study's main virtue is the comparison between the male and female models--it is really quite a shame the authors didn't embrace that, dump the standardized coefficients, and offer us a chance to do an Oaxaca decomposition.
- The use of standardized coefficients is probably part of the reason why we don't see any significant effects of obesity on mortality rates. Standardizing coefficients can be pretty disingenuous because in reality one standard deviation of, for example, income is still not at all comparable to one standard deviation in obesity. It isn't terribly hard to increase a county's average income by one standard deviation--that's basically just a matter of a new factory being built in the county. But it is very, very difficult to decrease obesity rates in a county by one standard deviation. So in reality by using standardized coefficients, the authors have filtered out most of the variation in obesity rates that they could have used for econometric identification, but not for income--hence it is unsurprising that they find the former is not statistically significant while the latter is highly significant.
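The mechanics behind this complaint are simple: a standardized coefficient is just the raw slope rescaled by sd(x)/sd(y), so two variables with identical raw effects can get wildly different standardized coefficients purely because one varies more in the sample. A small simulation (made-up data, both true slopes equal to 1):

```python
import numpy as np

# Standardized coefficient = raw slope * sd(x) / sd(y). If "income" varies
# 10x more than "obesity" in the sample, its standardized coefficient will
# be roughly 10x larger even when the raw effects are identical.
rng = np.random.default_rng(1)
n = 10_000
income = rng.normal(0.0, 10.0, n)   # high-dispersion regressor
obesity = rng.normal(0.0, 1.0, n)   # low-dispersion regressor
y = 1.0 * income + 1.0 * obesity + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), income, obesity])
b = np.linalg.lstsq(X, y, rcond=None)[0]   # raw slopes both ~1.0
std_b = b[1:] * np.array([income.std(), obesity.std()]) / y.std()
# std_b[0] dwarfs std_b[1] even though the underlying effects are equal
```

So a small standardized coefficient on obesity is entirely consistent with a large per-unit effect--it mostly tells you counties don't differ much in obesity.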
- The paper comes much too close to arguing causality, when there is no reason to suspect any of their coefficients are causal. In reality this is just a descriptive paper--albeit a very interesting one--but that's all. To be fair, the authors do carefully couch their language to avoid directly claiming their results have causal significance, but they come pretty darn close a couple of times. They list as one limitation their "use of ecological analysis...to identify associations that suggest causal relationships." That doesn't pass the smell test--all they've got are correlations, not coefficients that "suggest causal relationships." Also, in my opinion this sentence simply mischaracterizes the paper: "Our regression analysis goes beyond the descriptive findings to suggest reasons for the mortality disparities we found." My impression from just that sentence would be that they used a reduced-form causal inference model, but they didn't.
- The discussion and conclusion sections involved some vigorous assertions that weren't always supported by either citations or their empirical results. They even at times contradicted their own findings, which tells me that they don't think their coefficients are causal, either. For example in one paragraph they suggest that the "strong association between mortality and geographic region may be the result of a number of factors, such as disparities in the level or quality of health care across regions; patterns in health care use and treatments..." even though they tested a number of health care access and quality measures and found no effects. I tend to agree with what they say in the discussion, but the authors need to explain why they think these results didn't show up in their analysis, which means explaining that their coefficients likely suffer from various forms of bias.
- The standard errors should probably be clustered by state. The paper doesn't mention clustering one way or the other, so perhaps they did this. But intuitively, the regulatory, statutory, and infrastructure framework surrounding health care varies a lot state-to-state, and counties within each state will tend to have more in common with each other than with counties in other states.
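For concreteness, cluster-robust standard errors sum each state's score contributions before forming the "meat" of the sandwich estimator, which allows arbitrary error correlation among counties within a state. A bare-bones sketch on simulated data (no finite-sample degrees-of-freedom correction, and the data are invented):

```python
import numpy as np

# Cluster-robust ("sandwich") standard errors for OLS: aggregate the score
# X_i' * u_i within each cluster before taking outer products, so errors
# may be arbitrarily correlated within a state.
def clustered_se(X, y, clusters):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ b
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(clusters):
        m = clusters == c
        s = X[m].T @ u[m]          # within-cluster score sum
        meat += np.outer(s, s)
    V = bread @ meat @ bread
    return b, np.sqrt(np.diag(V))

# Toy example: 50 states x 60 counties, with a common state-level shock
rng = np.random.default_rng(2)
states = np.repeat(np.arange(50), 60)
x = rng.normal(size=states.size)
e = rng.normal(size=50)[states] + rng.normal(size=states.size)
y = 0.5 * x + e
X = np.column_stack([np.ones(states.size), x])
b, se = clustered_se(X, y, states)
```

With a shared state shock in the errors, the clustered intercept SE is typically much larger than the naive OLS one--exactly the understatement that un-clustered county-level regressions risk.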