## Thursday, August 21, 2014

### Racial profiling of sickle cell patients

A news report from Seattle suggests that black sickle cell patients who go to emergency departments (EDs) for pain are often discriminated against because of their race:
"Karim Assalaam and Joehayward Wilson are both living with sickle-cell anemia. This genetic blood disorder is most common among people of African and Mediterranean descent. It can create intense internal pain. But when these two 24 year-old African-American go to emergency rooms for help, they are often met with suspicion.

"'I think it is because of being a young adult and African-American,' Wilson told us.

"The pair are among the 12,000 people in Washington State who carry the genetic trait for Sickle Cell Anemia. Wilson and Karim Assalaam are among the 450 living with the disease.

"'It's hard, people assume you're only out for the drugs or that's the only thing you're coming to there for,' Assalaam said.

"Sickle-cell sufferers frequently use Oxycontin for the pain. The drug is a narcotic that’s often abused by addicts who crush the slow-acting pills and then use the powder for a quick high."
There's now a fair amount of research establishing that medical staff tend to underestimate pain levels of black patients relative to white ones and that while this bias is larger for white doctors, black doctors also underestimate pain of black patients relative to white patients. Hence, it is not terribly surprising that ED staff--who do not specialize in sickle cell--would suspect black patients of abusing drugs, because they probably underestimate the amount of pain in black patients.

Sickle cell disease is not particularly common, affecting fewer than 3 in every 10,000 Americans. This means dedicated 24-hour sickle cell clinics staffed by doctors experienced with the disease are not financially viable, even though sickle cell patients can have severe episodes of pain or even life-threatening complications at any hour of the day. As a result, many sickle cell patients rely at least in part on non-specialists in EDs to help manage their chronic condition, which is why this discrimination in pain prescriptions can be particularly problematic in this population.

## Tuesday, August 19, 2014

### On law enforcement

The crisis in Ferguson has prompted a national dialogue about law enforcement tactics and the unfair targeting of innocents through "tough-on-crime" policies like racial profiling, mandatory minimums, and criminal procedures that make it easier to convict. However, economics tells us that "tough-on-crime" tactics do not always maximize law and order. The unfair targeting of innocent people for criminal investigation through racial profiling and stop-and-frisk style tactics actually increases the incidence of criminality at the margins. Here's how.

### 1   Modeling the incidence of criminality

We consider a model with a large number of households with preferences over consumption $C$ and jail $J$ according to $U=E_B\left[u\left(C\right)-J\right]$ where $u$ is strictly concave and increasing in $C.$ The household is endowed with lawful income of $y$ and has the option of committing a burglary to steal $B$ units of consumption. If the household is convicted, he serves jail time that yields $J$ units of disutility. (We can think of the disutility from jail as a function $j\left(s\left(B\right)\right),$ where $s\left(B\right)$ is a policy function prescribing sentences based on the magnitude $B$ of the crime and $j\left(\cdot\right)$ is the disutility function of jail time. However, we consider the extensive margin where $B$ is fixed.)

The judicial authority investigates individuals with probability $q$ and the investigation leads to a conviction rate of $r_1\lt 1$ among those who committed the crime (that's a sensitivity of $r_1$), and $r_2\lt r_1$ among individuals who are innocent (that's a specificity of $1-r_2$). Therefore, the probability of being convicted given that the household commits the crime is $p_1=qr_1$ and the probability of conviction given that the household does not commit the crime is $p_2=qr_2.$

The household's budget constraint is $C\leq y+B$ if he commits the crime, and $C\leq y$ otherwise. The household choice has two regimes, one where he commits crimes with probability 1, and one where he commits crime with probability 0, where utility from the former is $u\left(y+B\right)-p_1 J$ and the utility from the latter is $u\left(y\right)-p_2 J.$

Borrowing from the indivisible labor literature[^1] and assuming a functional form for the utility function, we can rewrite the above in terms of a representative agent that chooses a probability of committing crime $\alpha$ according to
\begin{align*}
\max_{\alpha}~& \ln\left(C\right)-\alpha p_1J-\left(1-\alpha\right)p_2J\\
\text{subject to }& C\leq y+\alpha B
\end{align*}
Solving yields $$\alpha^*=\frac{1}{J\left(p_1-p_2\right)}-\frac{y}{B}$$ This model captures our intuitions about criminal justice. For example, it is plainly apparent from the solution that increasing penalties $J,$ all else equal, reduces crime rates. It is often assumed that stepping up investigations--that is, targeting a larger share of the population for investigations--will result in a reduction in crime rates. To examine this we take the derivative of $\alpha$ with respect to the investigation rate $q:$ $$\frac{\partial \alpha}{\partial q}=\frac{J}{\left(J\left(p_1-p_2\right)\right)^2}\left(\frac{\partial p_2}{\partial q}-\frac{\partial p_1}{\partial q}\right)$$ which is less than zero if and only if $$\frac{\partial p_1}{\partial q}>\frac{\partial p_2}{\partial q}.$$
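As a sanity check on the closed-form solution, here is a quick numerical sketch (the parameter values are illustrative assumptions, not from the text, and are chosen so that $\alpha^*$ stays interior to $[0,1]$):

```python
# Numerical check of the closed-form crime rate, alpha* = 1/(J(p1-p2)) - y/B.
# All parameter values below are illustrative assumptions, chosen so the
# solution is interior (between 0 and 1).

def alpha_star(J, p1, p2, y, B):
    """Probability of committing crime chosen by the representative agent."""
    return 1.0 / (J * (p1 - p2)) - y / B

base = alpha_star(J=3.0, p1=0.30, p2=0.05, y=1.0, B=2.0)
harsher = alpha_star(J=5.0, p1=0.30, p2=0.05, y=1.0, B=2.0)

print(base)     # crime rate at the baseline penalty J = 3
print(harsher)  # crime rate with the stiffer penalty J = 5
assert harsher < base  # raising J, all else equal, reduces crime
```

The assertion confirms the comparative static read directly off the formula: a larger $J$ shrinks the first term and so lowers $\alpha^*$.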

### 2   Modeling the profiling decision

To understand that last derivative, we need a model of police profiling. Mixing models of heterogeneity with a representative agent framework can be problematic, but let's assume that utilities are such that this is valid. Assume that individuals are heterogeneous in such a way that their probability $\tilde{\alpha}$ of committing a crime--as measured by a hypothetical social planner--is distributed according to a continuously differentiable distribution function $F\left(\tilde{\alpha}\right)$ with support $\left[0,1\right].$ The judicial authority prioritizes investigations of individuals so that individuals with the highest $\tilde{\alpha}$ probabilities are investigated first, followed by progressively lower probability types until they've exhausted their investigative resources--that is, until the share of the population being investigated equals the policy parameter $q.$ Thus we can write $$q\equiv 1-F\left(\bar{\alpha}\right)$$ where $\bar{\alpha}$ is the lowest probability type to be investigated. Therefore, we have that
\begin{align*}
p_1&=\underbrace{\left(1-F\left(\bar{\alpha}\right)\right)}_{q}\underbrace{\int^{1}_{\bar{\alpha}}\tilde{\alpha}f\left(\tilde{\alpha}\right)d\tilde{\alpha}}_{\alpha}r_1\\
p_2&=\underbrace{\left(1-F\left(\bar{\alpha}\right)\right)}_{q}\underbrace{\left(1-\int^{1}_{\bar{\alpha}}\tilde{\alpha}f\left(\tilde{\alpha}\right)d\tilde{\alpha}\right)}_{1-\alpha}r_2
\end{align*}
where $f\left(\tilde{\alpha}\right)$ denotes the density function of $\tilde{\alpha}$ and therefore the first derivative of $F\left(\tilde{\alpha}\right).$ Hereafter we will write $E_\bar{\alpha}$ to denote $\int^{1}_{\bar{\alpha}}\tilde{\alpha}f\left(\tilde{\alpha}\right)d\tilde{\alpha},$ which measures the criminality of the investigated population (it is the conditional expectation of $\tilde{\alpha}$ among the investigated, scaled by the share $1-F\left(\bar{\alpha}\right)$ being investigated).

Thanks to the Leibniz rule, we can differentiate this to get
\begin{align*}
\frac{\partial p_1}{\partial q}&=E_\bar{\alpha}r_1+\bar{\alpha}\left(1-F\left(\bar{\alpha}\right)\right)r_1\\
\frac{\partial p_2}{\partial q}&=\left(1-E_\bar{\alpha}\right)r_2-\bar{\alpha}\left(1-F\left(\bar{\alpha}\right)\right)r_2
\end{align*}
Therefore, using the result derived in the first section, increasing enforcement decreases crime only if

\begin{equation}
E_\bar{\alpha}+\bar{\alpha}\left(1-F\left(\bar{\alpha}\right)\right)\gt \frac{r_2}{r_1+r_2} \label{conditions}
\end{equation}
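The derivative formulas above can be spot-checked numerically. Here is a sketch using an assumed density $f(\tilde{\alpha})=2(1-\tilde{\alpha})$ and illustrative values of $r_1$ and $r_2$ (none of these choices come from the text), comparing the Leibniz-rule formulas against finite differences:

```python
import math

# Finite-difference check of the Leibniz-rule derivatives. We assume, purely
# for illustration, the density f(a) = 2(1-a) on [0,1], which gives
# F(a) = 2a - a^2 and a marginal investigated type abar(q) = 1 - sqrt(q).
r1, r2 = 0.9, 0.6

def abar(q):  # lowest investigated type, solving q = 1 - F(abar)
    return 1.0 - math.sqrt(q)

def E(a):     # E_abar = integral from a to 1 of t*f(t) dt, for f(t) = 2(1-t)
    return 1.0/3.0 - a**2 + (2.0/3.0)*a**3

def p1(q):    # conviction probability for the guilty: q * E_abar * r1
    return q * E(abar(q)) * r1

def p2(q):    # conviction probability for the innocent: q * (1 - E_abar) * r2
    return q * (1.0 - E(abar(q))) * r2

q, h = 0.5, 1e-6
a = abar(q)
dp1_formula = (E(a) + a * q) * r1          # E_abar*r1 + abar*(1-F(abar))*r1
dp2_formula = ((1.0 - E(a)) - a * q) * r2  # (1-E_abar)*r2 - abar*(1-F(abar))*r2
dp1_numeric = (p1(q + h) - p1(q - h)) / (2 * h)
dp2_numeric = (p2(q + h) - p2(q - h)) / (2 * h)

assert abs(dp1_formula - dp1_numeric) < 1e-6
assert abs(dp2_formula - dp2_numeric) < 1e-6
```

Note that $1-F\left(\bar{\alpha}\right)$ is just $q$ in this parametrization, which is why the formulas simplify to the expressions in the code.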

We can now state two propositions.

#### Proposition 1.

It is optimal to investigate everyone if and only if $E_0\geq \frac{r_2}{r_1+r_2}.$

Proof: Sufficiency follows immediately from \eqref{conditions} with $\bar{\alpha}=0.$ Necessity follows from Proposition 2.

#### Proposition 2.

If $E_0\lt\frac{r_2}{r_1+r_2},$ then there exists $q^*\gt 0$ such that $\frac{\partial \alpha}{\partial q}\geq 0$ for all $q\gt q^*,$ with strict inequality whenever $f\left(\tilde{\alpha}\right)\gt 0.$ That is, there exists a point beyond which further increasing enforcement actually increases crime.

Proof: Denote $G\left(\bar{\alpha}\right)\equiv E_\bar{\alpha}+\bar{\alpha}\left(1-F\left(\bar{\alpha}\right)\right).$ By hypothesis, $G\left(0\right)=E_0\lt \frac{r_2}{r_1+r_2}.$ Since $F\left(\bar{\alpha}\right)$ is continuously differentiable, $G\left(\bar{\alpha}\right)$ is continuous. Consider the set $A=\left\{\hat{\alpha}\in\left[0,1\right]:G\left(\hat{\alpha}\right)=\frac{r_2}{r_1+r_2}\right\}.$ If $A$ is empty, then by the intermediate value theorem $G\left(\tilde{\alpha}\right)\lt\frac{r_2}{r_1+r_2}$ for all $\tilde{\alpha},$ so $\frac{\partial \alpha}{\partial q}\gt 0$ everywhere and any $q^*\gt 0$ satisfies the proposition. Otherwise, let $\alpha^\dagger=\min A$ (the minimum exists because $A$ is closed by continuity) and set $q^*=1-F\left(\alpha^\dagger\right).$ For all $\tilde{\alpha}\in \left[0,\alpha^\dagger\right)$ we have $G\left(\tilde{\alpha}\right)\lt \frac{r_2}{r_1+r_2},$ since a value $G\left(\tilde{\alpha}\right)\geq\frac{r_2}{r_1+r_2}$ would, by the intermediate value theorem and $G\left(0\right)\lt\frac{r_2}{r_1+r_2},$ imply a crossing point below $\alpha^\dagger,$ contradicting the minimality of $\alpha^\dagger.$ Furthermore, $q\equiv 1-F\left(\tilde{\alpha}\right)$ is monotonically decreasing in $\tilde{\alpha},$ and one-to-one wherever $f\left(\tilde{\alpha}\right)\gt 0,$ so every $q\gt q^*$ corresponds to a marginal type $\tilde{\alpha}\lt \alpha^\dagger$ and therefore to $\frac{\partial \alpha}{\partial q}\gt 0.$ This concludes the proof.
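To make Proposition 2 concrete, here is a numerical sketch under assumed parameters (density $f(\tilde{\alpha})=2(1-\tilde{\alpha})$, $r_1=0.9$, $r_2=0.6$, so that $E_0=1/3$ falls below the threshold $r_2/(r_1+r_2)=0.4$; nothing here comes from the text), locating the enforcement level $q^*$ beyond which more investigation increases crime:

```python
import numpy as np

# Illustrative check of Proposition 2 under assumed parameters:
# density f(a) = 2(1-a) on [0,1], so F(a) = 2a - a^2 and E_0 = 1/3;
# r1 = 0.9 and r2 = 0.6 give a threshold r2/(r1+r2) = 0.4 > E_0.
r1, r2 = 0.9, 0.6
threshold = r2 / (r1 + r2)

abar = np.linspace(0.0, 1.0, 100001)
F = 2*abar - abar**2
E = 1.0/3.0 - abar**2 + (2.0/3.0)*abar**3   # integral from abar to 1 of t*f(t) dt
G = E + abar * (1.0 - F)

assert abs(G[0] - 1.0/3.0) < 1e-12          # G(0) = E_0, below the threshold
crossings = np.where(G >= threshold)[0]
a_dagger = abar[crossings[0]]               # smallest abar where G hits the threshold
q_star = 1.0 - (2*a_dagger - a_dagger**2)   # q* = 1 - F(a_dagger)

print(a_dagger, q_star)
# For q above q*, the marginal type falls below a_dagger, G drops under the
# threshold, and further enforcement increases crime.
```

With these assumed numbers the crossing lands near $\alpha^\dagger\approx 0.09$, i.e. $q^*\approx 0.83$: once more than about 83 percent of the population is being investigated, additional enforcement is counterproductive.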

So what do these propositions say about stop-and-frisk? We are compelled to draw conclusions contrary to the beliefs of the New York City police commissioner: economics tells us that we can actually reduce crime by not investigating those individuals who are least likely to commit crimes, because this will reduce the wrongful conviction rate and increase the incentive to avoid committing crime. Stop-and-frisk policies do precisely the opposite: they target investigations indiscriminately at the public, innocent and guilty alike, which will increase wrongful convictions and blunt the disincentive our justice system aims to place on criminal acts.

So there you have it. Economics tells us that stop-and-frisk causes crime.

[^1]: I'm probably not the first to have applied the indivisible labor literature to criminality in this way, though I did not do a search. If you know of any papers that I have inadvertently duplicated, let me know so I can give credit here.

## Friday, August 15, 2014

### Why do black communities elect white politicians?

Vox tells me that 67 percent of the residents of Ferguson, Missouri are black, yet the police chief, mayor, and 5 out of 6 city council members are white. From the protests and racial tensions, it seems unlikely that this is because the black community thought these were capable leaders who could be trusted to look after their community's interests. So how does this disparity happen in a democracy?

This is not at all unique to Ferguson. Black representation has gotten a bit better in the US House of Representatives but is still awful in the Senate and most state legislatures. Nine percent of US House members are black, compared to 13.6 percent of the population--a huge improvement over just a few years ago, with the gains mostly due to direct federal intervention in the state redistricting process. The Senate, by contrast, is only 2 percent black. Nor is this disparity limited to the black community: women comprise half the population but barely 17 percent of US Representatives.

There are lots of reasons for this relative disenfranchisement of disadvantaged groups: gerrymandering; voter ID laws that make it harder for poor communities to vote; tampering with precinct locations, absentee ballots, and early voting dates in ways that make it disproportionately hard for these communities to vote; racist felony laws that have the effect of making large segments of minorities permanently ineligible to vote. Moreover, many local governments are not actually democracies, and city councils can effectively appoint their own members: a council member agrees to retire mid-term, the city appoints its favorite candidate to fill out the term, and the appointee then breezes through a sham general election as a virtually unbeatable incumbent.

But even without all that, we'd still most likely have legislatures disproportionately dominated by white men. Democracy is strongly biased towards discrimination. To see why, consider a thought experiment: take a district composed of half men and half women. Suppose women are equal-opportunity voters who are equally likely to vote for a man as for a woman, but the men are all sexist pigs who won't vote for any woman. Then, in an election between a male and a female candidate, the male candidate will automatically get the 50 percent of voters who are men, plus an additional 25 percent who are women, for a landslide victory of 75-25. That is, democratic elections inherently favor the more discriminatory of two voting blocs, regardless of the actual percentages of votes each voting bloc commands.
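The arithmetic of that thought experiment, spelled out:

```python
# Vote shares in the thought experiment above: men (half the electorate)
# all vote for the male candidate; women split evenly between the two.
male_share = 0.5 * 1.0 + 0.5 * 0.5    # all the men, plus half the women
female_share = 0.5 * 0.0 + 0.5 * 0.5  # none of the men, half the women

print(male_share, female_share)  # 0.75 0.25: the 75-25 landslide
```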

Obviously, that example--where zero men are willing to vote for women--was a bit extreme. To see exactly how powerful this effect is, though, I've done some simulations for you. In the US, there are an average of 710,767 voters in each House district. Let's assume that in each district, exactly half of voters are women and the other half are men. Women are equally likely to vote for a male candidate as a female one, but men are only slightly biased towards men--specifically, we will assume that each man has a 51 percent likelihood of voting for the male candidate and a 49 percent likelihood of voting for the female candidate. There are 435 House districts, so my simulation consists of 355,383 women randomly casting votes according to a Bernoulli distribution with probability of voting male p=0.5, and 355,383 men randomly casting votes according to a Bernoulli distribution with probability p=0.51, repeated 435 times to simulate all the House districts. I've then repeated this simulation 1,000 times to bootstrap estimates of the average number of men and women elected to congress in this system. The results are summarized below:
Average Number of Women: 0
Average Number of Men: 435
Holy cow! Remember, men were barely, barely more likely to vote for men than women, yet in none of the 1,000 simulations did a woman win a single one of the 435 House seats! Not one! If you think I'm lying, try the simulation yourself. [It is, in principle, possible to analytically compute the probability of a woman winning a House seat under these conditions, and I included code for that in my R file, but this turned out to be more computationally intensive than the simulation. The probability is so small that the computer--the computer!--had trouble with the rounding errors. The simulation indicates a probability of less than 0.0000023.]
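For readers without the R file handy, here is a Python re-implementation sketch of the simulation as described above (same assumptions as in the text; details like the random seed are mine):

```python
import numpy as np

# Re-implementation sketch of the election simulation described above.
# The original was written in R; this Python version uses the same
# assumptions (seed choice is arbitrary).
rng = np.random.default_rng(0)

n_sims, n_districts, n_per_sex = 1000, 435, 355383
p_men_vote_male = 0.51    # men: slight 51/49 bias toward the male candidate
p_women_vote_male = 0.50  # women: indifferent between the two candidates

# Votes for the male candidate in each district of each simulated election.
male_votes = (rng.binomial(n_per_sex, p_men_vote_male, (n_sims, n_districts))
              + rng.binomial(n_per_sex, p_women_vote_male, (n_sims, n_districts)))

# The female candidate wins a district when the male candidate falls short
# of half the 2 * n_per_sex votes cast there.
women_elected = (male_votes < n_per_sex).sum(axis=1)

print(women_elected.mean())  # average number of women elected per simulation
```

A normal approximation makes the result unsurprising: the male candidate's expected margin is about 0.01 × 355,383 ≈ 3,554 votes against a standard deviation of roughly 420, so a female win is an eight-plus-sigma event in every district.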

The lesson here is that discrimination--the bias of straight white men to vote for straight white men most of the time--causes massive inequality in electoral outcomes in a democracy. This may explain why there are hardly any women in congress, or why overwhelmingly black municipalities consistently elect nearly-all-white city councils.

This is why I think Democrats are largely on the right track in focusing on candidates' views on minority rights. In a world where a certain percentage of straight white men will vote only for other straight white men, it is important for the rest of us to react by ruling out all candidates who do not care deeply about fixing the racial, gender, and other disparities in our society. If we choose indifference--if we pretend to be impartial--then the racists win. The nature of democracy demands that we be partial.

### The most important sentence in that hospital price study

Aaron Carroll and Sarah Kliff are both reporting on a study titled "Variation in charges for 10 common blood tests in California hospitals: a cross-sectional analysis" published in BMJ Open. Both, I think, mostly ignored the most important sentence in the whole thing, buried pretty deep in the article:
"This study evaluated the charges for blood tests, rather than the negotiated prices paid by most insurers."
This is a study of hospital "charges" which, despite the name, are not even close to the prices hospitals actually collect for procedures. Instead, "charges" are a sort of list price, from which the hospital negotiates with various insurers and other institutional payers over how much to actually pay, often on a patient-by-patient basis.

Studying actual hospital prices, as opposed to the meaningless "charges," is not easy, especially if you are interested in knowing the price of a specific routine procedure, because hospitals do not actually charge on a per-procedure basis. What I mean is that in principle the hospital does furnish the insurer with a list of all the procedures performed on a patient and the associated "charges," but they subsequently negotiate over the entire bundle of procedures performed in a particular encounter, so that what Humana pays the hospital for Johnny's transcranial doppler (TCD) scan is not the same as what Humana pays the same hospital for Tommy's TCD scan.

These differences arise for two reasons: first, the associated services required to perform the TCD scan are going to be a little different for each patient, and that becomes a basis for negotiation; second, Tommy and Johnny may differ in their insurance coverage--perhaps they have different plans, or perhaps they have the same plan but different underwriting characteristics. These things affect the total bills that Humana pays for seemingly identical patients with seemingly identical procedures.

I've done some digging, and as best as I can tell, hospitals do not keep records of negotiated prices for each procedure--their internal pricing data contains the per-procedure "charges" and lump-sum per-patient-encounter bill totals, but never an itemized breakdown of how much an insurer actually pays for what. The way this works is that you get a bill with a bunch of "charges" for each procedure, then at the bottom a one-line lump-sum "adjustment" that says how much less--it is always less--they are actually billing. My point here is that not only does this study not compare the prices of specific procedures between hospitals, such a dataset probably does not even exist.
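To make that bill structure concrete, here is a small sketch with entirely hypothetical numbers (none of these figures, procedure names, or dollar amounts come from the study or from my data):

```python
# Hypothetical hospital bill, for illustration only: itemized per-procedure
# "charges" plus a single lump-sum adjustment line, with no itemized
# breakdown of what the insurer actually pays per procedure.
charges = {
    "ED visit, level 3":         1200.00,
    "Complete blood count":       150.00,
    "Transcranial doppler scan":  900.00,
}
total_charges = sum(charges.values())  # sum of the per-procedure list prices
adjustment = -1714.50                  # the one-line negotiated reduction

amount_billed = total_charges + adjustment
reduction_share = -adjustment / total_charges

print(amount_billed)     # what the insurer is actually billed
print(reduction_share)   # fraction of "charges" bargained away
```

The point of the sketch is what is missing: the negotiation happens against the bundle total, so there is no way to recover a per-procedure negotiated price from the adjustment line.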

To really do this study correctly, we'd need to perform a modified Abowd-Kramarz-Margolis decomposition to identify hospital-specific and procedure-specific components of the variation in total hospital bills. This procedure is data-intensive and, as far as I know, has never been attempted. An alternative approach, and one I use in my own work, is to use hospital cost data, which, unlike billing prices, is actually available on a per-procedure basis (but be careful how you interpret that)--the downside is that costs do not give a sense of per-procedure markups (or frequently markdowns, as there's lots of cross-subsidization going on), which is the step that adds so much complexity.

How big is the disparity between the charges this study looked at and what hospitals actually bill? I have some data that sheds a little bit of light on that. We want a sense of the difference between charges and bills for routine care rather than, say, severe trauma wounds. I do not have a data set with routine checkups, but I do have a data set of uncomplicated visits to a sickle cell clinic for pain and fever (that is, for generally routine, non-complex issues) at a large urban medical center (Update: that should read "large urban pediatric medical center"--my sample is kids only). Sickle cell disease is a chronic condition in which red blood cells "sickle," causing potentially severe inflammation and tissue damage that can be life-threatening in some cases. But pain and fever are two relatively common symptoms, so treating them is a routine activity at a sickle cell clinic. Excluding those patients who developed more serious complications beyond just pain and fever, here is a histogram of the ratio of adjustments to total charges:
The median reduction in charges was 76.2 percent, meaning that what the hospital actually billed insurers was 76.2 percent less than the sum of the per-procedure "charges." More than three-quarters of the charge price is just bargaining!

And just to give a further idea of exactly how much cross-subsidization goes on, note that some patients received adjustments of more than 100 percent (the maximum adjustment was 107.2 percent below "charges"), so that their total bill was actually less than zero. Now, the clinic isn't giving people cash for coming in--these less-than-zero bills are being applied to outstanding and generally very large billing balances for chronic patients who visit the hospital often. Yes, in some literal sense they end up weirdly owing less after visiting the clinic than before, but the proper way of understanding this is as an adjustment to the life-cycle path of billing, since the same payer owes that prior billing balance as is paying for this clinic visit.

So enough with all the studies of "charges" already.

## Monday, August 11, 2014

### Wallace Neutrality: It all depends on why, exactly, money is valuable

Noah Smith has raised an interesting question: Can the Fed set interest rates? The answer may surprise you.

It is typically assumed in practice that a currency's monetary authority has the ability to set interest rates, and that it does so primarily by manipulating the supply of money to achieve its interest rate target--a process known as "open market operations" (OMO). Most monetary DSGE models include a Taylor rule that actually skips over OMO and assumes that the monetary authority sets the interest rate outright, without ever bothering to compute the time-path of the money supply required. However, Smith points to a paper by Neil Wallace called "A Modigliani-Miller Theorem for Open-Market Operations," which defines a principle of "Wallace neutrality" saying that the standard treatment is all wrong--OMO cannot affect either inflation or interest rates at all! That's quite a claim--no matter how much money the Fed "prints" (via OMO), it will never cause inflation. As for the well-established empirical relationship between inflation and money supply increases, Wallace says this:
"Most economists are aware of considerable evidence showing that the price level and the amount of money are closely related. That evidence, though, does not imply that the irrelevance proposition [Wallace neutrality] is inapplicable to actual economies. The irrelevance proposition applies to asset exchanges [OMO] under some conditions. Most of the historical variation in money supplies has not come about by way of asset exchanges; gold discoveries, banking panics, and government deficits and surpluses account for much of it."
Basically, he's arguing that the observed correlation between prices and money need not extend to changes in the money supply brought about specifically by OMO.

Let me offer a model. There's an infinitely lived household that derives utility from consumption $C_t$ and real money balances $\frac{M_t^h}{p_t},$ where $M_t^h$ is the household's nominal bank account balance at the beginning of period $t$, and $p_t$ is the price of a unit of consumption in period $t$. The utility function is given by $$\sum_{t=0}^\infty\beta^t\left[\ln\left(C_t\right)+\ln\left(\frac{M_t^h}{p_t}\right)\right].$$ Further, the household is endowed in each period with an income of $y_t$ units of the consumption good, and must pay $\tau_t$ units in taxes to the fiscal authority. In addition to saving money $M_{t+1}^h$ for next period, the household can invest in bonds $B_{t+1}^h$ that will yield gross real interest of $R_{t+1}$ in period $t+1$, giving us the budget constraint $$C_t+B_{t+1}^h+\frac{M_{t+1}^h}{p_t}\leq y_t+R_tB_t^h-\tau_t.$$ We assume (as Wallace does) that households have perfect foresight of all other agents' actions. Solving the household's constrained maximization problem yields $$C_{t+1}=\beta R_{t+1} C_t,$$ which describes the inter-temporal consumption tradeoff for a given interest rate, as well as $$\frac{1}{\beta}\left(\frac{p_{t+1}}{p_t}+\frac{1}{R_{t+1}}\right)\frac{M_{t+1}^h}{p_{t+1}}+B_{t+1}^h+\frac{M_{t+1}^h-M_t^h}{p_t}-R_tB_t^h +\tau_t=y_t,$$ which can be thought of as describing the conditions under which the household is indifferent between money, bonds, and consumption, a necessary condition for an optimum. In addition to the household there are two other agents: a monetary authority (aka the Federal Reserve) and a fiscal authority (aka Congress). Congress sets government consumption levels $G_t$, measured in units of the consumption good, and issues bonds $B_{t+1}$ that must be repaid at gross real interest $R_{t+1}$, all financed by lump-sum taxation of $\tau_t$ units of the household's income. Thus Congress is bound by the budget constraint $$G_t+B_{t+1}=R_tB_t+\tau_t.$$ The Fed engages in OMO by buying bonds financed by increasing the money supply, according to the budget constraint $$B_{t+1}^f-R_tB_t^f=\frac{M_{t+1}-M_t}{p_t}.$$
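As a quick check on the consumption Euler equation, here is a numerical sketch of a stripped-down two-period version of the household problem (log utility, with money and taxes omitted; all parameter values are illustrative assumptions):

```python
import numpy as np

# Numerical check of the consumption Euler equation C_{t+1} = beta*R_{t+1}*C_t
# in a two-period version of the household problem: max ln(C1) + beta*ln(C2)
# subject to C2 = R*(y - C1). Parameter values are illustrative.
beta, R, y = 0.96, 1.05, 10.0

C1 = np.linspace(0.01, y - 0.01, 200001)  # grid over first-period consumption
C2 = R * (y - C1)                         # savings earn gross real interest R
utility = np.log(C1) + beta * np.log(C2)

C1_opt = C1[np.argmax(utility)]
C2_opt = R * (y - C1_opt)

# At the optimum the Euler equation holds: C2 = beta * R * C1.
assert abs(C2_opt - beta * R * C1_opt) < 1e-2
print(C1_opt, C2_opt)
```

With log utility the optimum has the closed form $C_1 = y/(1+\beta)$, and the grid search recovers it, so the intertemporal condition quoted above checks out in this simplified setting.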

We define an equilibrium as a sequence of prices $\left\{p_t,R_t\right\}_{t=0}^\infty$ such that the fiscal authority's budget constraint is satisfied, the monetary authority's constraint is satisfied, household utility is maximized, and $B_{t+1}^h+B_{t+1}^f=B_{t+1}$ (bond market clearing), $M_{t+1}^h=M_{t+1}$ (money market clearing), and $C_t+G_t=y_t$ (aggregate resource constraint) for all periods. Plugging the fiscal and monetary authority's budget constraints, along with the bond and money market clearing conditions, into the solution to the household problem yields $$\frac{1}{\beta}\left(\frac{p_{t+1}}{p_t}+\frac{1}{R_{t+1}}\right)\frac{M_{t+1}}{p_{t+1}}=y_t-G_t.$$ This is Robert Barro's famous "Ricardian Equivalence" result--holding expenditures constant (and assuming perfect foresight and only lump-sum taxation), government debt has absolutely no effect whatsoever on equilibrium--it is as if government spending entered directly into households' budget constraints. Recall that $$C_{t+1}=\beta R_{t+1} C_t,$$ which combined with the aggregate resource constraint implies $$R_{t+1}=\frac{y_{t+1}-G_{t+1}}{\beta\left(y_t-G_t\right)},$$ and further combining that with the above result yields $$\pi_{t+1}=\frac{\frac{y_t-G_t}{y_{t+1}-G_{t+1}}\left(Q-\frac{M_t}{p_t}\right)}{y_t-G_t-\frac{1}{\beta}\left(Q+\frac{M_t}{p_t}\right)},$$ where $Q\equiv \frac{M_{t+1}-M_t}{p_t}$ is the Fed's bond purchases, or OMO, and the left hand side, $\pi_{t+1}\equiv\frac{p_{t+1}-p_t}{p_t},$ is the inflation rate. I think there's a way to make this result a little prettier, but no matter; in this form it tells us what we wanted to find out: the Fed's bond purchases--$Q$ in the equation--clearly and unambiguously cause inflation, a direct contradiction of Wallace neutrality.

So did I just disprove Neil Wallace and embarrass the American Economic Review with a first-year grad student homework problem?

It all boils down to differing views of why money has value embedded in Wallace's and my models. It's a surprisingly difficult question for monetary economists. We all know, intuitively, why money is valuable--you need it to buy stuff and pay taxes--and microeconomics professors will give you hours-long lectures on the double coincidence of wants and what a marvelous innovation the idea of liquidity represents. But none of those reasons answers the fundamental question of why people hold money, which is different from using it for transactions. Why not, for example, put all your savings into bonds until you want to buy stuff, then simultaneously sell the bonds and buy the stuff so that your cash balance never rises above zero? After all, so long as interest rates are above zero, you actually lose money by holding it!

The model above is a standard money-in-utility (MIU) model of the kind at the core of most monetary New Keynesian models. The basic assumption needed to get MIU models is this: people want the flexibility that comes with holding their assets in a highly liquid form. The standard alternative, and the one that Milton Friedman believed in more or less his whole life (but which did not fare well in the Great Recession), is the cash-in-advance model, which says that not only do you need money to buy things, but for some unstated reason you must actually possess that money in liquid form for a period of time prior to making your purchase. Neither of these models exhibits Wallace neutrality, nor anything close to resembling it.

In Wallace's model, money is neither desired for its liquidity nor even necessary to make purchases. Money has no actual function in Wallace's economy. So why the heck would agents in his model want to buy and sell money? That's a very good question.

Wallace's model is what's known as an overlapping-generations (OLG) model. In these models, unlike the model above, households live brief lives and die. Younger generations want to save up to insure their future incomes, but older generations want to dissave--to sell off whatever assets they have and consume as much as possible before they die. There are two ways to save, in general: either invest in some type of durable good, or lend money to a borrower. If there's no durable good to invest in, you have to find credit-worthy borrowers if you want to save. Unfortunately, the old generation (the only people who might want to borrow) is not credit-worthy--they are about to die soon, and therefore have no future incomes with which to repay. Because of this problem, OLG models are actually quite prone to inefficiency. One way to reduce that inefficiency is to introduce a fictitious durable "good," like money, that can be traded as if it were a storage technology for perishable consumption goods. In this way the young can purchase money from either the old or the Fed, and when they become old, sell that money in exchange for consumption (i.e., buy stuff). Money thus acts not as a medium of exchange but as a line of credit to people who would otherwise not be credit-worthy--in an OLG economy, money facilitates loans from the young to the old, benefiting both by increasing the latter's consumption and insuring the former's future consumption. Unlike an actual loan, which dies when the borrower dies, money is a durable good and will outlive its owner. However, in Wallace's model there actually is an alternative durable good the young could invest in, so when the Fed engages in OMO, it is just buying durable goods that the young would have invested in, and selling a virtually identical durable good--money--in its stead, in an economy where money itself is neither desirable nor necessary. No wonder this has no effects!

So it is in situations where money exists only to facilitate intergenerational transfers and nearly identical alternative durable goods exist, where the money supply is expanded specifically by swapping between the two--and only in these kinds of situations--that economies exhibit Wallace neutrality. I do not deny that Wallace's model to some extent resembles reality--we really do use money as a type of intergenerational loan--but this exists alongside other reasons to hold money, such as the liquidity preference of MIU models. That alone ensures that the real world is not Wallace neutral. Wallace neutrality is significantly less robust in theory than Ricardian Equivalence or its older brother, Modigliani-Miller. This is the point that neither Smith nor anyone else seems to have made.

But then, lots of other people have made good points too--basically, all of the same critiques of Ricardian Equivalence also apply to Wallace neutrality, which requires that individuals have perfect information, non-distortionary taxation, etc.