5/28/2014 08:00:00 AM

Financially, taking a gap year makes absolutely no sense. Intuitively, by delaying your education, a gap year replaces a year you could have worked as a college graduate with a year worked as a high school graduate, and since you can earn more as a college grad than as a high school grad[*], on net you are actually decreasing your lifelong income with every gap year you take. For a large enough wage differential, this more than offsets the cost of college, including the cost of borrowing against future (higher) income to pay for it. But we'll prove this with calculus. Ok, not real calculus--we can fairly easily deduce the result by borrowing some well-known properties of series derived elsewhere. I promise, this isn't nearly as much math as it looks like.

To start, we'll define a value function $V$ equal to the real present value of all future income, net of college costs, of an individual who just graduated high school in year $t=0$. The value function has three components: the student works full time for $s$ years (years $0$ through $s-1$) at annual salary $w$ before going to college, goes to college for four years (years $s$ through $s+3$) earning no income and instead paying college costs $c$ that would not otherwise be incurred, and then in year $t=s+4$ gets a full time job as a college graduate earning salary $y$ up until retirement in year $t=T$. The student can borrow or save at a real net interest rate of $r$. Thus the value function is given by $$V=\underbrace{\sum_{t=0}^{s-1}\frac{w}{\left(1+r\right)^t}}_{income~before~college}-\underbrace{\sum_{t=s}^{s+3}\frac{c}{\left(1+r\right)^t}}_{cost~of~college}+\underbrace{\sum_{t=s+4}^{T-1}\frac{y}{\left(1+r\right)^t}}_{income~after~college}.$$ I won't go into detail about why this is correct--this is standard discounting--but understand that the interest rate appears in the denominators to account for the advantage of saving for college in advance, and the cost of borrowing for college and repaying after the fact. It's all built in to this expression. So we will actually be answering a slightly more general question than asked in the blog post title: what is the value-maximizing number for $s$, under the constraint that $s$ must be an integer greater than or equal to zero?

If you don't know any calculus, then this next step will blow your mind (but not if you're a high school grad who aced the BC calculus exam!). It turns out that the expression above can be rewritten as: $$V=\underbrace{\frac{1+r}{r}\left(w-\frac{w}{\left(1+r\right)^s}\right)}_{income~before~college}-\underbrace{\frac{1+r}{r}\left(\frac{c}{\left(1+r\right)^s}-\frac{c}{\left(1+r\right)^{s+4}}\right)}_{cost~of~college}+\underbrace{\frac{1+r}{r}\left(\frac{y}{\left(1+r\right)^{s+4}}-\frac{y}{\left(1+r\right)^T}\right)}_{income~after~college}$$ which follows directly from the fact that the sums in the first equation are geometric series--we've merely replaced the series with the formulas for their sums, a result derived in Calc II. We can further simplify: $$V=\frac{1+r}{r}\left(w-\frac{y}{\left(1+r\right)^T}\right)+\frac{1+r}{r}\frac{1}{\left(1+r\right)^s}\left(-w-c+\frac{c}{\left(1+r\right)^4}+\frac{y}{\left(1+r\right)^4}\right)$$ Ok, the geometric series thing was the first bit of calculus, here's the second: maximizers (and minimizers) are preserved under increasing monotonic transformations--in particular, dropping an additive constant doesn't move the maximizer. That means we can ignore the constant at the beginning of the expression above, and whatever maximizes $$\widetilde{V}=\frac{1+r}{r}\frac{1}{\left(1+r\right)^s}\left(-w-c+\frac{c}{\left(1+r\right)^4}+\frac{y}{\left(1+r\right)^4}\right)$$ also maximizes $V$.
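If you'd rather not take the series algebra on faith, here's a quick numeric sanity check (a sketch of my own; the parameter values below are placeholders) that the closed form agrees with the direct sums:

```python
def v_sum(w, y, c, r, s, T):
    """Value function as direct discounted sums: work years 0..s-1,
    college years s..s+3, post-college years s+4..T-1."""
    d = lambda t: (1 + r) ** -t  # discount factor for year t
    return (sum(w * d(t) for t in range(s))
            - sum(c * d(t) for t in range(s, s + 4))
            + sum(y * d(t) for t in range(s + 4, T)))

def v_closed(w, y, c, r, s, T):
    """The same value function via the geometric-series closed forms."""
    g = (1 + r) / r
    return (g * (w - w * (1 + r) ** -s)
            - g * (c * (1 + r) ** -s - c * (1 + r) ** -(s + 4))
            + g * (y * (1 + r) ** -(s + 4) - y * (1 + r) ** -T))
```

For any choice of parameters the two functions should agree to floating-point precision, term for term with the derivation above.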

Now let's consider the fraction $\frac{1}{\left(1+r\right)^s}$, which is the only part of $\widetilde{V}$ that depends on $s$. We have a positive net interest rate $r>0$, which implies that as $s$ increases (starting at 0, then to 1, then to 2, and so on), the expression $\frac{1}{\left(1+r\right)^s}$ decreases. Moreover, $\frac{1+r}{r}>0$, meaning that $\widetilde{V}$ (and therefore $V$) is decreasing in $s$ whenever $-w-c+\frac{c}{\left(1+r\right)^4}+\frac{y}{\left(1+r\right)^4} >0$, which can be rewritten as $$\frac{y}{\left(1+r\right)^4}-w > c-\frac{c}{\left(1+r\right)^4}.$$ This expression does not have a direct literal interpretation (remember it is a monotonic transformation of the object we care about), but the left hand side is closely related to the college wage premium, and the right hand side is closely related to college costs. If the inequality above holds, then $V$ is decreasing in $s$, so to get $V$ as high as possible we want $s$ as low as possible, which is zero. So the optimal number of gap years is zero. But here's the interesting part, in my opinion: if the reverse is true, and we have $c-\frac{c}{\left(1+r\right)^4}>\frac{y}{\left(1+r\right)^4}-w$, then it is always possible to increase $V$ by increasing $s$, meaning that it's optimal to never go to college. It's also possible that the two sides are exactly equal, in which case going to college has no impact whatsoever on your finances (and any $s$ is optimal). Those three are the only possibilities, so we can state this as a theorem:
Proposition 1: Taking a gap year is financially harmful whenever college is financially beneficial.
This follows directly from the algebra above.

So is college financially beneficial? We've already done most of the algebra (and a tiny bit of calculus) to figure this out, so let's proceed. A reparameterization may help us think clearly about this: let's define a college premium equal to the difference between the salary a person can make as a college grad and without college: $a=y-w$. Plugging this into the inequality above and solving for $a$ yields $$a>\left(\left(1+r\right)^4-1\right)\left(w+c\right).$$ Now we just need some data.
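If you want to double-check that this rearrangement really is the same condition as before, here's a brute-force numeric spot check (my own sketch, not a proof; the parameter ranges are arbitrary):

```python
def college_pays_original(w, y, c, r):
    """The condition derived above: discounted wage premium beats
    the (discount-adjusted) cost of college."""
    return y / (1 + r) ** 4 - w > c - c / (1 + r) ** 4

def college_pays_repar(w, y, c, r):
    """The reparameterized condition a > ((1+r)^4 - 1)(w + c), a = y - w."""
    return (y - w) > ((1 + r) ** 4 - 1) * (w + c)
```

Multiplying the first inequality through by $(1+r)^4>0$ and substituting $a=y-w$ gives the second, so the two predicates should agree everywhere.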

The highest student loan interest rate is 6.41 percent--of course, that's a nominal interest rate, so given the Fed's inflation target of 2 percent per year and the Fisher equation of interest, that's a real interest rate of $r=0.0441$. According to the internet, the average college grad makes $y=44970$ and the average high-school-only grad makes $w=29950$, which is, frankly, a lot more than I think you could realistically make in a gap year (the figure includes people who, while uneducated, are experienced). So that's a college premium of $a=15020$. Furthermore, the average cost of in-state public college is $c=8893$. Important to note: the relevant figure here is tuition and fees, excluding living expenses that would have been incurred anyway--you would need just as much food, for example, regardless of whether and when you go to college. Plugging these in, we see that it really, truly is worth going to college: $$15020=a>\left(\left(1+r\right)^4-1\right)\left(w+c\right)=7318.63.$$ If you're curious, with these numbers taking a gap year costs you about \$6,480 in real present value at high-school graduation--which compounds to roughly the headline-grabbing \$36,430.36 in real dollars by the time you retire.
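You can reproduce the arithmetic in a few lines (a sketch; variable names are mine, and the "one gap year" cost below is the present value at high-school graduation implied by the model):

```python
w, y, c, r = 29950.0, 44970.0, 8893.0, 0.0441

a = y - w                                  # college premium
hurdle = ((1 + r) ** 4 - 1) * (w + c)      # right-hand side of the inequality
# present value (at t=0) of taking one gap year, from the s-dependent
# term of the value function: V(s=0) - V(s=1)
gap_cost_pv = -w - c + (c + y) / (1 + r) ** 4
```

Here `a` comes out to 15020 and `hurdle` to about 7318.63, so college clears the bar comfortably, and `gap_cost_pv` lands around \$6,480.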

Of course, all of this depends on your individual circumstances. When picking a college and a major, you should look up the tuition and fees, look up the employment prospects and salaries for graduates from your college and major, and plug them into the inequality above, along with the wages you can actually get from jobs you have been offered as a high school grad.
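As a sketch, the whole decision rule can be wrapped up as a tiny calculator (the function name and structure are mine):

```python
def optimal_gap_years(w, y, c, r):
    """Decision rule derived above.  Inputs: w = wage you can actually
    earn now, y = expected salary with the degree, c = annual tuition
    and fees, r = real interest rate."""
    premium = y - w
    hurdle = ((1 + r) ** 4 - 1) * (w + c)
    if premium > hurdle:
        return 0             # college pays: take zero gap years
    elif premium < hurdle:
        return float("inf")  # college never pays: gap years forever
    return None              # knife-edge case: any number of gap years is optimal
```

With the averages from above, `optimal_gap_years(29950, 44970, 8893, 0.0441)` returns `0`.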
Of course, there are non-financial considerations regarding gap years. But this post is already longer than intended.

[*]I'm maintaining the widely held assumption that the wage differential between college grads and high school grads represents the causal effect of college on earnings potential. However, such a differential could arise even if college has no effect, because of things like selection bias. Matt Yglesias has a nice discussion of this, but it's beyond the scope of this already long post.
Anonymous 6/06/2014 07:30:00 PM
The financial consideration you have assumed away here is whether y|(gap year) = y|(~gap year). If the gap year materially affects your subsequent performance in college or your choice of major, for example, the two could be very different. Of course, this is pretty hard to test for.
--brianS