Sorry Dawkins, arguing against the existence of gods is not science
Matthew Martin
2/06/2014 01:57:00 PM
In light of the evolution vs. creationism debate between Bill Nye and Ken Ham a couple nights ago, I've been thinking about science and religion. Though I generally side with Mark Joseph Stern that Bill Nye inadvertently gave the public the wrong impressions about how science debates are conducted, Phil Plait makes a nice argument that we do need more scientists to illustrate that evolution is not at all inconsistent with religion or religious beliefs, so long as you are willing to admit that the world is more than 6,000 years old. "Guided evolution"--the theory that various gods[*] intended for man to evolve and for science to work--is closer to what most people believe anyway.
But then there are people like Richard Dawkins, who make a career out of trying to argue that gods do not exist. Dawkins's argument is, from start to finish, a pure logical fallacy. He argues that failure to find evidence of gods' existence is itself evidence of their non-existence.
Not so. This is actually a statistical fallacy I see all the time in research papers, and it's wrong, wrong, wrong. If you are running a statistical test, there is a null hypothesis and an alternative hypothesis which are not only mutually exclusive but also complementary (meaning that the null + alternative = the entire set of possibilities). We then collect data and calculate the p-value, which can be thought of loosely as the probability of observing that data given the assumption that the null hypothesis is true. If the probability of observing that data given that the null is true is extremely small, then we have some measure of evidence that the null is not true--we "reject the null." Since the null and alternative are complementary, comprising the entire probability space, this means we have evidence for the alternative (but be careful: "the alternative" does not mean "your most favored alternative"). If, however, the p-value is large, we fail to reject the null--we've not found anything in the data that's inconsistent with the null hypothesis. But here's the mistake that happens all the time: people automatically jump from "fail to reject the null" to "accept the null!" In frequentist statistics, you never, ever, ever accept the null. It is quite possible for the data to be consistent with both the null and the alternative hypotheses. The only correct interpretation of a large p-value is that you have no evidence one way or the other.
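To make this concrete, here's a quick simulation (my own illustrative sketch; the effect size, sample size, and number of trials are all made up). The true mean is genuinely nonzero--the alternative is true--yet a t-test against a null of zero fails to reject almost every time:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_obs, true_mean = 1000, 30, 0.05  # arbitrary illustrative values

fail_to_reject = 0
for _ in range(n_trials):
    # Every sample is drawn under the ALTERNATIVE: the true mean is 0.05.
    sample = rng.normal(loc=true_mean, scale=1.0, size=n_obs)
    _, p = stats.ttest_1samp(sample, popmean=0.0)  # null: mean = 0
    if p > 0.05:
        fail_to_reject += 1

# The vast majority of trials fail to reject the null even though the
# null is false -- "fail to reject" clearly cannot mean "accept."
print(f"{fail_to_reject / n_trials:.0%} of trials fail to reject the null")
```

If "fail to reject" meant "accept," we'd be "accepting" a false null in nearly every trial here--which is exactly why a large p-value only means the data are uninformative.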
The mistake comes from confusing the statistics with a common risk-management technique. In policy analysis, there usually isn't a neutral position that you can take in the absence of conclusive evidence--you must either enact or not enact the policy. However, policies are usually costly in some way, meaning that it matters, potentially quite a lot, if you get it wrong, and thus we need a way to manage the risks associated with negative results. The way we do this is by pursuing whichever course of action would be less risky (or costly) if we were wrong. Consider the example of testing for arsenic in a drinking water supply. You do a one-sided test of whether the arsenic is below the threshold at which it would become harmful, but the p-value comes back large. We actually have no evidence that the arsenic level is harmful, but we choose not to drink the water anyway. The risks of concluding that it is harmful when it isn't are much smaller than the risks of concluding it's not harmful when it is. But as far as "truth" is concerned, we pulled that answer out of nowhere--the data did not, and cannot, tell us to accept the null.
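That risk-management logic can be sketched as an expected-cost comparison (all the probabilities and costs below are invented purely for illustration). With uninformative data we split the probability evenly, and the decision is driven entirely by the asymmetry of the costs:

```python
# Uninformative evidence: assign a 50/50 chance that the water is harmful.
p_harmful = 0.5

# Made-up costs of each (action, water-is-harmful?) combination.
cost = {
    ("drink", True):  1000.0,  # drink harmful water: severe harm
    ("drink", False):    0.0,  # drink safe water: no cost
    ("avoid", True):     5.0,  # avoid the water: minor inconvenience...
    ("avoid", False):    5.0,  # ...whether or not it was actually harmful
}

def expected_cost(action):
    """Expected cost of an action, averaged over whether the water is harmful."""
    return (p_harmful * cost[(action, True)]
            + (1 - p_harmful) * cost[(action, False)])

best = min(["drink", "avoid"], key=expected_cost)
print(best)  # "avoid" -- the less costly choice if we turn out to be wrong
```

Note that the data contributed nothing here: the choice to avoid the water falls out of the cost structure, not out of any evidence about the arsenic.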
So in Richard Dawkins's case, his null hypothesis is that gods do not exist, leaving the hypothesis that one or more gods do exist as the alternative. Dawkins points out that none of the data we've observed about the universe is inconsistent with the non-existence of any gods--the p-value is large, and we fail to reject the null. But then Dawkins argues that because we failed to reject the null, the null therefore must be true. Such an argument simply is not logical under any circumstances. To further illustrate why, just turn the statistical test around: let the null be that one or more gods exist, and the alternative that no gods exist. None of the data we've observed is inconsistent with the existence of one or more gods, so we again fail to reject the null. If Dawkins's argument were logical, then we'd have to accept this as equally valid evidence of the existence of gods, leaving us with a logical contradiction. This gets at the point I made earlier: it is quite possible for the data to be consistent with both the null and alternative hypotheses, and for this reason we can never conclusively accept any null hypothesis.
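You can see the symmetry directly with a toy dataset (deterministic and made up, so the numbers below aren't about gods--they just show the same data failing to reject two mutually exclusive nulls):

```python
import numpy as np
from scipy import stats

# A deterministic toy sample centered at 0.05, purely for illustration.
sample = 0.05 + np.linspace(-1.0, 1.0, 30)

# Null A: the mean is 0.  Large p-value -> fail to reject.
_, p_a = stats.ttest_1samp(sample, popmean=0.0)
# Swap the hypotheses. Null B: the mean is 0.1.  Also fail to reject.
_, p_b = stats.ttest_1samp(sample, popmean=0.1)

# The same data are consistent with BOTH nulls, which are mutually
# exclusive -- so "fail to reject" cannot possibly mean "accept."
print(f"p (null: mean = 0):   {p_a:.2f}")
print(f"p (null: mean = 0.1): {p_b:.2f}")
```

Accepting both nulls would mean the mean is simultaneously 0 and 0.1--the same contradiction as accepting both "gods exist" and "no gods exist."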
Now, you can work around this issue with Bayesian statistics. In the Bayesian way of thinking, you assume a prior belief, observe data, and the data either forces you to revise your prior belief or it doesn't. Obviously, Dawkins's prior is that gods do not exist, so he maintains that prior in the face of uninformative data. But here's the problem with arguing from such a position: the prior is itself totally subjective. A Hindu may start with the Bhagavad Gita as his prior, and thus in the face of uninformative evidence conclude that it is correct.
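A minimal Bayes'-rule sketch makes the problem obvious (the priors and likelihoods here are invented numbers): when the data are equally likely under both hypotheses, the posterior exactly equals the prior, so two people with opposite priors both walk away confirmed in their beliefs.

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' rule: P(H | data) from a prior and the two likelihoods."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

# Uninformative data: the same likelihood whether or not gods exist.
skeptic  = posterior(prior=0.01, likelihood_h=0.5, likelihood_not_h=0.5)
believer = posterior(prior=0.99, likelihood_h=0.5, likelihood_not_h=0.5)
print(skeptic, believer)  # 0.01 and 0.99 -- each prior survives unchanged
```

The data did no work at all; each conclusion is just the subjective prior echoed back.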
So, everyone calm down. Science makes no claims on the veracity of religions.
[*]I adopt the inclusive plural, and you should too. This grammatical choice allows that different people believe in different gods, and that many people believe in more than one god.
Suppose an experiment is performed with three possible results A, B, and C. A will confirm your hypothesis, C will disconfirm it, and B will be neutral (P(H|A)>P(H), P(H|B)=P(H), and P(H|C)<P(H)). Now suppose that you cannot tell the difference between results B and C. It should be clear in this case that receiving a result other than A should lower your probability of belief in your hypothesis (albeit potentially by a very small amount).
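To put numbers on that (an illustrative calculation; the probabilities are invented, chosen so that B really is neutral on its own): lumping B and C together into "not A" and applying Bayes' rule shows the posterior dropping below the prior.

```python
p_h = 0.5  # prior P(H)

# Invented outcome likelihoods, chosen so P(H|B) = P(H) holds exactly
# (B has the same likelihood under H and not-H).
p_outcome_given_h     = {"A": 0.5, "B": 0.4, "C": 0.1}
p_outcome_given_not_h = {"A": 0.3, "B": 0.4, "C": 0.3}

# We can only observe "A" vs "not A", so lump B and C together.
p_not_a_given_h     = p_outcome_given_h["B"] + p_outcome_given_h["C"]
p_not_a_given_not_h = p_outcome_given_not_h["B"] + p_outcome_given_not_h["C"]

# Bayes' rule on the lumped outcome.
p_not_a = p_h * p_not_a_given_h + (1 - p_h) * p_not_a_given_not_h
p_h_given_not_a = p_h * p_not_a_given_h / p_not_a
print(f"P(H) = {p_h}, P(H | not A) = {p_h_given_not_a:.3f}")
```

Because the lumped outcome mixes neutral B with disconfirming C, observing "not A" drags the posterior below the prior--even though B alone would have left it untouched.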
There are a few reasons why it's difficult to empirically argue against the existence of god(s). First, there is no clear definition of what constitutes a god. (I like to ask "what's the difference between gods and aliens?") This makes it difficult to say that a particular set of evidence will disconfirm their existence, because theists generally find a way to make any evidence consistent with their hypothesis. However, we can conceive of potential evidence which would confirm the existence of god(s): namely, events which appear to be non-systematic breakdowns of natural laws, i.e., miracles (if the breakdowns are systematic, we assume that we were mistaken about the natural laws and begin replacing them). With confirming evidence in our probability space we must have disconfirming evidence somewhere; we just are not able to distinguish it from neutral evidence. This puts us in a similar situation to the experiment described above. In order to maintain a proper probability space we must give all non-miracles disconfirming power (probably infinitesimal, but there are a lot of non-miracles) for the hypothesis that god(s) exist.
Note this cannot be turned around as you suggest, because we can point to specific evidence that would disconfirm the non-existence of god(s).
No standard statistical test has a "proper probability space" as you define it. We generally test whether a parameter is different from zero, which means that the null is that the parameter equals zero. On a continuous distribution, the probability of observing any specific number (i.e., zero) is precisely zero--we will never obtain zero as the point estimate even if that is the true value of the parameter, no matter how much data we gather. So the set C has measure zero--it contains one point, which has probability zero of being observed. This does not in any way invalidate the statistical test, however. While we can't say that the parameter is zero, we can look for evidence that it isn't.
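A quick simulation illustrates the measure-zero point (sizes and seed are arbitrary): even when the true mean really is zero, a continuous point estimate essentially never lands on exactly zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 experiments, each estimating a mean whose TRUE value is 0.
estimates = [rng.normal(loc=0.0, scale=1.0, size=100).mean()
             for _ in range(1000)]

# The point estimate is never exactly 0.0, even though 0 is the truth --
# the event "estimate equals the true value" has probability zero.
hits = sum(est == 0.0 for est in estimates)
print(f"estimates exactly equal to 0: {hits} of 1000")
```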
Your idea of giving neutral evidence disconfirming power is similar to Occam's razor, which is often misquoted as "the simplest answer that fits the data is the likeliest one." Strictly speaking, this rendition of Occam's razor is a logical fallacy--simplicity doesn't necessarily increase likelihood at all, and the simpler answer can in some situations actually be less likely (this arises in network situations where errors don't get noticed until there are a lot of them).
What statisticians often do with negative results is estimate a confidence interval. While we cannot logically say that the parameter is zero, we often can say with confidence that the parameter is not far from zero, or that it must be so close to zero that its inclusion adds essentially nothing to our understanding of the model. A similar principle applies here. The abundance of neutral evidence does let us say that, while gods may or may not exist, the answer to that question does not significantly affect the performance of our models of the physical universe. That is, gods are an unnecessary assumption, regardless of whether any actually exist.
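Here's what that looks like in practice (an illustrative sketch with simulated data; the sample size and seed are arbitrary). With lots of data, the 95% interval never lets us say the parameter is zero, but it pins it into a band so narrow it couldn't matter:

```python
import numpy as np

rng = np.random.default_rng(1)
# 10,000 simulated observations from a population whose true mean is 0.
sample = rng.normal(loc=0.0, scale=1.0, size=10_000)

mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))  # standard error of the mean
lo, hi = mean - 1.96 * se, mean + 1.96 * se     # normal-approx 95% CI

# We cannot conclude the parameter IS zero, but the interval confines it
# to a narrow band around zero -- too small to change any conclusions.
print(f"95% CI: ({lo:.4f}, {hi:.4f})")
```

The interval's endpoints are never exactly zero, but its width (here about ±0.02) is what licenses the "unnecessary assumption" conclusion.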