Scene setting
Charles: Consider our universe. It seems to be precisely tuned for life. Were its initial parameters even slightly different, the universe would have no stars, or no matter, or would collapse in on itself a moment after it arose, amongst many other life-denying fates.
It would be strange if this ‘just so happened’ in a purposeless, undesigned way – that out of all the possibilities, we landed on the one that permitted life. Theism (and other more ‘anthropocentric’ hypotheses, admittedly) provides a much more plausible explanation of the data. Because God wanted living beings, he would make a life-permitting universe. It is a much better fit for the facts.
Debbie: How are you constructing this sort of inference? I can see ways you can do it, but what do you have in mind?
Charles: I’d like to borrow from Robin Collins here. We need to talk about epistemic probability. In some sense, we need to evaluate the relevant conditionals while pretending we don’t know what actually occurred: although some event really did happen, it would have been less likely to happen on one hypothesis relative to another. This ‘retrospective confirmation’ applies despite the truism that ‘if the less-likely hypothesis were true, we’d still be seeing this evidence’.
A concrete example.
Sharpshooters: Suppose you get lined up against the wall, and ten marksmen take aim at you and fire. You survive. The fact you’re still alive surely speaks against the hypothesis that these are expert marksmen with properly-functioning weapons: it suggests maybe they aren’t very good shots, or they’re firing blanks, or similar. That this event has already happened does nothing to screen off the effect of this evidence. (Indeed, if it did, we couldn’t really do any history, of any kind.)
Applying a similar scheme, we can present a fine-tuning argument. That we turn out to be alive in a life-permitting universe surely counts against a non-theistic (or non-design) hypothesis if it’d be really unlikely to turn out this way – a bit like you’d be unlikely to survive if ten able marksmen took a shot at you. Yet such an occurrence is much more likely on Theism.
More formally, like this:
Let LPU = Life permitting universe
N = Naturalism
T = Theism
P(LPU|N) is very low. P(LPU|T) is not too low. So, via reverse probability and the prime principle of confirmation:
P(T|LPU)/P(N|LPU) > P(T)/P(N)
So the fact we arrive in a life-permitting universe improves one’s odds of Theism over Naturalism – or design over not-design. Given how really, really small P(LPU|N) is, this should give us a very large kick towards Theism.
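To make the arithmetic of the inference concrete, here is a minimal sketch of the odds-form update. Every number is an invented placeholder for illustration, not an estimate anyone in the dialogue endorses:

```python
# Odds-form Bayesian update for the fine-tuning inference.
# Every number here is an invented placeholder, not an estimate.

prior_T = 0.5          # prior probability of Theism
prior_N = 0.5          # prior probability of Naturalism
p_lpu_given_T = 0.1    # assumed: LPU not too unlikely on Theism
p_lpu_given_N = 1e-10  # assumed: LPU very unlikely on Naturalism

# P(T|LPU) / P(N|LPU) = [P(LPU|T) / P(LPU|N)] * [P(T) / P(N)]
prior_odds = prior_T / prior_N
bayes_factor = p_lpu_given_T / p_lpu_given_N
posterior_odds = bayes_factor * prior_odds

print(posterior_odds)  # ~1e9: a very large kick towards Theism
```

The point survives any particular choice of priors: so long as the likelihood ratio is enormous, the posterior odds swamp whatever even-handed priors one starts with.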
Debbie: That is right, so long as your assignments for the crucial conditionals are correct. But I don’t think they are. For simplicity, let’s assume the only two horses in town are Theism or Naturalism.
Multiverse
Debbie: Suppose a multiverse, with uncountable universes. Perhaps it exhausts every possible combination of values/parameters and so on, or maybe just a very, very large number. If that’s so, then even if the chances of getting a life permitting universe ‘first time’ are low, the chances of getting one after a few trillion (or infinitely many) bites at the cherry aren’t that low. So P(LPU|N) isn’t very low, and so the confirmatory push of fine tuning to Theism is repulsed.
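Debbie’s point is just the arithmetic of repeated trials: however small the per-universe chance, enough universes make a life-permitting one near-certain. A sketch, with both the per-universe chance and the number of universes invented for illustration:

```python
# Chance of at least one life-permitting universe in n independent
# trials, each with tiny per-universe chance p. Both numbers are
# invented placeholders.
p = 1e-10    # assumed chance a single universe is life-permitting
n = 10**12   # 'a few trillion bites at the cherry'

p_at_least_one = 1 - (1 - p) ** n
print(p_at_least_one)  # ~1.0: near-certain given enough universes
```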
Charles: Why believe this multiverse hypothesis? It’s wildly ad hoc. It’s almost as bad as any sceptical hypothesis you care to name.
Debbie: Is it? Ignore any putative free-standing reasons from quantum mechanics, an infinite universe, or the like. What reason is there to reject a multiverse? Unlike a sceptical hypothesis, it doesn’t offend any of our intuitions of having knowledge. It seems a complete toss-up. Yet this means that the multiverse hypothesis is inscrutable given naturalism, and thus the P(LPU|N) is likewise inscrutable.
Charles: Perhaps, but we can set some idea of a probability bound. P(LPU|N) will basically be P(M|N), where M is a suitable multiverse hypothesis. I don’t think these sorts of M are hugely likely, and they are much less likely than P(LPU|T). Even if it’s pretty inscrutable on naturalism, it’s not exactly what we expect.
Debbie: I’m not sure about this. I have worries about what sort of things perfect beings want to do. But put that aside – I’m sure many people have your take on these probabilities as opposed to mine. But you aren’t out of the woods yet.
Finely tuned evil
Debbie: There is another worry. What we really should be asking is not whether we should expect a life-permitting universe in general, but whether we should expect this universe in particular.
Not all life-permitting universes are expected given Theism. A world where all conscious beings are mercilessly tortured for their entire lives might be one example. Closer to home, a world with the distribution of evils we observe might be another. If so, then although our universe has attributes (like life-permittingness) that confirm Theism, our particular universe is a member of a set that actually disconfirms it, because it includes evil.
Charles: Fair enough – then there is a theodicy issue, but I think that distracts us. Effectively, this is a rebutting defeater to fine-tuning: although fine tuning shifts our credence towards Theism, other features of the world (such as its distribution of evil) shift us back the other way. If we only know that the universe is life-permitting, then (given the fine tuning argument) this tilts us towards Theism; if we also know that this universe has gratuitous evil or whatever, then we’re tilted back to naturalism. We’d have to sort out that question too, but it would still be the case that fine tuning simpliciter tilts us towards Theism.
Debbie: Let’s assume that we can sort out evil. I don’t think fine tuning works anyway.
It’s a big (modal) world, after all…
Debbie: My main worry is this: we simply lack modal access to say that P(LPU|NSU) is very low, where NSU is a naturalistic single universe. Indeed, we can’t say anything about P(LPU|NSU). If that’s so, then fine tuning exerts no persuasive pressure at all.
Grant indifference over known possibilities. It seems clear, however, that the set of worlds we can illuminate is much, much smaller than the set of live possibilities. I have no idea whether a universe with a cosmological constant a few trillion orders of magnitude bigger than its present value is life-permitting. It seems to me there are any number of ‘alien physics’ we can conceive of with completely different laws of physics (one example: a set of universes with four fundamental constants, a-d, for all values of which the universe is life-permitting). Not only this, but I don’t trust my modal perceptive apparatus enough to reveal the whole field of available possibilities. In other words, I am confident that our ‘modal sample’, or epistemically illuminated range, is much, much smaller than the space I need to evaluate.
Yet, if so, then extrapolating from life-permittingness being rare within our modal range to it being rare in general is unreasonable. And if we can’t make this inference, we simply have no steer on the likelihood of getting a life-permitting universe on ‘just chance’.
Charles: All right. We need to restrict ourselves to our epistemically illuminated range, on which life-permittingness is unlikely. So we need to add in some principle, Q, which is something like “we fell inside this particular range.” So we want to evaluate P(LPU|N&k’), where k’ is our background knowledge, which includes Q, but doesn’t include U: that this particular universe occurred.
Debbie: That seems a bit arbitrary, doesn’t it? Why should we accept Q?
Charles: I think there are two good reasons. The first is one of principle – as a rule, we can fairly freely include into our background knowledge anything that doesn’t bias us one way or another. As P(Q|T)/P(Q|N) seems pretty inscrutable, Q satisfies this ‘no-bias’ condition.
Further, I think we can see how including Q makes good intuitive sense. Consider this:
Partially illuminated flies: Suppose a single fly is illuminated by a large spotlight, such that it takes up a minute area of the illuminated range. You know nothing about what lies outside the spotlight – for all you know, just outside the illumination it’s swarming with flies. Suppose now this fly in the spotlight gets hit. This surely confirms that this fly was being aimed at. Even if it wasn’t so unlikely that a random shot would have hit some fly, it’s very surprising it hit this particular fly.
No-bias and salience alteration
Debbie: Let’s take the principle first. I agree that no-bias is a necessary condition for free inclusion into background. But it isn’t a sufficient one. There’s another condition we need to meet which Q fails, which I call ‘Salience alteration’.
Suppose a standard scientific trial: say it’s about statins and heart attacks. We have a control group without statins, and a case group with them, appropriately matched, randomized, and blinded. We find that those taking statins have a lower incidence of heart attacks than the controls. Does this mean we should think statins are cardioprotective? We need to do some inferential statistics to see if the difference we observe is likely to have occurred ‘just by chance’.
Yet, if ‘no-bias’ is sufficient for incorporation into background knowledge, then we don’t need to do this. Both A (“Our sample incidences exactly match our population incidences”) and I (“Our sample incidences are completely unrelated to our population incidences”) pass ‘no-bias’. Yet they lead us to completely different conclusions about the salience of the data we observe. It’s clear we can’t just include things into our background that make things salient or not without good reason. In scientific trials, obviously, the only things we should include in our background about the salience of our results are those granted to us by proper statistical technique.
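As an illustration of the ‘proper statistical technique’ Debbie has in mind, here is a sketch of a simple permutation test. The trial numbers are hypothetical, invented purely for the example, not real statin data:

```python
import random

# Hypothetical trial: 20/1000 heart attacks on statins vs 40/1000
# in controls. These incidences are invented for illustration.
statin = [1] * 20 + [0] * 980
control = [1] * 40 + [0] * 960

observed_diff = sum(control) / len(control) - sum(statin) / len(statin)

# Permutation test: if group labels were irrelevant ('just chance'),
# reshuffling them should often reproduce a difference this large.
random.seed(0)
pooled = statin + control
extreme = 0
trials = 2000
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:1000], pooled[1000:]
    if sum(b) / 1000 - sum(a) / 1000 >= observed_diff:
        extreme += 1

p_value = extreme / trials
print(p_value)  # small: the difference is unlikely 'just by chance'
```

The licence to treat the observed difference as salient is earned by the test itself, not by freely assuming something in the background.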
Where does this leave Q? Well, it is certainly salience altering: with it, we have powerful confirmation for Theism, without it, the likelihoods are inscrutable. So we need reason to believe it, much like we might want reasons for or against sceptical Theism or similar concerns. The worry is familiar – whether we should trust the appearance of fine-tuning, or the appearance of gratuitous evil. Just assuming it in background isn’t good enough.
Charles: Fair enough, then perhaps the example can show that this assumption is intuitive.
Targeting and Locality
Charles: Take the worst case scenario for the partially illuminated flies: flies are everywhere except within our spotlight, where there is only one. Picture a ‘bullseye’: our single fly at the centre, the empty illuminated region around it, and lots of flies everywhere beyond. Yet it surely remains that, even in this ‘worst case scenario’, hitting the middle fly confirms an aiming hypothesis over a just-chance one. This single fly is special. If indeed this ‘worst case scenario’ still gives us the confirmation we want, then we don’t need to worry about what lies outside our epistemic spotlight: we get the confirmation whatever is actually there. So it’s fine to include Q.
Debbie: I think the inference is right, but the process is misdiagnosed, and fundamentally disanalogous to the fine tuning argument.
There are two different hypotheses you could use: one is ‘fly preference’, the other is ‘aiming’. If you use aiming, then indeed it doesn’t matter how common flies are outside the epistemically illuminated range. This is because the confirmation relies on locality – that is, on small groups of flies surrounded by empty space, which will be rare however common flies are in general. If flies are rare, then hitting a ‘target’ of a fly is unlikely by chance. If flies are common, hitting a fly surrounded by (rare) empty space is unlikely by chance. The ‘worst case scenario’ for the aiming hypothesis isn’t solid flies, but a ‘polka-dot’ pattern of targets. But even then, hitting one confirms aiming.
Yet, a fly-preference hypothesis does depend on how common flies are. If you hit a fly and flies are common, it’s no great surprise on chance, even if that particular fly was in a rare locality. So for this hypothesis we need to estimate how common flies are across our partially illuminated space – and if we only illuminate a tiny part of this space, and can’t make any assumptions about the rest, we should say this is inscrutable.
It’s clear that the fine-tuning argument as presented isn’t about aiming, but rather about fly-preference. It is a particular property of a universe (that it permits life) that we are told to consider, not some locality condition like being an oasis of life in a barren locality. So we can’t include Q-like hypotheses into our background in these cases, as it risks giving aberrant significance to small and possibly unrepresentative samples.
Tension and development
Debbie: Worse follows. Not only does Q have little to speak in favour, there’s very good reason to be against it. We can co-opt Collins’ own deployment of probabilistic tension against adopting Q.
Collins suggests that when considering a conjunction of hypotheses, we should consider how well they ‘hang together’. He particularly considers some extensions to a naturalistic single universe (NSU&e) on which you should expect fine tuning. Collins argues that if P(e|NSU) is low, NSU&e suffers from probabilistic tension, and that it’s good epistemic practice to avoid such tension. This seems about right: adding implausible extensions to your pet hypothesis to ‘explain away’ countervailing evidence doesn’t mean this evidence doesn’t ‘count against’ the hypothesis.
But now consider NSU&Q&k”, where k” is k’ minus Q – that is, our background knowledge minus the fact that this particular universe occurred and minus the fact that it falls within our epistemically illuminated range. It is clear that this conjunction suffers from considerable probabilistic tension, because P(Q|k”&NSU) is very low. If we assume indifference (which we did to make the argument in the first place), then P(Q|k”&NSU) is simply the ‘modal area’ of the epistemically illuminated range divided by the ‘modal area’ of the relevant space being considered (namely, all possible physical worlds). As I’ve urged above, this is probably very small indeed. So NSU&Q&k” suffers from probabilistic tension, and should be rejected.
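Under indifference the calculation is just a ratio of ‘modal areas’. A trivial sketch, with both areas as invented placeholders standing in for quantities Debbie claims we cannot actually measure:

```python
# Under indifference, P(Q | k'' & NSU) is the 'modal area' of the
# epistemically illuminated range over that of all possible physical
# worlds. Both areas are invented placeholders.
illuminated_area = 1.0    # our epistemically illuminated range
total_modal_area = 1e15   # assumed vast space of possibilities

p_Q_given_NSU = illuminated_area / total_modal_area
print(p_Q_given_NSU)  # tiny: so NSU&Q&k'' is in probabilistic tension
```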
NSU&k” does not suffer from this tension. Yet P(LPU|NSU&k”) is inscrutable. So, surely, the argument is unpersuasive.
Attractively, these sorts of modal scope objections don’t only apply to cosmological fine tuning; they apply to the biological design argument too. Even if we grant (pace modern science) that getting replicators, or brains, or whatever else is unlikely on Naturalism, we can run a very similar objection to the above. So long as our insight into the relevant possibilities is poor (which it is), the fact that this particular route to brains or whatever is unlikely doesn’t mean there aren’t many other, perhaps fairly likely, routes just elsewhere. If that’s true, the biological design argument doesn’t work even if the factual premises are conceded.
Charles: I’m not sure that’s so successful against biological design. I think we’ve got a better handle on the relevant biological probabilities, and further we feel more confident making assumptions about the modal space in question that avoid these sampling concerns. Then again, no one is going to concede the factual points anyway.
I’m not sure there’s much hope of resurrecting a Q-like principle that lets us avoid the modal space we don’t know about when we make our inference. On reflection, the underlying principle of your objection (‘don’t infer from a sample to a population when the sample is much smaller and you lack knowledge of the population distribution’) seems sound: I doubt analogical arguments or Chisholming a suitable restriction will work.
Debbie: Better luck might be had in denying the modal space in question is as vast and inscrutable as I make out. Maybe I suffer from a modally promiscuous imagination, and these supposed possibilities are really impossible or absurd. Yet they seem about right to me – they don’t seem any more impossible or absurd than the possible worlds in our epistemically illuminated range. I think this intuition, with a healthy dose of modal scepticism, is hard to budge.
Charles: Returning to what we covered earlier. Perhaps we can change what we’re after to some sort of locality or ‘bullseye’ prediction. That, as you covered above, is fairly secure to modal scope objections.
Debbie: Yes, but how? I can see how Theism predicts a LPU, but I don’t see how it predicts a bullseye LPU. That seems pretty inscrutable to me.
Charles: Perhaps as a way for God to reveal himself?
Debbie: Maybe, but Theists can’t really agree to what extent, and in what manner, God hides or reveals himself, so I don’t see much hope of a robust prediction there.
Then again, P(LPU&X|T&k”) doesn’t need to be really high. It just needs to be good enough to beat the naturalistic likelihood. The problem is that as you add in stronger and stronger locality concerns, the relevant likelihoods become harder and harder to divine. There’s also the issue that this looks like an increasingly ad hoc exercise.
Charles: A final question. Do you really believe it? Doesn’t it strike you as a somewhat arcane objection? For me, at least, the fine tuning data ‘hits you in the gut’. Do you think this objection is substantive, or more a puzzle to be sorted out by the proponent of the argument?
Debbie: Actually, there is a strong intuition underlying the above – it isn’t just some curio. I agree the fine tuning data has a hit to it: it seems odd that the universe turned out this way, and maybe there’s a more comprehensive explanation than ‘it happened’. But, I think, it’s pretty odd that the universe turned out any way: the space of ways the world could have been is vast, almost limitless. It is this nigh-mystical awe at the modal space that should lead us to be suspicious of being too confident that it is as the fine-tuning argument presumes.