Author’s note: Theoretical physicist Sean Carroll was recently interviewed by podcaster Alex O’Connor and asked to defend his stance that one of the most thought-provoking scientific arguments for God’s existence, the argument from cosmological fine-tuning, “is the best argument for God, but it’s still a terrible argument.” I am responding to Carroll and other critics of the fine-tuning argument in a series of posts.
Find the full series so far here.
In my last post, we saw that epistemic probabilities are best understood, especially in the context of the fine-tuning argument (FTA), as objective degrees of support rather than subjective degrees of belief. This undercut Carroll’s subjectivism and set the stage for dismantling his concerns and allegations about evidential anomalies undermining the FTA. I also took apart his Bayesian likelihood reversal argument that fine-tuning favors naturalism over theism. That claim results from conflating what God can do with what traditional theism understands his purposes to be, and from the fallacy of wishful thinking: since naturalism requires a life-permitting reality in order for us to be here, a life-permitting universe is supposedly more expected on naturalism because theism can do without it. As we saw, that’s not how any of this works. With these foundational issues under control, I turn to the third part of our introduction: cosmological fine-tuning and multiverse explanations.
I outlined the methodological foundation for the fine-tuning argument in my last post. Here, I’ll introduce the structure of the fine-tuning evidence and highlight the strength of the FTA before examining Carroll’s preferred explanation for fine-tuning, the infinite multiverse, and cataloguing its deepest conceptual and structural problems, something that Carroll completely ignored in his interview with O’Connor.
Stage 3: Cosmological Fine-Tuning: Evidence and Argument
With the correct Bayesian apparatus in place as summarized in the previous post, we’re ready to consider what Carroll ignores, namely the FTA in its strongest form. I’ll start by presenting three categories of evidence: laws, constants, and initial conditions. The first category, nomological fine-tuning (the fine-tuning of the laws of nature themselves), will address the dimensionality of spacetime, the existence and character of the four fundamental forces, the quantization of energy, and the Pauli Exclusion Principle. These structural features of the universe’s architecture, necessary for embodied life, bypass the probability objections to the fine-tuning argument that occupy so much of the critical literature. Second, I will consider parametric fine-tuning, namely, the life-permitting ranges of many of the fundamental constants of nature, drawing on Luke Barnes’s rigorous discussion of nuclear physics, stellar structure, and chemistry. Third and finally, I will consider initial-conditions fine-tuning, most especially Roger Penrose’s demonstration that the low-entropy initial state of our universe occupies a region of available phase space so minuscule as to defy comprehension.
[Embedded video: “The Fine-Tuning Argument is Terrible – Sean Carroll” (YouTube)]
With this evidence in hand, the FTA can be formulated in our Bayesian framework in a way that absorbs the strongest probability objections in the literature, notably, the normalizability problem, the alleged unjustifiability of the principle of indifference, and a recent technical critique advanced by Chris Smeenk (2026) on the basis of Porter Williams’s work on naturalness (2015, 2019). Smeenk contends that Barnes’s (2018, 2020) quantitative estimates rest on a confusion. He claims the estimates treat the fact that low-energy physics is sensitive to high-energy physics — a genuine and well-understood feature of modern particle theory — as showing that the observed values are improbable, and argues that this requires a justified way of assigning probabilities across all possible theories, something nobody actually has. While technically interesting, Smeenk’s objection doesn’t survive analysis with the epistemic probability framework we’ve established. Furthermore, the evidence base for fine-tuning is far broader than the “naturalness” failures it targets. What we find, as I will argue, is a likelihood ratio so large that it overwhelms any reasonable atheistic prior probability judgments against theism.
Stage 4: The Multiverse Examined
So what about the multiverse, then? Carroll’s preferred alternative faces a wide variety of difficulties that he does not mention but of which he is not ignorant. This is further evidence of an intent to win an argument rather than to find the truth. The problems with multiverse explanations vary in status: some face push-back, while others are generally acknowledged, even among multiverse proponents. In any case, these problems collectively cast overwhelming doubt on the multiverse as the explanatory panacea that Carroll suggests.
The Question of Beginnings
The Borde-Guth-Vilenkin theorem (2003) establishes that any region of the universe that has, on average, been expanding throughout its history cannot extend infinitely into the past: at least one possible trajectory through it, whether that of a massive particle or a ray of light, must come to an end when traced backward in time. Note that since the theorem permits cosmologies in which some paths extend infinitely into the past while others do not, it does not, taken in isolation, prove that the universe began to exist. For instance, loop quantum cosmology (LQC) circumvents the BGV theorem’s conditions by modifying spacetime structure at the quantum level in a way that prohibits singularities and permits a “bounce.” However, as Andrew Loke (2022, chapters 4 & 5) has documented, such workarounds come at severe theoretical cost, including extraordinary fine-tuning of the bounce conditions themselves, underdetermination by the data, violations of the Generalized Second Law of thermodynamics (Wall 2012),1 and unresolved questions about the model’s physical interpretation. The bottom line is that evading the BGV theorem requires accepting increasingly baroque cosmologies whose explanatory deficits may be weightier than the problems they’re intended to solve. So while the BGV theorem may not require a beginning for our universe, acknowledging such a beginning is arguably the simplest and best explanation currently available.
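The theorem’s core condition can be stated schematically. What follows is a rough sketch in my own notation, not the paper’s exact formulation: let \(H(\lambda)\) be the locally measured expansion rate along a timelike or null trajectory parametrized by \(\lambda\). Borde, Guth, and Vilenkin show that if the averaged expansion rate along that trajectory is positive,

\[
H_{\text{avg}} \;\equiv\; \frac{1}{\lambda_f - \lambda_i}\int_{\lambda_i}^{\lambda_f} H(\lambda)\, d\lambda \;>\; 0,
\]

then the trajectory cannot be extended indefinitely into the past; it is past-incomplete. The force of the theorem lies in its generality: this condition makes no assumptions about the detailed matter content or dynamics of the spacetime, which is why evading it requires modifying spacetime structure itself, as LQC does.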
The Measure Problem
An endless multiverse is a context in which every outcome occurs infinitely many times, rendering calculations of probability ill-defined. After all, when every outcome occurs infinitely many times, the ratio of one infinite quantity to another is mathematically meaningless, and the various technical methods devised to elicit a finite answer from such ratios yield different and even contradictory results depending on which method is chosen (Olum, 2012; Gibbons and Turok, 2008). Ken Olum (2012: 9-10) states the issue succinctly: “[i]n an infinite universe, everything which can happen will happen an infinite number of times, so what does it mean to say that one thing is more likely than another?” In the absence of coherent probabilities, the multiverse cannot explain why we observe one reality rather than another, so this technical difficulty, which is no closer to resolution than it was over twenty years ago, undermines its explanatory capacity entirely.
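The order-dependence at the heart of the measure problem can be illustrated with a toy example of my own (it is not drawn from Olum or Gibbons and Turok). Take the simplest infinite collection there is, the positive integers, and ask: what fraction are even? The answer you get by counting depends entirely on the order in which you enumerate them.

```python
# Toy illustration of the measure problem: in an infinite collection,
# the "probability" of an outcome, computed as a limiting frequency,
# depends on how the collection is ordered before taking the limit.
# The same infinite set (the positive integers) yields different
# answers to "what fraction are even?" under two different orderings.

from itertools import count, islice

def natural_order():
    """1, 2, 3, 4, ...: odds and evens interleaved one-to-one."""
    yield from count(1)

def biased_order():
    """1, 3, 2, 5, 7, 4, ...: two odds for every even.
    Every positive integer still appears exactly once."""
    odds, evens = count(1, 2), count(2, 2)
    while True:
        yield next(odds)
        yield next(odds)
        yield next(evens)

def even_frequency(ordering, n):
    """Fraction of the first n terms of the ordering that are even."""
    sample = islice(ordering(), n)
    return sum(1 for k in sample if k % 2 == 0) / n

print(even_frequency(natural_order, 300_000))  # 0.5
print(even_frequency(biased_order, 300_000))   # ~0.3333
```

Both generators enumerate exactly the same infinite set, yet the limiting frequency of “even” is 1/2 under one ordering and 1/3 under the other. Cosmological measures face the same predicament on a grander scale: with no privileged way to order or truncate an infinite ensemble of universes, “the fraction of observers who see X” has no unique answer.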
The Boltzmann Brain Catastrophe
In any cosmology with eternal expansion (including our own universe if dominated by a positive cosmological constant), on the assumption of naturalism and materialist theories of consciousness, random thermal fluctuations would eventually produce, in numbers vastly exceeding ordinary observers like ourselves, isolated “brains” with momentary conscious experiences (Boltzmann, 1896; Dyson, Kleban, and Susskind, 2002). If such models of the universe were correct, then materialist assumptions entail that we would almost certainly be such “Boltzmann brains” with illusory memories — which would undermine the very observations that led us to accept the models in the first place. This problem afflicts not only the multiverse but much of contemporary cosmology, and proposed solutions remain contested.
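The reasoning behind the catastrophe can be sketched with a standard order-of-magnitude estimate (my gloss, not a formula taken from the cited papers). In equilibrium statistical mechanics, the probability of a thermal fluctuation that decreases entropy by \(\Delta S\) is exponentially suppressed:

\[
P(\text{fluctuation}) \;\sim\; e^{-\Delta S}
\]

(in units where Boltzmann’s constant \(k_B = 1\)). Fluctuating a single, momentary brain out of thermal equilibrium requires an entropy dip vastly smaller than fluctuating an entire ordered universe full of ordinary observers, so \(\Delta S_{\text{brain}} \ll \Delta S_{\text{universe}}\), and the ratio of brains to ordinary observers over eternal time goes as \(e^{\Delta S_{\text{universe}} - \Delta S_{\text{brain}}}\), an unimaginably large number favoring the brains.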
The Inverse Gambler’s Fallacy
It would be a mistake, upon observing that a double six had been rolled at the start of a backgammon game, to conclude that there must have been many prior rolls of the dice to achieve this result. Philosophers have argued that inferring a multiverse from the fine-tuning of our own universe makes the same mistake, an error known as “the inverse gambler’s fallacy.” Ian Hacking (1987) originally made this point, which was then further developed by Roger White (2000). In short, the existence of other universes does not raise the probability that our universe is fine-tuned, because the constants of each universe are fixed independently. What is more, if this inverse gambler’s fallacy objection to multiverse explanations succeeds, as Kenny Boyce and Philip Swenson (2026) have recently and quite cogently argued, the fact that our universe is fine-tuned may actually provide evidence against the multiverse rather than for it. This is far more plausible than Carroll’s likelihood reversal argument that we eviscerated earlier, and highly ironic given that he takes explanatory refuge in the multiverse.
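The independence point can be checked directly with a toy simulation of my own devising (an illustration of the fallacy, not an argument from the cited papers). The chance that a specific, observed pair of dice shows double six is 1/36 no matter how many other, independent rolls occur elsewhere; extra rolls raise the chance that double six occurs *somewhere*, but not that *this* roll shows it.

```python
# Toy illustration of the inverse gambler's fallacy: the frequency
# with which a specific observed roll comes up double six is ~1/36
# regardless of how many other independent rolls happen alongside it.

import random

def observed_roll_is_double_six(other_rolls, rng):
    """Roll one observed pair of dice, plus `other_rolls` unseen pairs."""
    observed = (rng.randint(1, 6), rng.randint(1, 6))
    for _ in range(other_rolls):
        rng.randint(1, 6), rng.randint(1, 6)  # independent, irrelevant
    return observed == (6, 6)

def frequency(other_rolls, trials=100_000, seed=0):
    """Estimated probability that the observed roll is double six."""
    rng = random.Random(seed)
    hits = sum(observed_roll_is_double_six(other_rolls, rng)
               for _ in range(trials))
    return hits / trials

print(frequency(other_rolls=0))   # ~1/36 ≈ 0.0278
print(frequency(other_rolls=20))  # still ~1/36
```

The analogy to the multiverse, on this objection: if each universe’s constants are set independently, positing vast numbers of other universes does nothing to raise the probability that *our* universe is fine-tuned, just as the unseen rolls do nothing to the observed pair.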
When we consider multiverse explanations at length in subsequent posts, I will also examine whether they genuinely have “independent support” from inflationary cosmology, string theory, and many-worlds quantum mechanics, and argue that this support is far weaker and more contested than Carroll admits (Ellis 2011; Gordon 2011a; Gordon 2021: 457-470).
Next up: “The Counter-Evidence Considered.”
Notes
1. The Generalized Second Law of thermodynamics extends the familiar second law to contexts involving black holes and cosmological horizons. The ordinary second law says that physical systems naturally progress from states of greater order to states of lesser order; that is, a quantity physicists call entropy, which measures the degree of disorder in a system, always tends to increase. An example would be a hot cup of coffee cooling to room temperature; it never spontaneously heats back up. The Generalized Second Law adds that this principle holds even when black holes and cosmological horizons are involved, provided we count not only the ordinary disorder of matter and radiation but also the entropy associated with the areas of any such horizons. This is significant for bouncing cosmologies because a universe that contracts to a minimum size and then re-expands must pass through a state of extraordinarily high order, that is, extraordinarily low entropy, at the bounce point, and such a transition would violate the Generalized Second Law. In that case, a bounce would not merely be physically exotic but thermodynamically forbidden.
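In symbols, and as a schematic gloss of my own rather than Wall’s precise formulation: with \(S_{\text{out}}\) the ordinary entropy of matter and radiation and \(A\) the total area of the relevant horizons, the generalized entropy is

\[
S_{\text{gen}} \;=\; S_{\text{out}} \;+\; \frac{A}{4\,\ell_P^{2}}, \qquad \Delta S_{\text{gen}} \;\geq\; 0,
\]

where \(\ell_P\) is the Planck length (in units with \(\hbar = c = k_B = 1\)). A bounce requiring \(\Delta S_{\text{gen}} < 0\) would therefore be ruled out by this law.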
