
Spider-Man, the Multiverse, and Intelligent Design

Photo credit: Cristian Bortes / bortescristian, CC BY 2.0, via Wikimedia Commons.

Recently a friend convinced me to go see the new Spider-Man movie, where I was surprised to see that the multiverse not only makes an appearance but plays a crucial role in the plot. Without giving away any spoilers, Benedict Cumberbatch’s character, Doctor Strange, declares, “The multiverse is a concept about which we know frighteningly little.” He’s right. And that’s part of the problem for materialists who cite the multiverse to explain away the extraordinary evidence for cosmic fine-tuning. Let’s first review the evidence.

A Finely Tuned Universe

Many physical laws and constants of the universe are finely tuned and “just right” in order for life to exist. Here are a few noteworthy examples, many of which are taken from Return of the God Hypothesis:

| Parameter | Degree of Fine-Tuning | References |
| --- | --- | --- |
| Strength of Electromagnetic Force | 1 part in 25 | Ekström et al., “Effects of the Variation of Fundamental Constants on Population III Stellar Evolution,” Astronomy & Astrophysics 514: A62 (2010); Epelbaum et al., “Dependence of the Triple-Alpha Process on the Fundamental Constants of Nature,” European Physical Journal A 49 (2013): id 82. |
| Strength of Strong Nuclear Force | 1 part in 200 | Csótó et al., “Fine-Tuning the Basic Forces of Nature Through the Triple-Alpha Process in Red Giant Stars,” Nuclear Physics A 688/1-2 (2001): 560-562. |
| Mass of Quark | 1 part in 10^21 | Luke Barnes, “The Fine-Tuning of the Universe for Intelligent Life,” Publications of the Astronomical Society of Australia 29 (4) (2012): 529-564; Stephen Meyer, Return of the God Hypothesis, pp. 137, 472. |
| Value of Gravitational Constant | 1 part in 10^35 | Geraint Lewis and Luke Barnes, A Fortunate Universe: Life in a Finely Tuned Cosmos (Cambridge, UK: Cambridge University Press, 2016), p. 109. |
| Value of Cosmological Constant | 1 part in 10^90 | John Leslie, Universes (London, UK: Routledge, 1989), pp. 5, 31, gives a value of 1 part in 10^120, but the best estimate today is 1 part in 10^90; see the Extended Research Notes for Return of the God Hypothesis. |
| Ratio of Neutron Mass to Proton Mass | 1 part in 1,000 | Lewis and Barnes, A Fortunate Universe, p. 79. |
| Ratio of the Weak Nuclear Force to the Strong Nuclear Force | 1 part in 10,000 | Martin Rees, “Large Numbers and Ratios in Astrophysics and Cosmology,” Philosophical Transactions of the Royal Society London A 310 (1983): 317; Lewis and Barnes, A Fortunate Universe, p. 78. |
| Ratio of the Electromagnetic Force to Gravity | 1 part in 10^40 | Paul Davies, The Accidental Universe (Cambridge, UK: Cambridge University Press, 1982), pp. 71-73. |
| Cosmic Mass Density at Planck Time | 1 part in 10^60 | Davies, The Accidental Universe, p. 89; Leslie, Universes, p. 29. |
| Initial Expansion Rate of the Universe | 1 part in 10^17 | Paul Davies, Superforce: The Search for a Grand Unified Theory of Nature (New York: Simon & Schuster, 1985), p. 184. |
| Initial Density of the Universe | 1 part in 10^24 | Lewis and Barnes, A Fortunate Universe, p. 167. |
| Initial Entropy of the Early Universe | 1 part in 10^(10^123) | Roger Penrose, The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford, UK: Oxford University Press, 2002), pp. 444-445; Leslie, Universes, p. 28. |
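
To get a rough sense of how these sensitivities compound, here is a minimal Python sketch that multiplies the tabulated values together in log space. It assumes, purely for illustration, that each parameter is independent and that “1 part in N” can be treated as a probability of 1/N; neither assumption is made by the sources cited in the table.

```python
import math

# Tabulated "1 part in N" sensitivities, stored as log10(N).
# Treating each as an independent probability of 1/N is an illustrative
# assumption, not a claim made by the cited sources.
log10_sensitivities = {
    "electromagnetic force": math.log10(25),
    "strong nuclear force": math.log10(200),
    "quark mass": 21,
    "gravitational constant": 35,
    "cosmological constant": 90,
    "neutron/proton mass ratio": 3,
    "weak/strong force ratio": 4,
    "electromagnetism/gravity ratio": 40,
    "cosmic mass density at Planck time": 60,
    "initial expansion rate": 17,
    "initial density": 24,
}

# Summing exponents in log space avoids floating-point underflow:
# the combined probability, 10^-(sum), is far below the smallest float.
total_exponent = sum(log10_sensitivities.values())
print(f"Combined odds under these assumptions: about 1 part in 10^{total_exponent:.0f}")

# Penrose's initial-entropy figure, 1 part in 10^(10^123), dwarfs everything
# else and cannot even be written as an ordinary float, so it is noted separately.
print("Initial entropy (Penrose): 1 part in 10^(10^123), not included in the sum.")
```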

Fine-Tuning as Evidence for Design

A recent issue of New Scientist explains why the universe seems fine-tuned for our existence, quoting cosmologist Paul Davies:

Change many of these constants, and nothing happens. “But with others, it’s drastic, not to say lethal,” says cosmologist Paul Davies at Arizona State University. Alter the relative strengths of gravity and electromagnetism just a little, say, and stars and galaxies can’t form. Flip the tiny difference in the proton and neutron’s masses to make the proton heavier, and you don’t even get stable atoms.

“Changing these numbers would probably preclude any life in the universe,” says Davies. It isn’t a big leap to say it looks like the knobs have been twiddled — as if the universe were somehow fine-tuned for our existence. 

But how do we explain this fine-tuning? The finely tuned laws and constants of the universe are an example of specified complexity in nature. They are complex in that their values and settings are highly unlikely. They are specified in that they match the specific requirements needed for life. And what, in our uniform experience, is the only known cause of high levels of specified complexity? Intelligent design.

This conclusion has in fact been endorsed by scientists at the highest levels. Physics Nobel laureate Charles Townes explained:

Intelligent design, as one sees it from a scientific point of view, seems to be quite real. This is a very special universe: it’s remarkable that it came out just this way. If the laws of physics weren’t just the way they are, we couldn’t be here at all. The sun couldn’t be there, the laws of gravity and nuclear laws and magnetic theory, quantum mechanics, and so on have to be just the way they are for us to be here.

(“‘Explore as much as we can’: Nobel Prize winner Charles Townes on evolution, intelligent design, and the meaning of life,” by Bonnie Azab Powell, UC Berkeley NewsCenter (June 17, 2005).)

Necessary, Not Sufficient

It’s important to appreciate that the fine-tuned laws of nature are necessary conditions, but not sufficient ones, for the existence of life. Invoking these laws alone to create or cause the existence of life is like saying, “A cake is caused by flour, eggs, milk, and an oven.” Those special ingredients and appliances are needed for a cake, but they are not sufficient to “create” a cake or “cause” it to exist. You also need someone to assemble the ingredients in the right order — i.e., to add information to the system. In other words, a baker is also required.

Fine-tuning is thus required for life, but it is not enough to produce life. Something else is required to arrange biomolecules in the right configurations, that is, to add information to living systems. Both the fine-tuning and the information are necessary, and neither alone is sufficient. It’s difficult to see how one can account for either without intelligent design.

The Multiverse Response

How do materialists respond to this argument? They cannot deny that the laws of nature are specified for life. But they seek to make them inevitable — i.e., less unlikely or less complex — by invoking a multiverse. Charles Townes explains this tactic:

Some scientists argue that “well, there’s an enormous number of universes and each one is a little different. This one just happened to turn out right.” Well, that’s a postulate, and it’s a pretty fantastic postulate — it assumes there really are an enormous number of universes and that the laws could be different for each of them. The other possibility is that ours was planned, and that’s why it has come out so specially.

Under the multiverse proposal, if there exists a potentially near-infinite number of different universes, each with different values for its physical laws and constants, then just by chance one will happen to get the rare finely tuned parameters and conditions needed for life. George Ellis and Joe Silk explain in Nature that the motive behind the multiverse is to try to explain fine-tuning:

The multiverse is motivated by a puzzle: why fundamental constants of nature, such as the fine-structure constant that characterizes the strength of electromagnetic interactions between particles and the cosmological constant associated with the acceleration of the expansion of the Universe, have values that lie in the small range that allows life to exist. Multiverse theory claims that there are billions of unobservable sister universes out there in which all possible values of these constants can occur. So somewhere there will be a bio-friendly universe like ours, however improbable that is.

In essence, they seek to increase the odds of winning the cosmic lottery by inventing as many universes as are needed to make the existence of a universe that permits life more likely. We happen to occupy the one that permits complex life.
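
The lottery logic can be made explicit with a short Python sketch. If a single universe is life-permitting with probability p, then among N independent universes the chance that at least one is life-permitting is 1 - (1 - p)^N, so the multiverse advocate only needs N to be large enough. The per-universe odds below are a placeholder chosen for illustration, not a figure from any source cited here.

```python
import math

def prob_at_least_one(log10_p: float, log10_N: float) -> float:
    """P(at least one of N universes is life-permitting), with p = 10**log10_p.
    Uses (1 - p)^N ~ exp(-N*p), a good approximation for tiny p."""
    n_times_p = 10 ** (log10_N + log10_p)
    return -math.expm1(-n_times_p)  # = 1 - exp(-N*p), numerically stable

# Placeholder: per-universe odds of being life-permitting of 1 in 10^40.
log10_p = -40

for log10_N in (10, 39, 40, 41, 45):
    p_any = prob_at_least_one(log10_p, log10_N)
    print(f"N = 10^{log10_N}: P(at least one life-permitting universe) ~ {p_any:.3g}")
```

Once N climbs past 1/p, the probability saturates near 1, which is exactly the move the multiverse makes: choose an ensemble large enough that the improbable becomes expected.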

Problems with the Multiverse

But the multiverse is no solution to the problem of fine-tuning, for several reasons:

  1. What is the mechanism that keeps cranking out all of these hypothetical universes, and how did it originate? There is no explanation for the cause of the multiverse.
  2. The theory relies on the assumption that the universes would be different from each other. If there are multiple universes, why wouldn’t many or all of them have the same characteristics?
  3. Since we cannot observe anything outside our universe, multiverse theories are 100 percent philosophical speculation, not science.

That last point leads to a deeper reason to reject the multiverse hypothesis. A scientific explanation must meet a requirement that the multiverse concept violates: it should be testable. The inability of science to observe anything outside our universe does not seem likely to change anytime soon, or ever. As cosmologist George F. R. Ellis explained in Scientific American, any potential parallel universes would “lie outside our horizon and remain beyond our capacity to see, now or ever, no matter how technology evolves,” and thus “we have no hope of testing it observationally.” He and Silk further explain in Nature that although some versions of the multiverse make testable predictions, in the final analysis the multiverse is not testable:

Some people have devised forms of multiverse theory that are susceptible to tests: physicist Leonard Susskind’s version can be falsified if negative spatial curvature of the Universe is ever demonstrated. But such a finding would prove nothing about the many other versions. Fundamentally, the multiverse explanation relies on string theory, which is as yet unverified, and on speculative mechanisms for realizing different physics in different sister universes. It is not, in our opinion, robust, let alone testable.

The many-worlds theory of quantum reality posed by physicist Hugh Everett is the ultimate quantum multiverse, where quantum probabilities affect the macroscopic. According to Everett, each of Schrödinger’s famous cats, the dead and the live, poisoned or not in its closed box by random radioactive decays, is real in its own universe. Each time you make a choice, even one as mundane as whether to go left or right, an alternative universe pops out of the quantum vacuum to accommodate the other action.

Billions of universes — and of galaxies and copies of each of us — accumulate with no possibility of communication between them or of testing their reality.

It seems Doctor Strange was right: there is little we know, or can know, about the multiverse, which is precisely why multiverse theories cannot be part of proper scientific explanations.

The Multiverse Destroys Science

Another danger of “multiverse thinking” is that it would effectively destroy the ability of scientists to study nature. A short hypothetical example shows why.

Imagine that a team of researchers discovers that 100 percent of an entire town of 10,000 people got cancer within one year — a “cancer cluster.” For the sake of argument, say they determine that the odds of this occurring just by chance are 1 in 10^10,000. Normally, scientists would reason that such low odds establish that chance cannot be the explanation, and that there must be some physical agent causing cancer in the town.

Under multiverse thinking, however, one might as well say, “Imagine there are 10^10,000 universes, and our universe just happened to be the one where this unlikely cancer cluster arose — purely by chance!” Should scientists seek a scientific explanation for the cancer cluster, or should they just invent 10^10,000 universes where this kind of event becomes probable? The multiverse advocate might reply, “Well, you can’t say there aren’t 10^10,000 universes out there, right?” Right — but that’s the point. There’s no way to test the multiverse, and science should not seriously consider untestable theories. Multiverse thinking makes it impossible to rule out chance, which essentially eliminates the basis for drawing scientific conclusions.
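
The contrast between the two styles of reasoning can be written out in a few lines of Python. The numbers are the hypothetical ones from the example above, and the chance-rejection cutoff is an arbitrary illustrative threshold rather than a standard drawn from any cited source.

```python
# Hypothetical cancer-cluster example from the text; all numbers are illustrative.
log10_odds_by_chance = -10_000      # odds of the cluster arising by chance: 1 in 10^10,000
log10_rejection_cutoff = -50        # arbitrary cutoff below which "chance" is rejected

# Ordinary scientific reasoning: the observation is wildly improbable under the
# chance hypothesis, so infer a physical cause and go looking for it.
if log10_odds_by_chance < log10_rejection_cutoff:
    print("Reject chance; search for a causal explanation (e.g., a carcinogen).")

# "Multiverse thinking": posit 10^10,000 universes so the cluster is expected
# to happen somewhere, and no further explanation is sought.
log10_posited_universes = 10_000
log10_expected_clusters = log10_posited_universes + log10_odds_by_chance
print(f"Expected clusters across the posited ensemble: 10^{log10_expected_clusters} = 1")
```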

You Might Be a Boltzmann Brain

Another idea that helps illustrate how multiverse thinking could spell the end of rational inquiry is the Boltzmann brain. Multiverse advocates claim that if any universe happens to get the right parameters for life, then life will arise there, and so it is not especially “lucky” that we find ourselves in such a universe: had we not lived in a life-friendly universe, we would not be here to notice. The problem of Boltzmann brains shows why we’re actually quite lucky after all.

According to quantum theory, sometimes very weird things can happen. This includes a brain popping into existence, just by chance, which has false memories and thinks it’s capable of understanding science and discerning the inner workings of the universe — when it’s really just hallucinating the whole thing. In fact, you yourself might be such a “Boltzmann brain”! How could you disprove it? In Return of the God Hypothesis, Stephen Meyer explains the disturbing epistemological implications of Boltzmann brains:

In an infinite number of universes, an infinite number of such [Boltzmann] brains would exist, including an infinite number with false memories and perceptions. More troubling, it follows that we ourselves might have such brains rather than so-called natural brains with reliable perceptions and true memories. Consequently, many physicists now worry that positing an infinite number of universes, either to solve the fine-tuning problem or (to the extent that any might be aware of it) the problem of the informational inputs necessary to render quantum cosmology plausible, should lead us to doubt the reliability of our own minds. In other words, positing infinite-universe cosmologies leads inevitably to radical epistemological skepticism. 

p. 400

Meyer explains that initially multiverse cosmologists believed that their mechanism for generating a “multiverse” would always lead to far fewer “Boltzmann brains” than natural brains, meaning it’s much more likely that we are real brains that can trust our senses. However, “This proposed solution turned out not to work,” as Meyer explains:

The physicists proposing it soon realized that, in any given sector of the inflating space, the inflaton field would produce astronomically more extremely young or short-lived universes than extremely old universes such as ours. … Why is this a problem? Because many such Boltzmann brains with false memories would arise by spontaneous quantum fluctuations in the young universes in the time that it would take for one or a few conscious intelligent forms of life (i.e., natural brains with real memories and accurate sense perceptions) to evolve in one of the relatively few old universes. Thus, the activity of the inflaton field would ensure that most observers would be Boltzmann brains in universes too young to permit the kind of evolution needed to produce ordinary observers with reliable memories. 

pp. 401-402

In other words, if inflationary models generated a multiverse, it’s far more likely that you are a Boltzmann brain hallucinating this article than a natural brain trying to understand the world we live in. Once again, multiverse thinking destroys our ability to do science. 

Moreover, a Boltzmann brain needs only one lucky, localized patch of life-friendly space, whereas natural brains need an entire life-friendly universe in which to evolve. Because the former is far more likely to occur by chance than the latter, if we really are natural brains observing the universe, then we are quite lucky after all.
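
A toy calculation makes the worry concrete. Suppose a lone fluctuated brain requires odds of roughly 1 in 10^a while an entire life-friendly universe requires odds of roughly 1 in 10^b, with b much larger than a; then the expected ratio of Boltzmann observers to ordinary observers scales like 10^(b-a). The exponents below are placeholders for illustration only and are not taken from Meyer, Penrose, or any other source cited here.

```python
# Toy comparison of Boltzmann brains vs. ordinary observers.
# Both exponents are illustrative placeholders, not values from the cited sources.
log10_cost_boltzmann_brain = 70    # "1 in 10^70": one brain fluctuating into existence
log10_cost_whole_universe = 120    # "1 in 10^120": an entire life-friendly universe

# Expected Boltzmann brains per ordinary observer scales like 10^(b - a).
log10_ratio = log10_cost_whole_universe - log10_cost_boltzmann_brain
print(f"Expected Boltzmann brains per ordinary observer: about 10^{log10_ratio}")

# A huge ratio means a randomly chosen observer is almost certainly a Boltzmann
# brain, which is the skeptical conclusion described in the passage above.
```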

The Multiverse Requires MORE Fine-Tuning

There’s one final defeater to the multiverse: the mechanisms and explanations invoked to explain the multiverse themselves require fine-tuning! That means that no matter how much you think the multiverse allows you to escape fine-tuning, it actually pushes the question back and requires MORE fine-tuning — exacerbating the problem it set out to solve. For details on this important point, see Stephen Meyer and Brian Miller, “Physicist Sabine Hossenfelder Challenges the Evidence for Cosmological Fine-Tuning,” or Bruce Gordon, “Balloons on a String: A Critique of Multiverse Cosmology,” The Nature of Nature, eds. Bruce Gordon and William Dembski (Wilmington, DE: ISI Books, 2011), pp. 558-585.

Cut Down by Ockham’s Razor

The November 2021 New Scientist article mentioned earlier claims that we should accept the multiverse because it’s the “simplest” explanation for the fine-tuning we observe:

Physicists have long speculated why our universe seems “just right” for life. The most complex answer might be the simplest — that every other universe also exists

Apparently under the Newspeak of materialism, proposing an infinite ensemble of universes that we can’t observe is the “simplest” explanation. Some authorities feel otherwise — noting that the multiverse is not a “simple” explanation at all. 

Ockham’s razor is a logical principle, often used by scientists, that holds that the simplest explanation tends to be the correct one. What is the simplest explanation: (1) that the fine-tuning of the universe is the result of a near-infinite number of unobservable universes spawned by an unknown mechanism of unexplained origin, or (2) that the special, life-friendly conditions of our cosmos are the result of intelligent design? Theoretical physicist Sabine Hossenfelder (no friend of intelligent design) explains that the multiverse is too complex an assumption to survive Ockham’s razor:

If Ockham could see what physicists are doing here, he’d pray for God to bring reason back to Earth. You should remove unnecessary assumptions, alright. But certainly you shouldn’t remove assumptions that you need to describe observations. If you do, you’ll just get a useless theory, equations from which you can’t calculate anything. 

These useless theories which lack assumptions necessary to describe observations are what we now call a multiverse. And they’re about as useful as Ockham’s prayers.

Since you cannot calculate anything in the multiverse, the assumptions which physicists removed must then be replaced with something else. That “something else” is a probability distribution on the multiverse, which tells you not what we do observe, but what we are likely to observe. But it is simpler to assume a constant than an infinite number of universes with a probability distribution over them. Therefore, Ockham’s razor should shave off the multiverse. It’s superfluous. Unfortunately, this argument carries little weight among many of today’s theoretical physicists who value the multiverse because it excuses boundless speculation. [Emphasis added.]

In the end, the fine-tuning of the laws of nature shows high levels of complex and specified information, thus providing an argument for design:

  • The laws of nature exhibit an incredibly unlikely degree of fine-tuning that is required to produce a life-friendly universe.
  • There is currently no physical explanation for this fine-tuning.
  • We can observe our universe, and no others.
  • This unlikely fine-tuning represents astronomically high levels of specified complexity embedded in the laws of nature.
  • Invoking a multiverse is not only untestable, but it would effectively increase the amount of specified complexity necessary to account for life, rather than explaining its origin. 
  • And what, in our uniform experience, is the only known cause of high levels of specified complexity? Intelligent design.