CHAPTER 18: Theories of Something

 

page 259 bottom

Initial or boundary conditions…must somehow be specified.

In effect, they are “random” with respect to the system, since by definition they are irreducible to anything simpler. A ‘random’ expression is either one for which no compressing algorithm can be found or one that has already been maximally compressed. Since there is no way to prove randomness, there is no way to prove that a theory is ultimate and final. Such a reduction would be possible only if physical reality happened to coincide with a specific deductive system. In that case, however, it would not be real in any physical sense, and would certainly not correspond to the real number continuum. Cf. Gregory Chaitin, “The Limits of Reason,” Scientific American 294, no. 3 (March 2006), pp. 74-81: “…it turns out that an infinite number of mathematical facts are irreducible, which means that no theory explains why they are true… The only way to ‘prove’ such facts is to assume them directly as new axioms, without using reasoning at all… Mathematics therefore has infinite complexity, whereas any individual theory of everything would have only finite complexity and could not capture all the richness of the full world of mathematical truth.” To the degree physical reality corresponds with mathematical truth, it could not be exhausted either.
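As a rough illustration of the compressibility criterion (my sketch, not Chaitin's; an off-the-shelf compressor stands in for "any compressing algorithm"), compression gives only a one-sided test: it can certify that a string is not random, but its failure to compress proves nothing.

```python
# Sketch: compression as a one-sided test for randomness. A string that
# compresses well is certainly not random; a string that zlib cannot compress
# might still be compressible by some other program we have not found, which
# mirrors the point that randomness itself cannot be proved.
import os
import zlib

patterned = b"abc" * 10_000      # generated by a very short rule
noise = os.urandom(30_000)       # drawn from the OS entropy source

for label, data in (("patterned", patterned), ("noise", noise)):
    ratio = len(zlib.compress(data, level=9)) / len(data)
    print(f"{label}: compressed to {ratio:.3f} of original size")

# Typically the patterned string shrinks to a small fraction of its length,
# while the "noise" stays at (or slightly above) its original size.
```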

 

page 260 bottom

Arbitrary assumptions must be made about such processes and their hypothetical venue.

The origin of life presupposes the broader matrix of a non-living universe; similarly, cosmic natural selection presupposes a multiverse, whose “fundamental parameters” must then be explained in turn, perhaps in an endless regression of universes within universes. Moreover, while there are known mechanisms to explain random genetic variations, there are no known mechanisms to explain why parameters common to all (possible) universes should vary “randomly” at all, much less why the variations should be “small” from one generation of universe to another. Smolin proposes that the variations selected would be minor departures from our universe, which is assumed to be close to optimal for production of black holes; this is on the analogy of genetic mutations, which are small random errors. In the case of biology, the source of mutation might be radiation, which comes from an environment external to that of the reproducing cell. What would it be in the case of “reproducing” universes? Genetic variations are minor because the organism is essentially homeostatic and conservative (with drastic changes culled out). Even if variations in the “parameters” of the standard model are supposed to result from quantum fluctuations, what would render or keep them “small” in a universe that does not yet have appreciable size? How to define a “baseline” for such fluctuations?

Estimates of the improbability of fine-tuning are arbitrary and wide-ranging, based on a variety of factors whose (im)probabilities are multiplied together. But such calculations assume that each factor is unrelated to the others; this would not be true in a universe that self-organizes and is synergistic in the way the living world is. The “fitness landscape” for life-bearing universes is not a real ecology, but at best a grand metaphysical construction, at worst a misleading fiction. There does not seem to be any interaction between possible universes, which do not define a “community” or a mutual environment in any sense. The multiverse is a collection of physically unrelated conceptual possibilities.

 

page 261 middle

…whose confidence…was inspired by his success with General Relativity.

Barrow contrasts this with Einstein’s earlier thought experiments. But even in Special Relativity there is a deductivist vein alongside the positivist one: the absolute speed of light is simply embraced as an axiom of his system.

In Ideas and Opinions [pp270-276] Einstein writes: “Nature is the realization of the simplest conceivable mathematical ideas. I am convinced that we can discover, by means of purely mathematical constructions, those concepts and those lawful connections between them which furnish the key to the understanding of natural phenomena. Experience may suggest the appropriate mathematical concepts, but they most certainly cannot be deduced from it. Experience remains, of course, the sole criterion of physical utility of a mathematical construction. But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed.” [quoted in Holton TO, p252] Note that Einstein says “the sole criterion of physical utility,” not the sole criterion of truth! In his Autobiographical Notes, he predicts that future theories will increasingly “distance themselves from what is directly observable,” [Holton TO, endnote 41, p272], a prediction that has to some extent come true.

 

page 262 bottom/263 top

It manifests…Kepler’s planetary orbits…Eddington’s numerology of basic constants, and Dirac’s large number hypothesis.

Kepler was obsessed by a model in which the orbits of the six known planets fit into spheres nested around the five Platonic solids; while the orbits did roughly fit this scheme, there were no further solids to account for Uranus and Neptune. Arthur Eddington was convinced that the basic constants were related to each other in some simple rational way—an idea that continues to be pursued in the modern guise of a grand unified theory. Dirac’s idea relates large dimensionless numbers to possible changes over time in the mass of the universe and the strength of gravity.

 

page 263 bottom

A first-order view of completeness must give way to a vision…

Smolin [Time Reborn, p155] suggests that the missing information that renders present quantum theory incomplete might not be found at a “deeper” level, as in hidden-variable theories, but outside the defined system, as an aspect of its inherent holism. However, even if information from the rest of the universe were included, the account would not be complete in the sense I propose. Smolin cautions that the price of his proposal would be giving up the relativity of simultaneity in favor of an absolute time. It would also mean further entrenchment in a perspective that excludes the epistemic observer, not to mention the theorist.

 

page 264 upper bottom

A conceptual space…in which…power laws can take different exponents.

For example, Weinberg Dreams of a Final Theory, p105: “In Newton’s theory there is nothing about the inverse square law that is particularly compelling… one could have replaced [it] with an inverse-cube law or an inverse 2.01-power law without the slightest change in the conceptual framework of the theory.” This strikes me as nonsense. The geometry of three dimensions (which Newton’s theory assumes) entails the inverse square law: since the surface area of a sphere increases with the square of its radius, any intensity distributed over that area falls off as the inverse square of distance. (Newton assumed the force of gravity to emanate in every direction from a source, like the spherical wave and unlike the line-of-sight connection imagined by Kepler.) If the exponent were different from 2, the geometry would be different, and hence the “conceptual framework.”
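The flux argument can be put in one line; the following is a minimal sketch of the standard dimensional version (my illustration, not Weinberg's or Newton's text):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
If a source emits a total flux $\Phi$ equally in all directions, then at
distance $r$ that flux is spread over a sphere of area $A(r)=4\pi r^{2}$, so
\[
  I(r) \;=\; \frac{\Phi}{4\pi r^{2}} \;\propto\; r^{-2}.
\]
More generally, in $d$ spatial dimensions the bounding sphere has area
proportional to $r^{\,d-1}$, giving $I(r)\propto r^{-(d-1)}$: an inverse
$2.01$-power law would thus answer to a geometry of $3.01$ spatial
dimensions, not to the same conceptual framework.
\end{document}
```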

 

page 264 bottom

“Fine-tuning” is an epithet that names a group of problems in cosmology…

Related to fine-tuning in general, the so-called flatness problem traditionally relies on a passive mechanistic model of dynamic evolution from an original state, like the effects of deterministic chaos. However, continuous or multiple causes—involving feedback, as in a self-regulating universe—would yield a different picture. The problem shifts if space is somehow self-reproducing or if the expansion rate is self-reinforcing. Moreover, the problem is not even well named, for any large enough universe would be approximately “flat” on a local scale, whether open or closed.

Cf. Wikipedia: flatness problem: “In the case of the flatness problem, the parameter which appears fine-tuned is the density of matter and energy in the universe. This value affects the curvature of space-time, with a very specific critical value being required for a flat universe. The current density of the universe is observed to be very close to this critical value. Since the total density departs rapidly from the critical value over cosmic time, the early universe must have had a density even closer to the critical density, departing from it by one part in 10^62 or less. This leads cosmologists to question how the initial density came to be so closely fine-tuned to this ‘special’ value.”

The flatness problem may be thought of in terms of escape velocity. A projectile fired from the ground can do one of three things: 1) it can fall back to earth; 2) it can escape the earth’s gravity because of its initial speed; 3) it could have an initial speed that exactly balanced the earth’s attraction, so that its speed away from the earth approached but never reached zero. The “critical density” of the present universe (measured through various indirect indices) is supposed, like this third possibility, to poise the universe between indefinite expansion and ultimate collapse. Since any deviation from this critical value would be amplified over time in that scenario, it is assumed that expansion of the early universe must have been extremely close to the critical value. It seems this picture must be modified, however, by the discovery of cosmic acceleration. The projectile analogy presumes a fixed initial velocity, whereas the cosmos seems more like a self-propelling spaceship that accelerates over time. It might start moving away from the earth well below escape velocity and finish (if ever) well above it. The question would then be: how likely are we to find ourselves at a time when the velocity is close to escape velocity?
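For reference, the textbook relations behind the analogy (standard formulas, not an argument specific to this book):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
A projectile launched radially from a mass $M$ at radius $r$ escapes when its
kinetic energy matches the gravitational binding energy:
\[
  \tfrac{1}{2}\,v_{\mathrm{esc}}^{2} = \frac{GM}{r}
  \quad\Longrightarrow\quad
  v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}}\,.
\]
The cosmological analogue balances the expansion rate $H$ against the mean
density $\rho$; the borderline (``flat'') case defines the critical density
\[
  \rho_{c} = \frac{3H^{2}}{8\pi G},
  \qquad
  \Omega \equiv \frac{\rho}{\rho_{c}} ,
\]
and in decelerating models any deviation of $\Omega$ from $1$ grows with
cosmic time, which is why the early universe must have had $\Omega$ extremely
close to $1$.
\end{document}
```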

The horizon problem “points out that different regions of the universe have not ‘contacted’ each other because of the great distances between them, but nevertheless they have the same temperature and other physical properties. This should not be possible, given that the transfer of information (or energy, heat, etc.) can occur, at most, at the speed of light.” [Wikipedia: horizon problem] Both the flatness and the horizon problems are presumed resolved by the device of cosmic inflation, which has “space itself” expand faster than the speed of light. But why would space itself have a definite expansion rate at all, unless it is “physical,” with the characteristics of a medium? And, in that case, why would it be exempt from the cosmic speed limit of c?

 

page 264 bottom/265 top

These might involve…dependence of the actual world on critical values…

Cf. Lee Smolin “Cosmology as a problem in critical phenomena” arXiv:gr-qc/9505022v1, May 1995, p29-30: “There are two kinds of such fine tuning problems. The first involve issues of hierarchies, in which parameters have improbably small values, such as in the case of the values of the proton or electron mass, in Planck units. The other class involves cases in which structures of a certain kind would not exist if a parameter were to take values different from its present ones, by less than an order of magnitude. Examples of this are the proton-neutron mass difference, the electron mass, or the strength of the fundamental electric charge: increases in any of these separately, by factors less than ten would result in a world with no nuclear bound states, and hence no nuclear or atomic physics.”

 

page 265 lower top

The challenge is to understand…why the theorist should have to tailor them…

Possible explanations are wide-ranging for the “fine-tuning” of the twenty-some free parameters of the Standard Model and certain cosmological parameters, such as expansion rate and the cosmological constant. For example, they might involve: a) pure chance (no explanation needed or possible); b) anthropic reasoning (the universe must be roughly as it is because we would not be here otherwise); c) logical necessity (the universe could not have been otherwise given some basic theory of everything); d) design (the universe was made by God or superior aliens); e) natural selection (our parameters were somehow favored over many generations of universes) [Rüdiger Vaas, “Is There a Darwinian Evolution of the Cosmos? Some Comments on Lee Smolin’s Theory of the Origin of Universes by Natural Selection”]. To these we might add: f) unrecognized processes of self-organization.

The fine-tuning problem bears comparison to the problem of irreversibility. We don’t expect the gas in a container to end up accidentally on one side of it, or in an otherwise condensed state, without the application of some force. Gravity supplies such a force. There are problems involved in extending the concept of entropy to the universe as a whole, when it was invented to deal with closed terrestrial systems. The concept warrants re-thinking in the specifically cosmological context.

 

page 265 bottom

It may well be that nature has more active ways of self-organizing.

Charles Peirce suggested early on a model of evolution based on “non-Bernoullian trials” (individual trials that are not independent). Cf. SEP on Charles Sanders Peirce, sec 5: “One possible path along which nature evolves and acquires its habits was explored by Peirce using statistical analysis in situations of experimental trials in which the probabilities of outcomes in later trials are not independent of actual outcomes in earlier trials, situations of so-called ‘non-Bernoullian trials.’ Peirce showed that, if we posit a certain primal habit in nature, viz. the tendency however slight to take on habits however tiny, then the result in the long run is often a high degree of regularity and great macroscopic exactness. For this reason, Peirce suggested that in the remote past nature was considerably more spontaneous than it has now become, and that in general and as a whole all the habits that nature has come to exhibit have evolved. Just as ideas, geological formations, and biological species have evolved, natural habit has evolved.”
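A toy simulation of the kind of process Peirce describes (my sketch, not his own analysis): a reinforcement scheme in which each outcome slightly raises its own future probability, so that trials are not independent and each run settles into a stable "habit".

```python
# Toy model of "non-Bernoullian trials": each outcome is reinforced, so later
# trials are not independent of earlier ones. Unlike independent coin flips,
# whose long-run frequency always approaches 0.5, each reinforced run settles
# on its own stable frequency, a "habit" acquired through its history.
import random

def habit_run(trials=100_000, reinforcement=1.0, seed=None):
    rng = random.Random(seed)
    counts = [1.0, 1.0]                      # no initial bias between outcomes 0 and 1
    for _ in range(trials):
        p_one = counts[1] / (counts[0] + counts[1])
        outcome = 1 if rng.random() < p_one else 0
        counts[outcome] += reinforcement     # this outcome becomes slightly more likely
    return counts[1] / (counts[0] + counts[1])

for run in range(5):
    print(f"run {run}: long-run frequency of outcome 1 = {habit_run(seed=run):.3f}")
```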

 

page 266 top

Lee Smolin’s black hole selection theory, for example, proposes…

Lee Smolin (The Life of the Cosmos, op cit) proposes a cosmic natural selection theory, with consequences he claims are testable. The theory he proposes is modeled on Darwinian natural selection and relies on an analog of random genetic mutation; new “universes” with slightly differing parameters are formed in black holes. Natural parameters are thereby selected between serial “bangs” within an overarching “time,” much like characteristics of organisms are selected over generations, leading to a preponderance of the type of universe (ours) that successfully “reproduces.” This rather parallels the assumption in biology that the only mechanism of biological evolution is natural selection among random mutations. (Smolin does not specify a mechanism for the “variations” from generation to generation of universe—only that the fluctuations involved would be “small.”) Perhaps, like life, the laws of nature may also reflect a more active participation of nature in shaping itself. See my paper, “An Argument for a Second-Order Cosmology” 2014, Academia.edu.

Smolin’s argument is intended to avoid resorting to raw chance alone to explain why the universe is as it is. He rejects the anthropic principle as an explanation, with its implied baggage of a multiverse (with, for example, 10^229 distinct universes), suggesting instead a selection principle operating in time rather than space. He rightly points out that there must be some alternative between the extremes of a set of values that follow strictly from theory and a set of values established purely by chance. That is, there must be something between randomness and logical necessity to account for the order in the world.

 

page 266 middle

…self-organization beyond the mechanism of natural selection over generations.

The idea that laws of nature may change is not new. Smolin adds to it further hypotheses: (1) laws change only at “bangs”; (2) time is not coextensive with this universe but is its transcendent context; (3) the (only) process involved in cosmic evolution is natural selection, acting (4) on small variations of parameters from universe to universe. A weak point of the argument is that its mechanism lacks a causal basis comparable to the mechanisms behind genetic mutation. It is simply postulated that variations of parameters must be “small,” with no theoretical reason why they should be any particular size or why they should occur at all.

 

page 266 lower middle

The estimations of improbability inspired by them may be spurious.

Like many others, Smolin sketches a calculation for the negligible probability that a “universe with randomly chosen values of the parameters of the standard model will have stars that live for billions of years” (and therefore be hospitable to life). He begins by estimating the probability that the proton mass could “randomly” be the required fraction (1/10^19) of the Planck mass. He reasons that this would mean one chance in 10^19. This involves a completely arbitrary assignment of 10^19 putative possibilities—perhaps on the analogy of one chance in six for the roll of a die (in this case, a die with 10^19 facets!) He then goes on to estimate specious probabilities for other factors, such as the ratio of proton and neutron masses and the cosmological constant, combining them finally to reach a grand improbability of 10^-229!
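The structure of such an estimate is just multiplication of separately assigned factors, as in the sketch below (apart from the 1/10^19 figure quoted above, the factor names and values are illustrative placeholders, not Smolin's actual table):

```python
# Sketch of the structure of such an estimate: improbabilities assigned to
# separate "fine-tuned" factors are simply multiplied, which presupposes that
# the factors are statistically independent. The factor names and values here
# are illustrative placeholders, not Smolin's actual figures.
from math import log10, prod

factors = {
    "proton mass / Planck mass": 1e-19,   # the figure cited in the text
    "hypothetical factor A":     1e-2,
    "hypothetical factor B":     1e-60,
    # ... further factors would multiply in, in the same way
}

joint = prod(factors.values())
print(f"combined 'improbability': about 10^{log10(joint):.0f}")

# If the factors are in fact correlated (e.g. set by a common self-organizing
# process), multiplying them in this way overstates the improbability.
```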

 

page 268 top

While the accidental suitability…evidence of divine special creation.

John Leslie [Universes, Routledge 1989] compares the improbability of life happening by chance to the analogy of surviving an artillery shell exploding in your trench. In both cases, he argues, we are entitled to surprise at miraculously being here. However, the surprise at surviving the shell is based on prior knowledge of artillery shells and their effects; it is circular reasoning to assume that the existence of life and conscious beings is a matter with precedent. The comparison would be more apt if we did not know what an artillery shell is and if, in fact, it were the first and only one in existence.

The incredible accuracy involved in fine-tuning vastly exceeds human capability. It is precisely the sort of thing that cannot result from design, if by that we have in mind a concept referring ultimately to human capabilities. Of course, it may be argued that God’s abilities are infinite; I would counter this by suggesting that any concept of design we can entertain implicitly means finite design, modeled on human capabilities and limited resources, since this is all we have to refer to without begging the question. No meaning can be assigned to processes requiring infinite resources. On the other hand, “large” numbers are relative to scale and not infinite; extreme accuracy could conceivably be achieved with commensurate resources. In that case, however, we should as likely suspect superior aliens to be the creative agent.

 

page 268 upper middle

Cosmologists are no more immune to just-so creation stories than theologians.

For example: “The physical universe is in a very remarkable state, poised exquisitely between tedious uniformity and directionless chaos. The richness of the world we observe has emerged spontaneously from the featureless primordial universe that followed the big bang.” [D. Abbott, P. C. W. Davies, and C. R. Shalizi, “Order from disorder: the role of noise in creative processes,” Fluctuation and Noise Letters, Vol. 2, No. 4, pp. C1-C12, 2002.] While that is poetic enough, it might be more appropriate to claim that cosmological models are poised exquisitely between the demands of formalism and fit to data! “Tedious uniformity” and “directionless chaos” are rather biased expressions. “Featureless primordial universe” has a mythical ring to it (only an utter abstraction could be completely featureless). And, for richness to arise “spontaneously” from a featureless void still smacks of creation ex nihilo—or magic.

 

page 268 bottom

The question of how many derivatives…without much practical significance…

Deviations from the fit of a particular algorithm might in turn be expressed in an algorithm, and so forth. This is like Ptolemy’s epicycles and epicycles on the epicycles, which had, in retrospect, only a false significance. So it might be with meta-laws. What was involved in the shift from Ptolemy’s to Newton’s descriptions was not merely a refinement, but a wholly different perspective. The shift in the present case might be to a perspective that includes self-organization rather than additional layers of mechanist analysis.

 

page 269 bottom

…patterns are deliberately selected that can be expressed by computable functions.

David Deutsch “It from Qubit” Sept 2002 online archive p13: “It is only through our knowledge of physics that we know of the distinction between computable and non-computable… or between simple and complex.” On the other hand, Max Tegmark offers the (I believe absurd) view that either there are no non-computable numbers or the very existence of random-looking physical constants implies the existence of parallel universes: “There are therefore only two possible origins for random-looking parameters…like 1/137.0360565 [the fine-structure constant]: either they are computable from a finite amount of information, or the mathematical structure corresponds to a multiverse where the parameter takes each real number in some finitely specifiable set in some parallel universe. The apparently infinite amount of information contained in the parameter then merely reflects there being an uncountable number of parallel universes, with all this information required to specify which one we are in.” [Tegmark, “The Mathematical Universe,” 2007, sec. IV C]

 

page 270 top

…when it was supposed that life… was virtually unthinkable “by accident.”

Ironically, Darwin argued that nature demonstrated only the relative adaptations required in order to give a selective advantage, not the perfect adaptations that would be expected of a providential design. Like his Enlightenment predecessors, he conceded to the religious sentiment of his culture by ascribing design to the laws through which evolution works, though not the details of its products. [Barbour p58]

 

page 271 top

Many scientists turn to so-called anthropic reasoning to explain away the appearance of fine-tuning.

Despite the name, anthropic reasoning is an example of a type of consideration that has nothing in principle to do with human observers: just as those creatures exist whose behavior permits their existence, so the world they live in must be such as to permit their existence. The anthropic principle is the idea that we may conclude certain things about how the universe is just by virtue of the fact that we are here to observe it. It is a kind of reasoning backwards to conditions that must be met for life (and therefore science) to happen at all. It is one way to disarm astonishment at being here, given that we could not be here to observe any universe that did not permit our existence. However, anthropic reasoning would technically require only one observer, whose existence would require merely an entropy fluctuation sufficient to produce a single observer, not an entire universe. [Sean M. Carroll and Jennifer Chen “Spontaneous Inflation and the Origin of the Arrow of Time” arXiv:hep-th/0410270v1 27 Oct 2004]