page 229 middle
To what extent is it possible to reverse-engineer nature?
For a survey of logical obstacles to reverse-engineering a system with a degree of complexity corresponding to an infinite sequence, see Bruce Edmonds “Understanding Observed Complex Systems—the hard complexity problem” (PhilPapers web archive). To summarize: If nature (or a given real system) amounts to an infinite sequence, it might not be exhaustively mapped by any theory. If it can be so mapped by some theory, then infinitely many other theories map it as well. There is no “general and effective” (i.e., computable) way to find a theory (i.e., a set of equations) that maps it. There is no terminating procedure to check whether a given theory does map it, nor any to check whether two theories make exactly the same predictions for such a system. And all this assumes perfect (“noiseless”) input data, which actual experiments can never supply.
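Edmonds’s second point—that a theory mapping given data is never unique—can be sketched concretely. The following toy example (my own illustration, not Edmonds’s) constructs two distinct “theories” that agree on every observed data point yet diverge everywhere else:

```python
# Underdetermination sketch: any finite data set is fitted exactly by
# infinitely many distinct "theories" (here, polynomials). The sample
# points and candidate laws are hypothetical, chosen for illustration.

samples = [0, 1, 2, 3]          # the "observed" inputs

def theory_a(x):
    return x ** 2               # one candidate law

def theory_b(x):
    # theory_a plus a term that vanishes at every observed point
    bump = 1
    for s in samples:
        bump *= (x - s)
    return x ** 2 + bump

# Both theories agree on all observations...
assert all(theory_a(x) == theory_b(x) for x in samples)

# ...yet diverge on unobserved inputs.
assert theory_a(4) != theory_b(4)
```

Multiplying the “bump” term by any coefficient yields yet another theory consistent with the same data, so the candidates are literally infinite in number.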
page 229 upper bottom
…wondering how a truly empirical… science could lend itself to purely formal treatment within deductive systems.
Echoing Aristotle, one could also wonder how the artificial contrivances of experiment can yield results that are true to nature. As Margaret Cavendish, an early feminist critic of the new science, observed: “…an artificial trial cannot be an infallible natural demonstration, the actions of Art, and the actions of Nature being for the most part very different, especially in productions and transmutations of natural things.” [Cavendish Philosophical Letters 255-6, quoted in Deborah Taylor Brazeley An Early Challenge to the Precepts and Practice of Modern Science: the fusion of fact, fiction, and feminism in the works of Margaret Cavendish, Duchess of Newcastle (1623-1673) Dissertation University of Calif. at San Diego, 1990, chapter 4; <http://www.she-philosopher.com/library.html>]
page 229 bottom
One thinker, who wears both hats, declares
See also Tegmark “The Mathematical Universe” 2007 sec VI B: “The physical laws that we have discovered provide great means of data compression, since they make it sufficient to store the initial data at some time together with the equations and an integration routine. As emphasized… the initial data might be extremely simple… It is therefore plausible that our universe could be simulated by quite a short computer program.” On the contrary, potentially infinite data are required to specify real initial conditions, as opposed to the artificially precise input to a simulation, which can be defined however one chooses.
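The point about initial conditions can be illustrated with a standard chaotic map (my choice of example; the logistic map does not figure in Tegmark’s argument). A simulated initial value is exactly as precise as we define it to be, whereas a real initial condition would require unlimited precision; truncating a stand-in for the “real” value quickly destroys predictive agreement:

```python
# Sketch: in a chaotic system, finite-precision initial data soon fail
# to predict the trajectory. The starting value and precision are
# arbitrary illustrative choices.

def trajectory(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), recording each state."""
    path = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        path.append(x)
    return path

exact = trajectory(0.123456789012345, 100)      # the "true" state
rounded = trajectory(round(0.123456789012345, 6), 100)  # measured to 6 digits

# The tiny truncation error is amplified until prediction fails outright.
divergence = max(abs(a - b) for a, b in zip(exact, rounded))
assert divergence > 0.1
```

A simulation can of course stipulate its initial value exactly; the problem arises only when the input is supposed to describe a real system.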
page 230 lower middle
…such a complete specification is possible only in a deductive system, not in nature.
A famous example of the requirement of “completeness” is Einstein’s insistence, in the EPR paper, that “every element of the physical reality must have a counterpart in the physical theory.” A. Einstein, B. Podolsky, and N. Rosen “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” Physical Review Vol 47 May 1935 [italics theirs]. In fairness, as a recent author informs us: “Einstein did not write the EPR paper, did not like the argument it contained, and proposed, instead, his own rather different argument for the incompleteness of quantum mechanics… What is happening here is that the Einstein who early intimated the deep role that entanglement would play in quantum mechanics and found by 1927 that he could not get rid of the entanglement by a hidden variables interpretation has now come to see in entanglement (still not known by that name) the chief point of difference between quantum mechanics and field theories like general relativity as alternative frameworks for a future fundamental physics and the chief reason for preferring the latter to the former.” [Don Howard “Revisiting the Einstein-Bohr Dialogue” 2009, p23] In any case, Einstein was known for his ostensibly realist stance in regard to the quantum theory, of which he remained critical throughout his life. EPR’s pivotal attack on the completeness of the existing quantum theory stated a nominally empiricist credo: “The elements of the physical reality cannot be determined by a priori philosophical considerations, but must be found by an appeal to results of experiments and measurements.” This was mainly rhetoric, however, in the context of a “thought experiment.” Though it can reveal the presence of unsuspected factors not yet accounted for in theory, even actual experiment cannot exhaustively reveal nature’s ingredients, and depends in any case on theory to define what those factors are. 
Einstein’s “realism” should also be viewed in the larger context of the friendly ongoing debate with Niels Bohr and the “Copenhagen interpretation” of quantum theory, which was in many ways inspired by a general anti-realist spirit of the age.
EPR goes on to clarify how to determine what is or is not an “element of physical reality.” The criterion hinges on the ability of a theory to predict the measurable variables with total certainty: “If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.” [italics theirs] The authors present this as a sufficient though not a necessary condition, without committing themselves to a comprehensive definition of reality. I propose on the contrary that this requirement of certainty contraindicates physical reality. Total certainty characterizes only deductive systems, not natural ones. Real systems, even in the macroscopic realm, always demonstrate a probability less than one hundred percent, or unity.
It is interesting to speculate that Einstein’s campaign against the “incompleteness” of the quantum theory of the time was inspired in part by his long friendship with Kurt Gödel. The two first met in 1933, two years after the publication of Gödel’s famous incompleteness theorems. (The EPR paper was published in March, 1935.) With very different personalities, their bond lay in their common devotion to “realism” (as each conceived it) in an era when that went against the grain. Both held that reality—either physical or mathematical—is ultimately rational and objective. [Cf. Rebecca Goldstein Incompleteness: the proof and paradox of Kurt Gödel Atlas Books (Norton) 2005] But, in both cases, this belief turns out to be a brand of idealism. For Einstein, the objective existence of physical reality implied that its parameters could ultimately be known by rational mind; any theory that can’t exhaustively represent them is incomplete. For Gödel, mathematical truth had an objective Platonic existence, independent of mathematicians, accessible because it was the embodiment of reason. Ironically, his own theorem says that not all of mathematical truth can ever be encompassed by any deductive system. It is unclear what conclusion Einstein might have drawn from this for physical truth. It seems he stopped short of concluding that, in a parallel way, physical reality must transcend any given theory.
page 230 bottom
…because it cannot be exhaustively formalized in any theory or computation.
This precludes, for example, replicating a human brain or designing a truly autonomous artificial intelligence from the top down. In contrast to models of it, the brain is not deterministic because it is not a product of definition; it is not a computer program written by some external agent. Nothing in theory prevents an artificial intelligence from organizing itself, however, in much the way that brains apparently do. The point is not that brains, being flesh, constitute a different type of entity than computers, but that formal analysis of the structure of the brain, as of any other natural reality, is limited in theory as well as practice. Consciousness may indeed be a simulation “running” in the brain; if so, our concept of the brain is part of that simulation. And the human brain may be a very good Universal Computer that can simulate other simulations; but this does not mean that it is itself a simulation, that it can exhaustively simulate the reality of nature, nor that it can be simulated as a part of nature.
page 230 bottom
…but this would only limit its power to represent nature.
Such representation is the doubly “classical” ideal—of Newtonian physics and of ancient Greek deductionism—in which the formalist movement of the late 19th century was grounded. One must bear in mind that this is an historical, not a logical, development. While the Greeks used geometric diagrams and proofs, modern formalist rigor was defined in algebraic terms.
In the golden days of the quantum theory, Schroedinger, for example, recognized that “reality resists imitation through a model.” [E. Schroedinger, “The Present Situation in Quantum Mechanics”, originally published 1935, quoted in Wick, p176] This recognition motivated a more operational approach, in which it was no longer claimed that theory represents reality at all, but simply that it is useful for predicting the outcomes of (runs of) experiments. [Richard Healey “How Quantum Theory Helps us Explain” PhilSci archive, 2011] David Deutsch argues differently: “In this way, quantum networks could simulate arbitrary physical systems not merely in the bottom-line sense of being able to reproduce the same output (observable behavior), but again, in the strong sense of mimicking the physical events, locally and in arbitrary detail, that bring the outcome about.” [Deutsch “It from Bit” 2002, p10, online archive] However, by “arbitrary physical system” he means a quantum field [p9], which is a theoretical construct. It simply begs the question to assume from the outset that the quantum field corresponds perfectly to physical reality; the quantum field can be described “in arbitrary detail” because it is an artifact!
In effect, the scientist may construct a conceptual device whose output matches that of the target system, considered as a black box. The device (the model) is not a literal, only a functional, representation of what the box contains, which cannot be opened to verify its content. (Even ordinary cognition represents the world in the way that language does: symbolically.) The brain cannot open the black box of the world outside the skull, nor can the scientist open the black box of nature. Operationalism at least acknowledges that there is a purpose (prediction) behind representation, which is never simply a matter of copying reality. While a successful (deterministic) theory would then be one that predicts every observed behavior of the system, there is no guarantee that the theory will not fail to predict some behavior in the future, indicating that it was not, after all, “complete.”
page 232 top
…“indeterminacy” is not a property inhering in nature, but a state of knowledge.
Nature seems to be fundamentally random at the quantum level; yet randomness is a matter of mathematical definition, not of decidable physical fact. The notion of intrinsic randomness, indeterminacy, or a-causality is thus either a reassuring metaphysical belief or nothing more than a name for the failure to perceive a pattern. To hold that nature is indeterministic (at the quantum scale, for example) is a subtle psychological and linguistic ploy to trump an unknown by declaring what it is.
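That “randomness” can name a failure to perceive a pattern is illustrated by any pseudorandom generator: its output passes crude statistical tests of randomness while being wholly determined by a short rule. The linear congruential generator below is a standard textbook device, chosen here purely for illustration:

```python
# A deterministic rule whose output "looks" random. Constants are a
# common linear congruential choice; the uniformity check is a
# deliberately crude sketch, not a serious statistical test.

def lcg(seed, n, a=1664525, c=1013904223, m=2 ** 32):
    """Return n pseudorandom values in [0, 1) from a fixed rule."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)
    return out

values = lcg(seed=42, n=10000)

# The sequence passes a crude uniformity check...
mean = sum(values) / len(values)
assert abs(mean - 0.5) < 0.05

# ...yet is perfectly reproducible from its finite definition.
assert lcg(seed=42, n=10000) == values
```

An observer ignorant of the seed and rule would have no practical way to distinguish this output from “genuine” randomness.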
page 232 lower top
…disregards indefinitely many other assertions, following specific purposes.
Cf. James McAllister “What do patterns in empirical data tell us about the structure of the world?” Synthese July 2009. I would not go so far as McAllister, to say that “empirical data sets exhibit all possible patterns,” nor that all patterns are equal in significance. While a pattern must be extracted via some criterion or method applied by an agent, it should also reflect something in the real world. That we may impose structure in diverse ways does not imply that the world has no inherent structure, much less that it has infinitely many superimposed structures. It has a definite structure for us, as it must for any cognizing agent; yet even that apparent structure might be inexhaustible and incomplete.
Roberto Unger [The Singular Universe and the Reality of Time Cambridge UP, 2015] raises the point that the structure and properties we observe characterize the present “cooled-down” epoch of cosmic history, and that a prior epoch may have been characterized by more potential and less actual structure. On the side of the observer’s input to how the world is cognized, one could say that our current views on properties and structure (and, hence, what we choose to observe) reflect the particular values and goals of the modern human era.
page 232 middle
However…more to the point, it defines an artifact.
This applies to structure, parts, and kinds, as well as properties. (The living organism, at least, defines its own parts and its own kind, independently of how we define them.) It is one thing to assert that there must exist a real basis for natural kinds, such as chemical elements. It is quite another to assert that we exhaustively know this basis, and that natural kinds correspond exactly to our definitions. There is more to water, for example, than a collection of identical H2O molecules. Obviously, distinctions between chemical substances reflect real structural differences, which depend ultimately upon discreteness at the quantum level. However, even this discreteness may prove relative to our ways of looking. The discreteness of waves, for example, emerges in the context of a continuum. It should not be confused with the absolute discreteness of products of definition, such as the integers.
page 232 bottom
…in principle, there is a way to tell the difference between reality and simulation…
Sartre identified a key difference between the real and the virtual (in his terms, between sensory and eidetic images). [J.P. Sartre The Psychology of Imagination Citadel 1965] That difference is detail. Limited information is encoded in an imagined or remembered scene compared to a real one. Only in a cursory way does a mental image resemble the reality of which it is an image. Unlike a photograph, it cannot be searched for more information than what is already stored in the brain. The image is but an iconic representation of one’s limited existing knowledge of the thing it pictures. New data can only come from further exposure to the real thing itself, or to a real image of it that may contain relatively unlimited detail.
A mental image is a mind’s representation to itself of its propositional knowledge. It may appear to be fleshed out with “qualities” in the way that the world appears to be. It may seem, therefore, to be as informationally rich as the world of which it seems a copy. But this is an illusion. Limited information is encoded in any mental image, or indeed in any other artifact, which is always reducible to a finite system. This is equally true of a dream, a virtual reality, or any other mental construct.
page 233 top
Reality, in contrast, is ambiguous but dense.
This difference might form the basis for a sort of Turing Test of phenomenal experience: to determine whether it is virtual or real. Instead of posing verbal questions (as in the Turing Test), scientists could probe simulated environments in the ways they probe the real environment. This could take the form, say, of experiments performed within the alleged simulation. The results could be analyzed to assess whether the apparent environment is real or simulated. If answers to such queries could not be distinguished from the sort of results obtained by probing physical reality, then on this extension of Turing’s principle the alleged simulation should be considered real.
The significance of detail has been used to good effect in films such as The Matrix and Inception. In the former, a glitch in the simulation produces a tell-tale repetition; in the latter, a real object, such as a spinning top, is used as a talisman to affirm the difference between dream and waking state.
page 233 middle
While models may be isomorphic…this says nothing of their relationship to reality.
The “law of requisite variety” is one way to account for the fact that the model can never be adequate, since—except in the case of artifacts—it cannot be guaranteed to be as complex as the reality it models. Also, cf. Edward N. Zalta “Reflections on Mathematics” in V.F. Hendricks and H. Leitgeb (eds.), Philosophy of Mathematics: 5 Questions, Automatic Press/VIP, 2007, pp. 313–328: “The language of science should not be interpreted in terms of… set-theoretic models unless it is explicitly intended to be language about models of the natural world instead of directly about the world itself…”
Just as a map facilitates literal navigation of space, so scientific models facilitate navigation of reality in a broader sense. But a model does not represent some aspect of the world by virtue of being similar; moreover, it is not the model which represents but the scientist. [Ronald N. Giere “How Models Are Used to Represent Reality” Philosophy of Science, 71 (December 2004) p. 747]
Cf. also Francis Heylighen “Cybernetics and Second-Order Cybernetics”, p3, in: R.A. Meyers (ed.), Encyclopedia of Physical Science & Technology (3rd ed.), (Academic Press, New York, 2001): “[Cyberneticists] began with the recognition that all our knowledge of systems is mediated by our simplified representations—or models—of them, which necessarily ignore those aspects of the system which are irrelevant to the purposes for which the model is constructed. Thus the properties of the systems themselves must be distinguished from those of their models, which depend on us as their creators. An engineer working with a mechanical system, on the other hand, almost always knows its internal structure and behavior to a high degree of accuracy, and therefore tends to de-emphasize the system/model distinction, acting as if the model is the system. Moreover, such an engineer, scientist, or ‘first-order’ cyberneticist, will study a system as if it were a passive, objectively given ‘thing’, that can be freely observed, manipulated, and taken apart. A second-order cyberneticist working with an organism or social system, on the other hand, recognizes that system as an agent in its own right, interacting with another agent, the observer.”
Giere makes the point that “data” are already abstracted from the world for specific purposes: “Fully specified representational models are tested by comparison with models of data, not directly with data, which are part of the world. So it is a model-model comparison, not a model-world comparison. The move from data to models of data requires models of experiments… [which] rules out characterizing the desired relationship between representational models and the world as isomorphism…” [Ronald N Giere “An Agent-Based Conception of Models and Scientific Representation”, p4-7]
page 233 bottom
There is only the trivial guarantee that an algorithm corresponds to the pattern it mathematically generates.
Perhaps the discovery of the Mandelbrot set and similar fractals has renewed enthusiasm for the promise of capturing the essential complexity of nature in simple algorithms. (The fractal is a mathematical object made accessible by computers. The concept was generalized to include the ‘multifractal’—with non-constant ‘fractal dimension’—in order to model yet deeper complexity.) Yet, the sophisticated patterns generated by these formulae are nonetheless products of definition. Their detail is not truly random, but prescribed by algorithm; it is made, not found. This does not prevent such simulations from being highly useful and entertaining, as in computer-generated imagery. The caveat, however, is that simulations may be valid enough for entertainment purposes yet not for theoretical purposes.
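The point that the fractal’s seemingly inexhaustible detail is prescribed rather than found can be made concrete: the standard membership test for the Mandelbrot set is only a few lines long (grid resolution and iteration cap below are arbitrary choices of this sketch, and the finite cap makes the test an approximation):

```python
# Minimal Mandelbrot membership test: every pixel of the famous image
# follows from this short rule. Iteration cap and grid are arbitrary.

def in_mandelbrot(c, max_iter=100):
    """Approximate test: does z -> z*z + c stay bounded from z = 0?"""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:        # escaped: c is certainly outside the set
            return False
    return True                  # presumed inside (up to max_iter)

# Sample a small grid over the region [-2, 0.5] x [-1.25, 1.25]:
# the set's intricate boundary emerges entirely from the definition above.
inside = sum(in_mandelbrot(complex(x / 40, y / 40))
             for x in range(-80, 21) for y in range(-50, 51))
```

Every bit of the resulting pattern’s “detail” is derivable, in principle, from the one-line recurrence; nothing analogous is guaranteed for a natural coastline or cloud.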
page 234 upper bottom
Similarly, one may falsely believe the essence of a natural object or process to be captured in a program or blueprint for its artificial reproduction.
Cf. Penrose, for example [Roger Penrose The Emperor’s New Mind: concerning computers, minds, and the laws of physics Penguin 1989, p25]: “According to quantum mechanics… any two electrons must necessarily be identical…If the entire material content of a person were to be exchanged with corresponding particles in the bricks of his house then in a strong sense, nothing would have happened whatsoever. What distinguishes the person from his house is the pattern of how his constituents are arranged, not the individuality of the constituents themselves.” However, we are never in a position to know definitively either the pattern or the constituents. We presently believe that electrons are indistinguishable and interchangeable; yet two atoms of a given element may differ in ways that give them different properties. This is because a structure has been discovered within what was once thought indivisible and integral. We can never be finally certain of natural structure.
page 235 top
…real differences between the flight of the bird and that of the aircraft.
The distinction between artifact and natural entity is often glossed over, as in the following: “Like a toy boat, a scientific model is a scaled-down version of a physical system, missing some parts of the original. Deciding what parts should be left out requires judgment and skill.” [Alan Lightman and Roberta Brawer Origins: the lives and worlds of modern cosmologists Harvard UP 1990, p3] The analogy is misleading because here one artifact is implicitly compared to another: the toy is an artifact modeling another artifact (the “physical system”). A model can be a scaled-down version of an artifact (such as a boat), but not of a natural entity or phenomenon. Regardless of what one “decides,” there will always be an indefinite number of possible considerations left out.
page 235 lower top
Simulation is not duplication, except where artifacts are concerned.
Duplication means reproducing all physical aspects of a system. Simulation means producing a dynamically evolving representation of some aspect of a system. [Gualtiero Piccinini, private communication, Dec 20 2011]. Intuitively, it might seem possible to duplicate a natural thing or phenomenon without passing through the bottleneck of formalizing or without exhaustively understanding it. Prof. Piccinini feels we can be confident that something is duplicated if it’s made of the same ingredients put together the same way, and reproduces all physical aspects of the original system, even if we don’t know the formalism. Yet, how do we know that it is made of the same ingredients, or put together in the same way, or that it reproduces all the physical aspects? The conclusion begs the premise. Even if it were possible to bypass the formalism, how would one then verify the copy? Successful duplication of a computer file can be verified, because the file is already a perfectly defined artifact that can be checked for one-to-one correspondence with the copy (which is the whole advantage of digitization). But this is not the case for natural entities.
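The contrast with digital artifacts can be made concrete: because a file is a perfectly defined sequence of bits, a copy can be checked for exact one-to-one correspondence, for instance by comparing cryptographic hashes. The byte strings below are hypothetical placeholders:

```python
# Sketch of the note's contrast: a digital file is a fully defined
# artifact, so duplication is verifiable bit for bit; no analogous
# check exists for a natural entity.

import hashlib

def digest(data: bytes) -> str:
    """Fingerprint a byte sequence with SHA-256."""
    return hashlib.sha256(data).hexdigest()

original = b"a perfectly defined sequence of bits"
copy = bytes(original)            # a faithful duplicate
corrupted = original[:-1] + b"?"  # one byte altered

assert digest(copy) == digest(original)       # duplication verified
assert digest(corrupted) != digest(original)  # any defect is detectable
```

The verification succeeds only because the artifact’s complete specification is available by definition, which is precisely what the note denies of natural entities.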
page 236 upper bottom
Critics of contemporary genetic science see in it a new form of the old mechanism.
Richard Lewontin, for example, asserts: “If we had the complete DNA sequence of an organism and unlimited computational power, we could not compute the organism, because the organism does not compute itself from its genes… There exists, and has existed for a long time, a large body of evidence that demonstrates that the ontogeny of an organism is the consequence of a unique interaction between the genes it carries, the temporal sequence of external environments through which it passes during its life, and random events of molecular interactions within individual cells. It is these interactions that must be incorporated into any proper account of how an organism is formed.” [The Triple Helix: gene, organism, and environment, Harvard 2000, pp. 17–18]
page 236 bottom/237 top
It denies the role of the environment…junk DNA…that happen to be extraneous…
While the function of “junk” DNA remains largely unknown, this does not mean it has no function. More than 98% of the human genome does not encode protein sequences. Nevertheless, this material includes “pseudo-genes” that can serve to create mutant alleles, material for evolution. [en.wikipedia: junk DNA]