CHAPTER 9: The Ideal of Perfect Knowledge

 

page 130 lower middle

For instance, the uncertainty principle is often held to represent a qualitatively different kind of limit to knowledge…

There are as many interpretations of Heisenberg’s “uncertainty” relations as of quantum theory as a whole. On one interpretation, the uncertainty is psychological or epistemological—a limit on knowledge, or at any rate, usable knowledge. On another, properties of micro systems are themselves “indeterminate” (an ontological version). On a third, uncertainty is a statistical scatter of measured values, reflecting real fluctuations, which might be either truly random or else deterministically chaotic (a “classical” version). A fourth interpretation has uncertainty introduced by the measuring apparatus or the act of measurement. See David Wick, The Infamous Boundary: Seven Decades of Heresy in Quantum Physics, Copernicus Press (Springer-Verlag), 1995, pp. 153–56: “The main function of Heisenberg’s laws, like Bohr’s complementarity interpretation, is psychological: it lets one stop worrying that the real picture has been missed.”
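For concreteness, the relation under discussion is usually written in the standard textbook form

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\]

where \(\Delta x\) and \(\Delta p\) are the statistical spreads (standard deviations) of position and momentum. The formula itself is neutral among the four readings above; the dispute concerns whether those spreads measure limits on knowledge, indeterminacy in the properties themselves, scatter across an ensemble of measurements, or disturbance introduced by the apparatus.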

 

page 130 bottom

…these beings are hypothetical observers with extraordinary cognitive powers.

While Descartes’ “evil genius” was supposed to actively interfere in one’s perceptual processes, his “homunculus” has a parallel in Laplace’s omniscient intelligence. Both of the latter are passive witnesses, separate from the reality they observe and supposedly without effect upon it. One observes a private theater of perception; the other is a fly on the wall for some physical system, even the entire cosmos. Both concepts imply an infinite regress. The homunculus is supposed to explain perception, but simply stands in for the very process to be explained. (There must then be a witness within that witness, and so forth.) Laplace’s intelligence is supposed to account for every occurrence, future or past, yet this must include the very prediction being made! One has nested observers within observers, the other nested accounts within accounts. Both are based on the dualist fallacy of mind as distinct from the system in which it is embedded. Laplace’s deterministic universe is the Cartesian theater turned inside out! And just as there cannot be pure sense data isolated from cognitive processing, strictly speaking there cannot be knowledge of isolated systems, for knowledge involves interaction with the system.

 

page 132 middle

strictly mechanical terms, bypassing “intelligence” per se.

An idealized spring-loaded trap door between two compartments of gas, for example, can be imagined to admit molecules selectively in one direction only. It was then argued that thermal fluctuations would make the door leak, defeating its functioning as a one-way valve operating at no cost in energy or entropy.
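A back-of-the-envelope version of that argument (a sketch in the spirit of Smoluchowski’s trapdoor analysis, not a quotation from any particular source): for single molecular impacts to swing the door open, its restoring spring energy \(\varepsilon\) can be no larger than the typical thermal energy \(k_B T\); but the probability that the door is flung open by its own thermal agitation goes roughly as

\[
P_{\text{open}} \;\sim\; e^{-\varepsilon / k_B T},
\]

which is of order one precisely when \(\varepsilon \lesssim k_B T\). A door sensitive enough to be moved by single molecules therefore flaps open at random about as often as a molecule arrives to push through it, and no net sorting of the gas is obtained for free.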

 

page 132 middle

…comes from erasing rather than measuring or storing information.

Cf. John D. Norton, “Eaters of the Lotus: Landauer’s Principle and the Return of Maxwell’s Demon,” preprint draft, April 2004, p. 5: “Thus the burden of an exorcism of Maxwell’s demon is to show that there is a hidden increase in thermodynamic entropy associated with the operation of the demon that will protect the second law… The present orthodoxy is that Landauer’s principle successfully locates this hidden increase in the process of memory erasure… This form of Landauer’s principle entails that entropy of erasure is thermodynamic entropy.” That is, by confusing thermodynamic entropy and information entropy (probability), Landauer’s principle mistakenly ignores the thermodynamically irreversible act of “removing the partition.” This act characterizes the ambiguous nature of the demon, who in one circumstance represents physical (reversible) processes and in another non-physical (and irreversible) mental acts.
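For reference, Landauer’s principle in its usual quantitative form (the form Norton is criticizing) says that erasing one bit of information must dissipate at least

\[
\Delta E \;\ge\; k_B T \ln 2
\]

of heat into the environment, equivalently an entropy increase of at least \(k_B \ln 2\) per bit erased. The contested question is whether this information-theoretic bookkeeping really identifies a compensating increase in thermodynamic entropy.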

 

page 132 bottom

I call this general situation the “problem of cognitive domains.”

A domain is generally the set of elements upon which some operation is to be performed, such as a mathematical function or mapping. A cognitive domain is some defined level of information processing—a data set which, after having been operated upon, may serve as the input to some other level of further operations. The problem of cognitive domains is the dilemma of circularity that arises when the domain that is the output of a process is recycled as its input. This occurs, for example, when the physical world that appears in conscious experience is presupposed in order to explain the construction of this very appearance in consciousness. The output of mental processing is recycled as its input. This is a general epistemic dilemma that applies not only to the mental processing involved in perception, but to every form of cognition, including science.
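Put in the mapping vocabulary just invoked (a schematic sketch, not the author’s notation), perception is modeled as a function from a domain of worldly inputs \(W\) to a range of appearances \(P\):

\[
f : W \longrightarrow P, \qquad \text{but since } W \text{ is specified only through } P, \quad f : P \longrightarrow P .
\]

Nothing forbids a mapping whose input domain coincides with its output; the trouble is explanatory rather than mathematical, since what was to be derived is already presupposed among the inputs.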

 

page 133 middle

…the meaning of time…where no time-keeping processes could exist.

The most accurate modern clocks involve vibrating electrons rather than rotating planets, but according to the standard cosmological model there was an epoch before electrons were bound to atoms, in which no such cyclical processes could have existed. The time scale imposed (as in “the first three minutes”) literally extrapolates present-era timekeeping backward into an era before its underlying processes existed, in a sense recycling the present as the past.
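Some rough figures sharpen the point (standard numbers, supplied here for illustration rather than taken from the text): the SI second is defined by the hyperfine transition of cesium-133,

\[
\nu_{\mathrm{Cs}} = 9\,192\,631\,770~\mathrm{Hz},
\]

a property of an electron bound in an atom; yet on the standard cosmological model stable atoms only formed at recombination, roughly 380,000 years after the Big Bang, while “the first three minutes” refers to an era long before that. The clock-defining process is thus projected onto an epoch in which its physical substrate did not yet exist.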

 

page 134 upper middle

How do liquidity and solidity… arise from the behavior of molecules?

The problem of how macro properties arise from micro properties is a different problem from the emergence of qualia (percepts in consciousness), since no category boundary is crossed. Physical properties at any level remain within the domain of third-person description. However, one must be circumspect about what is meant by “property” and how this is distinct from “quality.” The liquidity of water, for example, is both a physical property and a percept.

 

page 135 upper middle

…asymmetry of size or energy between ordinary objects…and the quanta of light through which they are observed.

The “state” of a system, as an ongoing reality in the interval between observations, is a macroscopic concept, derived from observation made with light, which does not appreciably affect measurements on large bodies. It cannot be presumed to apply meaningfully in the micro domain, where a photon of light may radically affect the state of an atom. Quantum experiments use macroscopic apparatus; measurements necessarily involve amplification to that scale, which entails uncertainty. So, on the one hand, we cannot know the system’s state precisely; yet, on the other, we are conditioned by ordinary experience to expect to be able to. We also derive from ordinary experience the metaphors in terms of which to interpret measurements (e.g. particle or wave behavior). The dependence of quantum properties on the specifics of measurement is nothing more or less than the physics parallel of the “Equation of Experience”: cognition necessarily involves both subject and object conjointly. This was the essence of Bohr’s insight.
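An order-of-magnitude comparison illustrates the asymmetry (round numbers chosen for illustration, not drawn from the text). A photon of visible light carries momentum

\[
p_{\gamma} \;=\; \frac{h}{\lambda} \;\approx\; \frac{6.6\times10^{-34}~\mathrm{J\,s}}{5\times10^{-7}~\mathrm{m}} \;\approx\; 1.3\times10^{-27}~\mathrm{kg\,m/s}.
\]

For a one-kilogram body moving at one meter per second this is negligible to about one part in \(10^{27}\); but a hydrogen atom has a mass of only about \(1.7\times10^{-27}\) kg, so the recoil from absorbing that single photon changes its velocity by nearly a meter per second, and a single absorbed photon of suitable frequency changes the atom’s internal state entirely.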

 

page 136 top

It expresses the species’ blind faith in its cognitive adequacy.

When cognition is adequate to survival, there is little need to question it or draw attention to it. One might compare this naïve awareness of the world (as opposed to awareness of the modeling process behind that experience) to the normal experience of seeing the world right-side up when in fact the image on the retina is inverted. In both cases, the brain processing involved transparently presents the experience that works for us.

 

page 136 middle

…more accurate…and therefore more adequate for human survival.

Cf. John D. Barrow, Theories of Everything: The Quest for Ultimate Explanation, Fawcett/Ballantine, 1991, p. 246: “Our brains are the outcome of some evolutionary history that has no preordained goal. But the most probable outcome of this history will be a mental apparatus for gathering, representing, and using information about the world in order to predict its future course, a representation that becomes more and more accurate in its reflection of the true underlying reality.” On the other hand, one is reminded of Nassim Nicholas Taleb’s cautionary tale about the secure feeling of the well-fed turkey, based on its experience prior to Thanksgiving. [cf. Taleb, The Black Swan] Cognitive success at navigating a putative external environment is ultimately measured by evolutionary success. Because one has no “direct” access to the external world, one cannot assert that the model is adequate because it reflects reality; at best, one can say that adequate cognitive modeling does not lead to extinction, and that this is what it means to “reflect reality.” In our case, failure can only be known too late!

 

page 136 bottom

…ascribe this to some extraneous perturbing cause rather than fault the theory.

There are many instances when that sort of speculation has paid off—as in the discovery of Neptune. The existing theory (Newton’s Law of Gravitation) could not entirely account for the motion of Uranus unless a new perturbing body was assumed. On that assumption, it was possible to calculate the orbit of the unseen planet and to predict where to look for it. Interestingly, an alternative theory had been that Newton’s law does not follow an exact inverse-square relationship at “large” distances; that type of explanation is more akin to Einstein’s later revision of Newton’s law. Yet the same kind of theorizing, in the case of Mercury’s shifting perihelion, would have been a distraction from the fundamental rethinking represented by General Relativity.
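For scale (standard figures, supplied for illustration rather than drawn from the text): planetary perturbations account for most of Mercury’s observed perihelion precession, but an unexplained excess of about 43 arcseconds per century remained. General Relativity predicts an advance per orbit of

\[
\Delta\varphi \;=\; \frac{6\pi G M_{\odot}}{c^{2}\, a\,(1-e^{2})},
\]

where \(a\) and \(e\) are the semi-major axis and eccentricity of the orbit; for Mercury this works out to just that missing ~43″ per century, with no perturbing body and no ad hoc change of exponent required.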