Consistency, Measurement and Decoherence


Image: Quantum entanglement, courtesy of Science Advances, Moreau et al., 12 Jul 2019.

The purpose of this article is to argue that the measurement problem in quantum mechanics is a pseudo-problem: decoherence-based physics and standard quantum mechanics did the heavy lifting of providing an answer a long time ago, and all that is really missing from the account of Heisenberg, Bohr, Pauli, and others is a proper understanding of the classical limit.

Decoherent Histories, the interpretation of quantum mechanics developed by Robert Griffiths, Roland Omnès, and James Hartle & Murray Gell-Mann, offers a consistent and more general account of standard quantum mechanics. Decoherent Histories is the "Copenhagen done right" interpretation of quantum theory: it allows us to define a quasi-classical realm and describe the entire history of a setup and measurement without violating unitarity.

Stated as clearly as possible, the measurement problem is that the following three postulates are mutually inconsistent:

1. the wave function gives a complete description of a system

2. the wave function always evolves according to a linear equation (the Schrödinger equation)

3. measurements of the system always have a definite outcome

Proof of their inconsistency is fairly straightforward and probably familiar to most people reading this post. Consider a Stern-Gerlach device set up to measure the spin of a spin-half particle, where the initial wave function is

$\left |  \psi _{j} \right \rangle = \left |  \omega _{j}  \right \rangle\otimes \left ( \alpha \left | z^{+} \right \rangle + \beta \left |  z^{-} \right \rangle \right )\otimes \left | D^{a} \right \rangle \otimes \left |  D^{b} \right \rangle$

for $j = 0,1$ (i.e. at times $t_{0}$ and $t_{1}$, before the paths separate). As the possible paths of the particle diverge, this evolves by time $t_{2}$ into the wave function

$\left | \psi _{2} \right \rangle = \left ( \alpha \left | \omega _{2}^{a} \right \rangle \otimes \left | z^{+} \right \rangle + \beta \left | \omega _{2}^{b} \right \rangle \otimes \left | z^{-} \right \rangle \right ) \otimes \left | D^{a} \right \rangle \otimes \left | D^{b} \right \rangle$

Finally, the particle reaches the detectors at time $t_{3}$, where the wave function is now

$\left | \psi _{3} \right \rangle = \alpha \left | D^{a*} \right \rangle \otimes \left | D^{b} \right \rangle + \beta \left | D^{a} \right \rangle \otimes \left | D^{b*} \right \rangle$

The measurement problem is now apparent: when $\alpha$ and $\beta$ are both non-zero, $\psi$ evolves unitarily into a superposition of two macroscopically distinct states. This conflicts with observation, since an experimenter always observes exactly one outcome, never a superposition of both.
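To make the three steps concrete, here is a minimal numerical sketch. It is an illustration only, not part of the original argument: the detectors are modelled as idealised two-level systems ("ready"/"triggered"), the spatial wavepackets $\left | \omega \right \rangle$ are omitted, and the amplitudes $\alpha = 0.6$, $\beta = 0.8$ are arbitrary choices.

```python
# Toy Stern-Gerlach measurement chain: a spin qubit plus two two-level detectors.
# Assumptions: detectors are idealised two-level systems; wavepackets are omitted.
import numpy as np
from functools import reduce

def tensor(*states):
    """Tensor product of state vectors."""
    return reduce(np.kron, states)

z_plus, z_minus = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ready, triggered = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alpha, beta = 0.6, 0.8   # arbitrary non-zero amplitudes with |alpha|^2 + |beta|^2 = 1

# Initial state: spin superposition, both detectors ready.
psi_1 = tensor(alpha * z_plus + beta * z_minus, ready, ready)

# Linear (unitary) registration: |z+> triggers detector a, |z-> triggers detector b.
# Linearity forces the evolution to act on each branch separately, so the output is entangled.
psi_3 = alpha * tensor(z_plus, triggered, ready) + beta * tensor(z_minus, ready, triggered)

print(np.round(psi_3, 2))   # a single superposition of two macroscopically distinct outcomes
```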

Standard quantum mechanics gives us a straightforward answer: for all "practical purposes", $\left | \alpha \right |^{2}$ and $\left | \beta \right |^{2}$ are just the probabilities of finding the detectors in a particular state. The wave function $\psi$, which we use to describe the evolution of a system, is just a set of numbers that summarises our knowledge of that system and that we can use to make predictions. What happens when an experimenter "collapses" the wave function is essentially just Bayesian conditioning: they have to use the knowledge they've gained of the state of the system in any future predictions or retrodictions.
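As a sketch of "collapse as Bayesian conditioning", the following continues the toy model above. Conditioning on the observation "detector a has triggered" amounts to projecting onto that outcome and renormalising, with the Born weight $\left | \alpha \right |^{2}$ as the probability of that outcome; the projector construction here is illustrative and not taken from the original text.

```python
# Conditioning the toy post-measurement state on the outcome "detector a triggered".
import numpy as np
from functools import reduce

tensor = lambda *s: reduce(np.kron, s)
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # also used as detector ready/triggered
alpha, beta = 0.6, 0.8

psi_3 = alpha * tensor(up, down, up) + beta * tensor(down, up, down)

# Projector onto "detector a triggered", acting trivially on the spin and detector b.
P_a = np.kron(np.eye(2), np.kron(np.outer(down, down), np.eye(2)))

prob_a = psi_3 @ P_a @ psi_3               # Born rule: |alpha|^2 = 0.36
psi_conditioned = P_a @ psi_3 / np.sqrt(prob_a)

print(round(float(prob_a), 2))             # 0.36
print(np.round(psi_conditioned, 2))        # only the |z+> branch survives the update
```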

All of this is perfectly sensible; there is, however, one problem that draws criticism. Standard Copenhagen quantum mechanics, with its emphasis on measurement by a classical measuring device, assumed a movable quantum-classical boundary, which made it unclear how to apply the theory to a closed system like the universe itself.

David Bohm, Hugh Everett, and others who first started taking decoherence seriously discovered a better way. Decoherence is a good framework for calculating the boundary between where classical physics is a good approximation and where quantum effects become important. Tracing out the environment makes the reduced density matrix of the measured system (approximately) diagonal in some basis, which explains why we won't observe any macroscopic superposition in the real world (there is no von Neumann chain); as the sign on President Truman's desk once read, "the buck stops here".
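A toy version of this, with a single environment qubit standing in for a macroscopic environment (a deliberately crude assumption), shows how tracing out the environment suppresses the off-diagonal terms of the system's reduced density matrix:

```python
# Decoherence in miniature: one system qubit, one environment qubit.
import numpy as np

z_plus, z_minus = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)

# Before the interaction: system superposition, environment in a fixed state.
psi_before = np.kron(alpha * z_plus + beta * z_minus, z_plus)
# After the interaction: each system state is correlated with a distinct
# (here perfectly orthogonal) environment state.
psi_after = alpha * np.kron(z_plus, z_plus) + beta * np.kron(z_minus, z_minus)

def reduced_rho(psi):
    """Density matrix of the system after tracing out the environment (second qubit)."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3)

print(np.round(reduced_rho(psi_before), 3))  # off-diagonal terms present: coherence
print(np.round(reduced_rho(psi_after), 3))   # off-diagonal terms gone: a diagonal mixture
```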

A decohered state is a mixed state, and mixed states cannot evolve unitarily into pure states. A pure state has zero entropy because $\rho$ is a rank-one projector ($\rho^{2}=\rho$), so the von Neumann entropy $S = -\mathrm{Tr}(\rho \ln \rho)$ vanishes, whereas a classical measuring device will always have some non-zero entropy because it includes classical degrees of ignorance.
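A quick numerical check of these entropy claims (the particular states below are just illustrative):

```python
# Von Neumann entropy S = -Tr(rho ln rho) for a pure state versus a decohered (mixed) state.
import numpy as np

def von_neumann_entropy(rho):
    """Compute S = -Tr(rho ln rho) from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # treat 0 * ln 0 as 0
    return float(-np.sum(evals * np.log(evals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])    # rank-one projector: rho^2 = rho
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])   # maximally mixed qubit

print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # ln 2 ≈ 0.693
```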

This explanation isn't totally satisfying to everyone, because some physicists insist that something needs to be added to the theory to account for our observation of an apparently exact basis after measurement. Consider the argument made by John Bell in Speakable and Unspeakable in Quantum Mechanics: there are potentially infinitely many ways to expand the relevant components of the wave function, corresponding to the set $\psi_{j}$, so why doesn't the density matrix diagonalise into bases such as

$\psi = \frac{\phi _{1}\pm \phi _{2}}{\sqrt{2}}$ or $\psi = \frac{\phi _{1}\pm i\phi _{2}}{\sqrt{2}}$

Decoherence tends to diagonalise the density matrix in the position basis because interactions are local in that basis, which makes it the "preferred" one, but the diagonalisation is never exact because the Hamiltonian also contains a kinetic term.
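The basis dependence itself is easy to illustrate: a density matrix that decoherence has made diagonal in the pointer basis generally has non-zero off-diagonal terms when re-expressed in one of Bell's rotated bases. The weights 0.36 and 0.64 below are arbitrary.

```python
# The same state, diagonal in the pointer basis but not in a rotated basis.
import numpy as np

# Decohered state with unequal weights, diagonal in the pointer (position-like) basis.
rho_pointer = np.diag([0.36, 0.64])

# Change to Bell's suggested basis (phi_1 ± phi_2)/sqrt(2).
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
rho_rotated = U @ rho_pointer @ U.conj().T

print(np.round(rho_pointer, 3))   # diagonal: a classical mixture of pointer states
print(np.round(rho_rotated, 3))   # off-diagonal terms reappear in the rotated basis
```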

Introduction to Decoherent Histories

Decoherent Histories takes "histories" as the basic elements of quantum theory. It solves the measurement problem by including the measurement result as part of a sample space, together with the unitary time evolution of the Schrödinger equation, with histories assigned probabilities using Born's rule. This leads to the single framework rule (SFR), a core principle of Decoherent Histories.

It states that any way of assigning properties to a quantum system has to be based on a projective decomposition of the identity:

$I = \sum_{j} P_{j};$   $P_{j}=P_{j}^{\dagger }=P_{j}^{2};$   $P_{j}P_{k}=\delta _{jk}P_{j}$

The orthogonality of the projection operators ensures that they represent mutually exclusive alternatives; that they sum to the identity entails that exactly one of them must be true. Moreover, the SFR is not a prohibition on constructing multiple frameworks; on the contrary, physicists have the freedom to construct as many alternative families of histories as they want, but predictions from incompatible frameworks cannot be combined into a single history.
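As a concrete (and entirely standard) example of such a decomposition, the two z-basis projectors for a single qubit satisfy all three conditions; the check below is a sketch of the definition only, not of the full histories formalism.

```python
# A projective decomposition of the identity for a qubit, checked against the three conditions.
import numpy as np

z_plus, z_minus = np.array([1.0, 0.0]), np.array([0.0, 1.0])
P = [np.outer(z_plus, z_plus), np.outer(z_minus, z_minus)]

assert np.allclose(sum(P), np.eye(2))                  # the projectors sum to the identity
for j, Pj in enumerate(P):
    assert np.allclose(Pj, Pj.conj().T)                # Hermitian
    assert np.allclose(Pj @ Pj, Pj)                    # idempotent
    for k, Pk in enumerate(P):
        if j != k:
            assert np.allclose(Pj @ Pk, 0)             # mutually exclusive alternatives
print("valid projective decomposition of the identity")
```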

Conceptually this is a refinement of the Copenhagen interpretation: the freedom of choosing the "observer" is the same freedom as choosing a set of consistent histories, but we can now talk meaningfully about a wave function of the universe. What appears to us as "collapse" is just one part of that wave function decohering with respect to another, resulting in effectively irreversible outcomes that obey the classical rules of probability.

Decoherent Histories vs Many Worlds

Decoherent Histories is not the only interpretation that relies on decoherence. The Many Worlds Interpretation is similar in some respects, except that its advocates assume the wave function is a direct representation of reality, one that includes the classical degrees of freedom. When the wave function decoheres, it prevents us from granting any special status to the eigenstates associated with any observable operators.

Instead, interactions with the environment cause entangled states to lose coherence with one another, and the entire multiverse continues to evolve until each state can be described by its own individual wave function. Each such state represents an entirely separate universe that "branches" off each time an experiment has different possible outcomes.

Traditionally there are three problems for the MWI: (a) "branches" or "worlds" are not well defined, (b) the preferred basis problem, and (c) the derivation of Born-rule probabilities from a deterministic wave equation. I've already written about the last of these, so here I'll consider only the first two.

Decoherence is often proposed as a solution to the preferred basis problem, particularly after David Wallace's paper Decoherence and Ontology (Or: How I Learned to Stop Worrying and Love FAPP). Wallace argues against the premise of the problem, holding that the approximate position basis provided by decoherence is enough to explain the appearance of definite states in macroscopic objects; all one needs to describe the existence of higher-order objects is the emergence of certain kinds of patterns and structure. Wallace et al. also rely on decoherence as a solution to the first problem, arguing that the Decoherent Histories formalism gives an approximate description of what one means by "world".

Combining these interpretations is not something I'm convinced will work. For one thing, decoherence is a never-ending process: the diagonalisation of the density matrix, like any exponential decay, is never complete. So it doesn't make sense to talk about completely separate worlds; there is no objective sense in which one can say "now the parallel worlds have split", because the process is never sharply defined. Secondly, the Many Worlds Interpretation is inconsistent with the single framework rule, so at the very least Decoherent Histories would have to be modified to make the two consistent.
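The point about exponential decay can be made with a one-line toy model: for any hypothetical decoherence rate $\gamma$ (the value below is arbitrary), the off-diagonal terms shrink extremely fast but never reach exactly zero.

```python
# Off-diagonal coherence under a toy exponential decoherence model.
import numpy as np

gamma, times = 1.0, np.array([1, 10, 100, 1000])
off_diagonal = 0.5 * np.exp(-gamma * times)

for t, c in zip(times, off_diagonal):
    print(f"t = {t:>5}: off-diagonal term = {c:.3e}")   # tiny, but never exactly zero
```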

