By @jfschaff, on 2025-05-25

What Are Quantum Measurements?

Abstract

This article argues that standard quantum theory is incomplete and not predictive because it does not define what quantum measurements are. It calls for more research to identify the measurement process as an emergent phenomenon.


In this article, I claim that standard quantum theory is incomplete and not predictive because it lacks a clear definition of what a quantum measurement is.

I first recall what the Copenhagen interpretation of quantum theory is. This interpretation is referred to as the standard theory because it has become the only operational theory. I then emphasize some measurement-related consequences of accepting the standard theory. The core of this article's argument is then laid out: I claim that quantum measurement is ill-defined, and underline why this is a major issue.

In the last part, I give some elements derived from experiments that offer a glimpse of what quantum measurement may be. This provides avenues for exploration to better define quantum measurements, and perhaps to find a theory in which the Born rule would not be an axiom, but rather an emergent consequence of a more fundamental theory.

The Copenhagen Interpretation

The Copenhagen interpretation of quantum mechanics was established in the 1920s in an attempt to find an operational quantum theory. The main ingredients of this theory, which has become the standard quantum theory, are the following:

  1. Quantum systems evolve deterministically under a wave equation.
  2. When a measurement is performed on a quantum system, the way the measurement is carried out amounts to choosing a privileged basis of eigenstates.
  3. If the quantum system is in a superposition of these eigenstates, then at the end of the measurement the system has been dramatically altered: it is left in a single eigenstate.
  4. For a single realization of the experiment, it is not possible to predict in which eigenstate the system will end up after the measurement.
  5. It is only possible to predict the statistical distribution of the outcomes of a large number of identical experiments.
  6. The probability that the system ends up in a given eigenstate is equal to the squared modulus of the scalar product of the system's state before measurement with this eigenstate (the Born rule; see the formula just after this list).
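
In standard bra-ket notation (notation mine; the axioms above do not spell it out), with |psi> the normalized state before measurement and {|phi_i>} the privileged eigenbasis, points 4 to 6 compress into a single formula:

    P(i) = \left| \langle \varphi_i | \psi \rangle \right|^2 , \qquad \sum_i P(i) = 1 .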

Before going into more detail, let us note what is not problematic about this theory.

First, at the time this theory was developed, the equation of evolution was the Schrödinger equation. This equation is still useful and operational in a non-relativistic framework but is invalid outside it. This led to the development of quantum field theories, which are no longer formulated in terms of evolution equations, but in terms of Lagrangian densities and path integrals. Regardless of these details, the evolution remains deterministic in these theories and produces states that are generally superposed and/or entangled. This does not seem to pose any particular difficulty.

The choice of a basis by the way the measurement is performed is well illustrated by the example of measuring an interference pattern on a photographic screen or a CCD camera. In this case, the privileged basis is the basis of points on a plane in space, albeit pixelated. This basis is very different from the basis in which the interference pattern is naturally described (spatially extended structures). Measuring a particle involved in an interference pattern localizes it in space at the place where it was detected. Note that, while this localization has little meaning for a photon on a photographic plate or a CCD, since the photon is absorbed and destroyed by the measurement, it makes much more sense for a material particle (atom, electron, etc.), which can be detected and localized in space without being destroyed, and can then be measured a second time or even many times thereafter. Experiments of this type have only continued to confirm the validity of the standard approach.
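
As a toy illustration of how the pixel basis and the Born rule combine, here is a minimal Python sketch (my illustration; the slit parameters are arbitrary). Each simulated particle is detected at a single pixel drawn from |psi(x)|^2, yet the histogram of many detections rebuilds the extended fringe pattern:

    import numpy as np

    # Idealized far-field two-slit amplitude on a pixelated screen
    # (hypothetical values: slit separation d, wavelength lam, distance L)
    d, lam, L = 50e-6, 633e-9, 1.0
    x = np.linspace(-0.05, 0.05, 2000)       # pixel positions on the screen (m)
    psi = np.cos(np.pi * d * x / (lam * L))  # fringe amplitude (envelope ignored)

    # Born rule: detection probability per pixel proportional to |psi|^2
    p = np.abs(psi) ** 2
    p /= p.sum()

    # Each detected particle is localized at ONE pixel, drawn from |psi|^2
    rng = np.random.default_rng(0)
    clicks = rng.choice(x, size=10_000, p=p)

    # Accumulating many clicks reproduces the spatially extended pattern
    hist, _ = np.histogram(clicks, bins=100)
    print(hist)  # fringe-modulated counts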

Points 4, 5, and 6 pose a philosophical problem, but not necessarily a physical one. They require physicists to abandon the notion of determinism, which is deeply linked to the scientific approach (the same causes produce the same effects). They inscribe in nature, as a fundamental law, a cause-and-effect relationship that holds not at the level of a single experiment, but "on average": a kind of "statistical determinism".

Let us note that these laws cannot be questioned, as they have been confirmed countless times by experiments.

The purpose of this article is therefore not to question the highly operational standard theory, as it works very well in practice, but rather to explore the idea that it might be less fundamental than we think, and could be an emergent phenomenon from a more fundamental theory that remains to be found.

Point 3 is often called "wave function collapse": the measurement profoundly alters the system. This profound alteration by measurement is a peculiarity of quantum theory compared to other physical theories, which generally pay little attention to measurement processes.

Let us emphasize here that the fact that a measurement profoundly modifies a system does not pose any particular difficulty. Indeed, two identical experiments, one carried out with a measurement phase and the other without it, constitute, in quantum physics, radically different experiments. It is therefore not problematic that their results differ dramatically. This does not call into question the belief that is foundational to the scientific method, namely that the same causes produce the same effects (be it per experiment or "on average").

The major difference between quantum physics and other physical theories is that, in these other theories, the systems under study emit so many signals into their environment that their "measurement" is permanent. In this context, measurement merely involves capturing data already emitted by the systems, and does not require additional interaction with these systems.

To illustrate this, let us consider the example of measuring the temperature of an object. It is enough to capture the black-body radiation emitted by the object to determine its temperature. Whether or not the temperature is measured only marginally modifies the experiment: the radiation, instead of being absorbed by the walls of the room, is absorbed by a dedicated sensor.
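
Concretely, assuming an ideal black body (a simplification on my part), the Stefan-Boltzmann law lets the temperature be read off radiation the object emits anyway, with no probe touching it:

    j = \sigma T^4 \quad \Longrightarrow \quad T = \left( \frac{j}{\sigma} \right)^{1/4} , \qquad \sigma \approx 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} ,

where j is the radiated power per unit surface area.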

If, on the other hand, we measured the temperature using the older technique of attaching a thermometer to the object, we would find ourselves in a situation resembling a quantum measurement. The act of measuring constitutes a modification of the experiment that can be significant and can thus alter the result (in this case, through the heat transfer between the thermometer and the object).

The Issue with the Copenhagen Approach

What, then, is problematic with the standard theory? Especially since it is "operational" in the sense that it has allowed physicists to make tremendous progress for almost a century now?

The issue is the following: this theory does not define which practical operations constitute measurements and which do not.

Therefore, this canonical approach predicts nothing, as it does not make it possible to know, a priori, even statistically, the outcomes of experiments. In other words, in the theory summarized above, the phrase "when a measurement is performed" is meaningless, because nothing in the theory states what a "measurement" is.

In practice, this is not immediately obvious, because experimenters have learned which operations constitute measurements and which do not. Experimenters have, in a way, compiled a list of special cases: such and such an operation is indeed a measurement of such and such a quantity, while other operations are not. And when they were in doubt, they conducted experiments: if the wave function collapsed, this confirmed that the operation in question was indeed a measurement.

Let us illustrate this with an example from optics. We "know" that half-wave and quarter-wave plates do not measure anything, but deterministically transform polarization states. And we "know" that a polarizer "measures" the polarization by absorbing photons or letting them pass. But why and how do we know that? Because we have conducted the experiments.
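
The distinction can be made concrete with Jones calculus. In the following minimal sketch (my illustration, using standard textbook matrices), the wave plates are unitary, so they transform polarization deterministically and reversibly, while an ideal polarizer is a non-unitary projector whose transmission statistics follow the Born rule:

    import numpy as np

    # Jones matrices, fast axis horizontal (up to global phases)
    HWP = np.array([[1, 0], [0, -1]], dtype=complex)  # half-wave plate
    QWP = np.array([[1, 0], [0, 1j]], dtype=complex)  # quarter-wave plate
    P_H = np.array([[1, 0], [0, 0]], dtype=complex)   # ideal horizontal polarizer

    # Wave plates are unitary (U†U = I): norm preserved, no collapse
    for U in (HWP, QWP):
        assert np.allclose(U.conj().T @ U, np.eye(2))

    # The polarizer is a projector (P² = P), not unitary (P†P ≠ I)
    assert np.allclose(P_H @ P_H, P_H)
    assert not np.allclose(P_H.conj().T @ P_H, np.eye(2))

    # Born rule for a photon polarized at 45°: transmission probability 1/2
    diag45 = np.array([1, 1], dtype=complex) / np.sqrt(2)
    h = np.array([1, 0], dtype=complex)
    print(np.abs(np.vdot(h, diag45)) ** 2)  # 0.5: passes or is absorbed at random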

In the same way, physicists have compiled a database of special cases: things that measure and things that do not. The Copenhagen theory would therefore be more complete if, in addition to the axioms summarized above, it included a whole list of specific cases clarifying which operations constitute measurements and which do not. Such a list would be very long and would resemble the physics books from before Newton's theory, in which each page described a new experiment and its outcomes (pendulums, collisions between balls, trajectories of a cannonball, etc.), without any unification of the experiments by a more general underlying theory (in this example, Newtonian mechanics).

The key argument of this article is that a complete quantum theory should make it possible, from its fundamental laws, to specify more precisely, or to derive, which processes constitute "Born measurements" and which do not.

What We Do Know About Quantum Measurements

What we know about measurement systems is that they must interact strongly with the systems being studied.

This is not surprising, but it provides an indication that the phenomenon of quantum measurement (wave function collapse) may be an emergent phenomenon from fundamental interactions.

We know that when the interactions are precisely controlled, and the systems are "simple" (few particles, well-known states), we are able to create quantum systems in complex entangled and superposed states that continue to evolve in a deterministic manner. Therefore, not all strong interactions lead to measurements.
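
A standard example (my illustration, not specific to this article) is the CNOT gate: it couples two qubits as strongly as one could wish, producing a maximally entangled state, yet it is unitary, so the joint evolution remains deterministic and nothing collapses:

    import numpy as np

    # CNOT: a strongly 'interacting' two-qubit gate, yet perfectly unitary
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    assert np.allclose(CNOT.conj().T @ CNOT, np.eye(4))  # unitary: no collapse

    # |+>|0>  ->  Bell state (|00> + |11>)/sqrt(2): entangled, still deterministic
    plus0 = np.kron(np.array([1, 1]) / np.sqrt(2), np.array([1, 0]))
    print(CNOT @ plus0)  # amplitudes [0.707, 0, 0, 0.707]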

We know that, ultimately, a measurement is always the result of an intense interaction, and one that may even destroy the measured particles. It is important to note that experiments are conducted at our human scale, and that measurement systems always involve a very large number of particles, in thermal states and at high temperatures.

This provides an indication that quantum measurement may be a phenomenon emerging from complexity or statistical physics (a high number of particles, states not precisely described at the microscopic level).

The randomness of the outcome of a measurement may be a consequence of the randomness of the measurement device itself, whose microstate is not identical from one realization of the experiment to the next.

If this were true, genuine determinism might be restored. The random nature of quantum mechanics could be similar to the randomness of weather forecasts, in which chaos makes precise predictions impossible even though the underlying theory is deterministic.
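
To make this speculation concrete, here is a deliberately naive toy model (purely illustrative; it is not a serious hidden-variable proposal). The outcome of each run is a deterministic function of the state and of an uncontrolled device microstate lam, redrawn on every run because the macroscopic device never returns to the same microstate; averaging over lam recovers the Born statistics:

    import numpy as np

    def toy_measurement(psi, lam):
        """Deterministic outcome for state psi = (a, b) and device microstate lam.

        Toy rule: the device 'threshold' lam in [0, 1) yields outcome 0
        whenever lam < |a|^2. For a fixed lam, the outcome is fully determined.
        """
        return 0 if lam < np.abs(psi[0]) ** 2 else 1

    psi = np.array([np.sqrt(0.3), np.sqrt(0.7)])  # |<0|psi>|^2 = 0.3

    rng = np.random.default_rng(1)
    # The microstate differs from run to run: the device is macroscopic and thermal
    outcomes = [toy_measurement(psi, rng.uniform()) for _ in range(100_000)]

    print(np.mean(np.array(outcomes) == 0))  # ~0.3: Born statistics from averaging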