Die Naturwissenschaften 1935. Volume 23, Issue 48.

The Present Status of Quantum Mechanics

By E. Schrödinger, Oxford.

Contents

§ 1. The Physics of Models.

§ 2. The Statistics of Model Variables in Quantum Mechanics.

§ 3. Examples of Probabilistic Predictions.

§ 4. Can the Theory be Built on Ideal Quantities?

§ 5. Are the Variables in Fact Smeared Out?

§ 6. The Deliberate Change of Knowledge-Theoretical Viewpoint.

§ 7. The Function as a Catalogue of Expectations

§ 8. Theory of Measurement; Part I.

§ 9. The Function as Description of the State.

§ 10. Theory of Measurement. Part II.

§ 11. The Lifting of Entanglement.

The Result Depending on the Free Will of the Experimenter.

§ 12. An Example.

§ 13. Continuation of the Example: All Possible Measurements are Uniquely Entangled.

§ 14. The Time-Evolution of Entanglement. Objections to the Special Role of Time.

§ 15. Law of Nature or Computational Trick?

§ 1. The Physics of Models.

In the second half of the last century, an idealized description of the physical sciences emerged from the great successes of the kinetic theory of gases and the mechanical theory of heat. It was the crowning achievement of centuries of research and the ultimate realization of millennia of expectation. It is called the classical description of physics and has the following main characteristics.

Using the experimental data, but without neglecting the imagination, one constructs a picture of the physical objects whose experimentally observed behaviour one wants to explain. This picture is far more detailed than any careful observation could ever reveal. In all its exactness it is like a mathematical construct or a geometrical figure, which can be completely determined from a number of determining elements; just as in a triangle, for example, one side and its two adjacent angles completely determine the third angle, the other two sides, the three altitudes, the radius of the inscribed circle, and so on. There is an important difference between this picture and a geometrical figure, however, namely that it is also fully determined in the fourth dimension, i.e. in time. That is to say, it is a construct that changes with time, i.e. it can be in different states; and when one such state is given by the necessary determining elements, then not only are all other characteristics determined at that moment in time, but also at any later time; similarly to how the configuration at the base of a triangle determines that at the apex. It is part of the nature of the construct to change in a certain way: when it is left undisturbed in a particular initial state, it will traverse a particular sequence of states in a continuous manner, attaining each successive state at a fully determined point in time. This is the nature of its being; this is the hypothesis which, as mentioned above, is assumed for intuitive reasons.

Obviously, one was not so naive as to think that one could actually determine exactly what happens in the world. To make clear that this is not the case, the precise description given above is called a model. The extreme precision stipulated, though it can in practice never be attained, has a simple justification: the consequences of any particular hypothesis about the model can be checked without introducing new assumptions in the middle of the long calculations needed to derive them. The way to proceed is in principle completely determined, and a 'clever clogs' might be able to read the consequences straight off from the given data. In that way, one knows at least where the arbitrary assumptions have been made, and hence where they have to be improved if the model does not agree with observation. If, after many different types of experiments, the object behaves just like the model, then one is pleased, and believes that the model is in essence a true picture of the real system. If, on the other hand, it no longer agrees with a new or refined experiment, this does not mean that one is less pleased. For in principle this is the way in which the model, and thus our understanding of reality, is constantly improved and adjusted.

The main aim of the classical method using a precise model is to isolate the necessary arbitrariness in the assumptions, almost like the yolk and the white of an egg, to allow for the adjustment to improved experience. Perhaps this method is based on the belief that somehow the initial state really determines the evolution completely, i.e. that a complete model which corresponds exactly with reality would determine the outcome of all experiments precisely. It is, however, rather more probable that the adjustment process is infinite, and that a "complete model" is a contradiction in terms, a bit like "the largest integer".

A clear understanding of what is meant by a classical model, its determining elements, and its state, is fundamental in all the following. Especially, one must not confuse a particular model with a particular state thereof. It is perhaps best to give an example. Rutherford's model of the hydrogen atom consists of two point masses. One can take as determining elements, for example, the two sets of 3 right-angled coordinates of the two points, and the two sets of 3 components of their velocities in the direction of the coordinate axes - twelve in total. Instead, one can also choose: the coordinates and velocity components of the centre of mass, and in addition, the distance between the two point masses, the two angles determining the direction of the line connecting the points in space, and the velocities (i.e. derivatives w.r.t. time) with which these quantities are changing at a given moment of time. These are of course also twelve in total. It is not part of the concept of "Rutherford model of the hydrogen atom" that these determining elements should have particular values. Such values would determine a given state of the model. A clear description of the entire set of possible states - without any relation between them - constitutes the model, or the model in an arbitrary state. But the model consists of more than two points in arbitrary position and with arbitrary velocities. It also determines how every state changes in time, as long as no exterior influence is present. This knowledge is given by the following statements: the points have masses m and M respectively, and charges -e and +e, and therefore attract each other with a force e²/r², where r is their distance.

These statements, with particular values for m, M, and e (but not for r of course) are part of the description of the model (not just that of a particular state). m, M, and e are not determining elements, whereas r is. In the second choice of elements above, r is in fact the seventh, and if we use the first set of determining elements, it is given by

r = √[(x₁ - x₂)² + (y₁ - y₂)² + (z₁ - z₂)²].

The number of determining elements (often also called variables, as opposed to model constants like m, M, and e) is unlimited. Twelve suitably chosen variables determine all others, and hence the state, but no twelve are privileged. Other particular examples are: the energy, the three components of the angular momentum and the kinetic energy of the centre of mass. These latter elements have in fact a special property: although they are variables, i.e. they have different values in different states, they are constant in time. They are also called constants of the motion, as opposed to model constants.
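The distinction just drawn between the model, its determining elements, a state, and the constants of the motion can be sketched in code. The following is a minimal illustrative rendering, not from the original text: the function names, the unit system, and all numerical values are our own assumptions.

```python
import math

# Illustrative sketch of the Rutherford model of the hydrogen atom.
# Names, units, and numbers are hypothetical choices, not from the text.

def separation(p1, p2):
    # r = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2), the derived variable r
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def total_energy(m, M, e, v1, v2, p1, p2):
    # kinetic energy of both point masses plus the Coulomb potential -e^2/r;
    # a "constant of the motion": a variable, yet unchanged in time
    kinetic = 0.5 * m * sum(c * c for c in v1) + 0.5 * M * sum(c * c for c in v2)
    return kinetic - e ** 2 / separation(p1, p2)

# A particular *state* of the model: values for the twelve determining
# elements (two sets of 3 coordinates, two sets of 3 velocity components).
p_e, v_e = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)   # "electron" position, velocity
p_n, v_n = (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)   # "nucleus" position, velocity

r = separation(p_e, p_n)      # the seventh element in the second choice above
E = total_energy(1.0, 1836.0, 1.0, v_e, v_n, p_e, p_n)
```

The model constants m, M, e appear only as function arguments; the twelve variables appear only in the state; r and E are derived from them, just as the text describes.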

§ 2. The Statistics of Model Variables in Quantum Mechanics.

At the heart of the present theory of Quantum Mechanics (Q.M.) is a new principle which, although it may still need reformulation, will in my opinion, remain at the heart of the theory. It says that models with determining elements which completely determine all other variables in the classical sense as outlined above, cannot describe Nature faithfully.

It might seem that for anybody who believes this, classical models are of no further use. But this is not the case. In fact, the classical models are used not just to show the contrast with the new principle, but also to express the reduced relation that remains between the same variables in these same models. This goes as follows.

A. The classical concept of state is lost, in the sense that at most a well-chosen half of the complete set of variables can be assigned a definite value; in Rutherford's model for example the 6 right-angled coordinates, or the 6 velocity components (there are other possible groupings). The other half then remains completely indeterminate, while other quantities can have various degrees of indeterminateness. In general all variables in a complete set (twelve variables in Rutherford's model) will have inaccurately determined values. The easiest way to describe the degree of inaccuracy is to choose the variables in so-called canonically conjugate pairs, as in classical mechanics. A simple example is the coordinate x of a point mass and the component of the momentum pₓ (mass times velocity) in the same direction. Such variables restrict each other in the accuracy with which they can be known simultaneously, in that the product of their standard deviations (indicated by the symbol Δ) cannot be less than a certain universal constant, i.e.

x pxh

(Heisenberg's uncertainty relation.)

(h = 1.041 × 10⁻²⁷ erg·sec. In the literature one usually denotes this by an h with a stroke through it, ℏ, whereas h then stands for ours multiplied by 2π.)
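As a small numerical illustration of this relation, using nothing beyond the quoted value of the constant (the modern ℏ); the helper function and the sample position spread are our own assumptions.

```python
# The paper's h (the modern hbar), with the value quoted in the text.
HBAR = 1.041e-27  # erg * sec

def min_momentum_spread(dx):
    # the smallest standard deviation of p_x compatible with a position
    # spread dx, according to dx * dp_x >= h
    return HBAR / dx

# Pinning a particle's x-coordinate to within 1 angstrom (1e-8 cm) forces
# a momentum spread of at least HBAR / 1e-8 g*cm/sec.
dp = min_momentum_spread(1e-8)
```

Sharpening one member of a conjugate pair (smaller dx) necessarily broadens the other, which is the whole content of rule A.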

B. When, at a given moment in time, not all variables are determined by a subset of them, then of course they cannot all be determined at a later moment in time from data obtainable at an earlier time. This might be called a breach of causality, but it is basically nothing new compared to A. When a classical state is at no time fully determined, its time evolution cannot be defined. What does change in time are the statistics or probabilities, and these in a fully deterministic way. During the time evolution, some variables can become more accurately determined, others less so. Overall, one can say that the total indeterminateness does not change, as follows from the fact that the restrictions on the accuracies described under A are the same at every instant of time.

What do the expressions "inaccurate", "statistics", "probability" refer to? Q.M. tells us the following. It accepts all possible variables from the classical model and declares each to be directly measurable, even with arbitrary accuracy, as long as it is considered in isolation. If, after a suitably chosen, restricted number of measurements, a maximal knowledge has been obtained, as allowed by the rule under A, then the mathematical apparatus of the new theory gives a well-defined probability distribution for every variable, both at that same time and at any later time. That is, it gives the fraction of cases in which each variable takes a certain value or lies in a certain small interval, and it asserts that this is indeed the probability that the given variable, at the given moment in time, will assume the particular value or lie in the particular interval. A single experiment can verify this probabilistic prediction at best approximately, namely only when the variable is reasonably sharply determined, i.e. lies in all probability within a small interval. To check the prediction fully, the complete experiment, including the preparatory measurements, has to be repeated many times, and only those cases may be taken into consideration in which the preparatory measurements gave exactly the same results. For those cases, the statistics of a given variable predicted from the measured values of the preparatory measurements should then agree with those obtained in the experiment. This is the theory.

One should be careful not to criticize this theory just because it is difficult to express: that is caused by the inadequacy of our language. However, another objection suggests itself. Hardly any classical physicist dared to propose, when constructing a model, that its determining elements are actually directly measurable on the object. Only derived consequences of the model were actually experimentally verifiable. And all experience has shown that, long before the wide gap between theory and experimental technique had been bridged, the model would have changed substantially through constant adjustment to new experimental results. While the new theory declares on the one hand that the classical model is unsuitable for describing the relations between its determining elements, it is on the other hand so bold as to prescribe what measurements could in principle be performed on the object. To those who invented the classical picture, this must have seemed like an incredible exaggeration of their abilities, a thoughtless presumption about future development.

Was it not a remarkable predestination that the researchers from the classical period, who did not even know what measurement really means, nevertheless, in their innocence, were able to give us a map to orient us as to what one can basically measure on a hydrogen atom, for example!?

I hope to clarify later that the current theory was forced upon us. For the moment, I shall continue the exposition.

§ 3. Examples of Probabilistic Predictions.

All predictions, therefore, are as before about determining elements of a classical model - positions and velocities of point masses, energies, angular momenta, etc. But, unlike the classical theory, only probabilities of results can be predicted. Let us have a closer look at this. Officially, it is always the case that, by means of a number of presently performed measurements and their results, the probabilities of results of other measurements, either performed immediately or after some time, are derived. How does this work in practice? In some important and typical cases it is as follows.

If the energy of a Planckian oscillator is measured, the probability that one finds a value between E and E' can only be nonzero if the interval between E and E' contains a value from the sequence hν/2, 3hν/2, 5hν/2, 7hν/2, …

For each interval which does not contain any of these values, the probability is zero; that is, other values are excluded. These numbers are odd multiples of the model constant hν/2 (h = Planck's constant, ν = the oscillator frequency). Two things attract the attention. First of all, there is no reference to previous measurements; these are not necessary. Secondly, the statement certainly does not lack precision; on the contrary, it is far more accurate than any real measurement could ever be.
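The allowed values just described can be generated mechanically. A short sketch, in which the chosen oscillator frequency and all function names are hypothetical illustrations:

```python
# Allowed energies of a Planckian oscillator: odd multiples of h*nu/2.
H = 6.626e-27   # Planck's constant, erg * sec
NU = 1.0e14     # an arbitrary oscillator frequency, 1/sec (our assumption)

def allowed_energies(n_levels):
    # E_n = (2n + 1) * h * nu / 2  for n = 0, 1, 2, ...
    return [(2 * n + 1) * H * NU / 2 for n in range(n_levels)]

def interval_has_nonzero_probability(E_low, E_high, n_levels=1000):
    # nonzero probability only if (E_low, E_high) contains an allowed value
    return any(E_low <= E <= E_high for E in allowed_energies(n_levels))
```

An interval lying strictly between hν/2 and 3hν/2, for instance, is excluded: a measurement can never give a result there.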

Another typical example is the value of the angular momentum. In Fig. 1, let M be a moving point mass, where the arrow represents the length and direction of its momentum (i.e. mass times velocity). O is an arbitrary fixed point in space, the origin of a coordinate system say; not a point with physical meaning therefore, but a geometrical point of reference. In classical mechanics, the value of the angular momentum of M w.r.t. O is the product of the length of the arrow for the momentum and the length of the perpendicular OF.

Fig. 1. Angular momentum:

M is a material point, O is a geometric point of reference. The arrow represents the momentum ( = mass times velocity) of M. The angular momentum is then the product of the length of the arrow and the length of OF.

In Q.M. the angular momentum behaves quite similarly to the energy of the oscillator. Again, the probability is zero for every interval that does not contain a value from the following sequence:

√[n(n+1)] · h,   n = 0, 1, 2, 3, …

That is, only values from this sequence can appear. Again, this holds without reference to any prior measurement. And one can well imagine how important this precise statement is: much more important than the knowledge of which of these values actually occurs, or with what probability each value occurs in particular cases. Moreover, notice that the point of reference does not play any role: no matter where it is chosen, the result is always a value from this sequence. For the model, this claim makes no sense, for the perpendicular OF changes continuously as the point O is shifted, whereas the momentum arrow remains unchanged. We see from this example how Q.M. does make use of the model to read off which quantities can be measured and about which sensible predictions can be made, but on the other hand does not consider it suitable for expressing relations between these quantities.
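The contrast drawn here - a quantum sequence that is independent of the point of reference, against a classical value that shifts continuously with it - can be sketched as follows; the names and sample values are our own illustrative assumptions.

```python
import math

HBAR = 1.041e-27  # the paper's h, erg * sec

def allowed_angular_momenta(n_max):
    # the quantum sequence sqrt(n(n+1)) * h, n = 0, 1, 2, ... --
    # the same no matter where the reference point O is chosen
    return [math.sqrt(n * (n + 1)) * HBAR for n in range(n_max + 1)]

def classical_angular_momentum(pos, momentum, origin):
    # |(pos - origin) x momentum|: depends continuously on the origin,
    # unlike the quantum sequence above
    rx, ry, rz = (p - o for p, o in zip(pos, origin))
    px, py, pz = momentum
    cx = ry * pz - rz * py
    cy = rz * px - rx * pz
    cz = rx * py - ry * px
    return math.sqrt(cx * cx + cy * cy + cz * cz)
```

Shifting the origin changes the classical value continuously, while the set of quantum-mechanically allowed values stays fixed - exactly the tension the text points out.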

Does one not get the feeling that in both cases the essence of what can be said has been forced into the straitjacket of a prediction for the probability that a classical variable has one or another measurement value? Does one not get the impression that this is in fact about fundamentally new properties, which have only the name in common with their classical counterparts? These are by no means exceptional cases; on the contrary, precisely the most valuable predictions of the new theory have this character. There are indeed also problems of a type for which the original description is approximately valid. But these are not nearly as important. And those examples that one could construct, for which this description is completely correct, have no meaning. "Given the position of the electron in a hydrogen atom at time t=0; construct the statistics of the positions at a later time." This is of no interest.

It may sound as if all predictions are about the visual model. But in fact, the most valuable predictions cannot be easily visualised, and the most easily visualised characteristics are of little value.

§ 4. Can the Theory be Built on Ideal Quantities?

In Q.M. the classical model plays the role of Proteus. Each of its determining elements can in certain circumstances become the subject of interest and acquire a certain authenticity. But never all at the same time; sometimes these and sometimes those, but always at most half of a complete set of variables, which would provide a clear picture of the instantaneous state of the model. What happens in the meantime with the others? Are they not real at all, or do they perhaps have a fuzzy reality; or are they always all real, but is it simply impossible to have simultaneous knowledge of them, as in Rule A of § 2?

The latter interpretation is extremely attractive to those who are familiar with the statistical viewpoint developed during the second half of the last century, especially if one realises that it was this viewpoint that gave rise to the quantum theory, namely in the form of a central problem of the statistical theory of heat: Max Planck's theory of thermal radiation, Dec. 1899. The essence of that theory is exactly that one almost never knows all determining elements of a system, but usually far fewer. To describe a real object at any given moment, one therefore uses not just one state of the model, but rather a so-called Gibbs ensemble. This is an ideal, i.e. imaginary, collection of states mirroring our restricted knowledge about the real object. The object then is supposed to behave in the same way as an arbitrary state from this collection. This idea has had tremendous success. Its greatest triumph was in those cases where not all states from the collection correspond to an identical observed behaviour of the object. It turned out that the object in that case indeed varies in its behaviour exactly as predicted (thermodynamic fluctuations). It is equally tempting to relate the often fuzzy predictions of Q.M. to an ideal collection of states, one of which applies in any individual case, but one does not know which.
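A toy rendering of such a Gibbs ensemble may make the idea concrete. Everything below - the distribution, the observable, the numbers - is an arbitrary illustration of our own; only the notion of an imagined collection of states mirroring incomplete knowledge comes from the text.

```python
import random

random.seed(0)

# An imagined collection of states mirroring restricted knowledge of the
# object: here, 10,000 hypothetical values of some observable, scattered
# about a mean that represents what is actually known.
ensemble = [10.0 + random.gauss(0.0, 0.5) for _ in range(10_000)]

# The object is predicted to behave like a typical member of the ensemble.
mean = sum(ensemble) / len(ensemble)

# Where members of the ensemble disagree, the object's observed behaviour
# should vary accordingly (the thermodynamic fluctuations of the text).
fluctuation = (sum((e - mean) ** 2 for e in ensemble) / len(ensemble)) ** 0.5
```

The ensemble average plays the role of the sharp prediction, and the spread `fluctuation` the role of the predicted variability in behaviour.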