The Variability of Fundamental Constants
Do physical constants fluctuate?
The 'physical constants' are numbers used by scientists in their calculations. Unlike the constants of mathematics, such as π, the values of the constants of nature cannot be calculated from first principles; they depend on laboratory measurements.
As the name implies, the so-called physical constants are supposed to be changeless. They are believed to reflect an underlying constancy of nature. In this chapter I discuss how the values of the fundamental physical constants have in fact changed over the last few decades, and suggest how the nature of these changes can be investigated further.
There are many constants listed in handbooks of physics and chemistry, such as the melting points and boiling points of thousands of chemicals, going on for hundreds of pages: for instance, the boiling point of ethyl alcohol is 78.5°C at standard pressure; its freezing point is -117.3°C. But some constants are more fundamental than others. The following list gives those most generally regarded as truly fundamental.
Velocity of light                  c
Mass of the electron               m_e
Mass of the proton                 m_p
Universal gravitational constant   G
All these constants are expressed in terms of units; for example, the velocity of light is expressed in terms of meters per second. If the units change, so will the constants. And units are arbitrary, dependent on definitions that may change from time to time: the meter, for instance, was originally defined in 1790 by a decree of the French National Assembly as one ten-millionth of the quadrant of the earth's meridian passing through Paris. The entire metric system was based upon the meter and imposed by law. But the original measurements of the earth's circumference were found to be in error. The meter was then defined, in 1799, in terms of a standard bar kept in France under official supervision. In 1960 the meter was redefined in terms of the wavelength of light emitted by krypton atoms; and in 1983 it was redefined again in terms of the speed of light itself, as the length of the path traveled by light in 1/299,792,458 of a second.
As well as any changes due to changing units, the official values of the fundamental constants vary from time to time as new measurements are made. They are continually adjusted by experts and international commissions. Old values are replaced by new ones, based on the latest 'best values' obtained in laboratories around the world. Below, I consider four examples: the gravitational constant (G); the speed of light; Planck's constant; and also the fine-structure constant, α, which is derived from the charge on the electron, the velocity of light, and Planck's constant.
The 'best' values are already the result of considerable selection. First, experimenters tend to reject unexpected data on the grounds that they must be errors. Second, after the most deviant measurements have been weeded out, variations within a given laboratory are smoothed out by averaging the values obtained at different times, and the final value is then subjected to a series of somewhat arbitrary corrections. Finally, the results from different laboratories around the world are selected, adjusted, and averaged to arrive at the latest official value.
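The selection and averaging process described above can be sketched in code. The following is a minimal illustration, not any commission's actual procedure: deviant results are rejected by a robust (median-based) criterion, and the survivors are combined by inverse-variance weighting. All the numbers are hypothetical.

```python
import statistics

def weighted_best_value(measurements, k=5.0):
    """measurements: list of (value, stated_error) pairs."""
    values = [v for v, _ in measurements]
    med = statistics.median(values)
    # Median absolute deviation: a robust yardstick for spotting 'deviant' results.
    mad = statistics.median([abs(v - med) for v in values])
    kept = [(v, e) for v, e in measurements if abs(v - med) <= k * mad]
    # Average the survivors, weighting each by the inverse square of its stated error.
    weights = [1.0 / e**2 for _, e in kept]
    value = sum(w * v for w, (v, _) in zip(weights, kept)) / sum(weights)
    error = (1.0 / sum(weights)) ** 0.5
    return value, error

# Five hypothetical determinations of G (in units of 10^-11 N m^2 kg^-2);
# the last, anomalously high reading is rejected as 'deviant' by the filter.
labs = [(6.672, 0.003), (6.673, 0.002), (6.671, 0.004),
        (6.674, 0.002), (6.740, 0.002)]
best, err = weighted_best_value(labs)
```

Note how the procedure works both ways: it genuinely suppresses random error, but it would equally suppress a real fluctuation, which would simply vanish into the rejected and averaged-out residue.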
Faith in eternal truths
In practice, then, the values of the constants change. But in theory they are supposed to be changeless. The conflict between theory and empirical reality is usually brushed aside without discussion, because all variations are assumed to be due to experimental errors, and the latest values are assumed to be the best.
But what if the constants really change? What if the underlying nature of nature changes? Before this subject can even be discussed, it is necessary to think about one of the most fundamental assumptions of science as we know it: faith in the uniformity of nature. For the committed believer, these questions are nonsensical. Constants must be constant.
Most constants have been measured only in this small region of the universe for a few decades, and the actual measurements have varied erratically. The idea that all constants are the same everywhere and always is not an extrapolation from the data. If it were an extrapolation, it would be outrageous. The values of the constants as actually measured on earth have changed considerably over the last fifty years. To assume they had not changed for fifteen billion years anywhere in the universe goes far beyond the meager evidence. The fact that this assumption is so little questioned, so readily taken for granted, shows the strength of scientific faith in eternal truths.
According to the traditional creed of science, everything is governed by fixed laws and eternal constants. The laws of nature are the same in all times and at all places. In fact they transcend space and time. They are more like eternal Ideas--in the sense of Platonic philosophy--than evolving things. They are not made of matter, energy, fields, space, or time; they are not made of anything. In short, they are immaterial and non-physical. Like Platonic Ideas they underlie all phenomena as their hidden reason or logos, transcending space and time.
Of course, everyone agrees that the laws of nature as formulated by scientists change from time to time, as old theories are partially or completely superseded by new ones. For example, Newton's theory of gravitation, depending on forces acting at a distance in absolute time and space, was replaced by Einstein's theory of the gravitational field consisting of curvatures of space-time itself. But both Newton and Einstein shared the Platonic faith that underlying the changing theories of natural science there are true eternal laws, universal and immutable. And neither challenged the constancy of constants: indeed both gave great prestige to this assumption, Newton through his introduction of the universal gravitational constant, and Einstein through treating the speed of light as absolute. In modern relativity theory, c is a mathematical constant, a parameter relating the units used for time to the units used for space; its value is fixed by definition. The question as to whether the speed of light actually differs from c, although theoretically conceivable, seems of peripheral interest.
For the founding fathers of modern science, such as Copernicus, Kepler, Galileo, Descartes, and Newton, the laws of nature were changeless Ideas in the divine mind. God was a mathematician. The discovery of the mathematical laws of nature was a direct insight into the eternal Mind of God. Similar sentiments have been echoed by physicists ever since.
Until the 1960s, the universe of orthodox physics was still eternal. But evidence for the expansion of the universe has been accumulating for several decades, and the discovery of the cosmic microwave background radiation in 1965 finally triggered off a great cosmological revolution. The Big Bang theory took over. Instead of an eternal machine-like universe, gradually running down toward thermodynamic heat death, the picture was now one of a growing, developing, evolutionary cosmos. And if there was a birth of the cosmos, an initial 'singularity', as physicists put it, then once again age-old questions arise. Where and what did everything come from? Why is the universe as it is? In addition, a new question arises. If all nature evolves, why should the laws of nature not evolve as well? If laws are immanent in evolving nature, then the laws should evolve too.
Today these questions are usually discussed in terms of the anthropic cosmological principle, as follows: Out of the many possible universes, only one with the constants set at the values found today could have given rise to a world with life as we know it and allowed the emergence of intelligent cosmologists capable of discussing it. If the values of the constants had been different, there would have been no stars, nor atoms, nor planets, nor people. Even if the constants were only slightly different, we would not be here. For example, with just a small change in the relative strengths of the nuclear and electromagnetic forces there could be no carbon atoms, and hence no carbon-based forms of life such as ourselves. 'The Holy Grail of modern physics is to explain why these numerical constants . . . have the particular numerical values they do.'
Some physicists incline toward a kind of neo-Deism, with a mathematical creator-God who fine-tuned the constants in the first place, selecting from many possible universes the one in which we can evolve. Others prefer to leave God out of it. One way of avoiding the need for a mathematical mind to fix the constants of nature is to suppose that our universe arose from a froth of possible universes. The primordial bubble that gave rise to our universe was one of many. But our universe has to have the constants it does by the very fact we are here. Somehow our presence imposes a selection. There may be innumerable alien and uninhabitable universes quite unknown to us, but this is the only one we can know.
This kind of speculation has been carried even further by Lee Smolin, who has proposed a kind of cosmic Darwinism. Through black holes, baby universes may be budded off from pre-existing ones and take on a life of their own. Some of these might have slight mutations in the values of their constants and hence evolve differently. Only those that form stars can form black holes and hence have babies. So by a principle of cosmic fecundity, only universes like ours would reproduce, and there may be many more or less similar habitable universes. But this very speculative theory still does not explain why any universes should exist in the first place, nor what determines the laws that govern them, nor what maintains, carries, or remembers the mutant constants in any particular universe.
Notice that all these metaphysical speculations, extravagant though they seem, are thoroughly conventional in that they take for granted both eternal laws and constant constants, at least within a given universe. These well-established assumptions make the constancy of constants seem like an assured truth. Their changelessness is an act of faith. ...If measurements show variations in the constants, as they often do, then the variations are dismissed as experimental errors; the latest figure is the best available approximation to the 'true' value of the constant.
Some variations may well be due to errors, and such errors decrease as instruments and methods of measurement improve. All kinds of measurements have inherent limitations on their accuracy. But not all the variations in the measured values of the constants need necessarily be due to error, or to the limitations of the apparatus used. Some may be real. In an evolving universe, it is conceivable that the constants evolve along with nature. They might even vary cyclically, if not chaotically.
Theories of changing constants
Several physicists, among them Arthur Eddington and Paul Dirac, have speculated that at least some of the 'fundamental constants' may change with time. In particular, Dirac proposed that the universal gravitational constant, G, may be decreasing with time: the gravitational force weakening as the universe expands. But those who make such speculations are usually quick to avow that they are not challenging the idea of eternal laws; they are merely proposing that eternal laws govern the variation of the constants.
The proposal that the laws themselves evolve is more radical. The philosopher Alfred North Whitehead pointed out that if we drop the old idea of Platonic laws imposed on nature, and think instead of laws being immanent in nature, then they must evolve along with nature:
Since the laws of nature depend on the individual characters of the things constituting nature, as the things change, then consequently the laws will change. Thus the modern evolutionary view of the physical universe should conceive of the laws of nature as evolving concurrently with the things constituting the environment. Thus the conception of the Universe as evolving subject to fixed eternal laws should be abandoned.
I prefer to drop the metaphor of 'law' altogether, with its outmoded image of God as a kind of law-giving emperor, as well as an omnipotent and universal law-enforcement agency. Instead, I have suggested that the regularities of nature may be more like habits. According to the hypothesis of morphic resonance, a kind of cumulative memory is inherent in nature. Rather than being governed by an eternal mathematical mind, nature is shaped by habits, subject to natural selection. And some habits are more fundamental than others; for example, the habits of hydrogen atoms are very ancient and widespread, found throughout the universe, while the habits of hyenas are not. Gravitational and electromagnetic fields, atoms, galaxies and stars are governed by archaic habits, dating back to the earliest periods in the history of the universe. From this point of view the 'fundamental constants' are quantitative aspects of deep-seated habits. They may have changed at first, but as they became increasingly fixed through repetition, the constants may have settled down to more or less stable values. In this respect the habit hypothesis agrees with the conventional assumption of constancy, though for very different reasons.
Even if speculations about the evolution of constants are set aside, there are at least two more reasons why constants may vary. First, they may depend on the astronomical environment, changing as the solar system moves within the galaxy, or as the galaxy moves away from other galaxies. And second, the constants may oscillate or fluctuate. They may even fluctuate in a seemingly chaotic manner. Modern chaos theory has enabled us to recognize that chaotic behavior, as opposed to old-style determinism, is normal in most realms of nature. So far the 'constants' have survived unchallenged from an earlier era of physics: the vestiges of a lingering Platonism. But what if they, too, vary chaotically?
The variability of the universal gravitational constant
In spite of the central importance of the universal gravitational constant, it is the least well defined of all the fundamental constants. Attempts to pin it down to many places of decimals have failed; the measurements are just too variable. The editor of the scientific journal Nature has described as 'a blot on the face of physics' the fact that G still remains uncertain to about one part in 5,000. Indeed, in recent years the uncertainty has been so great that the existence of entirely new forces has been postulated to explain gravitational anomalies.
In the early 1980s, Frank Stacey and his colleagues measured G in deep mines and boreholes in Australia. Their value was about 1 percent higher than currently accepted. For example, in one set of measurements in the Hilton mine in Queensland, the value of G (in units of 10⁻¹¹ N m² kg⁻²) was found to be 6.734 ± 0.002, as opposed to the currently accepted value of 6.672 ± 0.003. The Australian results were repeatable and consistent, but no one took much notice until 1986. In that year Ephraim Fischbach, at the University of Washington, Seattle, sent shock waves around the world of science by claiming that laboratory tests also showed a slight deviation from Newton's law of gravity, consistent with the Australian results. Fischbach proposed the existence of a hitherto unknown repulsive force, the so-called fifth force (the four known forces being the strong and weak nuclear forces, the electromagnetic force, and the gravitational force).
The possible existence of a fifth force is not particularly relevant to possible changes in G with time. But the very fact that the question of an extra force affecting gravitation could even be raised and seriously considered in the late twentieth century serves to emphasize how imprecise the characterization of gravity remains more than three centuries after the publication of Newton's Principia.
The suggestion by Paul Dirac and other theoretical physicists that G may be decreasing as the universe expands has been taken quite seriously by some metrologists. However, the change proposed by Dirac was very small, about 5 parts in 10¹¹ per year. This is way below the limits of detection using conventional methods of measuring G on Earth. The 'best' results in the last twenty years differ from each other by more than 5 parts in 10⁴. In other words, the change Dirac was suggesting is some ten million times smaller than the differences between recent 'best' values.
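The arithmetic behind that "ten million times smaller" claim is worth making explicit; a two-line check (illustrative only) confirms it:

```python
# Dirac's proposed fractional change in G, per year, versus the scatter
# between recent 'best' laboratory values of G.
dirac_drift = 5e-11   # 5 parts in 10^11 per year (Dirac's hypothesis)
scatter = 5e-4        # ~5 parts in 10^4 between recent 'best' values
ratio = scatter / dirac_drift
# ratio comes out at ten million (1e7): the proposed annual drift is
# utterly swamped by the disagreement between laboratories.
```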
In order to test Dirac's hypothesis, a variety of indirect methods have been tried. Some depend on geological evidence, such as the slopes of fossils and dunes, from which the gravitational forces at the time they were formed can be calculated; others depend on records of eclipses over the last 3,000 years; others on modern astronomical methods.
The problem with all these indirect lines of evidence is that they depend on a complex tissue of theoretical assumptions, including the constancy of the other constants of nature. They are persuasive only within the framework of the present paradigm. That is to say that if one assumes the correctness of modern cosmological theories, themselves presupposing the constancy of G, the data are internally consistent, provided that all actual variations from experiment to experiment, or method to method, are assumed to be a result of error.
The fall in the speed of light from 1928 to 1945
According to Einstein's theory of relativity, the speed of light in a vacuum is invariant: it is an absolute constant. Much of modern physics is based on that assumption. There is therefore a strong theoretical prejudice against raising the question of possible changes in the velocity of light. In any case, the question is now officially closed. Since 1972 the speed of light has been fixed by definition. The value is defined as 299,792.458 ± 0.001 2 kilometers per second.
As in the case of the universal gravitational constant, early measurements of c differed considerably from the present official value. For example, the determination by Römer in 1676 was about 30 percent lower, and that by Fizeau in 1849 about 5 percent higher.
In 1929, Birge published his review of all the evidence available up to 1927 and came to the conclusion that the best value for the velocity of light was 299,796 ± 4 km/s. He pointed out that the probable error was far less than in any of the other constants, and concluded that 'the present value of c is entirely satisfactory, and can be considered as more or less permanently established.' However, even as he was writing, considerably lower values of c were being found, and by 1934 it was suggested by Gheury de Bray that the data pointed to a cyclic variation in the velocity of light.
From around 1928 to 1945, the velocity of light appeared to be about 20 km/s lower than before and after this period. The 'best' values, found by the leading investigators using a variety of techniques, were in impressively close agreement with each other, and the available data were combined and adjusted by Birge in 1941 and Dorsey in 1945.
In the late 1940s the speed of light went up again. Not surprisingly, there was some turbulence at first as the old value was overthrown. The new value was about 20 km/s higher, close to that prevailing in 1927. A new consensus developed. How long this consensus would have lasted if based on continuing measurements is a matter for speculation. In practice, further disagreement was prevented by fixing the speed of light in 1972 by definition.
How can the lower velocity from 1928 to 1945 be explained? If it was simply a matter of experimental error, why did the results of different investigators and different methods agree so well? And why were the estimated errors so low?
One possibility is that the velocity of light really does fluctuate from time to time. Perhaps it really did drop for nearly twenty years. But this is not a possibility that has been seriously considered by researchers in the field, except for de Bray. So strong is the assumption that it must be fixed that the empirical data have to be explained away. This remarkable episode in the history of the speed of light is now generally attributed to the psychology of metrologists:
The tendency for experiments in a given epoch to agree with one another has been described by the delicate phrase 'intellectual phase locking.' Most metrologists are very conscious of the possible existence of such effects; indeed ever-helpful colleagues delight in pointing them out! . . . Aside from the discovery of mistakes, the near completion of the experiment brings more frequent and stimulating discussion with interested colleagues and the preliminaries to writing up the work add fresh perspective. All of these circumstances combine to prevent what was intended to be 'the final result' from being so in practice, and consequently the accusation that one is most likely to stop worrying about corrections when the value is closest to other results is easy to make and difficult to refute.
But if changes in the values of constants in the past are attributed to the experimenters' psychology, then, as other eminent metrologists have observed, 'this raises a disconcerting question: How do we know that this psychological factor is not equally important today?' In the case of the velocity of light, however, this question is now academic. Not only is the velocity fixed by definition, but the very units in which velocity is measured, distance and time, are defined in terms of light itself.
The second used to be defined as 1/86,400 of a mean solar day, but it is now defined in terms of the frequency of light emitted by a particular kind of excitation of caesium-133 atoms. A second is 9,192,631,770 times the period of vibration of the light. Meanwhile, since 1983 the meter has been defined in terms of the velocity of light, itself fixed by definition.
As Brian Petley has pointed out, it is conceivable that:
(i) the velocity of light might change with time, or (ii) have a directional dependence in space, or (iii) be affected by the motion of the Earth about the Sun, or motion within our galaxy or some other reference frame.
Nevertheless, if such changes really happened, we would be blind to them. We are now shut up within an artificial system where such changes are not only impossible by definition, but would be undetectable in practice because of the way the units are defined. Any change in the speed of light would change the units themselves in such a way that the velocity in kilometers per second remained exactly the same.
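The self-sealing nature of the definition can be shown with a toy calculation (a hypothetical illustration, not a real metrological procedure): once the meter is defined as the distance light travels in 1/299,792,458 of a second, the measured speed in meters per second comes out the same whatever the 'true' speed happens to be.

```python
def measured_c(true_speed):
    """true_speed: the hypothetical 'real' speed of light, in any absolute unit."""
    meter = true_speed / 299_792_458   # the unit itself stretches or shrinks with c
    return true_speed / meter          # speed re-expressed in those meters per second

# Whether the 'true' speed is lower or higher -- measured_c(2.9e8) or
# measured_c(3.1e8) -- the result is always 299,792,458.
```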
The rise of Planck's constant
Planck's constant, h, is a fundamental feature of quantum physics and relates the frequency of a radiation, ν, to its quantum of energy, E, according to the formula E = hν. It has the dimensions of action (energy × time).
We are often told that quantum theory is brilliantly successful and amazingly accurate. For example: 'The laws that have been found to describe the quantum world. . . are the most accurate and precise tools we have ever found for the successful description and prediction of the workings of Nature. In some cases the agreement between the theory's predictions and what we measure are good to better than one part in a billion.'
I heard and read such statements so often that I used to assume that Planck's constant must be known with tremendous accuracy to many places of decimals. This seems to be the case if one looks it up in a scientific handbook--so long as one does not also look at previous editions. In fact its official value has changed over the years, showing a marked tendency to increase.
The biggest change occurred between 1929 and 1941, when it went up by more than 1 percent. This increase was largely due to a substantial change in the value of the charge on the electron, e. Experimental measurements of Planck's constant do not give direct answers, but also involve the charge on the electron and/or the mass of the electron. If either or both of these other constants change, then so does Planck's constant.
Millikan's work on the charge on the electron turned out to be one of the roots of the trouble. Even though other researchers found substantially higher values, they tended to be disregarded. 'Millikan's great renown and authority brought about the opinion that the question of the magnitude of e had practically got its definitive answer.' For some twenty years Millikan's value prevailed, but evidence went on building up that e was higher. As Richard Feynman has expressed it:
It's interesting to look at the history of measurements of the charge on the electron after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan's, the next one's a little bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number that is higher. Why didn't they discover that the new number was higher right away? It's a thing that scientists are ashamed of--this history--because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they would look for and find a number closer to Millikan's value when they didn't look so far. And so they eliminated the numbers that were too far off, and did other things like that.
In the late 1930s, the discrepancies could no longer be ignored, but Millikan's high-prestige value could not simply be abandoned either; instead it was corrected by using a new value for the viscosity of air, an important variable in his oil-drop technique, bringing it into alignment with the new results. In the early 1940s, even higher values of e led to a further upward revision of the official figure. Sure enough, reasons were found to correct Millikan's value yet again, raising it to agree with the new value. Every time e increased, so Planck's constant had to be raised as well.
Interestingly, Planck's constant continued to creep upwards from the 1950s to the 1970s. Each of these increases exceeded the estimated error in the previously accepted value. The latest value shows a slight decline.
Author              Date   h (× 10⁻³⁴ joule seconds)
Bearden and Watts   1951   6.623 63 ± 0.000 16
Cohen et al.        1955   6.625 17 ± 0.000 23
Condon              1963   6.625 60 ± 0.000 17
Cohen and Taylor    1973   6.626 176 ± 0.000 036
                    1988   6.626 075 5 ± 0.000 004 0
Several attempts have been made to look for changes in Planck's constant by studying the light from quasars and stars assumed to be very distant on the basis of the red shift in their spectra. The idea was that if Planck's constant has changed, the properties of the light emitted billions of years ago should be different from more recent light. Little difference was found, leading to the seemingly impressive conclusion that h varies by less than 5 parts in 10¹³ per year. But critics of such experiments have pointed out that these constancies are inevitable, since the calculations depend on the implicit assumption that h is constant; the reasoning is circular. (Strictly speaking, the starting assumption is that the product hc is constant; but since c is constant by definition, this amounts to assuming the constancy of h.)
Fluctuations in the fine-structure constant
One of the problems of looking for changes in a fundamental constant is that if changes are found in the constant, then it is difficult to know whether it is the constant itself that is changing, or the units in which it is measured. However, some of the constants are dimensionless, expressed as pure numbers, and hence the question of changes in units does not arise. One example is the ratio of the mass of the proton to the mass of the electron. Another is the fine-structure constant. For this reason, some metrologists have emphasized that 'secular changes in physical "constants" should be formulated in terms of such numbers.'
Accordingly, in this section I look at the evidence for changes in the fine-structure constant, α, formed from the charge on the electron, the velocity of light, and Planck's constant according to the formula α = e²/2ε₀hc, where e is the charge on the electron, h is Planck's constant, c is the velocity of light, and ε₀ is the permittivity of free space. It gives a measure of the strength of electromagnetic interactions, and is sometimes expressed as its reciprocal, approximately 1/137. This constant is treated by some theoretical physicists as one of the key cosmic numbers that a Theory of Everything should be able to explain.
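Assembling α from its ingredients makes its dimensionless character concrete. The following uses present-day CODATA-style values for the constants involved:

```python
# alpha = e^2 / (2 * eps0 * h * c), a pure number with no units attached.
e    = 1.602176634e-19   # charge on the electron, coulombs
h    = 6.62607015e-34    # Planck's constant, joule seconds
c    = 299792458.0       # speed of light, m/s (fixed by definition)
eps0 = 8.8541878128e-12  # permittivity of free space, farads per meter
alpha = e**2 / (2 * eps0 * h * c)
# alpha is about 7.297e-3; its reciprocal is the famous number, about 137.036.
```

Being a ratio of measured quantities, any drift in e, h, or (historically) c feeds straight into α, which is what the table below reflects.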
Between 1929 and 1941 the fine-structure constant increased by about 0.2 percent, from 7.283 × 10⁻³ to 7.2976 × 10⁻³. This change was largely attributable to the increased value for the charge on the electron, partly offset by the fall in the speed of light, both of which I have already discussed. As in the case of the other constants, there was a scatter of results from different investigators, and the 'best' values were combined and adjusted from time to time by reviewers. As in the case of the other constants, the changes were generally larger than would be expected on the basis of the estimated errors. For example, the increase from 1951 to 1963 was twelve times greater than the estimated error in 1951 (expressed as the standard deviation); the increase from 1963 to 1973 was nearly five times the estimated error in 1963.
Author              Date   α × 10⁻³
Bearden and Watts   1951   7.296 953 ± 0.000 028
Condon              1963   7.297 200 ± 0.000 033
Cohen and Taylor    1973   7.297 350 6 ± 0.000 006 0
Several cosmologists have speculated that the fine-structure constant might vary with the age of the universe, and attempts have been made to check this possibility by analyzing the light from stars and quasars, assuming that their distance is proportional to the red-shift of their light. The results suggest that there has been little or no change in the constant. But as with all other attempts to infer the constancy of constants from astronomical observations, many assumptions have to be made, including the constancy of other constants, the correctness of current cosmological theories, and the validity of red-shifts as indicators of distance. All of these assumptions have been and are still being questioned by dissident cosmologists.
Do constants really change?
As we have seen with the four examples above, the empirical data from laboratory experiments reveal all sorts of variations as time goes on. Similar variations are found in the values of the other fundamental constants. These do not trouble true believers in constancy, because they can always be explained in terms of experimental error of one kind or another. Because of continual improvements in techniques, the greatest faith is always placed in the latest measurements, and if they differ from previous ones, the older ones are automatically discredited (except when the older ones are endowed with a high prestige, as in the case of Millikan's measurement of e). Also, at any given time, there is a tendency for metrologists to overestimate the accuracy of contemporary measurements, as shown by the way that later measurements often differ from earlier ones by amounts greater than the estimated error. Alternatively, if metrologists are estimating their errors correctly, then the changes in the values of the constants show that the constants really are fluctuating. The clearest example is the fall in the speed of light from 1928 to 1945. Was there a real change in the course of nature, or was it due to a collective delusion among metrologists?
So far there have been only two main theories about the fundamental constants. First, they are truly constant, and all variations in the empirical data are due to errors of one kind or another. As science progresses, these errors are reduced. With ever-increasing precision we come closer and closer to the constants' true values. This is the conventional view. Second, several theoretical physicists have speculated that one or more of the constants may vary in some smooth and regular manner with the age of the universe, or over astronomical distances. Various tests of these ideas using astronomical observations seem to have ruled out such changes. But these tests beg the question. They are founded on the assumptions that they set out to prove: that constants are constant, and that present-day cosmology is correct in all essentials.
There has been little consideration of a third possibility, which is the one I am exploring here: that constants may fluctuate, within limits, around average values which themselves remain fairly constant. The idea of changeless laws and constants is the last survivor from the era of classical physics, in which a regular and (in principle) totally predictable mathematical order was supposed to prevail at all times and in all places. In reality, we find nothing of the kind in the course of human affairs, in the biological realm, in the weather, or even in the heavens. The chaos revolution has revealed that this perfect order was a beguiling illusion. Most of the natural world is inherently chaotic.
The fluctuating values of the fundamental constants in experimental measurements seem just as compatible with small but real changes in their values as with a perfect constancy obscured by experimental errors. I now propose a simple way of distinguishing between these possibilities. I concentrate on the gravitational constant, because this is the most variable. But the same principles could be applied to any of the other constants too.
An experiment to detect possible fluctuations in the universal gravitational constant
The principle is simple. At present, when measurements are made in a particular laboratory, the final value is based on an average of a series of individual measurements, and any unexplained variations between these measurements are attributed to random errors. Clearly, if there were real underlying fluctuations, either owing to changes in the earth's environment or to inherently chaotic fluctuations in the constant itself, these would be ironed out by the statistical procedures, showing up simply as random errors. As long as these measurements were confined to a single laboratory, there would be no way of distinguishing between these possibilities.
What I propose is a series of measurements of the universal gravitational constant to be made at regular intervals--say monthly--at several different laboratories all over the world, using the best available methods. Then, over a period of years, these measurements would be compared. If there were underlying fluctuations in the value of G, for whatever reason, these would show up at the various locations. In other words, the 'errors' might show a correlation--the values might tend to be high in some months and low in others. In this way, underlying patterns of variation could be detected that could not be dismissed as random error.
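The cross-laboratory comparison described above can be sketched in a few lines of code. The simulation below is purely illustrative: the number of laboratories, the noise levels, and the size of the hypothetical shared fluctuation in G are all invented for the sake of the example. What it shows is the statistical signature being proposed: a real underlying fluctuation, common to all laboratories, turns up as a correlation between the 'errors' recorded at different sites, which purely random error would not produce.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

random.seed(42)
G_MEAN = 6.674e-11   # nominal value of G, m^3 kg^-1 s^-2
MONTHS = 120         # ten years of hypothetical monthly measurements

# A hypothetical worldwide fluctuation in G, the same for every laboratory.
shared = [random.gauss(0.0, 2e-14) for _ in range(MONTHS)]

def lab_series(noise_sd):
    # Each laboratory records the shared fluctuation plus its own
    # independent random measurement error.
    return [G_MEAN + s + random.gauss(0.0, noise_sd) for s in shared]

lab_a = lab_series(1e-14)
lab_b = lab_series(1e-14)

# Deviations of each laboratory's monthly values from its own mean --
# what would conventionally be written off as random error.
dev_a = [v - statistics.mean(lab_a) for v in lab_a]
dev_b = [v - statistics.mean(lab_b) for v in lab_b]

r = pearson(dev_a, dev_b)
print(f"cross-laboratory correlation of 'errors': {r:.2f}")
```

If G were truly constant and the scatter at each site purely random, this correlation would hover near zero; a consistently positive value across many pairs of laboratories would be the signature of a genuine shared fluctuation.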
It would then be necessary to look for other explanations that did not involve a change in G, including possible changes in the units of measurement. How these inquiries would turn out is impossible to foresee. The important thing is to start looking for correlated fluctuations. And precisely because fluctuations are being looked for, there is more chance of finding them. By contrast, the current theoretical paradigm leads to a sustained effort by everyone concerned to iron out variations, because constants are assumed to be truly constant.
Unlike the other experiments proposed in this book, this one would involve a fairly large-scale international effort. Even so, the budget would not need to be huge if it took place in established laboratories already equipped to make such measurements. And it is even possible that it could be done by students. Several inexpensive methods for determining G have been described, based on the classical method of Cavendish using a torsion balance, and an improved student method has recently been developed which is accurate to 0.1 percent.
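For readers curious how a student torsion balance yields a value of G, the standard small-deflection analysis of the Cavendish experiment gives G = 2π²Ld²θ/(MT²), where L is the length of the balance beam, d the distance between the centers of the large and small masses, θ the equilibrium deflection angle, M the mass of each large ball, and T the period of free torsional oscillation. The numbers below are invented but physically plausible for a bench-top apparatus:

```python
import math

def cavendish_G(L, d, theta, M, T):
    """G from a torsion balance, via the standard small-angle
    Cavendish formula G = 2*pi^2 * L * d^2 * theta / (M * T^2)."""
    return 2 * math.pi**2 * L * d**2 * theta / (M * T**2)

# Hypothetical but plausible bench-top values
G = cavendish_G(
    L=0.10,         # beam length, m
    d=0.05,         # large-to-small mass separation, m
    theta=7.30e-3,  # equilibrium deflection angle, rad
    M=1.5,          # mass of each large ball, kg
    T=600.0,        # torsional oscillation period, s
)
print(f"G = {G:.3e} m^3 kg^-1 s^-2")
```

With these illustrative values the formula returns roughly 6.67 × 10⁻¹¹ m³ kg⁻¹ s⁻², close to the accepted figure; in a real student run, the sub-0.1-percent uncertainties in θ and T are what limit the quoted accuracy.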
One of the advantages of the continual improvement in precision of metrological techniques is that it should become increasingly feasible to detect small changes in the constants. For example, a far greater accuracy in measurements of G should be possible when experiments can be done in spacecraft and satellites, and appropriate techniques are already being proposed and discussed. Here is an area where a big question really would need big science.
But there is in fact one way that this research could be done on a very low budget to start with: by examining the existing raw data for measurements of G at various laboratories over the last few decades. This would require the cooperation of the scientists concerned, because raw data are kept in scientists' notebooks and laboratory files, and many scientists are reluctant to allow others access to these private records. But given this cooperation, there may already be enough data to look for worldwide fluctuations in the value of G.
The implications of fluctuating fundamental constants would be enormous. The course of nature could no longer be imagined as blandly uniform; we would recognize that there are fluctuations at the very heart of physical reality. And if different fundamental constants varied at different rates, these changes would create differing qualities of time, not unlike those envisaged by astrology, but with a more radical basis.