Steven Weinberg

The First Three Minutes
A modern view of the origin of the universe

FLAMINGO
Published by Fontana Paperbacks

Contents

Preface
1 Introduction: the Giant and the Cow
2 The Expansion of the Universe
3 The Cosmic Microwave Radiation Background
4 Recipe for a Hot Universe
5 The First Three Minutes
6 A Historical Diversion
7 The First One-hundredth Second
8 Epilogue: the Prospect Ahead
Afterword
Tables: 1. Properties of Some Elementary Particles; 2. Properties of Some Kinds of Radiation
Glossary

Preface

[...]

...(including my own) listed under 'Suggestions for Further Reading' (p. 189).

I should also make clear what subject I intended this book to cover. It is definitely not a book about all aspects of cosmology. There is a 'classic' part of the subject, which has to do mostly with the large-scale structure of the present universe: the debate over the extragalactic nature of the spiral nebulae; the discovery of the red shifts of distant galaxies and their dependence on distance; the general relativistic cosmological models of Einstein, de Sitter, Lemaitre, and Friedmann; and so on. This part of cosmology has been described very well in a number of distinguished books, and I did not intend to give another full account of it here. The present book is concerned with the early universe, and in particular with the new understanding of the early universe that has grown out of the discovery of the cosmic microwave radiation background in 1965.

Of course, the theory of the expansion of the universe is an essential ingredient in our present view of the early universe, so I have been compelled in Chapter 2 to provide a brief introduction to the more 'classic' aspects of cosmology. I believe that this chapter should provide an adequate background, even for the reader completely unfamiliar with cosmology, to understand the recent developments in the theory of the early universe with which the rest of the book is concerned. However, the reader who wants a thorough introduction to the older parts of cosmology is urged to consult the books listed under 'Suggestions for Further Reading'.

On the other hand, I have not been able to find any coherent historical account of the recent developments in cosmology. I have therefore been obliged to do a little digging myself, particularly with regard to the fascinating question of why there was no search for the cosmic microwave radiation background long before 1965. (This is discussed in Chapter 6.) This is not to say that I regard this book as a definitive history of these developments - I have far too much respect for the effort and attention to detail needed in the history of science to have any illusions on that score. Rather, I would be happy if a real historian of science would use this book as a starting point, and write an adequate history of the last thirty years of cosmological research.

I am extremely grateful to Erwin Glikes and Farrell Phillips of Basic Books for their valuable suggestions in preparing this manuscript for publication. I have also been helped more than I can say in writing this book by the kind advice of my colleagues in physics and astronomy.
For taking the trouble to read and comment on portions of the book, I wish especially to thank Ralph Alpher, Bernard Burke, Robert Dicke, George Field, Gary Feinberg, William Fowler, Robert Herman, Fred Hoyle, Jim Peebles, Arno Penzias, Bill Press, Ed Purcell and Robert Wagoner. My thanks are also due to Isaac Asimov, I. Bernard Cohen, Martha Liller and Philip Morrison for information on various special topics. I am particularly grateful to Nigel Calder for reading through the whole of the first draft, and for his perceptive comments. I cannot hope that this book is now entirely free of errors and obscurities, but I am certain that it is a good deal clearer and more accurate than it could have been without all the generous assistance I have been fortunate enough to receive.

Cambridge, Massachusetts
July 1976
STEVEN WEINBERG

1 Introduction: the Giant and the Cow

The origin of the universe is explained in the Younger Edda, a collection of Norse myths compiled around 1220 by the Icelandic magnate Snorri Sturleson. In the beginning, says the Edda, there was nothing at all. 'Earth was not found, nor Heaven above, a Yawning-gap there was, but grass nowhere.' To the north and south of nothing lay regions of frost and fire, Niflheim and Muspelheim. The heat from Muspelheim melted some of the frost from Niflheim, and from the liquid drops there grew a giant, Ymer. What did Ymer eat? It seems there was also a cow, Audhumla. And what did she eat? Well, there was also some salt. And so on.

I must not offend religious sensibilities, even Viking religious sensibilities, but I think it is fair to say that this is not a very satisfying picture of the origin of the universe. Even leaving aside all objections to hearsay evidence, the story raises as many problems as it answers, and each answer requires a new complication in the initial conditions.

We are not able merely to smile at the Edda, and forswear all cosmogonical speculation - the urge to trace the history of the universe back to its beginning is irresistible. From the start of modern science in the sixteenth and seventeenth centuries, physicists and astronomers have returned again and again to the problem of the origin of the universe. However, an aura of the disreputable always surrounded such research. I remember that during the time that I was a student and then began my own research (on other problems) in the 1950s, the study of the early universe was widely regarded as not the sort of thing to which a respectable scientist would devote his time. Nor was this judgement

[...]

...therefore was not preordained, but fixed instead by a balance between processes of creation and annihilation. From this balance we can infer that the density of this cosmic soup at a temperature of a hundred thousand million degrees was about four thousand million (4 × 10⁹) times that of water. There was also a small contamination of heavier particles, protons and neutrons, which in the present world form the constituents of atomic nuclei. (Protons are positively charged; neutrons are slightly heavier and electrically neutral.) The proportions were roughly one proton and one neutron for every thousand million electrons or positrons or neutrinos or photons. This number - a thousand million photons per nuclear particle - is the crucial quantity that had to be taken from observation in order to work out the standard model of the universe.
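A rough editorial check of the density figure just quoted, using standard constants that the text leaves implicit: the mass-equivalent density of black-body radiation is aT⁴/c², with the radiation constant a ≈ 7.57 × 10⁻¹⁵ erg cm⁻³ K⁻⁴, and if the photons are accompanied by electron-positron pairs and two species of neutrinos and antineutrinos in equilibrium (the usual counting, assumed here), the total comes to about 4.5 times the photon contribution:

\[
\rho \approx 4.5\,\frac{aT^4}{c^2}
     = 4.5 \times \frac{(7.57\times10^{-15})\,(10^{11})^4}{(3\times10^{10})^2}\ \mathrm{g\ cm^{-3}}
     \approx 4\times10^{9}\ \mathrm{g\ cm^{-3}},
\]

some four thousand million times the density of water, as stated. None of this arithmetic, however, fixes the thousand-million-to-one ratio of photons to nuclear particles; that has to come from observation.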
The discovery of the cosmic radiation background discussed in Chapter 3 was in effect a measurement of this number.

As the explosion continued the temperature dropped, reaching thirty thousand million (3 × 10¹⁰) degrees Centigrade after about one-tenth of a second; ten thousand million degrees after about one second; and three thousand million degrees after about fourteen seconds. This was cool enough so that the electrons and positrons began to annihilate faster than they could be recreated out of the photons and neutrinos. The energy released in this annihilation of matter temporarily slowed the rate at which the universe cooled, but the temperature continued to drop, finally reaching one thousand million degrees at the end of the first three minutes. It was then cool enough for the protons and neutrons to begin to form into complex nuclei, starting with the nucleus of heavy hydrogen (or deuterium), which consists of one proton and one neutron. The density was still high enough (a little less than that of water) so that these light nuclei were able rapidly to assemble themselves into the most stable light nucleus, that of helium, consisting of two protons and two neutrons.

At the end of the first three minutes the contents of the universe were mostly in the form of light, neutrinos, and antineutrinos. There was still a small amount of nuclear material, now consisting of about 73 per cent hydrogen and 27 per cent helium, and an equally small number of electrons left over from the era of electron-positron annihilation. This matter continued to rush apart, becoming steadily cooler and less dense. Much later, after a few hundred thousand years, it would become cool enough for electrons to join with nuclei to form atoms of hydrogen and helium. The resulting gas would begin under the influence of gravitation to form clumps, which would ultimately condense to form the galaxies and stars of the present universe. However, the ingredients with which the stars would begin their life would be just those prepared in the first three minutes.

The standard model sketched above is not the most satisfying theory imaginable of the origin of the universe. Just as in the Younger Edda, there is an embarrassing vagueness about the very beginning, the first hundredth of a second or so. Also, there is the unwelcome necessity of fixing initial conditions, especially the initial thousand-million-to-one ratio of photons to nuclear particles. We would prefer a greater sense of logical inevitability in the theory.

For example, one alternative theory that seems philosophically far more attractive is the so-called steady-state model. In this theory, proposed in the late 1940s by Herman Bondi, Thomas Gold and (in a somewhat different formulation) Fred Hoyle, the universe has always been just about the same as it is now. As it expands, new matter is continually created to fill up the gaps between the galaxies. Potentially, all questions about why the universe is the way it is can be answered in this theory by showing that it is the way it is because that is the only way it can stay the same. The problem of the early universe is banished; there was no early universe.

How then did we come to the 'standard model'? And how has it supplanted other theories, like the steady-state model?
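Parenthetically, the cooling timetable quoted above can be checked against the simple one-half-power law that, as Chapter 2 notes, governs a radiation-dominated expansion; the check is supplied here editorially, not in the original. Taking

\[
T(t) \approx 10^{10}\ \mathrm{K} \times \left(\frac{1\ \mathrm{second}}{t}\right)^{1/2}
\]

gives 3.2 × 10¹⁰ degrees at one-tenth of a second, 10¹⁰ degrees at one second, and 2.7 × 10⁹ degrees at fourteen seconds, in good agreement with the figures in the text; at three minutes the bare law gives a little under 10⁹ degrees, the small excess in the quoted figure reflecting the heat released by electron-positron annihilation.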
It is a tribute to the essential objectivity of modern astrophysics that this consensus has been brought about, not by shifts in philosophical preference or by the influence of astrophysical mandarins, but by the pressure of empirical data. The next two chapters will describe the two great clues, furnished by astronomical observation, which have led us to the standard model - the discoveries of the recession of distant galaxies and of a weak radio static filling the universe. This is a rich story for the historian of science, filled with false starts, missed opportunities, theoretical preconceptions, and the play of personalities.

Following this survey of observational cosmology, I will try to put the pieces of data together to make a coherent picture of physical conditions in the early universe. This will put us in a position to go back over the first three minutes in greater detail. A cinematic treatment seems appropriate: frame by frame, we will watch the universe expand and cool and cook. We will also try to look a little way into an era that is still clothed in mystery - the first hundredth of a second, and what went before.

Can we really be sure of the standard model? Will new discoveries overthrow it and replace the present standard model with some other cosmogony, or even revive the steady-state model? Perhaps. I cannot deny a feeling of unreality in writing about the first three minutes as if we really know what we are talking about.

However, even if it is eventually supplanted, the standard model will have played a role of great value in the history of cosmology. It is now respectable (though only in the last decade or so) to test theoretical ideas in physics or astrophysics by working out their consequences in the context of the standard model. It is also common practice to use the standard model as a theoretical basis for justifying programmes of astronomical observation. Thus, the standard model provides an essential common language which allows theorists and observers to appreciate what each other is doing. If some day the standard model is replaced by a better theory, it will probably be because of observations or calculations that drew their motivation from the standard model.

2 The Expansion of the Universe

[...]

...much closer at the same time in the past - so close, in fact, that neither galaxies nor stars nor even atoms or atomic nuclei could have had a separate existence. This is the era we call 'the early universe', which serves as the subject of this book.

Our knowledge of the expansion of the universe rests entirely on the fact that astronomers are able to measure the motion of a luminous body in a direction directly along the line of sight much more accurately than they can measure its motion at right angles to the line of sight. The technique makes use of a familiar property of any sort of wave motion, known as the Doppler effect. When we observe a sound or light wave from a source at rest, the time between the arrival of wave crests at our instruments is the same as the time between crests as they leave the source. On the other hand, if the source is moving away from us, the time between arrivals of successive wave crests is increased over the time between their departures from the source, because each crest has a little farther to go on its journey to us than the crest before.
The time between crests is just the wavelength divided by the speed of the wave, so a wave sent out by a source moving away from us will appear to have a longer wavelength than if the source were at rest. (Specifically, the fractional increase in the wavelength is given by the ratio of the speed of the wave source to the speed of the wave itself, as shown in mathematical note 1, page 175.) Similarly, if the source is moving towards us, the time between arrivals of wave crests is decreased because each successive crest has a shorter distance to go, and the wave appears to have a shorter wavelength. It is just as if a travelling salesman were to send a letter home regularly once a week during his travels: while he is travelling away from home, each successive letter will have a little farther to go than the one before, so his letters will arrive a little more than a week apart; on the homeward leg of his journey, each successive letter will have a shorter distance to travel, so they will arrive more frequently than once a week.

It is easy these days to observe the Doppler effect on sound waves - just go out to the edge of a highway and notice that the engine of a fast automobile sounds higher pitched (i.e. a shorter wavelength) when the auto is approaching than when it is going away. The effect was apparently first pointed out for both light and sound waves by Johann Christian Doppler, professor of mathematics at the Realschule in Prague, in 1842. The Doppler effect for sound waves was tested by the Dutch meteorologist Christopher Heinrich Dietrich Buys-Ballot in an endearing experiment in 1845: as a moving source of sound he used an orchestra of trumpeters standing in an open car of a railroad train, whizzing through the Dutch countryside near Utrecht.

Doppler thought that his effect might explain the different colours of stars. The light from stars that happen to be moving away from the earth would be shifted towards longer wavelengths, and since red light has a wavelength longer than the average wavelength for visible light, such a star might appear redder than average. Similarly, light from stars that happen to be moving towards the earth would be shifted towards shorter wavelengths, so the star might appear unusually blue. It was soon pointed out by Buys-Ballot and others that the Doppler effect has essentially nothing to do with the colour of a star - it is true that the blue light from a receding star is shifted towards the red, but at the same time some of the star's normally invisible ultra-violet light is shifted into the blue part of the visible spectrum, so the overall colour hardly changes. Stars have different colours chiefly because they have different surface temperatures.

However, the Doppler effect did begin to be of enormous importance to astronomy in 1868, when it was applied to the study of individual spectral lines. It had been discovered years earlier, by the Munich optician Joseph Fraunhofer in 1814-15, that when light from the sun is allowed to pass through a slit and then through a glass prism, the resulting spectrum of colours is crossed with hundreds of dark lines, each one an image of the slit. (A few of these lines had been noticed even earlier, by William Hyde Wollaston in 1802, but were not carefully studied at that time.) The dark lines were always found at the same colours, each corresponding to a definite wavelength of light.
The same dark spectral lines were also found by Fraunhofer in the same positions in the spectrum of the moon and the brighter stars. It was soon realized that these dark lines are produced by the selective absorption of light of certain definite wavelengths, as the light passes from the hot surface of a star through its cooler outer atmosphere. Each line is due to the absorption of light by a specific chemical element, so it became possible to determine that the elements of the sun, such as sodium, iron, magnesium, calcium and chromium, are the same as those found on earth. (Today we know that the wavelengths of the dark lines are just those for which a photon of that wavelength would have precisely the right energy to raise the atom from a state of lower energy to one of its excited states.)

In 1868 Sir William Huggins was able to show that the dark lines in the spectra of some of the brighter stars are shifted slightly to the red or the blue from their normal position in the spectrum of the sun. He correctly interpreted this as a Doppler shift, due to the motion of the star away from or towards the earth. For instance, the wavelength of every dark line in the spectrum of the star Capella is longer than the wavelength of the corresponding dark line in the spectrum of the sun by 0.01 per cent; this shift to the red indicates that Capella is receding from us at 0.01 per cent of the speed of light, or 30 kilometres per second. The Doppler effect was used in the following decades to discover the velocities of solar prominences, of double stars, and of the rings of Saturn.

The measurement of velocities by the observation of Doppler shifts is an intrinsically accurate technique, because the wavelengths of spectral lines can be measured with very great precision; it is not unusual to find wavelengths given in tables to eight significant figures. Also, the technique preserves its accuracy whatever the distance of the light source, provided only that there is enough light to pick out spectral lines against the radiation of the night sky.

[...]

...in some, including M31 and M33. However, the best telescopes of the eighteenth and nineteenth centuries were unable to resolve the elliptical or spiral nebulae into stars, and their nature remained in doubt.

It seems to have been Immanuel Kant who first proposed that some of the nebulae are galaxies like our own. Picking up Wright's theory of the Milky Way, Kant in 1755 in his Universal Natural History and Theory of the Heavens suggested that the nebulae 'or rather a species of them' are really circular discs about the same size and shape as our own galaxy. They appear elliptical because most of them are viewed at a slant, and of course they are faint because they are so far away.

The idea of a universe filled with galaxies like our own became widely though by no means universally accepted by the beginning of the nineteenth century. However, it remained an open possibility that these elliptical and spiral nebulae might prove to be mere clouds within our own galaxy, like other objects in Messier's catalogue. One great source of confusion was the observation of exploding stars in some of the spiral nebulae. If these nebulae were really independent galaxies, too far away for us to pick out individual stars, then the explosions would have to be incredibly powerful to be so bright at such a great distance. In this connection, I cannot resist quoting one example of nineteenth-century scientific prose at its ripest.
Writing in 1893, the English historian of astronomy Agnes Mary Clerke remarked:

    The well-known nebula in Andromeda, and the great spiral in Canes Venatici are among the more remarkable of those giving a continuous spectrum; and as a general rule, the emissions of all such nebulae as present the appearance of star-clusters grown misty through excessive distance, are of the same kind. It would, however, be eminently rash to conclude thence that they are really aggregations of such sun-like bodies. The improbability of such an inference has been greatly enhanced by the occurrence, at an interval of a quarter of a century, of stellar outbursts in two of them. For it is practically certain that, however distant the nebulae, the stars were equally remote; hence, if the constituent particles of the former be suns, the incomparably vaster orbs by which their feeble light was well-nigh obliterated must, as was argued by Mr Proctor, have been on a scale of magnitude such as the imagination recoils from contemplating.

Today we know that these stellar outbursts were indeed 'on a scale of magnitude such as the imagination recoils from contemplating'. They were supernovas, explosions in which one star approaches the luminosity of a whole galaxy. But this was not known in 1893.

The question of the nature of the spiral and elliptical nebulae could not be settled without some reliable method of determining how far away they are. Such a yardstick was at last discovered after the completion of the 100-inch telescope at Mount Wilson, near Los Angeles. In 1923 Edwin Hubble was for the first time able to resolve the Andromeda Nebula into separate stars. He found that its spiral arms included a few bright variable stars, with the same sort of periodic variation of luminosity as was already familiar for a class of stars in our galaxy known as Cepheid variables. The reason this was so important was that in the preceding decade the work of Henrietta Swan Leavitt and Harlow Shapley of the Harvard College Observatory had provided a tight relation between the observed periods of variation of the Cepheids and their absolute luminosities. (Absolute luminosity is the total radiant power emitted by an astronomical object in all directions. Apparent luminosity is the radiant power received by us in each square centimetre of our telescope mirror. It is the apparent rather than the absolute luminosity that determines the subjective degree of brightness of astronomical objects. Of course, the apparent luminosity depends not only on the absolute luminosity, but also on the distance; thus, knowing both the absolute and the apparent luminosities of an astronomical body, we can infer its distance.) Hubble, observing the apparent luminosity of the Cepheids in the Andromeda Nebula, and estimating their absolute luminosity from their periods, could immediately calculate their distance, and hence the distance of the Andromeda Nebula, using the simple rule that apparent luminosity is proportional to the absolute luminosity and inversely proportional to the square of the distance. His conclusion was that the Andromeda Nebula is at a distance of 900,000 light years, or more than ten times farther than the most distant known objects in our own galaxy.
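Hubble's rule can be written out compactly; the formula below is merely the text's definitions put into symbols. If an object of absolute luminosity L lies at distance d, its radiation is spread over a sphere of area 4πd², so the apparent luminosity is

\[
\ell = \frac{L}{4\pi d^2}, \qquad\text{hence}\qquad d = \sqrt{\frac{L}{4\pi \ell}}.
\]

A Cepheid whose period marks it as having the same absolute luminosity as a nearby calibrator, but which appears a million times fainter, must therefore be a thousand times farther away.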
Several recalibrations of the Cepheid period-luminosity relation by Walter Baade and others have by now increased the distance of the Andromeda Nebula to over two million light years, but the conclusion was already clear

[...]

Relation between Red Shift and Distance: Shown opposite are bright galaxies in five galaxy clusters, together with their spectra. The spectra of the galaxies are the long, horizontal white smears, crossed with a few short, dark vertical lines. Each position along these spectra corresponds to light from the galaxy with a definite wavelength; the dark vertical lines arise from absorption of light within the atmospheres of stars in these galaxies. (The bright vertical lines above and below each galaxy's spectrum are merely standard comparison spectra, superimposed on the spectrum of the galaxy to aid in determining wavelengths.) The arrows below each spectrum indicate the shift of two specific absorption lines (the H and K lines of calcium) from their normal position, towards the right (red) end of the spectrum. If interpreted as a Doppler effect, the red shift of these absorption lines indicates a velocity ranging from 1200 kilometres per second for the Virgo cluster galaxy to 61,000 kilometres per second for the Hydra cluster. With a red shift proportional to distance, this indicates that these galaxies are at successively greater distances. (The distances given here are computed with a Hubble constant of 15.3 kilometres per second per million light years.) This interpretation is confirmed by the fact that the galaxies appear progressively smaller and dimmer with increasing red shift. (Hale Observatories photograph.)

[...]

...below, I will use the label 'typical' to indicate galaxies that do not have any large peculiar motion of their own, but are simply carried along with the general cosmic flow of galaxies.)

This hypothesis is so natural (at least since Copernicus) that it has been called the Cosmological Principle by the English astrophysicist Edward Arthur Milne. As applied to the galaxies themselves, the Cosmological Principle requires that an observer in a typical galaxy should see all the other galaxies moving with the same pattern of velocities, whatever typical galaxy the observer happens to be riding in. It is a direct mathematical consequence of this principle that the relative speed of any two galaxies must be proportional to the distance between them, just as found by Hubble.

To see this, consider three typical galaxies A, B, and C, strung out in a straight line (see figure 1). Suppose that the distance between A and B is the same as the distance between B and C. Whatever the speed of B as seen from A, the Cosmological Principle requires that C should have the same speed relative to B. But note then that C, which is twice as far away from A as is B, is also moving twice as fast relative to A as is B. We can add more galaxies in our chain, always with the result that the speed of recession of any galaxy relative to any other is proportional to the distance between them.

As often happens in science, this argument can be used both forward and backward. Hubble, in observing a proportionality between the distances of galaxies and their speeds of recession, was indirectly verifying the truth of the Cosmological Principle. This is enormously satisfying philosophically - why should any part of the universe or any direction be any different from any other?
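The chain argument can be compressed into one line. Writing v(d) for the recession speed between any two typical galaxies separated by a distance d (notation introduced here for brevity, not in the original), the Cosmological Principle requires that speeds add along the chain:

\[
v(d_1 + d_2) = v(d_1) + v(d_2),
\]

and the only smooth rule with this additive property is the linear one, v(d) = H d - the Hubble law, with the same constant H for every typical observer.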
It also helps to reassure us that the astronomers really are looking at some appreciable part of the universe, not a mere local eddy in a vaster cosmic maelstrom. Contrariwise, we can take the Cosmological Principle for granted on a priori grounds, and deduce the relation of proportionality between distance and velocity, as done in the last paragraph. In this way, through the relatively easy measurement of Doppler shifts, we are able to judge the distance of very remote objects from their velocities.

The Cosmological Principle has observational support of another sort, apart from the measurement of Doppler shifts. After making due allowances for the distortions due to our own galaxy and the rich nearby cluster of galaxies in the constellation Virgo, the universe seems remarkably isotropic; that is, it looks the same in all directions. (This is shown even more convincingly by the microwave background radiation discussed in the next chapter.) But ever since Copernicus we have learned to beware of supposing that there is anything special about mankind's location in the universe. So if the universe is isotropic around us, it ought to be isotropic about every typical galaxy. However, any point in the universe can be carried into any other point by a series of rotations around fixed centres (see figure 2), so if the universe is isotropic around every point, it is necessarily also homogeneous.

Figure 1. Homogeneity and the Hubble Law. A string of equally spaced galaxies Z, A, B, C, ... are shown, with velocities as measured from A or B or C indicated by the lengths and directions of the attached arrows. The principle of homogeneity requires that the velocity of C as seen by B is equal to the velocity of B as seen by A; adding these two velocities gives the velocity of C as seen by A, indicated by an arrow twice as long. Proceeding in this way, we can fill out the whole pattern of velocities shown in the figure. As can be seen, the velocities obey the Hubble law: the velocity of any galaxy as seen by any other is proportional to the distance between them. This is the only pattern of velocities consistent with the principle of homogeneity.

Figure 2. Isotropy and Homogeneity. If the universe is isotropic about both galaxy 1 and galaxy 2, then it is homogeneous. In order to show that conditions at two arbitrary points A and B are the same, draw a circle through A around galaxy 1, and another circle through B around galaxy 2. Isotropy around galaxy 1 requires that conditions are the same at A and at the point C where the circles intersect. Likewise, isotropy around galaxy 2 requires that conditions are the same at B and C. Hence they are the same at A and B.

Before going any further, a number of qualifications have to be attached to the Cosmological Principle. First, it is obviously not true on small scales - we are in a galaxy which belongs to a small local group of other galaxies (including M31 and M33), which in turn lies near the enormous cluster

[...]

...the Ursa Major II cluster of galaxies. It was found to be receding at a speed of 42,000 kilometres per second - 14 per cent of the speed of light. The distance, then estimated as 260 million light years, was at the limit of Mount Wilson's capability, and Hubble's work had to stop.
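An editorial consistency check on the figures just quoted: 42,000 kilometres per second is indeed 14 per cent of the speed of light (0.14 × 300,000 ≈ 42,000), and the implied ratio of speed to distance,

\[
\frac{42{,}000\ \mathrm{km/s}}{260\ \mathrm{million\ light\ years}} \approx 160\ \mathrm{km/s\ per\ million\ light\ years},
\]

is close to the value of about 170 which, as noted below, was accepted for the Hubble constant in the 1930s and 1940s.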
With the advent after the war of larger telescopes at Palomar and Mount Hamilton, Hubble's programme was taken up again by other astronomers (notably Allan Sandage of Palomar and Mount Wilson), and continues to the present time. The conclusion generally drawn from this half-century of observation is that the galaxies are receding from us, with speeds proportional to the distance (at least for speeds not too close to that of light). Of course, as already emphasized in our discussion of the Cosmological Principle, this does not mean that we are in any specially favoured or unfavoured position in the cosmos; every pair of galaxies is moving apart at a relative speed proportional to their separation. The most important modification of Hubble's original conclusions is a revision of the extragalactic distance scale: partly as a result of a recalibration of the Leavitt-Shapley Cepheid period-luminosity relation by Walter Baade and others, the distances to far galaxies are now estimated to be about ten times larger than was thought in Hubble's time. Thus, the Hubble constant is now believed to be only about 15 kilometres per second per million light years.

What does all this say about the origin of the universe? If the galaxies are rushing apart, then they must once have been closer together. To be specific, if their velocity has been constant, then the time it has taken any pair of galaxies to reach their present separation is just the present distance between them divided by their relative velocity. But with a velocity which is proportional to their present separation, this time is the same for any pair of galaxies - they must have all been close together at the same time in the past! Taking the Hubble constant as 15 kilometres per second per million light years, the time since the galaxies began to move apart would be a million light years divided by 15 kilometres per second, or 20 thousand million years. We shall refer to the 'age' calculated in this way as the 'characteristic expansion time'; it is simply the reciprocal of the Hubble constant. The true age of the universe is actually less than the characteristic expansion time because, as we shall see, the galaxies have not been moving at constant velocities, but have been slowing down under the influence of their mutual gravitation. Therefore, if the Hubble constant is 15 kilometres per second per million light years, the age of the universe must be less than 20,000 million years.

Sometimes we summarize all this by saying briefly that the size of the universe is increasing. This does not mean that the universe necessarily has a finite size, although it well may have. This language is used because in any given time, the separation between any pair of typical galaxies increases by the same fractional amount. During any interval that is short enough so that the galaxies' velocities remain approximately constant, the increase in the separation between a pair of typical galaxies will be given by the product of their relative velocity and the elapsed time, or, using the Hubble law, by the product of the Hubble constant, the separation, and the time. But then the ratio of the increase in separation to the separation itself will be given by the Hubble constant times the elapsed time, which is the same for any pair of galaxies.
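The unit conversion behind the 20 thousand million years is worth one editorial line. A million light years is about 9.46 × 10¹⁸ kilometres, and a year is about 3.16 × 10⁷ seconds, so

\[
\frac{1}{H} = \frac{9.46\times10^{18}\ \mathrm{km}}{15\ \mathrm{km/s}}
            \approx 6.3\times10^{17}\ \mathrm{s}
            \approx 2.0\times10^{10}\ \mathrm{years};
\]

and in the same notation, the fractional increase in any separation over a short time Δt is just H Δt.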
For instance, during a time interval of 1 per cent of the characteristic expansion time (the reciprocal of the Hubble constant), the separation of every pair of typical galaxies will increase by 1 per cent. We would then, speaking loosely, say that the size of the universe has increased by 1 per cent.

I do not want to give the impression that everyone agrees with this interpretation of the red shift. We do not actually observe galaxies rushing away from us; all we are sure of is that the lines in their spectra are shifted to the red, i.e. towards longer wavelengths. There are eminent astronomers who doubt that the red shifts have anything to do with Doppler shifts or with an expansion of the universe. Halton Arp, of the Hale Observatories, has emphasized the existence of groupings of galaxies in the sky in which some galaxies have very different red shift from the others; if these groupings represent true physical associations of neighbouring galaxies, they could hardly have grossly different velocities. Also, it was discovered by Maarten Schmidt in 1963 that a certain class of objects which have the appearance of stars nevertheless have enormous red shifts, in some cases over 300 per cent! If these 'quasi-stellar objects' are as far away as their red shifts indicate, they must be emitting enormous amounts of energy to be so bright. Finally, it is not easy to determine the relation between velocity and distance at really large distances.

There is, however, an independent way to confirm that the galaxies are really moving apart, as indicated by the red shifts. As we have seen, this interpretation of the red shifts implies that the expansion of the universe began somewhat less than 20 thousand million years ago. It will therefore tend to be confirmed if we can find any other evidence that the universe is actually that old. In fact, there is a good deal of evidence that our galaxy is about 10-15 thousand million years old. This estimate comes both from the relative abundance of various radioactive isotopes in the earth (especially the uranium isotopes, U-235 and U-238) and from calculation of the evolution of stars. There is certainly no direct connection between the rates of radioactivity or stellar evolution and the red shift of distant galaxies, so the presumption is strong that the age of the universe deduced from the Hubble constant really does represent a true beginning.

In this connection, it is historically interesting to recall that during the 1930s and 1940s the Hubble constant was believed to be much larger, about 170 kilometres per second per million light years. By our previous reasoning the age of the universe would then have to be one million light years divided by 170 kilometres per second, which is about 2000 million years, or even less if we take gravitational braking into account. But it has been well known since the studies

[...]

...as an effect of the curvature of space and time. In 1917, a year after the completion of his general theory of relativity, Einstein tried to find a solution of his equations that would describe the spacetime geometry of the whole universe. Following the cosmological ideas then current, Einstein looked specifically for a solution that would be homogeneous, isotropic, and, unfortunately, static. However, no such solution could be found.
In order to achieve a model that fit these cosmological presuppositions, Einstein was forced to mutilate his equations by introducing a term, the so-called cosmological constant, which greatly marred the elegance of the original theory, but which could serve to balance the attractive force of gravitation at large distances. Einstein's model universe was truly static, and predicted no red shifts.

In the same year, 1917, another solution of Einstein's modified theory was found by the Dutch astronomer W. de Sitter. Although this solution appeared to be static, and was therefore acceptable according to the cosmological ideas of the times, it had the remarkable property of predicting a red shift proportional to the distance! The existence of large nebular red shifts was not then known to European astronomers. However, at the end of World War I news of the observation of large red shifts reached Europe from America, and de Sitter's model acquired instant celebrity. In fact, in 1922 when the English astronomer Arthur Eddington wrote the first comprehensive treatise on general relativity, he analysed the existing red-shift data in terms of the de Sitter model. Hubble himself said that it was the de Sitter model that drew astronomers' attention to the importance of a dependence of red shift on distance, and this model may have been in the back of his mind when he discovered the proportionality of red shift to distance in 1929.

Today this emphasis on the de Sitter model seems misplaced. For one thing, it is not really a static model at all - it looked static because of the peculiar way that spatial coordinates were introduced, but the distance between 'typical' observers in the model actually increases with time, and it is this general recession that produces the red shift. Also, the reason that the red shift turned out to be proportional to the distance in de Sitter's model is just that this model satisfies the Cosmological Principle, and, as we have seen, we expect a proportionality between relative velocity and distance in any theory that satisfies this principle.

At any rate, the discovery of the recession of distant galaxies soon aroused interest in cosmological models that are homogeneous and isotropic but not static. A 'cosmological constant' was then not needed in the field equations of gravitation, and Einstein came to regret that he had ever considered any such change in his original equations. In 1922 the general homogeneous and isotropic solution of the original Einstein equations was found by the Russian mathematician Alexandre Friedmann. It is these Friedmann models, based on the original Einstein field equations, and not the Einstein or de Sitter models, that provide the mathematical background for most modern cosmological theories.

The Friedmann models are of two very different types. If the average density of the matter of the universe is less than or equal to a certain critical value, then the universe must be spatially infinite. In this case the present expansion of the universe will go on for ever. On the other hand, if the density of the universe is greater than this critical value, then the gravitational field produced by the matter curves the universe back on itself; it is finite though unbounded, like the surface of a sphere. (That is, if we set off on a journey in a straight line, we do not reach any sort of edge of the universe, but simply come back to where we began.)
In this case the gravitational fields are strong enough eventually to stop the expansion of the universe, so that it will eventually implode back to indefinitely large density. The critical density is proportional to the square of the Hubble constant; for the presently popular value of 15 kilometres per second per million light years, the critical density equals 5 × 10⁻³⁰ grams per cubic centimetre, or about three hydrogen atoms per thousand litres of space.

The motion of any typical galaxy in the Friedmann models is precisely like that of a stone thrown upward from the surface of the earth. If the stone is thrown fast enough or, what amounts to the same thing, if the mass of the earth is small enough, then the stone will gradually slow down, but will nevertheless escape to infinity. This corresponds to the case of a cosmic density less than the critical density. On the other hand, if the stone is thrown with insufficient speed, then it will rise to a maximum height and then plunge back downward. This of course corresponds to a cosmic density above the critical density.

This analogy makes clear why it was not possible to find static cosmological solutions of Einstein's equations - we might not be too surprised to see a stone rising from or falling to the surface of the earth, but we would hardly expect to find one hanging still in mid-air. The analogy also helps us to avoid a common misconception about the expanding universe. The galaxies are not rushing apart because of some mysterious force that is pushing them apart, just as the rising stone in our analogy is not being repelled by the earth. Rather, the galaxies are moving apart because they were thrown apart by some sort of explosion in the past.

It was not realized in the 1920s, but many of the detailed properties of the Friedmann models can be calculated quantitatively using this analogy, without any reference to general relativity. In order to calculate the motion of any typical galaxy relative to our own, draw a sphere with us at the centre and the galaxy of interest on the surface; the motion of this galaxy is precisely the same as if the mass of the universe consisted only of the matter within this sphere, with nothing outside. It is just as if we dug a cave deep in the interior of the earth, and observed the way that bodies fall - we would find that the gravitational acceleration towards the centre depended only on the amount of matter closer to the centre than our cave, as if the surface of the earth were just at the depth of our cave. This remarkable result is embodied in a theorem, valid in both Newton's and Einstein's theories of

[...]

...simple power of time: the two-thirds power if the density of radiation could be neglected, or the one-half power if the density of radiation exceeded that of matter. (See mathematical note 3, page 178.)

The one aspect of the Friedmann cosmological models that cannot be understood without general relativity is the relation between the density and the geometry - the universe is open and infinite or closed and finite according to whether the velocity of galaxies is greater or less than the escape velocity.
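The proportionality between the critical density and the square of the Hubble constant can be made explicit - an editorial sketch, using the standard formula that the text does not write out:

\[
\rho_c = \frac{3H^2}{8\pi G}
       = \frac{3\,(1.6\times10^{-18}\ \mathrm{s^{-1}})^2}{8\pi \times 6.67\times10^{-8}\ \mathrm{cm^3\,g^{-1}\,s^{-2}}}
       \approx 5\times10^{-30}\ \mathrm{g\ cm^{-3}},
\]

where 15 kilometres per second per million light years has been converted to about 1.6 × 10⁻¹⁸ per second. Dividing by the mass of a hydrogen atom, 1.67 × 10⁻²⁴ grams, gives roughly 3 × 10⁻⁶ atoms per cubic centimetre - the three hydrogen atoms per thousand litres quoted above.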
One way to tell whether or not the galactic velocities exceed escape velocity is to measure the rate at which they are slowing down. If this deceleration is less (or greater) than a certain amount, then escape velocity is (or is not) exceeded. In practice, this means that one must measure the curvature of the graph of red shift versus distance for very distant galaxies (see figure 5). As one proceeds from a more dense finite universe to a less dense infinite universe, the curve of red shift versus distance flattens out at very large distances. The study of the shape of the red-shift-distance curve at great distances is often called the 'Hubble programme'.

Figure 4. Expansion and Contraction of the Universe. The separation between typical galaxies is shown (in arbitrary units) as a function of time, for two possible cosmological models. In the case of an 'open universe', the universe is infinite; the density is less than the critical density; and the expansion, though slowing down, will continue forever. In the case of a 'closed universe', the universe is finite; the density is greater than the critical density; and the expansion will eventually cease and be followed by a contraction. These curves are calculated using Einstein's field equations without a cosmological constant, for a matter-dominated universe.

A tremendous effort has been put into this programme by Hubble, Sandage, and recently others as well. So far the results have been quite inconclusive. The trouble is that in estimating the distance to far galaxies it is impossible to pick out Cepheid variables or brightest stars to use as distance indicators; rather, we must estimate the distance from the apparent luminosity of the galaxies themselves. But how do we know that the galaxies we study all have the same absolute luminosity? (Recall that apparent luminosity is the radiant power received by us per unit telescope area, while absolute luminosity is the total power emitted in all directions by the astronomical object; apparent luminosity is proportional to absolute luminosity and inversely proportional to the square of the distance.) There are terrible dangers from selection effects - as we look out farther and farther, we tend to pick out galaxies of greater and greater absolute luminosity. An even worse problem is galactic evolution. When we look at very distant galaxies we see them as they were thousands of millions of years ago, when the light rays started on their journey to us. If typical galaxies were brighter then than now, we will underestimate their true distance. One possibility, raised very recently by J. P. Ostriker and S. D. Tremaine of Princeton, is that the larger galaxies evolve not only because their individual stars evolve, but also because they gobble up small neighbouring galaxies! It is going to be a long time before we can be sure that we have an adequate quantitative understanding of these various kinds of galactic evolution.

At present, the best inference that can be drawn from the Hubble programme is that the deceleration of distant galaxies

[...]

Figure 5. Red Shift vs. Distance. The red shift is shown here as a function of distance, for four possible cosmological theories. (To be precise, the 'distance' here is 'luminosity distance' - the distance inferred for an object of known intrinsic or absolute luminosity from observations of its apparent luminosity.) The curves labelled 'density twice critical', 'density critical', and 'density zero' are calculated in the Friedmann model, using Einstein's field equations for a matter-dominated universe, without a cosmological constant; they correspond respectively to a universe that is closed, just barely open, or open. (See figure 4.)
The curve marked 'steady state' will apply to any theory in which the appearance of the universe does not change with time. Current observations are not in good agreement with the 'steady-state' curve, but they do not definitely decide among the other possibilities, because in non-steady-state theories galactic evolution makes determination of distance very problematical. All curves are drawn with the Hubble constant taken as 15 kilometres per second per million light years (corresponding to a characteristic expansion time of 20,000 million years), but the curves can be used for any other value of the Hubble constant by simply rescaling all distances.

[...]

...future of the universe, they give a pretty clear picture of its past. The observations discussed in this chapter have opened to us a view of the universe that is as simple as it is grand. The universe is expanding uniformly and isotropically - the same pattern of flow is seen by observers in all typical galaxies, and in all directions. As the universe expands, the wavelengths of light rays are stretched out in proportion to the distance between the galaxies. The expansion is not believed to be due to any sort of cosmic repulsion, but is rather just the effect of the velocities left over from a past explosion. These velocities are gradually slowing down under the influence of gravitation; this deceleration appears to be quite slow, suggesting that the matter density of the universe is low and its gravitational field is too weak either to make the universe spatially finite or eventually to reverse the expansion. Our calculations allow us to extrapolate the expansion of the universe backward in time, and reveal that the expansion must have begun between 10,000 and 20,000 million years ago.

3 The Cosmic Microwave Radiation Background

The story told in the last chapter is one with which the astronomers of the past would have felt at home. Even the setting is familiar: great telescopes exploring the night sky from mountain tops in California or Peru, or the naked-eye observer in his tower, to 'oft out-watch the Bear'. As I mentioned in the Preface, this is also a story that has been told many times before, often in greater detail than here. Now we come to a different kind of astronomy, to a story that could not have been told a decade ago. We will be dealing not with observations of light emitted in the last few hundred million years from galaxies more or less like our own, but with observations of a diffuse background of radio static left over from near the beginning of the universe. The setting also changes, to the roofs of university physics buildings, to balloons or rockets flying above the earth's atmosphere, and to the fields of northern New Jersey.

In 1964 the Bell Telephone Laboratories was in possession of an unusual radio antenna on Crawford Hill at Holmdel, New Jersey. The antenna had been built for communication via the Echo satellite, but its characteristics - a 20-foot horn reflector with ultralow noise - made it a promising instrument for radio astronomy. A pair of radio astronomers, Arno A. Penzias and Robert W. Wilson, set out to use the antenna to measure the intensity of the radio waves emitted from our galaxy at high galactic latitudes, i.e. out of the plane of the Milky Way. This kind of measurement is very difficult.
The radio waves from our galaxy, as from most astronomical sources, are best described as a sort of noise, much like the 'static' one hears on a radio set during a thunderstorm. This radio noise is not easily distinguished from the inevitable electrical noise that is produced by the random motions of electrons within the radio antenna structure and the amplifier circuits, or from the radio noise picked up by the antenna from the earth's atmosphere. The problem is not so serious when one is studying a relatively 'small' source of radio noise, like a star or a distant galaxy. In this case one can switch the antenna beam back and forth between the source and the neighbouring empty sky; any spurious noise coming from the antenna structure, amplifier circuits, or the earth's atmosphere will be about the same whether the antenna is pointed at the source or the nearby sky, so it would cancel out when the two are compared. However, Penzias and Wilson were intending to measure the radio noise coming from our own galaxy - in effect, from the sky itself. It was therefore crucially important to identify any electrical noise that might be produced within their receiving system.

Previous tests of this system had in fact revealed a little more noise than could be accounted for, but it seemed likely that this discrepancy was due to a slight excess of electrical noise in the amplifier circuits. In order to eliminate such problems, Penzias and Wilson made use of a device known as a 'cold load' - the power coming from the antenna was compared with the power produced by an artificial source cooled with liquid helium, about four degrees above absolute zero. The electrical noise in the amplifier circuits would be the same in both cases, and would therefore cancel out in the comparison, allowing a direct measurement of the power coming from the antenna. The antenna power measured in this way would consist only of contributions from the antenna structure, from the earth's atmosphere, and from any astronomical sources of radio waves.

Penzias and Wilson expected that very little electrical noise would be produced within the antenna structure. However, in order to check this assumption, they started their observations at a relatively short wavelength of 7.35 centimetres

[...]

...that radio engineers often describe the intensity of radio noise in terms of a so-called antenna temperature, which is slightly different from the 'equivalent temperature' described above. For the wavelengths and intensities observed by Penzias and Wilson, the two definitions are virtually identical.)

Penzias and Wilson found that the equivalent temperature of the radio noise they were receiving was about 3.5 degrees Centigrade above absolute zero (or more accurately, between 2.5 and 4.5 degrees above absolute zero). Temperatures measured on the Centigrade scale, but referred to absolute zero rather than the melting point of ice, are reported in 'degrees Kelvin'. Thus, the radio noise observed by Penzias and Wilson could be described as having an 'equivalent temperature' of 3.5 degrees Kelvin, or 3.5° K for short. This was much greater than expected, but still very low in absolute terms, so it is not surprising that Penzias and Wilson brooded over their result for a while before publishing it. It certainly was not immediately clear that this was the most important cosmological advance since the discovery of the red shifts.
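The definition of 'equivalent temperature' falls on pages missing from this extract; for the reader's convenience, the standard radio-astronomy convention (supplied here editorially) is the Rayleigh-Jeans relation between the intensity of noise at frequency ν and the temperature of an opaque enclosure that would produce it:

\[
I_\nu = \frac{2 k T \nu^2}{c^2},
\]

where k is Boltzmann's constant. This simple proportionality between intensity and temperature holds when the wavelength is much longer than the peak wavelength of the black-body spectrum - amply satisfied at 7.35 centimetres for a temperature of a few degrees Kelvin, where the peak lies near one millimetre.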
The meaning of the mysterious microwave noise soon began to be clarified through the operation of the 'invisible college' of astrophysicists. Penzias happened to telephone a fellow radio astronomer, Bernard Burke of MIT, about other matters. Burke had just heard from yet another colleague, Ken Turner of the Carnegie Institution, of a talk that Turner had in turn heard at Johns Hopkins, given by a young theorist from Princeton, P. J. E. Peebles. In this talk Peebles argued that there ought to be a background of radio noise left over from the early universe, with a present equivalent temperature of roughly 10° K. Burke already knew that Penzias was measuring radio noise temperatures with the Bell Laboratories horn antenna, so he took the occasion of the telephone conversation to ask how the measurements were going. Penzias said that the measurements were going fine, but that there was something about the results he didn't understand. Burke suggested to Penzias that the physicists at Princeton might have some interesting ideas on what it was that his antenna was receiving.

In his talk, and in a preprint written in March 1965, Peebles had considered the radiation that might have been present in the early universe. 'Radiation' is of course a general term, encompassing electromagnetic waves of all wavelengths - not only radio waves, but infra-red light, visible light, ultra-violet light, X rays, and the very short-wavelength radiation called gamma rays. (See table, page 164.) There are no sharp distinctions; with changing wavelength one kind of radiation blends gradually into another. Peebles noted that if there had not been an intense background of radiation present during the first few minutes of the universe, nuclear reactions would have proceeded so rapidly that a large fraction of the hydrogen present would have been 'cooked' into heavier elements, in contradiction with the fact that about three-quarters of the present universe is hydrogen. This rapid nuclear cooking could have been prevented only if the universe was filled with radiation having an enormous equivalent temperature at very short wavelengths, which could blast nuclei apart as fast as they could be formed.

We are going to see that this radiation would have survived the subsequent expansion of the universe, but that its equivalent temperature would continue to fall as the universe expanded, in inverse proportion to the size of the universe. (As we shall see, this is essentially an effect of the red shift discussed in Chapter 2.) It follows that the present universe should also be filled with radiation, but with an equivalent temperature vastly less than it was in the first few minutes. Peebles estimated that, in order for the radiation background to have kept the production of helium and heavier elements in the first few minutes within known bounds, it would have to have been so intense that its present temperature would be at least 10 degrees Kelvin.

The figure of 10° K was somewhat of an overestimate, and this calculation was soon supplanted by more elaborate and accurate calculations by Peebles and others, which will be discussed in Chapter 5. Peebles's preprint was in fact never published in its original form.
However, the conclusion was substantially correct: from the observed abundance of hydrogen we can infer that the universe must in the first few minutes have been filled with an enormous amount of radiation which could prevent the formation of too much of the heavier elements; the expansion of the universe since then would have lowered its equivalent temperature to a few degrees Kelvin, so that it would appear now as a background of radio noise, coming equally from all directions. This immediately appeared as the natural explanation of the discovery of Penzias and Wilson. Thus, in a sense the antenna at Holmdel is in a box - the box is the whole universe. However, the equivalent temperature recorded by the antenna is not the temperature of the present universe, but rather the temperature that the universe had long ago, reduced in proportion to the enormous expansion that the universe has undergone since then.

Peebles's work was only the latest in a long series of similar cosmological speculations. In fact, in the late 1940s a 'big bang' theory of nucleosynthesis had been developed by George Gamow and his collaborators, Ralph Alpher and Robert Herman, and was used in 1948 by Alpher and Herman to predict a radiation background with a present temperature of about 5° K. Similar calculations were carried out in 1964 by Ya. B. Zeldovich in Russia and independently by Fred Hoyle and R. J. Tayler in England. This earlier work was not at first known to the groups at Bell Laboratories and Princeton, and it did not have an effect on the actual discovery of the radiation background, so we may wait until Chapter 6 to go into it in detail. We will also take up in Chapter 6 the puzzling historical question of why none of this earlier theoretical work had led to a search for the cosmic microwave background.

Peebles's 1965 calculation had been instigated by the ideas of a senior experimental physicist at Princeton, Robert H. Dicke…

…the universe is expanding, so its contents must once have been much more compressed than now. The temperature of a fluid will generally rise when the fluid is compressed, so we can also infer that the matter of the universe was much hotter in the past. We believe in fact that there was a time, which as we shall see lasted perhaps for the first 700,000 years of the universe, when the contents of the universe were so hot and dense that they could not yet have clumped into stars and galaxies, and even the atoms were still broken up into their constituent nuclei and electrons.

Under these unpleasant conditions a photon could not travel immense distances without hindrance, as it can in our present universe. A photon would find in its path a huge number of free electrons which could efficiently scatter or absorb it. If the photon is scattered by an electron it will generally either lose a little energy to the electron or gain a little energy from it, depending on whether the photon initially has more or less energy than the electron. The 'mean free time' that the photon could travel before it was absorbed or suffered an appreciable change in energy would have been quite short, much shorter than the characteristic time of the expansion of the universe. The corresponding mean free times for the other particles, the electrons and atomic nuclei, would have been even shorter.
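An order-of-magnitude estimate shows just how short these mean free times were. The sketch below (Python) uses the Thomson cross-section for photon-electron scattering and assumes a present nucleon density of about one per cubic metre, in the range quoted later in this book, scaled up by the cube of 1000 for the era in question:

    # Rough estimate of a photon's mean free time near the end of the
    # opaque era (temperature ~3000 K, universe ~1000 times smaller than
    # now).  Assumptions: ~1 free electron per nucleon, and a present
    # nucleon density of ~1 per cubic metre.

    sigma_thomson = 6.65e-29       # Thomson cross-section, m^2
    c = 3.0e8                      # speed of light, m/s

    n_e_now = 1.0                  # assumed electron density today, per m^3
    n_e_then = n_e_now * 1000**3   # when the universe was 1000x smaller

    mean_free_time = 1.0 / (n_e_then * sigma_thomson * c)
    years = mean_free_time / 3.15e7
    print(f"mean free time ~ {mean_free_time:.1e} s (~{years:.0f} years)")
    # ~5e10 s, a few thousand years - far shorter than the several
    # hundred thousand years over which the universe was then expanding.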
Thus, although in a sense the universe was expanding very rapidly at first, to an individual photon or electron or nucleus the expansion was taking plenty of time, time enough for each particle to be scattered or absorbed or re-emitted many times as the universe expanded.

Any system of this sort, in which the individual particles have time for many interactions, is expected to come to a state of equilibrium. The numbers of particles with properties (position, energy, velocity, spin, and so on) in a certain range will settle down to a value such that an equal number of particles are knocked out of the range every second as are knocked into it. Thus, the properties of such a system will not be determined by any initial conditions, but rather by the requirement that the equilibrium be maintained. Of course, 'equilibrium' here does not mean that the particles are frozen - each one is continually being knocked about by its neighbours. Rather, the equilibrium is statistical - it is the way that the particles are distributed in position, energy, and so on, that does not change, or changes slowly.

Equilibrium of this statistical kind is usually known as 'thermal equilibrium', because a state of equilibrium of this kind is always characterized by a definite temperature which must be uniform throughout the system. Indeed, strictly speaking, it is only in a state of thermal equilibrium that temperature can be precisely defined. The powerful and profound branch of theoretical physics known as 'statistical mechanics' provides a mathematical machinery for computing the properties of any system in thermal equilibrium.

The approach to thermal equilibrium works a little like the way the price mechanism is supposed to work in classical economics. If demand exceeds supply, the price of goods will rise, cutting the effective demand and encouraging increased production. If supply exceeds demand, prices will drop, increasing effective demand and discouraging further production. In either case, supply and demand will approach equality. In the same way, if there are too many or too few particles with energies, velocities, and so on, in some particular range, then the rate at which they leave this range will be greater or less than the rate at which they enter, until equilibrium is established.

Of course, the price mechanism does not always work exactly the way it is supposed to in classical economics, but here too the analogy holds - most physical systems in the real world are quite far from thermal equilibrium. At the centres of stars there is nearly perfect thermal equilibrium, so we can estimate what conditions are like there with some confidence, but the surface of the earth is nowhere near equilibrium, and we cannot be sure whether or not it will rain tomorrow. The universe has never been in perfect thermal equilibrium, because after all it is expanding. However, during the early period when the rates of scattering or absorption of individual particles were much faster than the rate of cosmic expansion, the universe could be regarded as evolving 'slowly' from one state of nearly perfect thermal equilibrium to another.

It is crucial to the argument of this book that the universe has once passed through a state of thermal equilibrium.
According to the conclusions of statistical mechanics, the properties of any system in thermal equilibrium are entirely determined once we specify the temperature of the system and the densities of a few conserved quantities (about which more in the next chapter). Thus, the universe preserves only a very limited memory of its initial conditions. This is a pity, if what we want is to reconstruct the very beginning, but it also offers a compensation in that we can infer the course of events since the beginning without too many arbitrary assumptions.

We have seen that the microwave radiation discovered by Penzias and Wilson is believed to be left over from a time when the universe was in a state of thermal equilibrium. Therefore, in order to see what properties we expect for the observed microwave radiation background, we have to ask: what are the general properties of radiation in thermal equilibrium with matter?

As it happens, this is precisely the question which historically gave rise to the quantum theory and the interpretation of radiation in terms of photons. By the 1890s it had become known that the properties of radiation in a state of thermal equilibrium with matter depend only on the temperature. To be more specific, the amount of energy per unit volume in such radiation within any given range of wavelengths is given by a universal formula, involving only the wavelength and the temperature. The same formula gives the amount of radiation inside a box with opaque walls, so a radio astronomer can use this formula to interpret the intensity of the radio noise he observes in terms of an 'equivalent temperature'. Essentially the same formula also gives the amount of radiation emitted per second and per square centimetre at any wavelength…

…equal to that of the material contents of the universe. The importance of Planck's calculation went far beyond the problem of black-body radiation, because in it he introduced a new idea, that energies come in distinct chunks, or 'quanta'. Planck originally considered only the quantization of the energy of the matter in equilibrium with radiation, but Einstein suggested a few years later that radiation itself comes in quanta, later called photons. These developments eventually led in the 1920s to one of the great intellectual revolutions in the history of science, the replacement of classical mechanics by an entirely new language, that of quantum mechanics.

We are not going to be able to go far into quantum mechanics in this book. However, it will help us in understanding the behaviour of radiation in an expanding universe to take a look at how the picture of radiation in terms of photons leads to the general features of the Planck distribution.

The reason that the energy density of black-body radiation falls off for very large wavelengths is simple: it is hard to fit radiation into any volume whose dimensions are smaller than the wavelength. This much could be (and was) understood without the quantum theory, simply on the basis of the older wave theory of radiation. On the other hand, the decrease of the energy density of black-body radiation for very short wavelengths could not be understood in a nonquantum picture of radiation. It is a well-known consequence of statistical mechanics that at any given temperature it is difficult to produce any kind of particle or wave or other excitation whose energy is greater than a certain definite amount, proportional to the temperature.
However, if wavelets of radiation could have arbitrarily small energies, then there would be nothing to limit the total amount of black-body radiation of very short wavelengths. Not only was this in contradiction with experiment - it would have led to the catastrophic result of the total energy of black-body radiation being infinite! The only way out was to suppose that the energy comes in chunks or 'quanta', with the amount of energy in each chunk increasing with decreasing wavelength, so that at any given temperature there would be very little radiation at the short wavelengths for which the chunks are highly energetic. In the final formulation of this hypothesis due to Einstein, the energy of any photon is inversely proportional to the wavelength; at any given temperature, black-body radiation will contain very few photons that have too large an energy, and therefore very few that have too short a wavelength, thus explaining the fall-off of the Planck distribution at short wavelengths.

To be specific, the energy of a photon with a wavelength of one centimetre is 0.000124 electron volts, and proportionally more at shorter wavelengths. The electron volt is a convenient unit of energy, equal to the energy gained by one electron in moving across a voltage drop of one volt. For instance, an ordinary 1.5 volt flashlight battery expends 1.5 electron volts for every electron that it pushes through the filament of the light bulb. (In terms of the metric units of energy, one electron volt is 1.602 × 10⁻¹² ergs, or 1.602 × 10⁻¹⁹ joules.) According to Einstein's rule, the energy of a photon at the 7.35 centimetre microwave wavelength to which Penzias and Wilson were tuned was 0.000124 electron volts divided by 7.35, or 0.000017 electron volts. On the other hand, a typical photon in visible light would have a wavelength of about a twenty-thousandth of a centimetre (5 × 10⁻⁵ cm), so its energy would be 0.000124 electron volts times 20,000, or about 2.5 electron volts. In either case the energy of a photon is very small in macroscopic terms, which is why photons seem to blend together into continuous streams of radiation.

Incidentally, chemical reaction energies are generally of the order of an electron volt per atom or per electron. For instance, to rip the electron out of a hydrogen atom altogether takes 13.6 electron volts, but this is an exceptionally violent chemical event. The fact that photons in sunlight also have energies of the order of an electron volt or so is tremendously important to us; it is what allows these photons to produce chemical reactions essential to life, such as photosynthesis. Nuclear reaction energies are generally of the order of a million electron volts per atomic nucleus, which is why a pound of plutonium has roughly the explosive energy of a million pounds of TNT.

The photon picture allows us easily to understand the chief qualitative properties of black-body radiation. First, the principles of statistical mechanics tell us that the typical photon energy is proportional to the temperature, while Einstein's rule tells us that any photon's wavelength is inversely proportional to the photon energy. Hence, putting these two rules together, the typical wavelength of photons in black-body radiation is inversely proportional to the temperature.
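Einstein's rule reduces all of this arithmetic to a one-line function. A minimal sketch in Python, using the 0.000124 conversion constant quoted above:

    # Photon energy from wavelength, using Einstein's rule as quoted in
    # the text: E (electron volts) = 0.000124 / wavelength (centimetres).
    # This is just E = hc/lambda expressed in convenient units.

    def photon_energy_ev(wavelength_cm):
        return 0.000124 / wavelength_cm

    print(photon_energy_ev(1.0))      # 1 cm microwave: 0.000124 eV
    print(photon_energy_ev(7.35))     # Penzias and Wilson's wavelength: ~0.000017 eV
    print(photon_energy_ev(5.0e-5))   # typical visible light: ~2.5 eV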
To put it quantitatively, the typical wavelength near which most of the energy of black-body radiation is concentrated is 0.29 centimetres at a temperature of 1° K, and proportionally less at higher temperatures.

For instance, an opaque body at an ordinary 'room' temperature of 300° K (= 27° C) will emit black-body radiation with a typical wavelength of 0.29 centimetres divided by 300, or about a thousandth of a centimetre. This is in the range of infra-red radiation, and is too long a wavelength for our eyes to see. On the other hand, the surface of the sun is at a temperature of about 5800° K, and in consequence the light it emits is peaked at a wavelength of about 0.29 centimetres divided by 5800, that is, about five hundred-thousandths of a centimetre (5 × 10⁻⁵ cm) or, equivalently, about 5000 Angstrom units. (One Angstrom unit is one hundred-millionth or 10⁻⁸ of a centimetre.) As already mentioned, this is in the middle of the range of wavelengths that our eyes evolved to be able to see, and which we call 'visible' wavelengths. The fact that these wavelengths are so short explains why it was not until the beginning of the nineteenth century that light was discovered to have a wave nature; it is only when we examine the light that passes through really small holes that we can notice phenomena characteristic of wave propagation…

…formula as the universe expanded, even though it was no longer in thermal equilibrium with the matter. (See mathematical note 4, page 181.) The only effect of the expansion is to increase the typical photon wavelength in proportion to the size of the universe. The temperature of the black-body radiation is inversely proportional to the typical wavelength, so it would fall as the universe expanded, in inverse proportion to the size of the universe.

For instance, Penzias and Wilson found that the intensity of the microwave static they had discovered corresponded to a temperature of roughly 3° K. This is just what would be expected if the universe has expanded by a factor of 1000 since the time when the temperature was high enough (3000° K) to keep matter and radiation in thermal equilibrium. If this interpretation is correct, the 3° K radio static is by far the most ancient signal received by astronomers, having been emitted long before the light from the most distant galaxies that we can see.

But Penzias and Wilson had measured the intensity of the cosmic radio static at only one wavelength, 7.35 centimetres. Immediately it became a matter of extreme urgency to decide whether the distribution of radiant energy with wavelength is described by the Planck black-body formula, as would be expected if this really were red-shifted fossil radiation left over from some epoch when the radiation and matter of the universe were in thermal equilibrium. If so, then the 'equivalent temperature', calculated by matching the observed radio noise intensity to the Planck formula, should have the same value at all wavelengths as at the 7.35 centimetre wavelength studied by Penzias and Wilson.

As we have seen, at the time of the discovery by Penzias and Wilson there already was another effort under way in New Jersey to detect a cosmic microwave radiation background.
Soon after the original pair of papers by the Bell Laboratories and Princeton groups, Roll and Wilkinson announced their own result: the equivalent temperature of the radiation background at a wavelength of 3.2 centimetres was between 2.5 and 3.5 degrees Kelvin. That is, within experimental error, the intensity of the cosmic static at 3.2 centimetres wavelength was greater than at 7.35 centimetres by just the ratio that would be expected if the radiation is described by the Planck formula!

Since 1965 the intensity of the fossil microwave radiation has been measured by radio astronomers at over a dozen wavelengths ranging from 73.5 centimetres down to 0.33 centimetres. Every one of these measurements is consistent with a Planck distribution of energy versus wavelength, with a temperature between 2.7° K and 3° K.

However, before we jump to the conclusion that this really is black-body radiation, we should recall that the 'typical' wavelength, at which the Planck distribution reaches its maximum, is 0.29 centimetres divided by the temperature in degrees Kelvin, which for a temperature of 3° K works out to just under 0.1 centimetre. Thus all these microwave measurements have been on the long-wavelength side of the maximum in the Planck distribution. But we have seen that the increase in energy density with decreasing wavelength in this part of the spectrum is just due to the difficulty of putting large wavelengths in small volumes, and would be expected for a wide variety of radiation fields, including radiation that was not produced under conditions of thermal equilibrium. (Radio astronomers refer to this part of the spectrum as the Rayleigh-Jeans region, because it was first analysed by Lord Rayleigh and Sir James Jeans.) In order to verify that we really are seeing black-body radiation, it is necessary to go beyond the maximum of the Planck distribution into the short-wavelength region, and check that the energy density really does fall off with decreasing wavelength, as expected on the basis of the quantum theory. At wavelengths shorter than 0.1 centimetre we are really outside the realm of the radio or microwave astronomers, and into the newer discipline of infra-red astronomy.

Unfortunately the atmosphere of our planet, which is nearly transparent at wavelengths above 0.3 centimetres, becomes increasingly opaque at shorter wavelengths. It does not seem likely that any ground-based radio observatory, even one located at mountain altitude, will be able to measure the cosmic radiation background at wavelengths much shorter than 0.3 centimetres.

Oddly enough, the radiation background was measured at shorter wavelengths, long before any of the astronomical work discussed so far in this chapter, and by an optical rather than by a radio or infra-red astronomer! In the constellation Ophiuchus ('the serpent bearer') there is a cloud of interstellar gas which happens to lie between the earth and a hot but otherwise unremarkable star, ζ Oph. The spectrum of ζ Oph is crossed with a number of unusual dark bands, indicating that the intervening gas is absorbing light at a set of sharp wavelengths. These are the wavelengths at which photons have just the energies required to induce transitions in the molecules of the gas cloud, from states of lower to states of higher energy. (Molecules, like atoms, exist only in states of distinct, or 'quantized', energy.)
Thus, observing the wavelengths where the dark bands occur, it is possible to infer something about the nature of these molecules, and of the states in which they are found.

One of the absorption lines in the spectrum of ζ Oph is at a wavelength of 3875 Angstrom units (38.75 millionths of a centimetre), indicating the presence in the interstellar cloud of a molecule, cyanogen (CN), consisting of one carbon and one nitrogen atom. (Strictly speaking, CN should be called a 'radical', meaning that under normal conditions it combines rapidly with other atoms to form more stable molecules, such as the poison, hydrocyanic acid [HCN]. In interstellar space CN is quite stable.) In 1941 it was found by W. S. Adams and A. McKellar that this absorption line is actually split, consisting of three components with wavelengths 3874.608 Angstroms, 3875.763 Angstroms, and 3873.998 Angstroms. The first of these absorption wavelengths corresponds to a transition in which the cyanogen molecule is lifted from its state of lowest energy (the 'ground state') to a vibrating state…

…the distribution of the cosmic radiation background with direction as well as with wavelength. All observations so far are consistent with a radiation background that is perfectly isotropic, i.e. independent of direction. As mentioned in the preceding chapter, this is one of the most powerful arguments in favour of the Cosmological Principle. However, it is very difficult to distinguish a possible direction dependence that is intrinsic to the cosmic radiation background from one that is merely due to effects of the earth's atmosphere; indeed, in measurements of the radiation background temperature, the radiation background is distinguished from the radiation from our atmosphere by assuming that it is isotropic.

The thing that makes the direction dependence of the microwave radiation background such a fascinating subject for study is that the intensity of this radiation is not expected to be perfectly isotropic. There might be fluctuations in the intensity with small changes in direction, caused by the actual lumpiness of the universe either at the time the radiation was emitted or since then. For instance, galaxies in the first stages of formation might show up as warm spots in the sky, with slightly higher black-body temperature than average, extending perhaps over half a minute of arc. In addition, there almost certainly is a small smooth variation of the radiation intensity around the whole sky, caused by the earth's motion through the universe. The earth is going around the sun at a speed of 30 kilometres per second, and the solar system is being carried along by the rotation of our galaxy at a speed of about 250 kilometres per second. No one knows precisely what velocity our galaxy has relative to the cosmic distribution of typical galaxies, but presumably it moves at a few hundred kilometres per second in some direction. If, for example, we suppose that the earth is moving at a speed of 300 kilometres per second relative to the average matter of the universe, and hence relative to the radiation background, then the wavelength of the radiation coming from ahead or astern of the earth's motion should be decreased or increased, respectively, by the ratio of 300 kilometres per second to the speed of light, or 0.1 per cent.
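The size of this effect follows from the first-order Doppler shift alone: the fractional change in wavelength, and hence in equivalent temperature, is just v/c. A quick check (Python; the 300 kilometres per second is the illustrative velocity assumed above, not a measured value):

    # Fractional shift in the background temperature due to the earth's
    # motion: delta_T / T = v / c to first order in v/c.
    v = 300.0e3          # assumed velocity through the radiation, m/s
    c = 2.998e8          # speed of light, m/s

    fraction = v / c
    T = 3.0              # background temperature, kelvin
    print(f"delta T / T = {fraction:.1%}")                   # ~0.1 per cent
    print(f"hot-pole excess: {fraction * T * 1000:.1f} mK")  # ~3 millikelvin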
Thus, the equivalent radiation temperature should vary smoothly with direction, being about 0.1 per cent higher than average in the direction towards which the earth is going and about 0.1 per cent lower than average in the direction from which we have come. For the last few years the best upper limit on any direction dependence of the equivalent radiation temperature has been just about 0.1 per cent, so we have been in the tantalizing position of being almost but not quite able to measure the velocity of the earth through the universe. It may not be possible to settle this question until measurements can be made from satellites orbiting the earth. (As final corrections were being made in this book I received a Cosmic Background Explorer Satellite Newsletter No. 1 from John Mather of NASA. It announces the appointment of a team of six scientists, under Rainer Weiss of MIT, to study the possible measurement of the infra-red and microwave radiation backgrounds from space. Bon voyage.)

We have observed that the cosmic microwave radiation background provides powerful evidence that the radiation and matter of the universe were once in a state of thermal equilibrium. However, we have not yet drawn much cosmological insight from the particular observed numerical value of the equivalent radiation temperature, 3° K. In fact, this radiation temperature allows us to determine the one crucial number that we will need to follow the history of the first three minutes.

As we have seen, at any given temperature, the number of photons per unit volume is inversely proportional to the cube of a typical wavelength, and hence directly proportional to the cube of the temperature. For a temperature of precisely 1° K there would be 20,282.9 photons per litre, so the 3° K radiation background contains about 550,000 photons per litre. However, the density of nuclear particles (neutrons and protons) in the present universe is somewhere between 6 and 0.03 particles per thousand litres. (The upper limit is twice the critical density discussed in Chapter 2; the lower limit is a low estimate of the density actually observed in visible galaxies.) Thus, depending on the actual value of the particle density, there are between 100 million and 20,000 million photons for every nuclear particle in the universe today.

Furthermore, this enormous ratio of photons to nuclear particles has been roughly constant for a very long time. During the period that the radiation has been expanding freely (since the temperature dropped below about 3000° K) the background photons and the nuclear particles have been neither created nor destroyed, so their ratio has naturally remained constant. We will see in the next chapter that this ratio was roughly constant even earlier, when individual photons were being created and destroyed.

This is the most important quantitative conclusion to be drawn from measurements of the microwave radiation background - as far back as we can look in the early history of the universe there have been between 100 million and 20,000 million photons per neutron or proton. In order not to sound unnecessarily equivocal, I will round off this number in what follows, and suppose for purposes of illustration that there are now and have been just 1000 million photons per nuclear particle in the average contents of the universe.
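Because the photon density grows as the cube of the temperature, the numbers just quoted can be checked in a few lines. A sketch (Python; the 1° K photon density and the two nucleon-density bounds are the figures given in the text):

    # Photon-to-nucleon bookkeeping.  The photon number density of
    # black-body radiation scales as T**3; the 1 K value is from the text.

    photons_per_litre_at_1K = 20282.9
    T = 3.0
    photons_per_litre = photons_per_litre_at_1K * T**3
    print(f"photons per litre at {T} K: {photons_per_litre:,.0f}")  # ~548,000

    # Nucleon density bounds quoted in the text, per litre:
    n_high = 6e-3    # 6 per thousand litres (twice the critical density)
    n_low = 3e-5     # 0.03 per thousand litres (visible galaxies only)

    print(f"photons per nucleon: between {photons_per_litre/n_high:,.0f} "
          f"and {photons_per_litre/n_low:,.0f}")
    # ~9e7 to ~2e10: the '100 million to 20,000 million' of the text.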
One very important consequence of this conclusion is that the differentiation of matter into galaxies and stars could not have begun until the time when the cosmic temperature became low enough for electrons to be captured into atoms. In order for gravitation to produce the clumping of matter into isolated fragments that had been envisioned by Newton, it is necessary for gravitation to overcome the pressure of matter and the associated radiation. The gravitational force within any nascent clump increases with the size of the clump, while the pressure does not depend on the size; hence at any given density and pressure, there is a minimum mass which is susceptible to gravitational clumping. This is known as the 'Jeans mass', because it was first introduced in theories of the formation of stars by Sir James Jeans in 1902. It turns out that the Jeans mass is proportional to the three-halves power of the pressure…

4 Recipe for a Hot Universe

The observations discussed in the last two chapters have revealed that the universe is expanding, and that it is filled with a universal background of radiation, now at a temperature of about 3° K. This radiation appears to be left over from a time when the universe was effectively opaque, when it was about 1000 times smaller and hotter than at present. (As always, when we speak of the universe being 1000 times smaller than at present we mean simply that the distance between any given pair of typical particles was 1000 times less then than now.) As a final preparation for our account of the first three minutes we must look back to yet earlier times, when the universe was even smaller and hotter, using the eye of theory rather than optical or radio telescopes to examine the physical conditions that prevailed.

At the end of Chapter 3 we noted that when the universe was 1000 times smaller than at present, and its material contents were just on the verge of becoming transparent to radiation, the universe was also passing from a radiation-dominated era to the present matter-dominated era. During the radiation-dominated era there was not only the same enormous number of photons per nuclear particle that exists now, but the energy of the individual photons was sufficiently high so that most of the energy of the universe was in the form of radiation, not mass. (Recall that photons are the massless particles, or 'quanta', of which light, according to the quantum theory, is composed.) Hence, it should be a good approximation to treat the universe during this era as if it were filled purely with radiation, with essentially no matter at all.

One important qualification has to be attached to this conclusion. We will see in this chapter that the age of pure radiation actually began only at the end of the first few minutes, when the temperature had dropped below a few thousand million degrees Kelvin. At earlier times matter was important, but matter of a kind very different from that of which our present universe is composed. However, before we look that far back, let us first consider briefly the true era of radiation, from the end of the first few minutes up to the time, a few hundred thousand years later, when matter again became more important than radiation.

In order to follow the history of the universe during this era, all we need to know is how hot everything was at any given moment. Or to put it a different way - how is the temperature related to the size of the universe as the universe expands?
It would be easy to answer this question if the radiation could be considered to be expanding freely. The wavelength of each photon would have simply been stretched out (by the red shift) in proportion to the size of the universe, as the universe expanded. Furthermore, we have seen in the preceding chapter that the average wavelength of black-body radiation is inversely proportional to its temperature. Thus the temperature would have decreased in inverse proportion to the size of the universe, just as it is doing right now.

Fortunately for the theoretical cosmologist, the same simple relation holds even when we take into account the fact that the radiation was not really expanding freely - rapid collisions of photons with the relatively small number of electrons and nuclear particles made the contents of the universe opaque during the radiation-dominated era. While a photon was in free flight between collisions, its wavelength would have increased in proportion to the size of the universe, and there were so many photons per particle that the collisions simply forced the matter temperature to follow the radiation temperature, not vice versa. Thus, for instance, when the universe was ten thousand times smaller than now, the temperature would have been proportionally higher than now, or about 30,000° K. So much for the true era of radiation.

Eventually, as we look farther and farther back into the history of the universe, we come to a time when the temperature was so high that collisions of photons with each other could produce material particles out of pure energy. We are going to find that the particles produced in this way out of pure radiant energy were just as important during the first few minutes as radiation, both in determining the rates of various nuclear reactions and in determining the rate of expansion of the universe itself. Therefore, in order to follow the course of events at really early times, we are going to need to know how hot the universe had to be to produce large numbers of material particles out of the energy of radiation, and how many particles were thus produced.

The process by which matter is produced out of radiation can best be understood in terms of the quantum picture of light. Two quanta of radiation, or photons, may collide and disappear, all their energy and momentum going into the production of two or more material particles. (This process is actually observed indirectly in present-day high-energy nuclear physics laboratories.) However, Einstein's Special Theory of Relativity tells us that a material particle even at rest will have a certain 'rest energy' given by the famous formula E = mc². (Here c is the speed of light. This is the source of the energy released in nuclear reactions, in which a fraction of the mass of atomic nuclei is annihilated.) Hence, in order for two photons to produce two material particles of mass m in a head-on collision, the energy of each photon must be at least equal to the rest energy mc² of each particle. The reaction will still occur if the energy of the individual photons is greater than mc²; the extra energy will simply go into giving the material particles a high velocity. However, particles of mass m cannot be produced in collisions of two photons if the energy of the photons is below mc², because there is then not enough energy to produce even the mass of these particular particles.

…being their own antiparticles.
The relation between particle and antiparticle is reciprocal: the positron is the antiparticle of the electron, and the electron is the antiparticle of the positron. Given enough energy, it is always possible to create any kind of particle-antiparticle pair in collisions of pairs of photons. (The existence of antiparticles is a direct mathematical consequence of the principles of quantum mechanics and Einstein's Special Theory of Relativity. The existence of the antielectron was first deduced theoretically by Paul Adrien Maurice Dirac in 1930. Not wanting to introduce an unknown particle into his theory, he identified the antielectron with the only positively charged particle then known, the proton. The discovery of the positron in 1932 verified the theory of antiparticles, and also showed that the proton is not the antiparticle of the electron; it has its own antiparticle, the antiproton, discovered in the 1950s at Berkeley.)

The next lightest particle types after the electron and positron are the muon, or μ⁻, a kind of unstable heavy electron, and its antiparticle, the μ⁺. Just as for electrons and positrons, the μ⁻ and μ⁺ have opposite electrical charge but equal mass, and can be created in collisions of photons with each other. The μ⁻ and μ⁺ each have a rest energy mc² equal to 105.6596 million electron volts, and dividing by Boltzmann's constant, the corresponding threshold temperature is 1.2 million million degrees (1.2 × 10¹² ° K). Corresponding threshold temperatures for other particles are given in table 1 on page 163. By inspection of this table we can tell which particles could have been present in large numbers at various times in the history of the universe: they are just the particles whose threshold temperatures were below the temperature of the universe at that time.

How many of these material particles actually were present at temperatures above the threshold temperature? Under the conditions of high temperature and density that prevailed in the early universe, the number of particles was governed by the basic condition of thermal equilibrium: the number of particles must have been just high enough so that precisely as many were being destroyed each second as were being created. (That is, demand equals supply.) The rate at which any given particle-antiparticle pair will annihilate into two photons is about equal to the rate at which any given pair of photons of the same energy will turn into such a particle and antiparticle. Hence, the condition of thermal equilibrium requires that the number of particles of each type, whose threshold temperature is below the actual temperature, should be about equal to the number of photons. If there are fewer particles than photons, they will be created faster than they are destroyed, and their number will rise; if there are more particles than photons, they will be destroyed faster than they are created, and their number will drop. For instance, at temperatures above the threshold of 6000 million degrees the number of electrons and positrons must have been about the same as the number of photons, and the universe at these times can be considered to be composed predominantly of photons, electrons and positrons, not just photons alone.

However, at temperatures above the threshold temperature, a material particle behaves much like a photon.
Its average energy is roughly equal to the temperature times Boltzmann's constant, so that high above the threshold temperature its average energy is much larger than the energy in the particle's mass, and the mass can be neglected. Under such conditions the pressure and energy density contributed by material particles of a given type are simply proportional to the fourth power of the temperature, just as for photons. Thus, we can think of the universe at any given time as being composed of a variety of types of 'radiation', one type for each species of particle whose threshold temperature was below the cosmic temperature at that time. In particular, the energy density of the universe at any time is proportional to the fourth power of the temperature and to the number of species of particles whose threshold temperature is below the cosmic temperature at that time. Conditions of this sort, with temperatures so high that particle-antiparticle pairs are as common in thermal equilibrium as photons, do not exist anywhere in the present universe, except perhaps in the cores of exploding stars. However, we have enough confidence in our knowledge of statistical mechanics to feel safe in making theories about what must have happened under such exotic conditions in the early universe.

To be precise, we should keep in mind that an antiparticle like the positron (e⁺) counts as a distinct species. Also, particles like photons and electrons exist in two distinct states of spin, which should be counted as separate species. Finally, particles like the electron (but not the photon) obey a special rule, the 'Pauli exclusion principle', which prohibits two particles from occupying the same state; this rule effectively lowers their contribution to the total energy density by a factor of seven-eighths. (It is the exclusion principle that prevents all the electrons in an atom from falling into the same lowest-energy shell; it is therefore responsible for the complicated shell structure of atoms revealed in the periodic table of the elements.) The effective number of species for each type of particle is listed along with the threshold temperatures in table 1 on page 163. The energy density of the universe at a given temperature is proportional to the fourth power of the temperature and to the effective number of species of particles whose threshold temperatures lie below the temperature of the universe.

Now let's ask when the universe was at these elevated temperatures. It is the balance between the gravitational field and the outward momentum of the contents of the universe that governs the rate of expansion of the universe. And it is the total energy density of photons, electrons, positrons, etc., that provided the source of the gravitational field of the universe at early times. We have seen that the energy density of the universe depended essentially only on the temperature, so the cosmic temperature can be used as a sort of clock, cooling instead of ticking as the universe expands. To be more specific, it can be shown that the time required for the energy density of the universe to fall from one value to another…

…And why stop at elementary particles - do we also have to specify the numbers of different types of atoms, of molecules, of salt and pepper? In this case, we might well decide that the universe is too complicated and too arbitrary to be worth understanding. Fortunately, the universe is not that complicated.
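The two counting rules introduced above - a threshold temperature equal to mc² divided by Boltzmann's constant, and an energy density proportional to the fourth power of the temperature times the effective number of species - are easy to make concrete. A sketch (Python; the rest energies are standard values, and the pi meson entry uses the neutral pion mass):

    # Threshold temperatures T = m c^2 / k for a few particles, following
    # the rule in the text.  Rest energies in millions of electron volts.

    k_boltzmann = 8.617e-11    # Boltzmann's constant, MeV per kelvin

    rest_energy_mev = {
        "electron/positron":  0.511,
        "muon":               105.6596,
        "pi meson (neutral)": 135.0,
        "proton":             938.27,
    }

    for name, mc2 in rest_energy_mev.items():
        print(f"{name:20s} threshold ~ {mc2 / k_boltzmann:.2e} K")
    # electron: ~5.9e9 K  (the '6000 million degrees' of the text)
    # muon:     ~1.2e12 K; neutral pi meson: ~1.6e12 K; proton: ~1.1e13 K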
In order to see how it is possible to write a recipe for its contents, it is necessary to think a little more about what is meant by the condition of thermal equilibrium. I have already emphasized how important it is that the universe has passed through a state of thermal equilibrium - it is what allows us to speak with such confidence about the contents of the universe at any given time. Our discussion so far in this chapter has amounted to a series of applications of the known properties of matter and radiation in thermal equilibrium.

When collisions or other processes bring a physical system to a state of thermal equilibrium, there are always some quantities whose values do not change. One of these 'conserved quantities' is the total energy; even though collisions may transfer energy from one particle to another, they never change the total energy of the particles participating in the collision. For each such conservation law there is a quantity that must be specified before we can work out the properties of a system in thermal equilibrium - obviously, if some quantity does not change as a system approaches thermal equilibrium, its value cannot be deduced from the conditions for equilibrium, but must be specified in advance. The really remarkable thing about a system in thermal equilibrium is that all its properties are uniquely determined once we specify the values of the conserved quantities. The universe has passed through a state of thermal equilibrium, so to give a complete recipe for the contents of the universe at early times, all we need is to know what were the physical quantities which were conserved as the universe expanded, and what were the values of these quantities.

Usually, as a substitute for specifying the total energy content of a system in thermal equilibrium, we specify the temperature. For the kind of system we have mostly been considering up till now, consisting solely of radiation and equal numbers of particles and antiparticles, the temperature is all that need be given in order to work out the equilibrium properties of the system. But in general there are other conserved quantities in addition to the energy, and it is necessary to specify the densities of each one.

For instance, in a glass of water at room temperature, there are continual reactions in which a water molecule breaks up into a hydrogen ion (a bare proton, the nucleus of hydrogen with the electron stripped off) and a hydroxyl ion (an oxygen atom bound to a hydrogen atom, with an extra electron), or in which hydrogen and hydroxyl ions rejoin to form water molecules. Note that in each such reaction the disappearance of a water molecule is accompanied by the appearance of a hydrogen ion, and vice versa, while hydrogen ions and hydroxyl ions always appear or disappear together. Thus, the conserved quantities are the total number of water molecules plus the number of hydrogen ions, and the number of hydrogen ions minus the number of hydroxyl ions. (Of course, there are other conserved quantities, like the total number of water molecules plus hydroxyl ions, but these are just simple combinations of the two fundamental conserved quantities.)
The properties of our glass of water can be completely determined if we specify that the temperature is 300° K (room temperature on the Kelvin scale), that the density of water molecules plus hydrogen ions is 3.3 × 10²² molecules or ions per cubic centimetre (roughly corresponding to water at sea-level pressures), and that the density of hydrogen ions minus hydroxyl ions is zero (corresponding to zero net charge). For instance, it turns out that under these conditions there is one hydrogen ion for about every ten million (10⁷) water molecules - this is what is meant by the statement that the pH of water is 7. Note that we do not have to specify this in our recipe for a glass of water; we deduce the proportion of hydrogen ions from the rules for thermal equilibrium. On the other hand, we cannot deduce the densities of the conserved quantities from the conditions for thermal equilibrium - for instance, we can make the density of water molecules plus hydrogen ions a little greater or less than 3.3 × 10²² molecules per cubic centimetre by raising or lowering the pressure - so we need to specify them in order to know what is in our glass.

This example also helps us to understand the shifting meaning of what we call 'conserved' quantities. For instance, if our water is at a temperature of millions of degrees, as inside a star, then it is very easy for molecules or ions to dissociate, and for the constituent atoms to lose their electrons. The conserved quantities are then the numbers of electrons and of oxygen and hydrogen nuclei. The density of water molecules plus hydroxyl ions under these conditions has to be calculated from the rules of statistical mechanics rather than specified in advance; of course, it turns out to be quite small. (Snowballs are rare in hell.) Actually, nuclear reactions do occur under these conditions, so even the numbers of nuclei of each species are not absolutely fixed, but these numbers change so slowly that a star can be regarded as evolving gradually from one equilibrium state to another.

Ultimately, at the temperatures of several thousand million degrees that we encounter in the early universe, even atomic nuclei dissociate readily into their constituents, protons and neutrons. Reactions occur so rapidly that matter and antimatter can easily be created out of pure energy, or annihilated back again. Under these conditions the conserved quantities are not the numbers of particles of any specific kind. Instead, the relevant conservation laws are reduced to just that small number which (as far as we know) are respected under all possible conditions. There are believed to be just three conserved quantities whose densities must be specified in our recipe for the early universe:

1. Electric Charge. We can create or destroy pairs of particles with equal and opposite electric charge, but the net electric charge can never change…

…inverse size of the universe.) Therefore, the charge, baryon number and lepton number per photon remain fixed, and our recipe can be given once and for all by specifying the values of the conserved quantities as a ratio to the number of photons.

(Strictly speaking, the quantity which varies as the inverse cube of the size of the universe is not the number of photons per unit volume but the entropy per unit volume. Entropy is a fundamental quantity of statistical mechanics, related to the degree of disorder of a physical system.
Aside from a conventional numerical factor, the entropy is given to a good enough approximation by the total number of all particles in thermal equilibrium, material particles as well as photons, with different species of particles given the weights shown in table 1 on page 163. The constants that we really should use to characterize our universe are the ratios of charge to entropy, baryon number to entropy, and lepton number to entropy. However, even at very high temperatures the number of material particles is at most of the same order of magnitude as the number of photons, so we will not be making a serious error if we use the number of photons instead of the entropy as our standard of comparison.)

It is easy to estimate the cosmic charge per photon. As far as we know, the average density of electric charge is zero throughout the universe. If the earth and the sun had an excess of positive over negative charges (or vice versa) of only one part in a million million million (10¹⁸), the electrical repulsion between them would be greater than their gravitational attraction. If the universe is finite and closed, we can even promote this observation to the status of a theorem: the net charge of the universe must be zero, for otherwise the lines of electrical force would wind round and round the universe, building up to an infinite electric field. But whether the universe is open or closed, it is safe to say that the cosmic electric charge per photon is negligible.

The baryon number per photon is also easy to estimate. The only stable baryons are the nuclear particles, the proton and neutron, and their antiparticles, the antiproton and antineutron. (The free neutron is actually unstable, with an average life of 15.3 minutes, but nuclear forces make the neutron absolutely stable in the atomic nuclei of ordinary matter.) Also, as far as we know, there is no appreciable amount of antimatter in the universe. (More about this later.) Hence, the baryon number of any part of the present universe is essentially equal to the number of nuclear particles. We observed in the preceding chapter that there is now one nuclear particle for every 1000 million photons in the microwave radiation background (the exact figure is uncertain), so the baryon number per photon is about one thousand-millionth (10⁻⁹).

This is really a remarkable conclusion. To see its implications, consider a time in the past when the temperature was above ten million million degrees (10¹³ ° K), the threshold temperature for neutrons and protons. At that time the universe would have contained plenty of nuclear particles and antiparticles, about as many as photons. But the baryon number is the difference between the numbers of nuclear particles and antiparticles. If this difference were 1000 million times smaller than the number of photons, and hence also about 1000 million times smaller than the total number of nuclear particles, then the number of nuclear particles would have exceeded the number of antiparticles by only one part in 1000 million. In this view, when the universe cooled below the threshold temperature for nuclear particles, the antiparticles all annihilated with corresponding particles, leaving the tiny excess of particles over antiparticles as a residue which would eventually turn into the world we know.
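Both of these estimates can be checked numerically. The sketch below (Python; the constants are standard, and the annihilation bookkeeping is deliberately schematic) first finds the fractional charge imbalance at which electrical repulsion between two bodies would match their gravitational attraction, and then replays the arithmetic of the surviving one-part-in-1000-million excess:

    # 1. How large a fractional charge imbalance f would let electrical
    #    repulsion rival gravity?  The force ratio turns out to be
    #    independent of the two masses and of the distance between them:
    #        F_e / F_g = f**2 * k_e * e**2 / (G * m_nucleon**2)
    G = 6.674e-11          # gravitational constant, SI units
    k_e = 8.988e9          # Coulomb constant, SI units
    e = 1.602e-19          # proton charge, coulombs
    m_nucleon = 1.67e-27   # nucleon mass, kg

    base_ratio = k_e * e**2 / (G * m_nucleon**2)       # ~1.2e36
    f_critical = base_ratio ** -0.5
    print(f"critical imbalance f ~ {f_critical:.1e}")  # ~1e-18

    # 2. The annihilation residue: for every ~10**9 particle-antiparticle
    #    pairs, an excess of one particle survives the annihilation.
    pairs = 1_000_000_000
    particles, antiparticles = pairs + 1, pairs
    print(f"surviving nuclear particles per ~10^9 photons: "
          f"{particles - antiparticles}")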
The occurrence in cosmology of a pure number as small as one part per 1000 million has led some theorists to suppose that the number really is zero - that is, that the universe really contains an equal amount of matter and antimatter. Then the fact that the baryon number per photon appears to be one part in 1000 million would have to be explained by supposing that, at some time before the cosmic temperature dropped below the threshold temperature for nuclear particles, there was a segregation of the universe into different domains, some with a slight excess (a few parts per 1000 million) of matter over antimatter, and others with a slight excess of antimatter over matter. After the temperature dropped and as many particle-antiparticle pairs as possible annihilated, we would be left with a universe consisting of domains of pure matter and domains of pure antimatter. The trouble with this idea is that no one has seen signs of appreciable amounts of antimatter anywhere in the universe. The cosmic rays that enter our earth's upper atmosphere are believed to come in part from great distances in our galaxy, and perhaps in part from outside our galaxy as well. The cosmic rays are overwhelmingly matter rather than antimatter - in fact, no one has yet observed an antiproton or an antinucleus in the cosmic rays. In addition, we do not observe the photons that would be produced from annihilation of matter and antimatter on a cosmic scale.

Another possibility is that the density of photons (or, more properly, of entropy) has not remained proportional to the inverse cube of the size of the universe. This could happen if there were some sort of departure from thermal equilibrium, some sort of friction or viscosity which could have heated the universe and produced extra photons. In this case, the baryon number per photon might have started at some reasonable value, perhaps around one, and then dropped to its present low value as more photons were produced. The trouble is that no one has been able to suggest any detailed mechanism for producing these extra photons. I tried to find one some years ago, with utter lack of success.

In what follows I will ignore all these 'nonstandard' possibilities, and will simply assume that the baryon number per photon is what it seems to be: about one part in 1000 million.

What about the lepton number density of the universe? The fact that the universe has no electric charge tells us immediately that there is now precisely one negatively charged particle for every positively charged particle…

…antileptons, is and was much smaller than the number of photons. There may have been some small excess of leptons over antileptons, like the small excess of baryons over antibaryons mentioned earlier, which has survived to the present time. In addition, the neutrinos and antineutrinos interact so weakly that large numbers of them may have escaped annihilation, in which case there would now be nearly equal numbers of neutrinos and antineutrinos, comparable to the number of photons. We will see in the next chapter that this is indeed believed to be the case, but there does not seem to be the slightest chance in the foreseeable future of observing the vast number of neutrinos and antineutrinos around us.

This then in brief is our recipe for the contents of the early universe. Take a charge per photon equal to zero, a baryon number per photon equal to one part in 1000 million, and a lepton number per photon uncertain but small.
Take the temperature at any given time to be greater than the temperature 3° K of the present radiation background by the ratio of the present size of the universe to the size at that time. Stir well, so that the detailed distributions of particles of various types are determined by the requirements of thermal equilibrium. Place in an expanding universe, with a rate of expansion governed by the gravitational field produced by this medium. After a long enough wait, this concoction should turn into our present universe.

5 The First Three Minutes

We are now prepared to follow the course of cosmic evolution through its first three minutes. Events move much more swiftly at first than later, so it would not be useful to show pictures spaced at equal time intervals, like an ordinary movie. Instead, I will adjust the speed of our film to the falling temperature of the universe, stopping the camera to take a picture each time that the temperature drops by a factor of about three.

Unfortunately, I cannot start the film at zero time and infinite temperature. Above a threshold temperature of fifteen hundred thousand million degrees Kelvin (1.5 × 10¹² ° K), the universe would contain large numbers of the particles known as pi mesons, which weigh about one-seventh as much as a nuclear particle. (See table 1 on page 163.) Unlike the electrons, positrons, muons and neutrinos, the pi mesons interact very strongly with each other and with nuclear particles - in fact, the continual exchange of pi mesons among nuclear particles is responsible for most of the attractive force which holds atomic nuclei together. The presence of large numbers of such strongly interacting particles makes it extraordinarily difficult to calculate the behaviour of matter at super-high temperatures, so to avoid such difficult mathematical problems I will start the story in this chapter at about one-hundredth of a second after the beginning, when the temperature had cooled to a mere hundred thousand million degrees Kelvin, safely below the threshold temperatures for pi mesons, muons, and all heavier particles. In Chapter 7 I will say a little about what theoretical physicists think may have been going on closer to the very beginning.

With these understandings, let us now start our film.

FIRST FRAME. The temperature of the universe is 100,000 million degrees Kelvin (10¹¹ ° K). The universe is simpler and easier to describe than it ever will be again. It is filled with an undifferentiated soup of matter and radiation, each particle of which collides very rapidly with the other particles. Thus, despite its rapid expansion, the universe is in a state of nearly perfect thermal equilibrium. The contents of the universe are therefore dictated by the rules of statistical mechanics, and do not depend at all on what went before the first frame. All we need to know is that the temperature is 10¹¹ ° K, and that the conserved quantities - charge, baryon number, lepton number - are all very small or zero.

The abundant particles are those whose threshold temperatures are below 10¹¹ ° K; these are the electron and its antiparticle, the positron, and of course the massless particles, the photon, neutrinos and antineutrinos. (Again, see table 1 on page 163.) The universe is so dense that even the neutrinos, which can travel for years through lead bricks without being scattered, are kept in thermal equilibrium with the electrons, positrons and photons by rapid collisions with them and with each other.
(Again, I will sometimes simply refer to 'neutrinos' when I mean neutrinos and antineutrinos.)

Another great simplification - the temperature of 10¹¹ ° K is far above the threshold temperature for electrons and positrons. It follows that these particles, as well as the photons and neutrinos, are behaving just like so many different kinds of radiation. What is the energy density of these various kinds of radiation? According to table 1 on page 163, the electrons and positrons together contribute 7/4 as much energy as the photons, and the neutrinos and antineutrinos contribute the same as the electrons and positrons, so the total energy density is greater than the energy density for pure electromagnetic radiation at this temperature, by a factor of 9/2.
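That factor, and the enormous energy density it implies for the first frame, can be assembled in a few lines. A closing sketch (Python; the radiation constant is a standard value, and the comparison with the density of water is an illustrative conversion, not a figure quoted above):

    # Energy density at the first frame, T = 1e11 K.  Photons contribute
    # the black-body energy density a*T**4; the electron-positron pairs
    # and the neutrinos each add 7/4 of that, per the species weights.

    a_rad = 7.566e-16      # radiation constant, J m^-3 K^-4
    c = 2.998e8            # speed of light, m/s
    T = 1.0e11             # temperature, kelvin

    factor = 1 + 7/4 + 7/4                    # = 9/2
    u = factor * a_rad * T**4                 # joules per cubic metre
    mass_density = u / c**2                   # kilograms per cubic metre

    print(f"total/photon energy density factor: {factor}")           # 4.5
    print(f"mass equivalent: {mass_density:.1e} kg per cubic metre")
    print(f"roughly {mass_density/1000:.1e} times the density of water")
    # ~3.8e12 kg per cubic metre, i.e. several thousand million times
    # denser than water.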