How old do you think the universe is?

dr_doughnut
72 posts
Nomad

I don't personally believe in billions and millions of years, but I want to know what people think.

Kasic
5,557 posts
Jester

And this goes on forever. It's Simple relatively.


That's not what we call relativity.

The universe has existed since shortly after the beginning of Time.


As time relies on there being space, "Time" as we know it would come into existence at the same instant.

God created one Universe.


Proof, please. The Bible is not proof, as that is where you are getting your claim from.
wontgetmycatnip
95 posts
Peasant

Also to answer the "Who created God?" question, the Universe probably created God so God Created the universe so the universe could create God so God could create the universe... And this goes on forever. It's Simple relatively.

http://3.bp.blogspot.com/-b5PgijqjWhA/Tox2fGjWpsI/AAAAAAAABo0/_TzQHdfELgA/s1600/circular-reasoning1.jpg
God Created the universe

Which god? Whose version? How did god create the universe? What did god create the universe from? How do you define god? How do you define universe? How would you practically demonstrate that claim?
MageGrayWolf
9,470 posts
Farmer

Also to answer the "Who created God?" question, the Universe probably created God so God Created the universe so the universe could create God so God could create the universe... And this goes on forever. It's Simple relatively.


That is what's called a paradox, not relativity.

God created one Universe. Then As it expanded (like a bubble) a little bit separated into another Bubble. And the bubbles multiplied and multiplied and Multiplied and one of those was our Universe, This entire Theory is the Multiverse Theory.


This is what we call mutilating a hypothesis or the beginnings of possibly good fiction.
killersup10
2,739 posts
Blacksmith

Probably around thirty years old. Yeah, that seems about right.


God Created the universe


Proof?
Kasic
5,557 posts
Jester

I said RelativeLY not Relativity.


The capital S on Simple threw me off, along with the lack of a comma. Don't blame us for misunderstanding when you don't use correct grammar. Capital letters denote proper nouns or terms, and without a comma we assumed you meant the term Relativity.

I based my Theory on what I think probably happened.


You don't have a Theory if you just made that up by assimilating random parts of various things; you have a jumbled piece of fiction.

It's called reasoning.


Faulty reasoning. You're pulling in random, unrelated things that you think of and smashing them together into a nonsensical mess.

God, he has no name,


God -is- God's name, when referring to the Christian god. Yahweh is the name for the Hebrew god and Allah is the name for the Islamic god. A lower case god is the noun for any sort of deity in the English language.

How the he** should I know? I wasn't there. All I know is he created it.


Do you really not see the logical failings in this statement?

"How the hell should I know, I wasn't there."
"All I know is he created it"

You're asserting the opposite, that you know, directly after saying that you have no way of knowing.

Don't ask me for Proof when you don't have any to back up your Theory/belief.


We do, though.
MageGrayWolf
9,470 posts
Farmer

That's another way to say it. Look above. RelativeLY Learn to read.


I accept I misread the word, though it is still used inaccurately.

No, it's called slightly changing the Multiverse Theory to fit your religion.


The multiverse is not a theory, it's a hypothesis. There are, however, theories suggesting such an existence. Unless you're willing to support what you tacked on with evidence, you're just mutilating it or writing fiction.

Don't ask me for Proof when you don't have any to back up your Theory/belief.


Theories by their nature are supported by evidence; if they were not, they wouldn't be theories. Which one would you like evidence for? As for belief, I haven't posited one as you have.
Since you are making a claim, support that claim or drop it. Trying to sidestep this requirement isn't going to fly.
MageGrayWolf
9,470 posts
Farmer

About the so called "Random Big Bang Explosion" as I like to call it.


It may not have been random, but the only natural course things could have taken.
As for your evidence, here you go.
What is the evidence for the Big Bang?
Evidence for the Big Bang

See how easy that is? If you like, I can even copy and paste it here so you don't have to trouble yourself with links.

The Multiverse or God Creating the Universe?


I will go with the God claim since that is the one that seems completely unbacked.

I should have said "It's relatively simple"


As simple as a time traveler shooting himself so he can't travel back to shoot himself.
Kasic
5,557 posts
Jester

Trust me there are people on this site with MUCH worse grammar than me.


I didn't say there weren't. All I said was you can't really blame us for a misunderstanding caused mostly by those small grammatical errors.

I stated (stated not started) the Multiverse Theory.


After you attached God to it.

" If you want to ask this let me first ask you What was the Big Bang created from if there was nothing?"


There's no solid evidence on that part yet. However, we're not making a claim at that point. From the way the universe is shaped and from the redshift effect, we infer that everything was once concentrated in a single point. That's what the Big Bang theory states, in a very tiny, summarized nutshell. It doesn't go on to say why it was like that.

You're the one making the claim that it was God who started it all. Since you assert that, you need evidence.

As for us, we simply don't know yet, and there are various hypotheses that aren't concrete.

Also please tell me what you meant by your last question on your first reply to me.


Uh...quote it. I don't know what you're referring to.

(btw unbacked isn't a word)


Oh well. Words that aren't words become words if they're used as words enough.

When people pray miracles happen. When they don't they don't explain that


You've got several large holes here.

1) What is a "miracle?" If you mean an act that isn't physically possible, then you have no evidence for those. If you mean a fortunate turn of events, then if you poll a large enough population, the nearly impossible becomes inevitable (see the quick sketch at the end of this post).

2) You have no evidence that prayer works. All studies done to determine whether prayer has an effect have ended with no effect, with adverse consequences from stress, or with minor improvements that fall well within the placebo effect.

3) This would presume that you would need to be praying to the right god in order for prayer to bring about miracles. We have conflicting claims of prayers being answered/gods taking action in every religion that has a deity.
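
To put a rough number on point 1, here's a quick Python sketch (the odds and the population size are made-up figures, purely for illustration):

[code]
# Sketch: how a "nearly impossible" event becomes inevitable at scale.
# Assumed numbers for illustration only: a one-in-a-million event and
# ten million independent chances for it to happen.
p_single = 1e-6          # chance of the event for any one person
n_people = 10_000_000    # assumed number of people "polled"

# Probability that the event happens to at least one person:
p_at_least_one = 1 - (1 - p_single) ** n_people
print(f"{p_at_least_one:.5f}")   # ~0.99995 -- effectively certain
[/code]

At that scale, "one in a million" stops being remarkable.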
MageGrayWolf
9,470 posts
Farmer

God I hate the layouts of those sites just post it on here okay.


From the first link.
" The evidence for the Big Bang comes from many pieces of observational data that are consistent with the Big Bang. None of these prove the Big Bang, since scientific theories are not proven. Many of these facts are consistent with the Big Bang and some other cosmological models, but taken together these observations show that the Big Bang is the best current model for the Universe. These observations include:

The darkness of the night sky - Olbers' paradox.
The Hubble Law - the linear distance vs redshift law. The data are now very good.
Homogeneity - fair data showing that our location in the Universe is not special.
Isotropy - very strong data showing that the sky looks the same in all directions to 1 part in 100,000.
Time dilation in supernova light curves.

The observations listed above are consistent with the Big Bang or with the Steady State model, but many observations support the Big Bang over the Steady State:

Radio source and quasar counts vs. flux. These show that the Universe has evolved.
Existence of the blackbody CMB. This shows that the Universe has evolved from a dense, isothermal state.
Variation of T_CMB with redshift. This is a direct observation of the evolution of the Universe.
Deuterium, 3He, 4He, and 7Li abundances. These light isotopes are all well fit by predicted reactions occurring in the First Three Minutes.

Finally, the angular power spectrum of the CMB anisotropy that does exist at the several parts per million level is consistent with a dark matter dominated Big Bang model that went through the inflationary scenario.
"

From the second link:
"2) Evidence

Having established the basic ideas and language of BBT, we can now look at how the data compares to what we expect from the theory. As we mentioned at the end of the last section, there is no single experiment that is sensitive to all aspects of BBT. Rather, any given observation provides insight into some combination of parameters and aspects of the theory and we need to combine the results of several different lines of inquiry to get the clearest possible global picture. This sort of approach will be most apparent in the last two sections where we discuss the evidence for the two most exotic aspects of current BBT: dark matter and dark energy.
a) Large-scale homogeneity

Going back to our original discussion of BBT, one of the key assumptions made in deriving BBT from GR was that the universe is, at some scale, homogeneous. At small scales where we encounter planets, stars and galaxies, this assumption is obviously not true. As such, we would not expect that the equations governing BBT would be a very good description of how these systems behave. However, as one increases the scale of interest to truly huge scales -- hundreds of millions of light-years -- this becomes a better and better approximation of reality.

As an example, consider the plot below showing galaxies from the Las Campanas Redshift Survey (provided by Ned Wright). Each dot represents a galaxy (about 20,000 in the total survey) where they have measured both the position on the sky and the redshift and translated that into a location in the universe. Imagine putting down many circles of a fixed size on that plot and counting how many galaxies are inside each circle. If you used a small aperture (where "small" is anything less than tens of millions of light years), then the number of galaxies in any given circle is going to fluctuate a lot relative to the mean number of galaxies in all the circles: some circles will be completely empty, while others could have more than a dozen. On the other hand, if you use large circles (and stay within the boundaries!), the variation from circle to circle ends up being quite small compared to the average number of galaxies in each circle. This is what cosmologists mean when they say that the universe is homogeneous. An even stronger case for homogeneity can be made with the CMBR, which we will discuss below.

http://www.talkorigins.org/faqs/astronomy/lcrs.gif
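
[A rough Python sketch of the counts-in-cells idea described above -- my illustration, not part of the FAQ. Synthetic uniform random positions stand in for real survey data, and the aperture sizes are arbitrary:]

[code]
# Sketch of the counts-in-cells test for homogeneity: the relative scatter
# in galaxy counts shrinks as the aperture grows.
import random

random.seed(1)
galaxies = [(random.random(), random.random()) for _ in range(20000)]

def counts_in_cells(cell_size):
    """Count galaxies per square cell; return mean count and variance."""
    n = round(1 / cell_size)
    counts = [[0] * n for _ in range(n)]
    for x, y in galaxies:
        counts[min(int(x / cell_size), n - 1)][min(int(y / cell_size), n - 1)] += 1
    flat = [c for row in counts for c in row]
    mean = sum(flat) / len(flat)
    var = sum((c - mean) ** 2 for c in flat) / len(flat)
    return mean, var

for size in (0.02, 0.1, 0.25):   # small apertures -> large relative scatter
    mean, var = counts_in_cells(size)
    print(f"cell={size}: mean={mean:.1f}, relative scatter={var ** 0.5 / mean:.3f}")
[/code]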

b) Hubble Diagram

The basic idea of an expanding universe is the notion that the distance between any two points increases over time. One of the consequences of this effect is that, as light travels through this expanding space, its wavelength is stretched as well. In the optical part of the electromagnetic spectrum, red light has a longer wavelength than blue light, so cosmologists refer to this process as redshifting. The longer light travels through expanding space, the more redshifting it experiences. Therefore, since light travels at a fixed speed, BBT tells us that the redshift we observe for light from a distant object should be related to the distance to that object. This rather elegant conclusion is made a bit more complicated by the question of what exactly one means by "distance" in an expanding universe (see Ned Wright's Many Distances section in his cosmology tutorial for a rundown of what "distance" can mean in BBT), but the basic idea remains the same.

Cosmological redshift is often misleadingly conflated with the phenomenon known as the Doppler Effect. This is the change in wavelength (either for sound or light) that one observes due to relative motion between the observer and the sound/light source. The most common example cited for this effect is the change in pitch as a train approaches and then passes the observer; as the train draws near, the pitch increases, followed by a rapid decrease as the train gets farther away. Since the expansion of the universe seems like some sort of relative motion and we know from the discussion above that we should see redshifted photons, it is tempting to cast the cosmological redshift as just another manifestation of the Doppler Effect. Indeed, when Edwin Hubble first made his measurements of the expansion of the universe, his initial interpretation was in terms of a real, physical motion for the galaxies; hence, the units on Hubble's Constant: kilometers per second per megaparsec.

In reality, however, the "motion" of distant galaxies is not genuine movement like stars orbiting the center of our galaxy, Earth orbiting the Sun or even someone walking across the room. Rather, space is expanding and taking the galaxies along for the ride. This can be seen from the formula for calculating the redshift of a given source. Redshift (z) is related to the ratio of the observed wavelength (W_O) and the emitted wavelength of light (W_E) as follows: 1 + z = W_O/W_E. The wavelength of light is expanded at the same rate as the universe, so we also know that: 1 + z = a_O/a_E, where a_O is the current value of the scale factor (usually set to 1) and a_E is the value of the scale factor when the light was emitted. As one can see, velocity is nowhere to be found in these equations, verifying our earlier claim. More detail on this point can be found at The Cosmological Redshift Reconsidered. If one insists (and is very careful about what exactly one means by "distance" and "velocity"), understanding the cosmological redshift as a Doppler shift is possible, but (for reasons that we will cover next) this is not the usual interpretation.
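
[To make the quoted relations concrete, a small Python sketch -- my addition, not the FAQ's. The wavelengths are assumed example values; 121.6 nm is the Lyman-alpha line:]

[code]
# Sketch of the redshift relations quoted above:
#   1 + z = W_O / W_E   and   1 + z = a_O / a_E
W_E = 121.6   # emitted wavelength in nm (Lyman-alpha)
W_O = 364.8   # observed wavelength in nm (assumed measurement)

z = W_O / W_E - 1
print(f"z = {z:.2f}")                     # z = 2.00

# Equivalently, in terms of the scale factor (a_O = 1 today):
a_E = 1 / (1 + z)
print(f"scale factor at emission = {a_E:.3f}")   # universe was ~1/3 its present size
[/code]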

As we mentioned previously, even after Einstein developed GR, the consensus belief in astronomy was that the universe was static and had existed forever. In 1929, however, Edwin Hubble made a series of measurements at Mount Wilson Observatory near Pasadena, California. Using Cepheid variable stars in a number of galaxies, Hubble found that the redshift (which he interpreted as a velocity, as mentioned above) was roughly proportional to the distance. This relationship became known as Hubble's Law and sparked a series of theoretical papers that eventually developed into modern BBT.

At first glance, assembling a Hubble diagram and determining the value of Hubble's Constant seems quite easy. In practice, however, this is not the case. Measuring the distance to galaxies (and other astronomical objects) is never simple. As mentioned above, the only data that we have from the universe is light; imagine the difficulty of accurately estimating the distance to a person walking down the street without knowing how tall they are or being able to move your head. However, using a combination of geometry, physics, and statistics, astronomers have managed to come up with a series of interlocking methods, known as the distance ladder, which are reasonably reliable. The TO FAQ on determining astronomical distances provides a thorough run-down of these methods, their applicability and their limitations.

Conversely, the other side of the equation, the redshift, is relatively easy to measure given today's astronomical hardware. Unfortunately, when one measures the redshift of a galaxy, that value contains more than just the cosmological redshift. Like stars and planets, galaxies have real motions in response to their local gravitational environment: other galaxies, galaxy clusters and so on. This motion is called peculiar velocity in cosmological parlance and it generates an associated redshift (or blueshift!) via the Doppler Effect. For relatively nearby galaxies, the amplitude of this effect can easily dwarf the cosmological redshift. The most striking example of this is the Andromeda galaxy, within our own Local Group. Despite being around 2 million light years away, it is on a collision course with the Milky Way and the light from Andromeda is consequently shifted towards the blue end of the spectrum, rather than the red. The upshot of this complication is that, if we want to measure the Hubble parameter, we need to look at galaxies that are far enough away that the cosmological redshift is larger than the effects of peculiar velocities. This sets a lower limit of roughly 30 million light years and even once we get beyond this mark, we need to have a large number of objects to make sure that the effects of peculiar velocities will cancel each other.

The combination of these two complications explains (in part) why it has taken several decades for the best measurements of Hubble's Constant to converge on a consensus value. With current data sets, the nearly linear nature of the Hubble relationship is quite clear, as shown in the figure below (based on data from Riess (1996); provided by Ned Wright).

http://www.talkorigins.org/faqs/astronomy/hub_1996.gif
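
[A quick numerical sketch of the low-redshift Hubble relation -- my illustration, not the FAQ's. H0 = 70 km/s/Mpc is an assumed round value, and v ~ c*z only holds for z << 1:]

[code]
# Sketch: at low redshift, recession "velocity" v ~ c*z and distance d ~ v / H0.
c = 299792.458   # speed of light, km/s
H0 = 70.0        # assumed Hubble constant, km/s/Mpc

for z in (0.01, 0.05, 0.1):
    v = c * z        # low-z approximation
    d = v / H0       # distance in megaparsecs
    print(f"z={z}: v~{v:.0f} km/s, d~{d:.0f} Mpc (~{d * 3.26:.0f} million light years)")
[/code]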

As mentioned previously, the standard version of BBT assumed that the dominant source of energy density for the last several billion years was cold, dark matter. Feeding this assumption into the equations governing the expansion of the universe, cosmologists expected to see that the expansion would slow down with the passage of time. However, in 1998, measurements of the Hubble relationship with distant supernovae seemed to indicate that the opposite was true. Rather than slowing down, the past few billion years have apparently seen the expansion of the universe accelerate (Riess 1998; newer measurements: Wang 2003, Tonry 2003). In effect, what was observed is that the light of the observed supernovae was dimmer than expected from calculating their distance using Hubble's law.

Within standard BBT, there are a number of possibilities to explain this sort of observation. The simplest possibility is that the geometry of the universe is open (negative curvature). In this sort of universe, the matter density is below the critical value and the expansion will continue until the effective energy density of the universe is zero. The second possibility is that the distant supernovae were artificially dimmed as the light passed from their host galaxies to observers here on Earth. This sort of absorption by interstellar dust is a common problem with observations where one has to look through our own galaxy's disk, so one could easily imagine something similar happening. This absorption is usually wavelength dependent, however, and the two teams investigating the distant supernovae saw no such effect. For the sake of argument, however, one could postulate a "gray dust" that dimmed objects equally at all wavelengths. The final possibility is that the universe contains some form of dark energy (see sections 1c and 2n). This would accelerate the expansion, but could keep the geometry flat.

At redshifts below unity (z < 1), these possibilities are all roughly indistinguishable, given the precision available in the measurements. However, for a universe with a mix of dark matter and dark energy, there is a transition point from the domination of the former to the latter (just like the transition between the radiation- and matter-dominated expansion prior to the formation of the CMBR). Before that time, dark matter was dominant, so the expansion should have been decelerating, only beginning to accelerate when the dark energy density surpassed that of the matter. This so-called cosmic jerk implies that supernovae before this point should be noticeably brighter than one would expect from an open universe (constant deceleration) or a universe with gray dust (constant dimming). New measurements at redshifts well above unity have shown that this "jerk" is indeed what we see -- about 8 billion years ago our universe shifted from slowly decelerating to an accelerated expansion, exactly as dark energy models predicted (Riess 2004).

c) Abundances of light elements

As we mentioned previously, standard BBT does not include the beginning of our universe. Rather, it merely tracks the universe back to a point when it was extremely hot and extremely dense. Exactly how hot and how dense it could be and still be reasonably described by GR is an area of active research but we can safely go back to temperatures and densities well above what one would find in the core of the sun.

In this limit, we have temperatures and densities high enough that protons and neutrons existed as free particles, not bound up in atomic nuclei. This was the era of primordial nucleosynthesis, lasting for most of the first three minutes of our universe's existence (hence the title of Weinberg's famous book "The First Three Minutes"). A detailed description of Big Bang Nucleosynthesis (BBN) can be found at Ned Wright's website, including the relevant nuclear reactions, plots and references. For our purposes a brief introduction will suffice.

Like in the core of our Sun, the free protons and neutrons in the early universe underwent nuclear fusion, producing mainly helium nuclei (He-3 and He-4), with a dash of deuterium (a form of hydrogen with a proton-neutron nucleus), lithium and beryllium. Unlike those in the Sun, the reactions only lasted for a brief time thanks to the fact that the universe's temperature and density were dropping rapidly as it expanded. This means that heavier nuclei did not have a chance to form during this time. Instead, those nuclei formed later in stars. Elements with atomic numbers up to iron are formed by fusion in stellar cores, while heavier elements are produced during supernovae. Further information on stellar nucleosynthesis can be found at the Wikipedia pages and in section 2g below.

Armed with standard BBT (easier this time since we know the expansion at that time was dominated by the radiation) and some nuclear physics, cosmologists can make very precise predictions about the relative abundance of the light elements from BBN. As with the Hubble diagram, however, matching the prediction to the observation is easier said than done. Elemental abundances can be measured in a variety of ways, but the most common method is by looking at the relative strength of spectral features in stars and galaxies. Once the abundance is measured, however, we have a similar problem to the peculiar velocities from the previous section: how much of the element was produced during BBN and how much was generated later on during stellar nucleosynthesis?

To get around this problem, cosmologists use two approaches:

Deuterium: Of the elements produced during BBN, deuterium has by far the lowest binding energy. As a result, deuterium that is produced in stars is very quickly consumed in other reactions and any deuterium we observe in the universe is very likely to be primordial. The downside of this approach is that primordial deuterium can also be destroyed in the outer layers of stars giving us an underestimate of the total abundance, but there are other methods (like looking in the Lyman alpha forest region of distant quasars) which avoid these problems.
Look Deep: One can try to look at stars and gas clouds which are very far away. Thanks to the finite speed of light, the larger the distance between the object and observers here on Earth, the more ancient the image. Hence, by looking at stars and gas clouds very far away, one can observe them at a time when the heavy element abundance was much lower. By going far enough back, one would eventually arrive at an epoch where no prior stars had had a chance to form, and thus the elemental abundances were at their primordial levels. At the moment, we cannot look back that far. These objects would have very high redshifts, taking the light into the infrared where observations from the ground are made very difficult by atmospheric effects. Likewise, the great distance makes them extremely dim, adding to our problems. Both of these problems should be helped greatly when the James Webb Space Telescope enters service. What we can do now is to observe older stars, measure their elemental abundances, and try to extrapolate backwards.

Like most BBT predictions, the primordial element abundance depends on several parameters. The important ones in this case are the Hubble parameter (the expansion speed determines how quickly the universe goes from hot and dense enough for nucleosynthesis to cold and thin enough for it to stop) and the baryon density (in order for nucleosynthesis to happen, baryons have to collide and the density tells us how often that happened). The dependence on both parameters is generally expressed as a single dependence on the combined parameter Ω_B h² (as seen in the figure below, provided by Ned Wright).

http://www.talkorigins.org/faqs/astronomy/abundance.gif
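
[A side note on reading that combined parameter -- my addition, with assumed round values in the right ballpark:]

[code]
# Sketch: unpacking Omega_B * h^2 into the baryon density parameter Omega_B.
omega_b_h2 = 0.022   # assumed, roughly the WMAP-era value
h = 0.7              # assumed: h = H0 / (100 km/s/Mpc)

omega_b = omega_b_h2 / h ** 2
print(f"Omega_B ~ {omega_b:.3f}")   # ~0.045: baryons are ~4-5% of the critical density
[/code]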

As this figure implies, there is a two-fold check on the theory. First of all, measurements of the various elemental abundances should yield a consistent value of Ω_B h² (the intersection of the horizontal bands and the various lines). Second, independent measurements of Ω_B h² from other observations (like the WMAP results in 2e) should yield a value that is consistent with the composite from the primordial abundances (the vertical band). Both approaches were used in the past; before the precise results of WMAP for the baryon density, the former was used more often. For a detailed account of the state of knowledge in 1997, look at Big Bang Nucleosynthesis Enters the Precision Era.

One of the major pieces of evidence for the Big Bang theory is consistent observations showing that, as one examines older and older objects, the abundance of most heavy elements becomes smaller and smaller, asymptoting to zero. By contrast, the abundance of helium goes to a non-zero limiting value. The measurements show consistently that the abundance of helium, even in very old objects, is still around 25% of the total mass of "normal" matter. And that corresponds nicely to the value which the BBT predicts for the production of He during primordial nucleosynthesis. For more details, see Olive 1995 or Izotov 1997. Also look at the plot below, comparing the prediction of the BBT to that of the Steady State model (data taken from Turck-Chieze 2004, plot provided by Ned Wright).

http://www.talkorigins.org/faqs/astronomy/He-vs-O.gif

Recent calculations as well as references to recent observations can be found in Mathews (2005). In earlier studies, there were some problems with galaxies which had apparently very low helium abundances (specifically I Zw 18); this problem was addressed and resolved in the meantime (cf. Luridiana 2003).
d) Existence of the Cosmic Microwave Background Radiation

Even though nuclei were created during BBN, atoms as we typically think of them still did not exist. Rather, the universe was full of a very hot, dense plasma made of free nuclei and electrons. In an environment like this, light cannot travel freely -- photons are constantly scattering off of charged particles. Likewise, any nucleus that became bound to an electron would quickly encounter a photon energetic enough to break the bond.

As with the era of BBN, however, the universe would not stay hot and dense enough to sustain this state. Eventually (after about 400,000 years), the universe cooled to the point where electrons and nuclei could form atoms (a process that is confusingly described as "recombination"). Since atoms are electrically neutral and only interact with photons of particular energies, most photons were suddenly able to travel much larger distances without interacting with any matter at all (this part of the process is generally described as "decoupling"). In effect, the universe became transparent and the photons around at that time have been moving freely throughout the universe since that time. And, since the universe has expanded a great deal since that time, the wavelengths of these photons have been stretched a great deal (by about a factor of 1000).

From this basic picture, we can make two very strong predictions for this relic radiation:

It should be highly uniform. One of the basic assumptions of BBT is that the universe is homogeneous and, given the time between the beginning of the universe and decoupling, any inhomogeneities (like those expected from inflation) would not have much time to grow.
It should have a blackbody spectrum. As we said before, prior to decoupling the universe was full of plasma and photons were constantly scattering off of all of the ionized matter. This makes the universe a perfect absorber; no photons could leave the universe, so they would put the whole universe (or at least that part that was causally connected) in thermal equilibrium. As such, we can actually describe the universe as having a unique temperature. In classical thermodynamics, photons emitted by a blackbody at a given temperature have a very specific distribution of energies and, as Tolman showed in 1934, a blackbody spectrum will remain a blackbody spectrum (albeit at a lower temperature) as it redshifts.
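
[To see why light that was visible at decoupling shows up in the microwave band today, a short sketch using Wien's displacement law -- my addition; ~3000 K at decoupling and 2.725 K today are the commonly quoted temperatures:]

[code]
# Sketch: blackbody peak wavelength via Wien's displacement law.
WIEN_B = 2.898e-3   # Wien's displacement constant, metre-kelvins

def peak_wavelength(T):
    """Peak wavelength (metres) of a blackbody at temperature T (kelvin)."""
    return WIEN_B / T

print(f"decoupling (3000 K): {peak_wavelength(3000) * 1e9:.0f} nm")   # ~966 nm, near-visible
print(f"today (2.725 K): {peak_wavelength(2.725) * 1e3:.2f} mm")      # ~1.06 mm, microwave
[/code]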

The existence of this relic radiation was first suggested by Gamow along with Alpher and Herman in 1948. Their initial predictions correctly stated that the temperature of the radiation, which would have been visible light at decoupling, would be shifted into the microwave region of the electromagnetic spectrum at this point. That, combined with the fact that the source of the radiation put it "behind" normal light sources like stars and galaxies, gave this relic its name: the Cosmic Microwave Background Radiation (CMBR or, equivalently, just CMB).

While they were correct in the broad strokes, the Gamow, Alpher & Herman estimates for the exact temperature were not so precise. The initial range was somewhere between 1 K and 5 K, using somewhat different models for the universe (Alpher 1949), and in a later book Gamow pushed this estimate as high as 50 K. The best estimates today put the temperature at 2.725 K (Mather 1999). While this may seem to be a large discrepancy, it is important to bear in mind that the prediction relies strongly on a number of cosmological parameters (most notably Hubble's Constant) that were not known very accurately at the time. We will come back to this point below, but let us take a moment to discuss the measurements that led to the current value (Ned Wright's CMB page is also worth reading for more detail on the early history of CMBR measurements).

The first intentional attempt to measure the CMBR was made by Dicke and Wilkinson in 1965 with an instrument mounted on the roof of the Princeton Physics department. While they were still constructing their experiment, they were inadvertently scooped by two Bell Labs engineers working on microwave transmission as a communications tool. Penzias and Wilson had built a microwave receiver but were unable to eliminate a persistent background noise that seemed to affect the receiver no matter where they pointed it in the sky, day or night. Upon contacting Dicke for advice on the problem, they realized what they had observed and eventually received the Nobel Prize for Physics in 1978. More detail about the discovery is available here.

Since then, measurements of the temperature and energy distribution of the CMBR have improved dramatically. Measuring the CMBR from the ground is difficult because microwave radiation is strongly absorbed by water vapor in the atmosphere. To circumvent this problem, cosmologists have used high-altitude balloons, ballistic rockets and satellite-borne experiments. The most famous experiment focusing on the temperature of the CMBR was the COBE satellite (COsmic Background Explorer). It orbited the Earth, taking data from 1989 to 1993.

COBE was actually several experiments in one. The DMR instrument measured the anisotropies in the CMBR temperature across the sky (see more below) while the FIRAS experiment measured the absolute temperature of the CMBR and its spectral energy distribution. As we mentioned above, the prediction from BBT is that the CMBR should be a perfect blackbody. FIRAS found that this was true to an extraordinary degree. The plot below (provided by Ned Wright) shows the CMBR spectrum and the best-fit blackbody. As one can see, the error bars, which are quite small, are actually 400 standard deviations. In fact, the CMBR is as close to a blackbody as anything we can create here on Earth.

http://www.talkorigins.org/faqs/astronomy/firasspectrum.gif

In many alternative cosmology sources, one will encounter the claim that the CMBR was not a genuine prediction of BBT, but rather a "retrodiction" since the values for the CMBR temperature that Gamow predicted before the measurement differed significantly from the eventual measured value. Thus, the argument goes, the "right" value could only be obtained by adjusting the parameters of the theory to match the observed one. This misses two crucial points:

Existence, not temperature, is the key. In the absence of BBT, there would be no reason to expect a uniform, long-wavelength background radiation in the universe. True, astronomers like Eddington predicted that we would see radiation from interstellar dust (absorbed starlight, re-radiated as thermal emission) or background stars. However, those models do not lead to the sort of uniformity we see in the CMBR, nor do they produce a blackbody spectrum (stars, in particular, have strong spectral lines which are noticeably absent in the CMBR spectrum). Similar predictions can be made for background radiation in other parts of the electromagnetic spectrum (x-ray background from distant supernovae and quasars, for example) and the distribution of those backgrounds is nowhere near as uniform as we see with the CMBR.
This is how science works. No physical theory exists independent of free parameters that are determined from subsequent observation. This is true of Newtonian gravity and GR (Newton's constant), it is true of quantum mechanics and quantum electrodynamics (Planck's constant, the electron charge) and it is true of cosmology. As we mentioned above, the test of a theory is not that it meets one prediction. Instead, the true test is whether the model can match other observations once it has been calibrated against one data set.

A final test of the cosmological origins of the CMBR comes from looking at distant galaxies. Since the light from these galaxies was emitted in the past, we would expect that the temperature of the CMBR at that time was correspondingly higher. By examining the distribution of light from these galaxies, we can get a crude measurement of the temperature of the CMBR at the time when the light we are observing now was emitted (e.g. Srianand 2000). The current state of this measurement is shown in the plot below (provided by Ned Wright). The precision of this measurement is obviously not nearly as great as we saw with the COBE data, but they do agree with the basic BBT predictions for the evolution of the CMBR temperature with redshift (and disagree significantly with what one would expect for a CMBR generated from redshifted starlight or the like).

http://www.talkorigins.org/faqs/astronomy/CMB_T_vs_z.gif
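
[The prediction being tested in that plot fits in two lines of Python -- my sketch; the example redshift is an assumed value in the range probed by quasar studies like Srianand (2000):]

[code]
# Sketch of the BBT prediction for the CMBR temperature at redshift z:
#   T_CMB(z) = T0 * (1 + z), with T0 = 2.725 K from COBE/FIRAS.
T0 = 2.725

def t_cmb(z):
    """Predicted CMBR temperature (kelvin) at redshift z."""
    return T0 * (1 + z)

print(f"T_CMB at z=2.34: {t_cmb(2.34):.1f} K")   # ~9.1 K
[/code]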

e) Fluctuations in the CMBR

As mentioned in the previous point, the temperature of the CMBR is extremely uniform; the differences in the temperature at different locations on the sky are below 0.001 K. Since matter and radiation were tightly coupled during the earliest stages of the universe, this implies that the distribution of matter was also initially uniform. While this matches our basic cosmological assumption, it does lead to the question of how we went from that very uniform universe to the decidedly clumpy distribution of matter we see on small scales today. In other words, how could planets, stars, galaxies, galaxy clusters, etc., have formed from an essentially homogeneous gas?

In studying this question, cosmologists would end up developing one of the most powerful and spectacularly successful predictions of BBT. Before describing the theory side of things, however, we will take a brief detour into the history of measuring fluctuations ("anisotropies" in cosmological terms) in the CMBR.

The first attempt to measure the fluctuations in the CMBR was made as part of the COBE (COsmic Background Explorer) mission. As part of its four year mission during the early 1990s, it used an instrument called the DMR to look for fluctuations in the CMBR across the sky. Based on the then-current BBT models, the fluctuations observed by the DMR were much smaller than expected. Since the instrument had been designed with the expected fluctuation amplitudes in mind, the observations ended up being just above the sensitivity threshold of the instrument. This led to speculation that the "signal" was merely statistical noise, but it was enough to generate a number of subsequent attempts to look for the signal.

With satellite observations still on the horizon, data for the following decade was mostly collected using balloon-borne experiments (see the list at NASA's CMBR data center for a thorough history). These high altitude experiments were able to get above the vast majority of the water vapor in the atmosphere for a clearer look at the CMBR sky at the expense of a relatively small amount of observing time. This limited the amount of sky coverage these missions could achieve, but they were able to conclusively demonstrate that the signal seen by COBE was real and (to a lesser extent) that the fluctuations matched the predictions from BBT.

In 2001, the MAP probe (Microwave Anisotropy Probe) was launched, later renamed WMAP in honor of Wilkinson, who had been part of the original team looking for the CMBR back in the 1960s. Unlike COBE, WMAP was focused entirely on the question of measuring the CMBR fluctuations. Drawing from the experience and technological advances developed for the balloon missions, it had much better angular resolution than COBE (see the image below from the NASA/WMAP Science Team). It also avoided one of the problems that had plagued the COBE mission: the strong thermal emission from the Earth. Instead of orbiting the Earth, the WMAP satellite took a three-month journey to L2, the second Lagrangian point in the Earth-Sun system. This meta-stable point is beyond the Earth's orbital path around the Sun, roughly one percent farther from the Sun than the Earth is. It has been there, taking data, ever since.

http://www.talkorigins.org/faqs/astronomy/cobe_wmap.jpg

In the spring of 2003, results from the first year of observation were released - and they were astonishing in their precision. As an example, for decades the age of the universe had not been known to better than about two billion years. By combining the WMAP data with other available measurements, suddenly we knew the age of the universe to within 0.2 billion years. Across the board, parameters that had been known to within 20-30 percent saw their errors shrink to less than 10 percent or better. For a fuller description of how the WMAP data impacted our understanding of BBT, see the WMAP website's mission results. That page is intended for a lay audience; more technical detail can be found in their list of their first year papers.

So, how did this amazing jump in precision come about? The answer lies in understanding a bit about what went on between the time when matter and radiation had equal energy densities and the time of decoupling. A fuller description of this can be found at Wayne Hu's CMB Anisotropy pages and Ned Wright's pages. After matter-radiation equality, dark matter was effectively decoupled from radiation (normal matter remained coupled since it was still an ionized plasma). This meant that any inhomogeneities (arising essentially from quantum fluctuations) in the dark matter distribution would quickly start to collapse and form the basis for later development of large scale structure (the seeds of these inhomogeneities were laid down during inflation, but we will ignore that for the current discussion). The largest physical scale for these inhomogeneities at any given time was the then-current size of the observable universe (since the effect of gravity also travels at the speed of light). These dark matter clumps set up gravitational potential wells that drew in more dark matter as well as the radiation-baryon mixture.

Unlike the dark matter, the radiation-baryon fluid had an associated pressure. Instead of sinking right to the bottom of the gravitational potential, it would oscillate, compressing until the pressure overcame the gravitational pull and then expanding until the opposite held true. This set up hot spots where the compression was greatest and cold spots where the fluid had become its most rarefied. When the baryons and radiation decoupled, this pattern was frozen on the CMBR photons, leading to the hot and cold spots we observe today.

Obviously, the exact pattern of these temperature variations does not tell us anything in particular. However, if we recall that the largest size for the hot spots corresponds to the size of the visible universe at any given time, that tells us that, if we can find the angular size of these variations on the sky, then that largest angle will correspond to the size of the visible universe at the time of decoupling. To do this, we measure what is known as the angular power spectrum of the CMBR. In short, we find all of the points on the sky that are separated by a given angular scale. For all of those pairs, we find the temperature difference and average over all of the pairs. If our basic picture is correct, then we should see an enhancement of the power spectrum at the angular scale of the largest compression, another one at the size of the largest scale that has gone through compression and is at maximum rarefaction (the power spectrum is only sensitive to the square of the temperature difference so hot spots and cold spots are equivalent), and so on. This leads to a series of what are known as "acoustic peaks", the exact position and shape of which tell us a great deal about not only the size of the universe at decoupling, but also the geometry of the universe (since we are looking at angular distance; see 1b) and other cosmological parameters.

The figure below from the NASA/WMAP Science Team shows the results of the WMAP measurement of the angular power spectrum using the first year of WMAP data. In addition to the angular scale plotted on the upper x-axis, plots of the angular power spectrum are generally shown as a function of "l". This is the multipole number and is roughly translated into an angle by dividing 180 degrees by l. For more detail on this, you can do a Google search on "multipole expansion" or check this page. The WMAP science pages also provide an introduction to this way of looking at the data.

http://www.talkorigins.org/faqs/astronomy/powerspectrum.gif
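
[A trivial sketch of the multipole-to-angle rule of thumb mentioned above -- my addition; l ~ 220 is the approximate location of the first acoustic peak:]

[code]
# Sketch: rough conversion from multipole number l to angular scale.
def multipole_to_degrees(l):
    """Very rough angular scale (degrees) for multipole l: theta ~ 180 / l."""
    return 180.0 / l

for l in (2, 220, 1000):
    print(f"l={l}: ~{multipole_to_degrees(l):.2f} degrees")
# l=220 -> ~0.8 degrees: roughly the angular size of the first acoustic peak
[/code]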

As with the COBE temperature measurement, the agreement between the predicted shape of the CMBR power spectrum and the actual observations is staggering. The balloon-borne experiments (particularly BOOMERang, MAXIMA, and DASI) were able to provide convincing detections of the first and second acoustic peaks before WMAP, but none of those experiments were able to map a large enough area of the sky to match with the COBE DMR data. WMAP bridged that gap and provided much tighter measurement of the positions of the first and second peaks. This was a major confirmation of not only the Lambda CDM version of BBT, but also the basic picture of how the cosmos transitioned from an early radiation-dominated, plasma-filled universe to the matter-dominated universe where most of the large scale structure we see today began to form.
f) Large-scale structure of the universe

The hot and cold spots we see on the CMBR today were the high and low density regions at the time the radiation that we observe today was first emitted. Once matter took over as the dominant source of energy density, these perturbations were free to grow by accreting other matter from their surroundings. Initially, the collapsing matter would have just been dark matter since the baryons were still tied to the radiation. After the formation of the CMBR and decoupling, however, the baryons also fell into the gravitational wells set up by the dark matter and began to form stars, galaxies, galaxy clusters, and so on. Cosmologists refer to this distribution of matter as the "large scale structure" of the universe.

As a general rule, making predictions for the statistical properties of large scale structure can be very challenging. For the CMBR, the deviations from the mean temperature are very small and linear perturbation theory is a very good approximation. By comparison, the density of matter in our galaxy compared to the mean density of the universe is enormous. As a result, there are two basic options: either do measurements on very large physical scales where the variations in density are typically much smaller or compare the measurements to simulations of the universe where the non-linear effects of gravity can be modeled. Both of these options require significant investment in both theory and hardware, but the last several years have produced some excellent confirmations of the basic picture.

As we mentioned in the last section, the process that led to the generation of the acoustic peaks in the CMBR power spectrum was driven by the presence of a tight coupling between photons and baryons just prior to decoupling. This fluid would fall into the gravitational potential wells set up by dark matter (which does not interact with photons) until the pressure in the fluid would counteract the gravitational pull and the fluid would expand. This led to hot spots and cold spots in the CMBR, but also led to places where the density of matter was a little higher thanks to the extra baryons being dragged along by the photons and areas where the opposite was true. Like with the CMBR, the size of these areas was determined by the size of the observable universe at the time of decoupling, so certain physical scales would be enhanced if you looked at the angular power spectrum of the baryons. Of course, once the universe went through decoupling, the baryons fell into the gravitational wells with the dark matter, but those scales would persist as "wiggles" on the overall matter power spectrum.

Of course, as the size of the universe expanded, the physical scale of those wiggles increased, eventually reaching about 500 million light years today. Making a statistical measurement of objects separated by those sorts of distances requires surveying a very large volume of space. In 2005, two teams of cosmologists reported independent measurements of the expected baryon feature. As with the CMBR power spectrum, this confirmed that the model cosmologists have developed for the initial growth of large scale structure was a good match to what we see in the sky.

The second method for understanding large scale structure is via cosmological simulations. The basic idea behind all simulations is this: if we were a massive body and could feel the gravitational attraction of all of the other massive bodies in the universe and the overall geometry of the universe, where would we go next? Simulations answer this question by quantizing both matter and time. A typical simulation will take N particles (where N is a large number; hence the term N-body simulation) and assign them to a three-dimensional grid. Those initial positions are then perturbed slightly to mimic the initial fluctuations in energy density from inflation. Given the positions of all of these particles and having chosen a geometry for our simulated universe, we can now calculate where all of these particles should go in the next small bit of time. We move all the particles accordingly and then recalculate and do it again.
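
[A toy Python version of the N-body scheme just described -- my sketch, nowhere near simulation-grade. Particle count, units, time step, and softening are arbitrary choices, and every particle has unit mass:]

[code]
# Toy N-body sketch: particles on a perturbed grid, pairwise gravity,
# small time steps -- quantizing both matter and time, as described above.
import random

G, SOFTENING, DT = 1.0, 0.1, 0.01
random.seed(0)

# Each particle is [x, y, vx, vy]; the jitter mimics initial fluctuations.
particles = [[i + random.uniform(-0.05, 0.05),
              j + random.uniform(-0.05, 0.05), 0.0, 0.0]
             for i in range(4) for j in range(4)]

def step(ps):
    """One time step: accumulate pairwise accelerations, then move everything."""
    for p in ps:
        ax = ay = 0.0
        for q in ps:
            if q is p:
                continue
            dx, dy = q[0] - p[0], q[1] - p[1]
            r2 = dx * dx + dy * dy + SOFTENING ** 2   # softened to avoid blow-ups
            inv_r3 = r2 ** -1.5
            ax += G * dx * inv_r3
            ay += G * dy * inv_r3
        p[2] += ax * DT
        p[3] += ay * DT
    for p in ps:
        p[0] += p[2] * DT
        p[1] += p[3] * DT

for _ in range(100):
    step(particles)
print(particles[0])   # particles drift toward overdense regions over time
[/code]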

Obviously, this technique has limits. If we assign a given mass to all of our particles, then measurements of mass below a certain limit will be strongly quantized (and hence inaccurate). Likewise, the range of length scales is limited: above by the volume of the chunk of the universe we have chosen to simulate and below by the resolving scale of our mass particles. There is also the problem that, on small scales at least, the physics that determines where baryons will go involves more than just gravity; gas dynamics and the effects of star formation make simulating baryons (and thus the part of the universe we can actually see!) challenging. Finally, we do not expect the exact distribution of mass in the simulation to tell us anything in particular; we only want to compare the statistical properties of the distribution to our universe. This article discusses these statistical methods in detail as well as providing references to the relevant observational data.

Still, given all of these flaws, efforts to simulate the universe have improved tremendously over the last few decades, both from a hardware and a software standpoint. White (1997) reviews the basics of simulating structure formation as well as the observational tests one can use to compare simulations to real data. He shows results for four different flavors of models -- including both the then-standard "cold dark matter" universe and a universe with a cosmological constant. This was before the supernovae results were released, putting the lie to the claim that, prior to the supernovae data, the possibility that the cosmological constant was non-zero was ignored in the cosmological literature. A CDM universe was the front runner at the time, but cosmologists were well aware of the fact that the data was not strong enough to rule out several variant models.

The Columbi (1996) paper is a good example of this awareness as well. In this article, various models containing different amounts of hot and cold dark matter were simulated, as well as attempts to include "warm" dark matter (i.e. dark matter that is not highly relativistic, but still moving fast enough to have significant pressure). Their Figure 7 provides a nice visual comparison between observed galaxy distributions and the results of the various simulated universes.

In 2005, the Virgo Consortium released the "Millennium Simulation"; details can be found on both the Virgo homepage and this page at the Max Planck Institute for Astrophysics. Using the concordance model (drawn from matching the results of the supernovae studies, the WMAP observations, etc.), these simulations are able to reproduce the observed large scale galaxy distributions quite well. On small scales, there is still some disagreement, however (see below for a more detailed discussion).
g) Age of stars

Since the stars are a part of the universe, it naturally follows that, if BBT and our theories of stellar formation and evolution are more or less correct, then we should not expect to see stars older than the universe (compare 3d!). More precisely, the WMAP observations suggest that the first stars were "born" when the universe was only about 200 million years old, so we should expect to see no stars which are older than about 13.5 billion years. On the other hand, stellar evolution models tell us that the lowest-mass stars (those with a mass roughly 1/10 that of our Sun) are expected to "live" for tens of trillions of years, so there is a chance for significant disagreement.

Before delving into this issue further, some nomenclature is necessary. Astronomers generally assign stellar formation into three generations called "populations". The distinguishing characteristic here is the abundance of elements with atomic mass larger than helium (these are all referred to as "metals" in the astronomical literature, and the abundance of metals as the star's "metallicity"). As we explained in section 2c, to a very good approximation primordial nucleosynthesis produced only helium and hydrogen. All of the metals were produced later in the cores of stars. Thus, the populations of stars are roughly separated by their metal content; Population I stars (like our Sun) have a high metallicity, while Population II stars are much poorer in metals. Since the metal content of our universe increases over time (as stars have more and more time to fuse lighter elements into heavier ones), metallicity also acts as a rough indicator for when a given star was formed. The different stellar generations are also summarized in this article.

Although it may not be immediately obvious, the abundance of metals during star formation has a significant impact on the resulting stellar population. The basic problem of star formation is that the self-gravity of a given cloud of interstellar gas has to overcome the cloud's thermal pressure; clouds where this occurs will eventually collapse to form stars, while those where it does not will remain clouds. As a gas cloud collapses, the gravitational energy is transferred into thermal energy and the cloud heats up. In turn, this increases the pressure and makes the cloud less likely to collapse further. The trick, then, is to radiate away that extra thermal energy as efficiently as possible so that collapse may continue. Metals tend to have a more complex electron structure and are more likely to form molecules than hydrogen or helium, making them much more efficient at radiating away thermal energy. In the absence of such channels, the only way to get around this problem is by increasing the gravitational side of the equation, i.e., the mass of the collapsing gas cloud. Hence, for a given interstellar cloud, more metals will result in a higher fraction of low mass stars, relative to the stars produced by a metal-poor cloud.

The extreme case in this respect is the Population III stars. These were the very first generation of stars and hence they formed with practically no metals at all. As such, their mass distribution was skewed heavily towards the high mass end of the spectrum. Some of the details and implications of this state of affairs can be found in this talk about reionization and these two articles on the first stars.

Observing this population of stars directly would be a very good piece of evidence for BBT. Unfortunately, the lifetime of stars (which is to say, the time during which they are fusing hydrogen in their cores into helium) decreases strongly with their mass. For a star like our Sun, the lifetime is on the order of 10 billion years. For the Population III stars, which are expected to have a typical mass around 100 times that of the Sun, this time shrinks to around a few million years (an instant, by cosmological standards). Therefore, we must look at regions of the universe where the light we observe was first emitted near the time when these stars shone. This means that the light will be both dim and highly redshifted (z ~ 20). The combination of these two effects makes observations from the ground largely unfeasible, but they may become possible when the James Webb Space Telescope begins service. The first promising results were obtained recently by the Spitzer infrared space telescope.

Like stars today, Population III stars formed heavy elements in their cores (by nuclear fusion), and even heavier elements when they went supernova. These metals were dispersed throughout space by the supernovae explosions and the Population II stars formed. With help from metal cooling, lower mass stars were able to form, low enough that they are still burning today. Population II stars are preferentially seen in globular clusters orbiting the galaxy and in the galactic bulge. By using the Hertzsprung-Russell diagram, astronomers can get an estimate of when the stars in a globular cluster (or other star cluster) formed. This is explained in more detail in the FAQ on Determining distances to astronomical objects or on this page about the Hertzsprung Russell Diagram and Stellar Evolution.

A second method of determining stellar age is by measuring the beryllium content in a star's outer layers. Applying this technique to the globular cluster NGC 6397, Pasquini (2004) found an age of 13.4 billion years, plus or minus 800 million years (more details can be found in this article). Other studies like Krauss (2003) and Hansen (2004) obtained similar results with related methods: 12.2 and 12.1 billion years, respectively, with errors on order 1 to 2 billion years.

The large uncertainties in these ages are partly due to the fact that these methods depend crucially on our theory of stellar development ("stellar evolution"), which in turn depends on our understanding of the nuclear reactions going on in stars. Despite the relatively low energies involved, the details of some of these reactions remain somewhat imprecise.

Recently, new results were obtained on the speed of a nuclear reaction chain which is quite important in stars, the so-called CNO cycle. This study (Imbriani 2004) revealed that the speed of this reaction is far slower than was previously assumed. This in turn implies that the stars are older than previously assumed, by something between 0.7 and 1 billion years. Using Pasquini's data, this implies that the oldest stars in the Milky Way are 14.1 to 14.4 billion years old. This is older than the age of the universe determined from other measurements (compare the WMAP data, 2d), but one has to take into account the relatively large errors associated with these age determinations (see above). So these star ages are still consistent with the age of the universe determined in other ways.

As pointed out by Dauphas (2005), it is also possible to determine the age of the Milky Way without relying on assumptions about the details of the nuclear reactions going on in the stars. He used measurements of the uranium (U-238) and thorium (Th-232) abundances, both in the solar system and in low-metallicity halo stars, to determine the age of our galaxy. His result was 14.5 billion years, with uncertainties of -2.2 and +2.8 billion years. Taking these error margins into account, this is again nicely consistent with the age of the universe determined by WMAP.

The ages of stars in distant galaxies can also be determined. To do this, one calculates theoretical models of what the spectrum of a galaxy looks like when the stars in it have a certain age (see Jimenez 2004), and compares these model predictions with the observed spectra of galaxies. Obviously, this is a somewhat complicated method, with potential errors even greater than those of the methods for determining the ages of stars in our neighbourhood.

Nevertheless, so far the results found are consistent with a universe with a finite age. In galaxies far away from us, which we should therefore see as they looked when they were still very young, only young stars are found. For example, Nolan (2003) found that in two galaxies with redshifts around 1.5, the stars had ages of around 3-4 billion years at most. There was also a detailed study of the star formation history of the universe, using observations of the ages of stars in distant galaxies, which showed that the rate of star formation was highest about 5 billion years ago (Heavens 2004).
h) Evolution of galaxies

Galaxies are also dynamic entities, changing over time. As with large scale structure, the broad strokes of galaxy formation follow a path of "hierarchical clustering": small structures form very early on and merge to form larger structures as time goes on. Within this larger framework, some galaxies will develop secondary features like spiral arms or bar-like structures, some of which will be transitory and some of which will persist.

This basic picture tells us that, if we look at very distant regions of the universe (i.e., galaxies with very high redshifts), we should see mainly small, irregular galaxies. For the most part, this is what we find (with some notable exceptions, as we will cover later). Starting in 1996, the Hubble Space Telescope took a series of very deep images: the Hubble Deep Field, the Hubble Deep Field South, and the Hubble Ultra Deep Field. As one would expect, the morphology of the few nearby galaxies in these images is quite a bit different from that of the very high redshift galaxies.

Another important indicator of galaxy evolution comes from quasars, specifically their redshift distribution. Quasars are generally believed to be powered by supermassive black holes at the centers of galaxies accreting matter; as dust and gas fall into the black hole, the material heats up tremendously and emits a huge quantity of energy across a broad spectrum. For most true quasars, the amount of energy released during this process is a few orders of magnitude larger than all of the light emitted by the rest of the galaxy. In order for this sort of behavior to occur for some length of time, galaxies need to have a large quantity of dust and free gas near their cores. The bulk of observed quasars have redshifts near z ~ 2, which suggests that there was a particular epoch during the history of the universe when the conditions were right for quasar activity in a large fraction of galaxies. For steady-state models of the universe, this is hard to explain. On the other hand, BBT explains this quite neatly by noting that, in their early stages of formation, galaxies have a great deal of dust and free gas, and galaxy collisions were also more common, both of which could serve as mechanisms for triggering quasar activity.

With that said, it should be noted that galaxy formation and evolution remains a very open question within BBT and not without controversy. See section 5d for more details.
i) Time dilation in supernova brightness curves

As explained in 2b, light traveling through the expanding universe undergoes redshift (i.e., the wavelength is stretched to larger values as the universe expands). Since the wavelength and frequency for a given photon are related inversely through the speed of light, which is a constant, it is obvious that as the wavelength increases the frequency must decrease. Likewise, if light from a distant galaxy varies with time (as we would expect for Cepheid variable stars or pulsars), then the time between these events is stretched (remember, frequency is inversely related to time). Thus, if we observe this galaxy from Earth, we will see a slower variation than an observer in that distant galaxy, and the ratio between those times will be exactly equal to one plus the redshift of the galaxy.
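Stated compactly (these relations simply restate the paragraph above):

$$ \lambda_{obs} = (1+z)\, \lambda_{emit}, \qquad \Delta t_{obs} = (1+z)\, \Delta t_{emit} $$

so, for example, a light curve from a supernova at z = 1 should appear stretched to twice its rest-frame duration.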

While observing this time dilation with stars in distant galaxies is difficult, we can test it using supernovae in those galaxies. Type Ia supernovae, in particular, are known to have a characteristic signature, increasing in brightness rapidly and then slowly fading away over the course of several weeks. This signature varies somewhat depending on the exact chemical composition of the star before it undergoes its supernova explosion, but with careful monitoring we can compensate for this effect. This aspect was key to the supernovae measurements that gave the earliest indication of the existence of dark energy and has been the subject of many papers (for example, Leibundgut 1996, Riess 1997, Goldhaber 2001 and Knop 2003). These papers make it clear that correcting for the effects of redshift time dilation is critical for understanding the data. In particular, Goldhaber rules out a "no time dilation" model at 18 standard deviations. The plot below (from Ned Wright) demonstrates Goldhaber's findings.

http://www.talkorigins.org/faqs/astronomy/dilation-vs-z.gif

j) Tolman tests

In addition to predicting that the wavelength of light should change as the universe expands (where the observed wavelength is stretched by a factor of (1+z) relative to the initial wavelength), BBT also requires that the surface brightness of light sources decreases, not by a single factor of (1+z), but as the fourth power of (1+z). One important consequence of this effect is that thermal emission from a black body at a given temperature at some point in the history of the universe will still appear as a thermal spectrum later on, but at a temperature that is a factor of (1+z) lower (as we mentioned in 2d). Thus, by measuring the deviation of the observed CMBR spectrum from that of a perfect black body, we get a very powerful test of the idea that the expansion of the universe follows the basic picture of standard BBT. This measurement was carried out with the COBE satellite in the 1990s and the spectrum was found to match a black body to one part in 10,000 (Mather 1990, Fixsen 1996).
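In equation form (again, just restating the scalings described above):

$$ S_{obs} = \frac{S_{emit}}{(1+z)^4}, \qquad T_{obs} = \frac{T_{emit}}{1+z} $$

The second relation is what keeps a blackbody looking like a blackbody as the universe expands: a Planck spectrum at temperature T maps exactly onto a Planck spectrum at temperature T/(1+z).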

A number of attempts have been made to apply this test to other objects in the universe since Tolman worked out the surface brightness scaling in 1930. The major difficulty in applying this test to any particular object is that, in order to compare the observed surface brightness against the expectation, one must know the absolute brightness in the first place. The lack of such a "standard candle" in cosmology is keenly felt.

In 2001, a series of papers by Lubin attempted to apply this test to distant galaxies. This is a difficult task since galaxies are dynamic entities on the time scale of the universe. They undergo periods of star bursts (rapid formation of stars, usually in galactic disks), they merge with one another, the opacity of interstellar dust changes as the metal content increases, and their constituent stars change in luminosity as they age. Lubin's papers attempt to take all of this into account. After folding these effects into the expected scaling for the galaxy surface brightnesses, they find results that are consistent with the expectations from their galaxy evolution models. This is not as strong an indication that the Tolman relation holds as the CMBR temperature measurement, but it is a positive sign that the variation from the strict relation is more or less understood. Indeed, the results were strong enough to rule out "tired light" models using this method.
k) Sunyaev-Zel'dovich effect

The picture described in 2d involved the CMBR photons passing through the universe from the time of decoupling until we detect them here on Earth, without interacting with anything along the way. While this is largely true, it does not hold for all photons. The regions around massive galaxy clusters are full of very hot, ionized gas. So hot, in fact, that the free electrons are moving at relativistic speeds. Since these electrons are free, they can interact much more readily with photons (as during the plasma phase of the universe). When CMBR photons pass through this gas, about 1% of them will interact with it. Since the photons have a much lower energy than the electrons, the scattering will impart energy to the photons via the inverse Compton effect. The result is that the CMBR spectrum is distorted, with some of the photons shifted to higher energies than we would expect from a pure thermal spectrum. This is the thermal Sunyaev-Zel'dovich effect, and when we look at the CMBR in the direction of these galaxy clusters we should expect to see the effects of this distortion (this page also offers some more details).
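The size of this distortion is conventionally written in terms of the Compton y-parameter (the standard parametrization in the SZ literature, added here for concreteness):

$$ y = \int \sigma_T \, n_e \, \frac{k_B T_e}{m_e c^2} \, dl, \qquad \frac{\Delta T}{T_{CMB}} \approx -2y \quad \text{(Rayleigh-Jeans limit)} $$

where n_e and T_e are the electron density and temperature of the cluster gas and σ_T is the Thomson cross-section; for rich clusters y is typically of order 10^(-5) to 10^(-4).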

As we can see from the observational data, this effect is clearly observed. Since it indicates that the photons must have passed through the cluster to get to us, this is strong evidence that the CMBR is indeed a cosmological phenomenon and not locally produced. These observations can also be used to measure the value of the Hubble parameter. The precision of the measurement is somewhat limited, since it depends on the details of the distribution of the hot gas within the cluster, but the results are consistent with what we see from other methods.
l) Integrated Sachs-Wolfe effect

In addition to the Sunyaev-Zel'dovich effect, photons from the CMBR can also be subtly affected by the Integrated Sachs-Wolfe effect. The basis for this effect is gravitational redshift, one of the most fundamental predictions of GR and first demonstrated experimentally by Pound and Rebka in 1960. The idea is that, as photons enter a gravitational potential well, they pick up extra energy; when they exit, they lose energy. Hence, scientists refer to photons "falling into" and "climbing out of" gravitational wells.

As CMBR photons pass through the foreground large scale structure, they pass through many such gravitational wells. If the depth of the well is static (or rather if the depth of the well is increasing at the same rate as the expansion of the universe), then the net energy change is zero. All of the energy they gained falling in is lost climbing out. However, if the universe contains dark energy (or has an open geometry), then the universe expands faster than the gravitational wells around massive objects can grow. As a result, the CMBR photons do not lose all of the energy they gained falling into the potentials. This makes the CMBR look very slightly hotter in the direction of these potentials, which also contain the highest concentrations of galaxies.
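In linear theory this net change is usually written as an integral of the time-varying potential Φ along the photon's path (the standard expression, included here for illustration):

$$ \frac{\Delta T}{T} = \frac{2}{c^2} \int \frac{\partial \Phi}{\partial t} \, dt $$

If Φ is constant while the photon crosses the well, the integral vanishes and there is no net effect, exactly as described above; a decaying potential, as in a dark energy dominated universe, leaves the photon slightly hotter.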

Following the release of the WMAP data, studies done by Scranton (2003), Afshordi (2004), Boughn (2004), and Nolta (2004) measured this effect using galaxies selected in a number of different ways. The signal-to-noise ratio in any one of these measurements was not very large. However, taken together (and combined with the WMAP observation that the geometry of the universe is best fit by a flat universe), they provide significant evidence that this effect is real and is best explained by the standard Lambda CDM model of BBT.
m) Dark Matter

A common complaint regarding the inclusion of dark matter in cosmology is that it is an "epicycle", analogous to the epicycles of the Ptolemaic geocentric models of the solar system. In this view, dark matter is a crutch invented to save a model that otherwise does not fit the data. While popular with BBT critics, this stance does not hold up under further scrutiny.

The origin of dark matter as an astronomical entity comes not from cosmology, but rather from the work of Zwicky and Oort in 1933 and 1940, respectively. Zwicky's studies of galaxies' velocities in large clusters convinced him that there must be more mass present in the clusters (in order to provide sufficient gravitational pull to keep the clusters from flying apart) than could be accounted for by the visible mass of the galaxies themselves. Likewise, Oort's measurement of the rotation curves of galaxies (essentially, the orbital velocity of stars aro
MageGrayWolf
offline
MageGrayWolf
9,470 posts
Farmer

That's interesting, I didn't realize there was a limit of characters per post.

Here is the rest for you. I hope.

The origin of dark matter as an astronomical entity comes not from cosmology, but rather from the work of Zwicky and Oort in 1933 and 1940, respectively. Zwicky's studies of galaxies' velocities in large clusters convinced him that there must be more mass present in the clusters (in order to provide sufficient gravitational pull to keep the clusters from flying apart) than could be accounted for by the visible mass of the galaxies themselves. Likewise, Oort's measurement of the rotation curves of galaxies (essentially, the orbital velocity of stars around the galactic center plotted against the stars' radii) suggested that the mass interior to these stellar orbits indicated by simple Newtonian physics did not match the mass inferred by the light from the centers of those galaxies. Both of these observations were made well before modern cosmology had really taken shape and, hence, were independent of any need for dark matter to make cosmological measurements match the theory. More on the history of dark matter can be found here and in van den Bergh (1999).

Like the rest of cosmology, the current evidence for dark matter comes from a number of different observations:

Like Oort's original observations, modern measurements of the rotation curves for spiral galaxies indicate that there must be more mass in these galaxies than we can directly see. The velocity of a star (or gas cloud) in a roughly circular orbit around the center of a galaxy depends on the mass interior to that orbit, as basic Newtonian mechanics tells us. Hence, by measuring the velocity of stellar orbits at a number of radii, we can turn that into a mass profile (a minimal numerical sketch of this conversion appears after this list of observations). Faber (1979) gives a review of a number of such velocity measurements.

Two points are relevant here: First, the mass inferred from these measurements is invariably more than one would infer from looking at the visible matter in these galaxies. This was clear to Oort and remains so today. Second, the distribution of that dark matter is not the same as the visible matter. The stellar density in a spiral galaxy tends to fall off exponentially as one moves from the center to the edge in the plane of the disk. The mass profile inferred from the velocity curves, on the other hand, falls as the inverse cube of the radius (Prada 2003). This is not what we expect for baryons, which can lose gravitational energy via radiation and fall deeper into the gravitational potential well of the galaxy. For CDM, however, this option is not available (since the dark matter does not interact with photons) and hence it remains stuck at larger radii. Simulations of CDM verify this behavior, providing another clue that not only is dark matter present, but the majority of it is not made of baryons.
A similar game can be played with elliptical galaxies. These galaxies do not have the same simple orbital structure as spiral galaxies so the observation is somewhat different. Rather than measuring the velocity curves, we can look at the X-ray emission from these galaxies. X-rays are produced by extremely hot gas (temperatures in millions of degrees) surrounding these galaxies. As with the stars in the spiral galaxy, however, the mass of the galaxy must be sufficient to keep the particles in the gas gravitationally bound to the galaxy, so a mass can be inferred from a measurement of the X-ray temperature. Again, the mass measured in this manner invariably exceeds that expected by the amount of visible matter (cf Fabian 1986).
In a similar fashion, one can also look at the motion of galaxies in clusters. Like stars in elliptical galaxies, the motions of galaxies in these clusters are not simple circular orbits. To get a measure of the kinetic energy in the galaxies, astronomers measure their velocity dispersion, essentially the variance of the observed velocities for galaxies in the cluster. If the galaxy cluster is relatively unperturbed (i.e. has not undergone a major merger with another galaxy cluster), then the virial theorem can be used to calculate the expected gravitational force necessary to hold together a galaxy cluster of a given velocity dispersion. As mentioned above, Zwicky's 1933 measurements of galaxy cluster velocity dispersions were the first indicator that the total mass of clusters must be considerably higher than just the visible matter and this remains true with modern measurements.
As we mentioned in 2k, galaxy clusters are surrounded by a halo of extremely hot ionized gas. This means that we can use the same technique from our elliptical galaxy example above to get a mass measure for galaxy clusters and compare it to the visible mass. X-ray observations with the Chandra satellite have indeed revealed evidence for dark matter; see the press releases Chandra Discovers "Rivers Of Gravity" That Define Cosmic Landscape and Motions of Nearby Galaxy Cluster Reveal Presence of Hidden Superstructure.
The large amount of mass contained in galaxy clusters also makes them an excellent source of gravitational lenses. One of the more startling predictions of GR, gravitational lensing is the deflection of light due to gravitational potentials. The confirmation of gravitational lensing by Eddington's 1919 expedition was one of the early important observations in favor of GR and lensing remains a powerful cosmological probe today. For particularly strong gravitational potentials (like galaxy clusters), light from sources behind the lens can actually travel multiple paths to observers on the other side of the lens. This results in distorted, arc-like images of the background object like those seen in this image of Abell 2218. The pattern and shape of these images is very sensitive to the mass (and mass distribution) of the lensing object, providing our cleanest measure of galaxy cluster masses and, once again, dark matter is necessary to bridge the gap between the observed and visible mass. A list of currently discovered gravitational lenses can be found on the CASTLES Survey website. This article, Scientists Map Dark Matter, Prove Einstein Right, also explains this effect in some detail.
Finally, we have the current cosmological concordance model. Measurements of distant supernovae, the CMBR anisotropies and large scale structure all point to a model which has a relatively large component of dark matter. Further, the latter two measurements are also able to differentiate between the amount of matter in normal baryonic form and that in non-baryonic matter. In the best fitting model they require about 5 parts of the latter for every part of the former.
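As promised above, here is a minimal numerical sketch of the first item in this list: turning a measured circular velocity into an enclosed mass via Newtonian dynamics. The flat 220 km/s rotation curve is an illustrative value typical of a Milky Way-like spiral, not data from any of the papers cited above.

# Minimal sketch: infer the mass interior to radius r from a circular-orbit
# speed v, using Newtonian dynamics: v^2 = G M(<r) / r  =>  M(<r) = v^2 r / G.
# The 220 km/s flat rotation curve below is illustrative, not measured data.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # one kiloparsec, m

def enclosed_mass(v_kms, r_kpc):
    """Mass (in solar masses) interior to r_kpc for circular speed v_kms."""
    v = v_kms * 1e3    # km/s -> m/s
    r = r_kpc * KPC    # kpc  -> m
    return v * v * r / G / M_SUN

# A flat rotation curve (v constant out to large radii) implies M(<r)
# growing linearly with r -- far more mass than the starlight accounts for.
for r in (5, 10, 20, 40):
    print(f"r = {r:>2} kpc  ->  M(<r) ~ {enclosed_mass(220, r):.2e} M_sun")

Running this gives roughly 10^11 solar masses within 10 kpc, and the linear growth of M(<r) with radius is the classic signature of a dark halo.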

A further review of these observations is provided in this page on Dark Matter.

So, given that we need a new sort of matter, one that does not interact with light in the way that normal matter does, a few questions are apparent: Is there a reasonable model that can provide possibilities for what this dark matter really is? And if so, why is it that we have not been able to observe it directly in laboratories here on Earth?

Before getting into those questions, it is important to recall that not all dark matter is non-baryonic. For these baryons, "dark" is a somewhat vague term. Occasionally, it is taken to mean that they do not give off light in the visible part of the electromagnetic spectrum; for example, warm interstellar and intergalactic gas, brown dwarfs, black holes, and neutron stars. Of these, only the first is currently beyond our abilities to observe directly; brown dwarfs give off light in the infrared, while black holes and neutron stars (or rather their environments) are strong sources of radio waves and X-rays. Taking into account the full electromagnetic spectrum available to astronomers, about half of the baryons in the universe can be called "dark matter" at the current time.

So, having addressed that, we return to the non-baryonic sector of dark matter. The current best bets for dark matter candidates come from particle physics, where current theories of supersymmetry supply a whole host of possibilities. In the Minimal Supersymmetric Standard Model, each particle in the Standard Model has a super-partner particle of much greater mass. These particles would only exist in abundance at the very earliest stages of the universe, but the lightest of these particles would be stable against decay into lighter particles (since none exist) and, thus, remain in existence today. In scenarios like this, the lightest particle is typically the neutralino. An even more exotic, but widely discussed, possibility is the so-called "axion". Collectively, these particles are generally called WIMPs, short for "weakly interacting massive particles".

For many years, neutrinos were considered to be a viable dark matter candidate (having the advantage that we definitely knew they existed). However, as more evidence accumulated from large scale structure and the CMBR, the possibility that neutrinos could explain the observations faded. In order to match the observations, the dark matter had to be cold, i.e. moving slowly relative to the speed of light. With their very small mass, neutrinos are very easy to accelerate to near light speed. Since they have so much kinetic energy, neutrinos do not easily collapse into relatively small gravitational potentials. If they were the dominant form of dark matter, they would smooth out the distribution of matter on small scales, in clear conflict with the strong small scale clustering we observe. Indeed, when we include information from WMAP, cosmologists find that neutrinos can comprise no more than 1.5% of the total energy density in the universe.
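The quoted bound follows from a standard relation between the summed neutrino masses and their share of the energy density (a textbook formula, added here to make the arithmetic explicit):

$$ \Omega_\nu h^2 = \frac{\sum m_\nu}{94\ \mathrm{eV}} $$

With h ≈ 0.7 and the WMAP-era limit of roughly 0.7 eV on the mass sum, this gives Ω_ν ≲ 0.015, i.e. the roughly 1.5% quoted above.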

As the evidence for dark matter mounted and particle physics provided a number of plausible candidates, several experiments have begun over the last several years to detect dark matter directly. So far these experiments have not been able to make a definite detection, but a great deal of theoretical parameter space remains uncharted. For a review of the current constraints, these two articles are worth reading.

Another exciting possibility on the horizon is the Large Hadron Collider. This experiment at CERN is expected to reach high enough energies to look for supersymmetric particles, the discovery of which would be an important indication that our current ideas about dark matter particles are on the right track. Of course, it is also possible that the LHC will find something entirely new and unexpected.
n) Dark Energy

In the usual "epicycle" complaint, dark energy is quickly added to the list along with dark matter. As with dark matter, calling dark energy an epicycle inserted to save BBT ignores a number of the facts of the case. Unlike dark matter, the only evidence for dark energy comes from purely cosmological measurements, but the existence of some sort of dark energy was part of GR and BBT from the very earliest days of the theory, hardly what one would expect for a parameter invented ad hoc to save a theory. Further, the evidence for dark energy comes from a wide variety of cosmological observations, each with their own independent errors and systematic biases. Additionally, there are theoretical arguments that this type of energy should exist.

First, we look at the observational evidence.

By the mid-1990s, a number of cosmological observations had reached sufficient precision that it was difficult to reconcile them with a universe dominated by dark matter. Roughly a decade and a half prior, Alan Guth and others had suggested an addition to the then-current picture of BBT: inflation. The motivation for inflation was to explain the horizon and flatness problems (basically, why is the universe so uniform and so close to flat if we know that these are unstable solutions to the equations governing BBT; this is covered in more detail in 3e). Since that time inflation had become a standard part of BBT (and remains so today). One of the most generic inflation predictions was that the overall density of the universe should be very, very close to the critical value. However, mid-1990s measurements of the matter density from galaxy clusters and other sources consistently preferred matter densities much lower than the critical value. At the same time, measurements of the ages of the oldest stars were yielding ages that were inconsistent with the age of the universe based on a matter-only model. An open model, where the density was lower than the critical value, would alleviate these observational problems to a certain extent but would be difficult to square with inflation, which had been given a strong boost by the COBE CMBR measurements a few years prior. As it would turn out, dark energy solved all these disparate problems. The story is told in more detail in this article: Dark Energy: Just What Theorists Ordered.

While dark energy was a frequently mentioned possible solution to this set of problems by the late 1990s, few cosmologists were willing to make that leap without stronger evidence. For many cosmologists, that evidence came in the form of the 1998 supernovae results. Two teams, working independently and with largely disjoint sets of data, found that observations of distant supernovae were consistently dimmer than one would expect for a matter-only universe (see Riess 1998 and Perlmutter 1999). Indeed, they found that the expansion of the universe had been accelerating for the last several billion years, beyond the effect expected even for an open universe. The best fit to the data included a substantial dark energy component, enough to keep the universe's geometry flat while also matching the low matter density galaxy cluster measurements and resolving the age crisis. For more details see this page: Is there a nonzero cosmological constant?

For those still reluctant to include dark energy in their models, the situation became more difficult with the release of the first year WMAP results. These observations revealed that the total density of the universe was very close to the critical value, putting the last nail into the open universe coffin. Having a detailed CMB map also allowed for a much cleaner measurement of the integrated Sachs-Wolfe effect, one of the key indicators of dark energy.

A good summary of the various lines of evidence supporting the existence of dark energy is also given by this webpage: Dark Energy.

While the current data is sufficient to indicate the need for something like dark energy, the details of dark energy are still largely unconstrained. We do not know what the equation of state for dark energy is, whether it remains constant or changes over time, whether the dark energy density remains constant across all space or if it clusters, etc. As with dark matter, however, a number of potential models from theoretical physics have been proposed, although the physics of dark energy is generally more speculative than that of dark matter. They all match the current data, but, in general, make very different predictions for future observations. We will review a few of them briefly.

The most basic form of dark energy is a cosmological constant: a smooth, constant energy density everywhere in the universe with an equation of state equal to -1. This sort of scalar field matches the basic picture of the vacuum from quantum field theory: even in the absence of particles, so-called "zero-point fluctuations" will fill all of space uniformly. Without a proper theory of quantum gravity, a precise calculation of the magnitude of this vacuum energy density is impossible (we would need to know the proper quantization of space and time to do so). In the absence of such a theory, the most obvious calculation (based on the Planck mass) gives a vacuum energy density roughly 120 orders of magnitude greater than the energy density we infer from cosmological observations. This disconnect has been called "the worst prediction ever made in theoretical physics", with no small amount of justification.
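To see where that famous number comes from, here is a rough numerical sketch comparing the naive Planck-scale vacuum energy density with the observed dark energy density. The observed value is approximate, and the exercise is schematic; depending on conventions it lands anywhere around 10^120 to 10^123.

import math

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2

# Naive Planck-scale vacuum energy density:
# E_Planck / l_Planck^3 = c^7 / (hbar * G^2)
rho_planck = c**7 / (hbar * G**2)   # J/m^3

# Observed dark energy density: ~70% of the critical density for H0 ~ 70 km/s/Mpc
rho_lambda = 6e-10                  # J/m^3, approximate

print(f"Planck-scale estimate: {rho_planck:.1e} J/m^3")
print(f"Observed value:        {rho_lambda:.1e} J/m^3")
print(f"Mismatch: about 10^{math.log10(rho_planck / rho_lambda):.0f}")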

To reconcile this discrepancy, one might imagine that a full accounting of the contributions from all of the different parts of the theory would largely cancel each other, leaving the small remnant vacuum energy density we observe today. A further discussion of this idea (and related ones) can be found here: What's the Energy Density of the Vacuum?

By relaxing the requirement that the density of dark energy remain constant over time, we arrive at the class of dark energy models called quintessence. The idea here is that, instead of relying on a slight asymmetry in particle physics to get our dark energy, we propose the existence of a (so far entirely hypothetical) type of field; recall that, in quantum field theory, "particles" and "fields" are largely the same thing. As for the vacuum energy, the equation of state for this field is negative. However, since it is associated with a field rather than an innate part of spacetime, the energy density and the equation of state can change over time. Depending on the details of the model, this flexibility can help to explain the "cosmic coincidence problem": the fact that the energy densities of dark energy and matter are nearly equal today puts us at a relatively rare point in the history of our universe, akin to just happening to be in the exact place where two transcontinental trains pass each other. The current data is sufficient to constrain very strong evolution in the equation of state, but smaller changes associated with some varieties of quintessence are still viable models.
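One common way to parametrize such a time-varying equation of state (a standard convention from the later literature, not something given in the text above) is the Chevallier-Polarski-Linder form:

$$ w(a) = w_0 + w_a (1 - a), \qquad a = \frac{1}{1+z} $$

A cosmological constant corresponds to w_0 = -1 and w_a = 0; constraining w_0 and w_a with supernova, CMBR and large scale structure data is how the "smaller changes" mentioned above are tested.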

In summary, while dark matter has a number of promising models and direct detection is a very real possibility in the near future, dark energy remains a mystery. Several models exist which explain the current data, but none of them are nearly as mature as the dark matter models. Future observations will be able to put greater constraints on both the current equation of state and its change over time, but testing these models in detail is extremely challenging. As with any area of current theoretical research, we will simply have to wait until more data is available and the theory has advanced before making more detailed statements.
z) Consistency

In the discussion above, we made frequent reference to the fact that many different sorts of cosmological observations are combined to produce the concordance Lambda CDM model that most cosmologists use today. This should not be interpreted as a set of observations all contingent on each other for mutual support, wherein the removal of one observation causes the entire structure to collapse. Rather, it is a case of finding the intersections between independent lines of evidence to locate the best overall solution. Even if future data shows that our interpretation of one line is incorrect, the others remain largely unaffected.

As an example, consider the WMAP team's Determination of Cosmological Parameters paper. The age of the universe obtained from the WMAP measurements is consistent with the observed stellar ages. The ratio of baryons to photons is consistent with the ratio of deuterium to helium predicted from primordial nucleosynthesis. The Hubble constant is consistent with measurements from distant supernovae, the Tully-Fisher relation and the surface brightnesses of galaxies. Likewise, the cosmological model from the WMAP measurements is consistent with measurements of large scale structure from surveys like the Sloan Digital Sky Survey (SDSS) and the Two-Degree Field Survey (2dF). If these individual results were not compatible with each other, then we would not see an improvement in the parameter constraints when we combine the data sets. The fact that we do see an improvement is evidence that the theory does, indeed, hold together."

Also look at the bottom of the last page, because I posted things there before you ninja'd me earlier.


All I see you doing is making up something since you don't have an answer and calling what you made up the answer. Just saying "reasoning" isn't evidence.

I'm also thinking you're being intentionally dodgy here. Not at all good if you wish to convince anybody of the validity of your claims.

(btw unbacked isn't a word)


http://www.merriam-webster.com/dictionary/unbacked

Let's go with a simple one first: When people pray, miracles happen. When they don't, they don't explain that.


Nothing supports the happening of miracles. Any attempts to test such a claim have shown no significant difference between what is prayed for and what is not.

Exactly. Paradoxical Simplicity.


This is what I'm arguing with isn't it?
http://apps4review.com/wp-content/uploads/2012/04/wordgame.jpg
Kasic
offline
Kasic
5,557 posts
Jester

Here
"How would you practically demonstrate that claim?"


I'm 99% sure I never said that, so you must be confusing me with someone else. I can't find the post you're referring to anyways.

So your theory doesn't have solid proof either.


It does for the theory. The theory doesn't include how it got there, though. Kind of like when you have a recipe for a cake: the recipe doesn't say where the sugar, eggs, flour, etc. came from.

That's like saying I just copied and pasted it or something.


That's a better way of putting it than "jumbled." You took God, put it in front of the theory, and called it good. You have no evidence to assert that point though.

@Mage, if you found a character limit to the posts, that means you copied and pasted too much >.
Kasic
offline
Kasic
5,557 posts
Jester

You said earlier that there are "Theories" about what formed into the Big Bang with no "Concrete" Proof. So yeah.


I said theories, lower case. Lower case theory means just someone proposing something. Theory, capitalized, means a scientific theory.

It's a bit of a nuisance but it makes a lot of difference in meaning.

Well if you have no idea where it came from your theory is invalid.


No, it's not. Going back to my example of the cake recipe, the recipe accurately covers how the cake is made. How the ingredients were produced, what brand they were or where they were grown/processed is irrelevant to the cake recipe.
MageGrayWolf
offline
MageGrayWolf
9,470 posts
Farmer

Okay, fair enough, I can't back up my claims, but I can still believe them.


Sure you can, but why?

That sentence right there means that that entire site IS GARBAGE.


No, it doesn't; that is how theories work. You fail at science.
Proof is a mathematical term. "Proof" in science is evidence. There is a reason for not saying a theory is proven. A scientific theory's main attribute is that it's falsifiable, meaning it must be capable of being tested against evidence and modified if that evidence contradicts it. This also means it makes predictions about the natural world that are testable by experimentation. If we were to say it's proven, we would lose this ability.

This isn't much of an argument. Besides, Paradoxical Simplicity is what I described and what you described.


It was either the word game or facepalm.

Explain to me the red line under it.


Spell check doesn't have every word in existence.

Well if you have no idea where it came from your theory is invalid.


No, that doesn't invalidate the theory at all. It would be like saying the theory that the Earth is an oblate spheroid is invalid if we couldn't explain how the Earth formed.

Look, how about you spend some time educating yourself on how this works, and then we can get back to this?
pangtongshu
offline
pangtongshu
9,815 posts
Jester

1. God, he has no name, The Father of Jesus Christ. (That God)


Names of Christian god

Jehovah is the most common one, appearing almost 7,000 times in the Old Testament
wontgetmycatnip
offline
wontgetmycatnip
95 posts
Peasant

2. Christianity's version.

Which Christianity?
3. How the he** should I know? I wasn't there. All I know is he created it.

With that sort of logic you can justify anything. I could return with "How should I know? I wasn't there. All I know is he didn't create it." and it would be just as valid and logical.
4. If you want to ask this let me first ask you What was the Big Bang created from if there was nothing?

A) You didn't answer the question.
B) We don't know what there was before the big bang. It is simply the first point in spacetime we can measure.
7. Explain what you mean by this.

Describe, in practical terms, how your god hypothetically created the universe, or would create one. Provide a model that demonstrates how this would physically happen, under what conditions it happened, and under what circumstances it could happen again.
"None of these prove the Big Bang"That sentence right there means that that entire site IS GARBAGE.

You don't understand science. Science doesn't operate on "proof"; "proof" is a mathematical term. In science, you demonstrate the validity of your argument with evidence, including and most often research. No scientific theory has been "proven", not even well established ones such as the theory of evolution and the theory of gravity.
You said earlier that there are "Theories" about what formed into the Big Bang with no "Concrete" Proof. So yeah. (next paragraph) Well if you have no idea where it came from your theory is invalid.

Earlier on you said "How the he** should I know? I wasn't there. All I know is he created it." and "How should I know how he created it? All I know is he created it". I'm glad that you can realize that your own argument is invalid.