In all of science, it’s very easy to reach a conclusion based on what you’ve seen so far. But an enormous danger lies in extrapolating what you know — in the region where it’s been well-tested — to a place that lies beyond the established validity of your theory. Newtonian physics works just fine, for example, until you go down to very small distances (where quantum mechanics comes into play), get close to a very large mass (where General Relativity becomes important), or start moving close to the speed of light (where Special Relativity matters). When it comes to describing our universe within our modern cosmological framework, we must take equal care to ensure we’re getting it right.

The universe, as we know it today, is expanding, cooling, and getting clumpier and less dense as it ages. On the very largest cosmic scales, things appear to be uniform; if you were to place a box a few billion light-years on a side anywhere within the visible universe, you’d find the same average density, everywhere, to ~99.997% precision. And yet, when it comes to understanding the universe, including how it evolves over time, both far into the future and way back into the distant past, there’s only one equation needed to describe it: the first Friedmann equation. Here’s why that equation is so incomparably powerful, along with the assumptions that go into applying it to the entire cosmos we inhabit.

Going way back to the beginning of the story, Einstein’s General Relativity was put forth in 1915, and it quickly supplanted Newton’s law of universal gravitation as our leading theory of gravity. Whereas Newton hypothesized that all masses in the universe attracted one another instantaneously, according to an infinite-ranged “action-at-a-distance,” Einstein’s theory was very different, even in concept.

Space, instead of being an unchanging backdrop for masses to exist and move in, became inextricably tied to time, as the two were woven together in a fabric: spacetime. Nothing could move through spacetime faster than the speed of light, and the more rapidly you moved through space, the slower you moved through time (and vice versa). Whenever and wherever not just mass but any form of energy was present, the fabric of spacetime curved, with the amount of curvature directly related to the stress-energy content of the universe at that location.

In short, spacetime’s curvature told matter and energy how to move through it, while the presence and distribution of matter and energy told spacetime how to curve.

Within General Relativity, Einstein’s laws provide a very powerful framework for us to work within, but it’s also an incredibly difficult one: only the simplest of spacetimes can be solved exactly, rather than numerically. The first exact solution came in 1916, when Karl Schwarzschild discovered the solution for a non-rotating point mass, which we identify today with a black hole. Put down a second mass in your universe, however, and your equations can no longer be solved exactly.

However, plenty of exact solutions are known to exist, and one of the earliest was provided by Alexander Friedmann, way back in 1922. If, he reasoned, the universe were filled uniformly with some sort(s) of energy — matter, radiation, a cosmological constant, or any other form of energy you can imagine — and that energy were distributed evenly in all directions and in all locations, then his equations provided an exact solution for spacetime’s evolution.

Remarkably, what he found was that this solution was inherently unstable over time. If your universe began from a stationary state and was filled with this energy, it would inevitably contract until it collapsed into a singularity. The alternative is that the universe expands, with the gravitational effects of all the different forms of energy working to oppose the expansion. All of a sudden, the enterprise of cosmology was put on a firm scientific footing.

It cannot be overstated how important the Friedmann equations — and in particular, the first Friedmann equation — are for modern cosmology. In all of physics, it’s arguable that the most important discovery wasn’t physical at all, but was rather a mathematical idea: that of a differential equation.
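
For reference, here is the equation itself: the first Friedmann equation relates the expansion rate (the Hubble parameter, H, the rate of change of the scale factor a) to the universe’s total energy density ρ, its spatial curvature k, and the cosmological constant Λ:

```latex
H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3}
```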

A differential equation, in physics, is an equation where you begin at some initial state, with properties that you choose to best represent the system you have. Have particles? No problem; just give us their positions, momenta, masses, and other properties of interest. The differential equation’s power is this: based on the conditions your system began with, it tells you how the system will evolve to the very next instant. Then you can take the new positions, momenta, and all the other properties you can derive from them, put them back into the very same differential equation, and it will tell you how the system evolves to the instant after that.

From Newton’s laws to the time-dependent Schrödinger equation, differential equations tell us how to evolve any physical system either forward or backward in time.
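
To make that stepping procedure concrete, here’s a minimal sketch in Python, evolving Newton’s second law for a mass on a spring. The force law, masses, and step size here are illustrative choices, not anything from the text:

```python
# Minimal sketch: stepping a differential equation forward in time.
# The system is a mass on a spring obeying Newton's law, F = -k*x;
# all values below are illustrative.

def step(x, v, dt, k=1.0, m=1.0):
    """Advance position x and velocity v by one small time step dt,
    using only the current state to compute the force (Euler's method)."""
    a = -k * x / m               # acceleration from the current position
    return x + v * dt, v + a * dt

# Begin at some initial state, then feed each new state back in:
x, v = 1.0, 0.0
for _ in range(10_000):          # 10,000 steps of dt = 0.001 -> t = 10
    x, v = step(x, v, dt=0.001)
```

Run forward or backward (with a negative dt), the very same rule carries the system from each moment to the next, which is exactly the game cosmologists play with the Friedmann equation.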

But there’s a limitation here: you can only keep this game up for so long. Once your equation no longer describes your system, you’re extrapolating beyond the range over which your approximations are valid. For the first Friedmann equation to apply, for one thing, you need the contents of your universe to remain constant in form: matter remains matter, radiation remains radiation, a cosmological constant remains a cosmological constant, and no transformations are allowed from one species of energy to another.

For another, you need your universe to remain isotropic and homogeneous. If the universe gains a preferred direction or becomes too non-uniform, these equations no longer apply. It’s enough to make one worry that our understanding of how the universe evolves might be faulty in some way, and that we might be making an unwarranted assumption: that perhaps this one equation, the one that tells us how the universe expands over time, might not be as valid as we commonly assume.

Questioning those assumptions is exactly what we should be doing, because in science we always, always have to challenge them. Is there a preferred frame-of-reference? Do galaxies rotate clockwise more frequently than they rotate counterclockwise? Is there evidence that quasars only exist at multiples of a specific redshift? Does the cosmic microwave background radiation deviate from a blackbody spectrum? Are there structures that are too large to explain in a universe that is, on average, uniform?

These are the types of assumptions that we check and test all the time. While there have been many splashy claims made on these and other fronts, the fact of the matter is that none of them have held up. The only frame-of-reference that’s notable is the one where the Big Bang’s leftover glow appears uniform in temperature. Galaxies are just as likely to be “left-handed” as “right-handed.” Quasar redshifts are definitively not quantized. The radiation from the cosmic microwave background is the most perfect blackbody we’ve ever measured. And the large quasar groups we’ve discovered are likely to only be pseudo-structures, and not gravitationally bound together in any meaningful sense.

On the other hand, if all of our assumptions remain valid, then it becomes a very easy exercise to run these equations either forwards or backwards in time as far as we like. All you need to know is:

- how fast the universe is expanding today,
- what the different types and densities of matter and energy are that are present today,

and that’s it. Just from that information, you can extrapolate forwards or backwards as far as you like, and so you can know what the observable universe’s size, expansion rate, density, and all sorts of other factors were and will be at any moment in time.

Today, for example, our universe consists of about 68% dark energy, 27% dark matter, about 4.9% normal matter, about 0.1% neutrinos, about 0.01% radiation, and negligible amounts of everything else. When we extrapolate that both backwards and forwards in time, we can learn how the universe expanded in the past and will expand in the future.
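
As a sketch of that extrapolation, the first Friedmann equation reduces to a one-line formula for the expansion rate at any scale factor a, because each component dilutes in a known way: matter as a⁻³, radiation as a⁻⁴, and dark energy (as a cosmological constant) not at all. The density fractions below follow the figures above; the present-day expansion rate of 70 km/s/Mpc and the flat-universe assumption are illustrative simplifications (neutrinos and curvature are ignored):

```python
# Hedged sketch: the expansion rate H(a) from the first Friedmann
# equation, H(a) = H0 * sqrt(Ω_r/a^4 + Ω_m/a^3 + Ω_Λ), assuming a
# flat universe and ignoring neutrinos. H0 = 70 km/s/Mpc is an
# illustrative value, not a claim about the measured rate.
import math

H0 = 70.0                 # present expansion rate, km/s/Mpc (illustrative)
omega_m = 0.27 + 0.049    # dark matter + normal matter: dilutes as a^-3
omega_r = 0.0001          # radiation: dilutes as a^-4
omega_l = 0.68            # dark energy (cosmological constant): constant

def hubble(a):
    """Expansion rate at scale factor a, where a = 1 is today."""
    return H0 * math.sqrt(omega_r / a**4 + omega_m / a**3 + omega_l)

# Run it backwards (a < 1) or forwards (a > 1) as far as you like:
past = hubble(0.5)     # when the universe was half its present size
future = hubble(2.0)   # when it will be twice its present size
```

The expansion rate was larger in the past (matter and radiation were denser) and asymptotes toward a constant, dark-energy-dominated value in the future, which is exactly the behavior described in the text.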

But are the conclusions that we’d draw robust? Or are we making simplifying assumptions that are unjustified? Throughout the history of the universe, here are some things that might throw a wrench into our assumptions.

- Stars exist, and when they burn through their fuel, they convert some of their rest-mass energy (normal matter) into radiation, changing the composition of the universe.
- Gravitation occurs, and the formation of structure creates an inhomogeneous universe with large differences in density from one region to another, particularly where black holes are present.
- Neutrinos first behave as radiation when the universe is hot and young, but then behave as matter once the universe has expanded and cooled.
- And, very early on in the history of the universe, the cosmos was filled with the equivalent of a cosmological constant, which must have decayed away (signifying the end of inflation) into the matter and energy that populates the universe today.

Perhaps surprisingly, it’s only the fourth of these that plays any substantial role in altering the history of our universe.

The reason for that is simple: we can quantify the effects of the others, and see that they only affect the expansion rate at the ~0.001% level or below. The tiny amount of matter that gets converted into radiation does change the expansion rate, but in a gradual, low-magnitude way; only a small fraction of the mass in stars, which is itself only a small fraction of the normal matter, ever gets converted into radiation. The effects of gravitation have been well-studied and quantified (including by me!), and while structure formation can slightly affect the expansion rate on local cosmic scales, the global contribution doesn’t impact the overall expansion.

Similarly, neutrinos can be accounted for precisely to the limit of how well-known their rest masses are, so there’s no confusion there. The only issue is that, if we go back early enough, there’s an abrupt transition in the energy density of the universe, and those abrupt changes — as opposed to smooth and continuous ones — are the ones that can truly invalidate our use of the first Friedmann equation. If there’s some component to the universe that rapidly decays away or transitions into something else, that’s the one thing we know of that could challenge our assumptions. If there’s anyplace where invoking the Friedmann equation falls apart, that will be it.

It’s extremely hard to draw conclusions about how the universe will work in regimes that lie beyond our observations, measurements, and experiments. All we can do is appeal to how well-known and well-tested the underlying theory is, make the measurements and take the observations that we’re capable of, and draw the best conclusions that we can based upon what we know. But we always have to keep in mind that the universe has surprised us at many different junctures in the past, and will likely do so again. When it does, we have to be ready, and part of that readiness comes from being prepared to challenge even our most deeply held assumptions about how the universe works.

The Friedmann equations — and in particular the first Friedmann equation, which relates the universe’s expansion rate to the sum total of all the different forms of matter and energy within it — have been known for 99 years, and applied to the universe for almost as long. They’re how we know how the universe has expanded over its history, and they enable us to predict what our ultimate fate will be, even in the ultra-distant future. But can we be certain our conclusions are correct? Only to a particular level of confidence. Beyond the limitations of our data, we must always remain skeptical of drawing even the most compelling conclusions. Beyond the known, our best predictions remain mere speculations.