The climate is always changing
The cause of these cyclic swings in temperature and the associated growth and retreat of great continental ice sheets was proposed by several scientists, notably the Serbian mathematician Milutin Milanković in 1912. He recognized that the shape of Earth’s orbit around the sun varies cyclically over time—back and forth from more circular to more oval—with a period of about 100,000 years. Milanković also knew that Earth’s tilt with respect to the plane in which it orbits the sun wobbles over a cycle of 41,000 years, and that Earth’s rotation axis precesses like a wobbling, spinning top, with periods of 19,000 and 23,000 years. These three factors—our planet’s tilt, wobble, and orbit—affect the way sunlight is distributed around the world, even though they hardly affect the total amount of sunlight received by the planet as a whole.
He speculated—correctly, it turns out—that ice ages are controlled by how much sunlight is received by the Arctic region during summer, and set about calculating this value from the basic laws of physics that control the earth’s orbit and rotation. After years of hand calculation, Milanković produced a curve showing how ice ages should behave. At that time, data such as those used to produce Figure 7 did not exist, and so there was only rough agreement with what little information there was. But today we know that the great ice ages were caused by the cycles computed by Milanković, though there are gaps in our understanding of the details of how Earth’s climate responded to these.
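Milanković’s insight can be illustrated with a toy calculation (not his actual one): superpose sinusoids with the three orbital periods quoted above and watch them beat in and out of phase. The equal weights are an arbitrary assumption, chosen only to show the shape of the combined cycle.

```python
import math

# Toy illustration: combine the orbital cycles as unit-amplitude sinusoids.
# Periods are from the text; the relative weights are arbitrary.
PERIODS_YR = {"eccentricity": 100_000, "obliquity": 41_000,
              "precession_1": 23_000, "precession_2": 19_000}

def toy_insolation_anomaly(t_years, weights=None):
    """Sum of sinusoids with the Milanković periods (arbitrary units)."""
    weights = weights or {name: 1.0 for name in PERIODS_YR}
    return sum(weights[name] * math.sin(2 * math.pi * t_years / period)
               for name, period in PERIODS_YR.items())

# Sample the toy curve every 1,000 years over 800,000 years; the cycles
# superpose into an irregular, never-exactly-repeating curve.
curve = [toy_insolation_anomaly(t) for t in range(0, 800_001, 1_000)]
print(min(curve), max(curve))
```

Because the periods are incommensurate, the sum never settles into a simple repeat, which is one reason the ice-age record looks irregular even though its drivers are periodic.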
This expected cooling is illustrated in Figure 9, which zooms in on the last 2,000 years of temperatures in the Arctic. The slow, steady cooling trend from the beginning of the record to around 1700–1800 CE probably reflects the slow decline in sunlight reaching the Arctic due to the Milanković orbital cycles. Unimpeded, this mechanism would lead the earth toward another ice age, with continental ice sheets beginning to grow thousands of years from now. But note the strong uptick in temperature toward the end of the record, particularly after about 1900. This is quite unusual by the standards of the last few thousand years and reflects the increase in carbon dioxide and other greenhouse gases brought about by humanity’s rapid consumption of fossil fuels. We are certain that this increase in CO2 concentrations was caused by human activities because the isotopes of carbon in ice show that it comes from fossil fuel burning and the clearing of forests. Over the course of a few hundred years, humans rapidly burned fossil fuels that nature created over tens of millions of years.
How much of the CO2 increase is natural?
Almost none of it. Figure 10 shows the history of atmospheric CO2 and Antarctic temperature going back 800,000 years, thus covering many Milanković cycles. Clearly, the atmospheric concentration of CO2 does vary naturally, in tandem with temperature, ranging from about 180 to about 280 parts per million by volume (ppmv). But the Milanković cycles cannot account for the enormous spike at the end of the record, a spike to over 400 ppmv that humans put there. There is no evidence that CO2 concentrations have been that high for many millions of years. If we do nothing, and there is no global economic meltdown, we may reach well over 1,000 ppmv by the end of this century.
A very close and careful analysis of the records of temperature and CO2 in ice cores shows that during Milanković cycles, CO2 mostly lags temperature, suggesting that the CO2 variations were caused by the warming and cooling, not the other way around. In this case, the CO2 was acting as a positive feedback, amplifying the Milanković oscillations. But in the last 100 years, the huge increase in CO2 drove the temperature change.
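The lead/lag analysis described above can be sketched on synthetic data: build a smooth "temperature" series, a "CO2" series that trails it by a known delay, and recover the delay by scanning the cross-correlation. Every number here (series length, delay, noise level) is invented for illustration.

```python
import math
import random

random.seed(0)

# Synthetic data: a smooth "temperature" series and a "CO2" series that
# follows it with a known delay of 8 samples, both lightly noised.
n, true_lag = 500, 8
temp = [math.sin(0.05 * i) + 0.1 * random.gauss(0, 1) for i in range(n)]
co2 = [temp[max(i - true_lag, 0)] + 0.1 * random.gauss(0, 1) for i in range(n)]

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Correlate temperature at time t against CO2 at time t + lag; the lag that
# maximizes the correlation estimates how long CO2 trails temperature.
best_lag = max(range(-20, 21),
               key=lambda lag: corr(temp[20:n - 20], co2[20 + lag:n - 20 + lag]))
print(best_lag)  # positive: CO2 lags temperature
```

A positive best-fit lag is what the ice-core analyses find during the Milanković cycles, and it is the basis for reading CO2 there as a response (and amplifier) rather than the initial push.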
The argument that one has to choose whether CO2 is a forcing or a response is specious. The same agent can be a forcing in one circumstance and a response in another. Suppose you have a manual transmission car in first gear, pointed downhill, and you release the brake. The downhill motion of your car will spin up its engine. In fact, this is a good way to start your car if its battery is dead and you happen to be pointed downhill. But ordinarily, the engine powers the motion of the car.
06a What are climate models?
To deal with the immense complexity of the climate system, scientists turn to comprehensive global climate models. The word “model” means many different things to different people and in different contexts. Models used for predicting weather, for example, are computational devices for solving large sets of equations. Using a computer to solve these equations is very similar to using a computer to, say, precisely land a spacecraft on Mars. This type of modeling is quite different from something like economic modeling. Economic models also solve equations, but unlike weather models, the equations are constructs based mostly on past economic data and records of human behavior.
But this comparison of climate models with the models used to land spacecraft is a little misleading. Although the equations governing climate are known rather precisely, there is no way they can be solved exactly using present-day computers. We cannot even begin to track each molecule of the climate system but must average over big blocks of space and time. For example, today’s climate models typically average over blocks of the atmosphere that are 100 kilometers square and perhaps 1 kilometer thick, and over time intervals of several tens of minutes. This averaging introduces errors and skips over important climate processes.
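A back-of-envelope count shows why this averaging is unavoidable. The 100-kilometer and 1-kilometer figures come from the text; the model top of roughly 50 kilometers and the molecule count are rough assumptions.

```python
# Back-of-envelope count of grid boxes at the resolution quoted above.
EARTH_SURFACE_M2 = 5.1e14          # Earth's surface area, ~5.1 x 10^14 m^2
cell_area_m2 = 100_000 ** 2        # one 100 km x 100 km block
n_columns = EARTH_SURFACE_M2 / cell_area_m2
n_levels = 50                      # ~1 km layers up to an assumed ~50 km model top
n_cells = n_columns * n_levels

n_air_molecules = 1e44             # order of magnitude for the whole atmosphere
print(f"{n_columns:.0f} columns, {n_cells:.2e} grid boxes")
print(f"each box averages over roughly {n_air_molecules / n_cells:.1e} molecules")
```

A few million grid boxes is tractable on a computer; tracking the molecules themselves, tens of undecillions per box, never will be, so everything happening inside a box must be averaged or parameterized.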
Cumulus convection—thunderstorms, for example—is the main way, other than radiation, that heat is transmitted vertically through the atmosphere. But cumulus clouds are only a few kilometers wide and so cannot possibly be simulated by models that average over 100 kilometer squares. Nevertheless, they must be accounted for, and so we turn to a technique awkwardly called “parameterization” to do so. Parameterizations represent processes that cannot be resolved by the model itself, and they attempt to be faithful to the equations underlying those processes. But many assumptions have to be introduced, and their efficacy is usually judged by how well they simulate past events. In many ways, parameterizations are closer in spirit to economic modeling than to programming spacecraft.
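As a sketch of what a parameterization looks like, here is a toy convective-adjustment rule: whenever a layer of a model column is unstably colder than the layer below it, heat is mixed upward until a critical lapse rate is restored. The critical lapse rate and the relaxation scheme are illustrative assumptions, not any real model's.

```python
# Toy sub-grid convection scheme: a column cannot resolve individual
# thunderstorms, so their net effect (moving heat upward) is represented
# by a simple rule. The 6.5 K/km threshold is an assumed critical value.
CRITICAL_LAPSE_K_PER_KM = 6.5

def convective_adjustment(temps_k, dz_km=1.0):
    """If a layer is too much warmer than the one above (unstable),
    mix heat upward toward the critical lapse rate, conserving energy."""
    t = list(temps_k)
    for _ in range(100):                      # sweep until the column is stable
        changed = False
        for i in range(len(t) - 1):
            lapse = (t[i] - t[i + 1]) / dz_km
            if lapse > CRITICAL_LAPSE_K_PER_KM + 1e-9:
                excess = (lapse - CRITICAL_LAPSE_K_PER_KM) * dz_km
                t[i] -= excess / 2            # cool the lower layer
                t[i + 1] += excess / 2        # warm the upper layer
                changed = True
        if not changed:
            break
    return t

column = [300.0, 285.0, 278.0, 272.0]         # unstable near the surface (kelvin)
adjusted = convective_adjustment(column)
print(adjusted)
```

Note the tuning knob: the threshold and the half-and-half redistribution are choices judged by how well the scheme reproduces observed behavior, which is exactly the sense in which parameterization resembles economic modeling.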
Thus climate and weather models are hybrids of strictly deterministic modeling (like programming spacecraft), which has known inputs and no element of randomness, and somewhat ad hoc parameterizations (closer to economic modeling).
Weather models can be tested over and over again, every day, and thereby progressively refined. Today’s weather models are far superior to those of a generation ago, partly because of improved computational technology, partly because of increased know-how, and partly because they can be repeatedly tested against observations and refined. But climate evolves slowly, and so there are not many distinct climate states against which to test models. So, in contrast with weather forecasting, in climate modeling we have neither the history of success nor the confidence that comes with it. On the other hand, the fundamentally chaotic nature of weather imposes a predictability horizon on weather forecasting, whereas with climate we are trying to predict the slow response of the long-term average statistics of the weather to changes in sunlight, CO2, and other factors. For this kind of prediction, there may not be a fundamental predictability horizon. (We can say with confidence that summer will be warmer than winter for as many years in advance as we care to.) Instead, we have to deal with remaining uncertainties in the physics of climate.
To take one example, the water vapor content of the atmosphere varies, mostly in response to temperature itself. As the atmosphere warms, the concentration of water vapor increases. But water vapor is the most important greenhouse gas, and its increase leads to further warming. This is an example of a positive feedback in the system, and current understanding suggests that this factor alone more or less doubles the warming that occurs in response to increasing CO2. But the true physics of climate is not that simple, and the distribution of water vapor is affected by many other variables besides temperature. Our incomplete understanding of water vapor is thus one source of uncertainty in modeling climate.
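The "doubling" claimed above follows from standard feedback bookkeeping: if a feedback returns a fraction f of any warming as further warming, a direct warming dT0 grows to dT0 * (1 + f + f^2 + ...) = dT0 / (1 - f), so water vapor alone corresponds to roughly f = 0.5. The direct-warming value below is illustrative.

```python
# Feedback amplification: total warming = direct warming / (1 - f),
# the sum of the geometric series of successive feedback rounds.
def amplified_warming(direct_warming_c, feedback_fraction):
    """Total warming once a positive feedback of strength f has played out."""
    if not 0 <= feedback_fraction < 1:
        raise ValueError("a stable feedback needs 0 <= f < 1")
    return direct_warming_c / (1 - feedback_fraction)

direct = 1.2   # illustrative no-feedback warming from CO2 alone, deg C
print(amplified_warming(direct, 0.5))   # f = 0.5: the response doubles
```

The formula also shows why feedbacks dominate the uncertainty: the closer f creeps toward 1, the more sensitively the total warming depends on its exact value.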
Much more problematic are clouds, which, regarding radiation, work both sides of the street. They account for most of the reflection of sunlight by our planet, thereby cooling it. But they also absorb and reradiate infrared radiation just like greenhouse gases, thereby exerting a warming effect. Which effect wins depends on the altitude and optical properties of the clouds. At present, there is no generally accepted theory for how clouds respond to climate change. Clouds are now considered the main source of uncertainty in climate projections.
To this problem we can add many other issues that reflect the immense, almost overwhelming complexity of the climate system. As sea ice melts, a white surface is replaced by dark ocean waters, which absorb more sunlight (another positive feedback). In some places, jungles, which are relatively dark, may be replaced by deserts, which are highly reflective—a negative feedback. The rate at which the oceans absorb excess CO2 may itself change in response to changes in ocean temperature and concentration of dissolved CO2. Incomplete understanding of these processes also introduces uncertainty in climate projections.
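The sea-ice feedback can be sized with simple arithmetic: compare the sunlight absorbed by an ice surface with that absorbed by open water. The albedo values and the Arctic summer insolation below are typical textbook numbers, assumed here for illustration.

```python
# Rough size of the sea-ice albedo feedback: swapping a reflective ice
# surface for dark ocean changes the sunlight absorbed. Assumed values:
ALBEDO_SEA_ICE = 0.6             # fraction of sunlight reflected by ice
ALBEDO_OPEN_OCEAN = 0.06         # fraction reflected by dark water
SUMMER_INSOLATION_W_M2 = 300.0   # illustrative Arctic summer average

absorbed_ice = (1 - ALBEDO_SEA_ICE) * SUMMER_INSOLATION_W_M2
absorbed_ocean = (1 - ALBEDO_OPEN_OCEAN) * SUMMER_INSOLATION_W_M2
extra = absorbed_ocean - absorbed_ice
print(f"extra absorbed sunlight: {extra:.0f} W per square meter of melted ice")
```

Even with these rough inputs, the extra absorption per square meter of lost ice is tens of times the radiative forcing from doubling CO2, which is why the feedback matters despite acting over a limited area.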
Another source of uncertainty is the response of the deep ocean to climate change. The oceans act as a buffer to temperature change and delay the response of global temperature to increasing greenhouse gases. Here is a good way to think about the effect of the oceans. Suppose we have a sealed glass cylinder containing equal volumes of air and water. If it is just sitting at rest with no energy going in or out through the walls of the container, the air and water will settle down to the same temperature. Add enough black dye to the water to make it opaque and shine a powerful flashlight down through the glass top of the cylinder. The light passes through the air but is absorbed at the very top of the water, heating it. So the top of the water warms up, and since that is the part that is in contact with the air, the air warms up too. But the water below the surface is not heated by the light, which never makes it down below the surface, so it remains at the temperature it had before. But slowly—very slowly—the warmth of the surface water diffuses down into the deep water, warming the deep water while cooling the surface water and, with it, the air.
Thus after we turn on the flashlight there will be an initial fast warming of the air and surface water, followed by a very slow increase in the temperature of the whole system. Eventually, the water and air will reach a new, warmer temperature. How long it takes to do so will depend on how rapidly heat diffuses downward into the deep water.
Our models could account for the lag between heat input and temperature change in the real world if we had a simple theory for how heat penetrates the ocean depths. We know that heat is mixed rapidly downward to a depth of between 20 and 150 meters (60 and 500 feet), depending on location and time of year. If heat did not penetrate deeper, then the 20–150 meter penetration would give a lag of around two years. But we know from measurements that heat manages to circulate much deeper in the ocean, taking quite a long time to do so, perhaps as much as 1,000 years. Just how this happens is complex, and is a source of uncertainty for longer range climate projections.
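The flashlight thought experiment can be written down as a minimal two-box model: a thin surface layer that warms quickly, exchanging heat slowly with a large deep reservoir. All parameter values below are arbitrary illustrative choices, not tuned to the real ocean.

```python
# Two-box sketch of ocean heat uptake. Units are arbitrary; what matters
# is the ratio of the boxes' heat capacities and the slow exchange rate.
C_SURFACE = 1.0      # heat capacity of the thin surface layer
C_DEEP = 20.0        # the deep reservoir holds far more heat
K_EXCHANGE = 0.05    # slow surface-to-deep heat exchange coefficient
FORCING = 1.0        # the "flashlight": steady heating of the surface
DAMPING = 1.0        # surface sheds heat upward in proportion to its warming

def run(years, dt=0.01):
    """Integrate the two coupled boxes forward with explicit Euler steps."""
    t_surf = t_deep = 0.0
    for _ in range(int(years / dt)):
        exchange = K_EXCHANGE * (t_surf - t_deep)
        t_surf += dt * (FORCING - DAMPING * t_surf - exchange) / C_SURFACE
        t_deep += dt * exchange / C_DEEP
    return t_surf, t_deep

fast, deep_then = run(5)      # after the initial fast surface adjustment
slow, deep_later = run(500)   # centuries later the deep box has mostly caught up
print(fast, deep_then, slow, deep_later)
```

The two very different timescales fall out of the two heat capacities: the surface box equilibrates in a few time units, while the deep box takes hundreds, which is the qualitative behavior the real ocean imposes on global temperature.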
Finally, mathematical models of climate-like systems can exhibit sudden, unpredictable shifts. We don’t know for sure whether our climate is an example of such a system, but there is evidence encoded in ice cores from Greenland that ice age climates can jump rather quickly from one state to another. This evidence, together with our models, puts mathematical teeth on the idea of tipping points—sudden and largely unpredictable shifts in the climate state. This idea keeps many a climate scientist awake at night.
06c Dealing with uncertainty
Estimating future emissions is a problem of economic and behavioral forecasting, including, very importantly, predicting population growth. Will developed nations learn how to better conserve energy? Will the economies of countries like India expand rapidly, as China’s did, leading to rapid growth in energy demand? How far will low-carbon energy technologies penetrate the energy sector? There are strong interdependencies among these issues. For example, recent experience shows that as gross national product per capita expands together with per capita energy consumption, population growth tends to level off, ameliorating the growth in energy demand. All these factors strongly affect greenhouse gas emissions.
To deal with all this, the Intergovernmental Panel on Climate Change (IPCC) came up with a set of just four “representative concentration pathways” (RCPs), expressing plausible evolutions of greenhouse gases and other human influences on climate, such as aerosols. These are labeled with the associated radiative forcing (a direct measure of how far a given influence pushes Earth’s energy budget out of balance relative to a preindustrial baseline) in the year 2100; so, for example, RCP 6.0 has a radiative forcing of 6 watts per square meter by the year 2100. (For comparison, doubling CO2 produces a radiative forcing of about 4 watts per square meter.) Figure 12 shows the evolutions of these concentration pathways, expressed as though all the forcing is due to CO2 alone. (That is, we take the radiative forcings associated with other greenhouse gases like methane and nitrous oxide, along with aerosols, and convert them into CO2-equivalent units: the concentration of carbon dioxide that would produce the same forcing.)
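Given the rule of thumb in the text (one CO2 doubling producing about 4 watts per square meter), each RCP's forcing can be converted to a CO2-equivalent concentration with the standard logarithmic approximation F = F2x * log2(C / C0). The 280 ppm preindustrial baseline is assumed.

```python
# Convert an RCP's year-2100 radiative forcing into a CO2-equivalent
# concentration, inverting F = F_PER_DOUBLING * log2(C / C_PREINDUSTRIAL).
F_PER_DOUBLING = 4.0       # W/m^2 per CO2 doubling, as in the text
C_PREINDUSTRIAL = 280.0    # assumed preindustrial CO2-equivalent, ppm

def co2_equivalent_ppm(forcing_w_m2):
    """Concentration whose forcing over preindustrial equals the target."""
    return C_PREINDUSTRIAL * 2 ** (forcing_w_m2 / F_PER_DOUBLING)

for name, forcing in [("RCP2.6", 2.6), ("RCP4.5", 4.5),
                      ("RCP6.0", 6.0), ("RCP8.5", 8.5)]:
    print(f"{name}: ~{co2_equivalent_ppm(forcing):.0f} ppm CO2-eq in 2100")
```

For RCP 8.5 this gives roughly 1,220 ppm, consistent with the "quadrupled from preindustrial levels" figure discussed later in this section.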
In addition to the uncertainty surrounding the emissions there is inherent uncertainty in the models themselves. Many important processes such as turbulence, convection, and the interaction of radiation with clouds have to be represented indirectly in the models and this is one of many sources of model error. One strategy to account for this important source of uncertainty is to run many different models and to run each of them many times with different initial states to produce a large ensemble of projections. While imperfect, comparing the results of the many members of such an ensemble gives us some idea of the inherent uncertainty in model projections. This strategy is also used in running weather prediction models and has proved valuable in quantifying the uncertainty of weather forecasts.
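The ensemble strategy can be sketched with a stand-in "model": give each member a slightly perturbed parameter and initial state, and read the spread of outcomes as a proxy for projection uncertainty. The toy model and every number in it are invented for illustration.

```python
import random

random.seed(42)

def toy_projection(sensitivity, initial_temp, years=80):
    """Stand-in for a climate model: relax toward the member's own
    equilibrium warming (3.0 C is an assumed reference value)."""
    t = initial_temp
    for _ in range(years):
        t += 0.05 * (sensitivity * 3.0 - t)
    return t

# Each ensemble member gets a perturbed sensitivity (model uncertainty)
# and a perturbed starting temperature (initial-condition uncertainty).
ensemble = [toy_projection(sensitivity=random.gauss(1.0, 0.15),
                           initial_temp=random.gauss(0.0, 0.1))
            for _ in range(200)]
mean = sum(ensemble) / len(ensemble)
spread = (sum((x - mean) ** 2 for x in ensemble) / len(ensemble)) ** 0.5
print(f"ensemble mean {mean:.2f} C, spread {spread:.2f} C")
```

The ensemble mean is the headline projection; the spread is the honest error bar, which is the same logic behind the shaded bands in IPCC projection figures.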
06d What the models say
The pink curve in Figure 12, RCP 8.5, is a pessimistic projection that assumes no serious effort to curtail greenhouse gas emissions and robust economic growth. In this projection, by the end of the century, the CO2 equivalent has quadrupled from preindustrial levels, to around 1,230 ppm. Paleoclimate proxies suggest that such a value has not been seen since at least the Eocene epoch, roughly 50 million years ago, when alligators roamed Greenland and sea level was 70 meters (about 230 feet) higher than today’s. If the climate were to equilibrate to the associated radiative forcing of 8.5 watts per square meter, extrapolation of the IPCC temperature projections would yield a global warming of 3–9°C (5–16°F).
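The quoted 3–9°C range follows from the simple extrapolation the text describes: scale a per-doubling sensitivity range (the long-standing 1.5–4.5°C span is assumed here) by the number of doublings implied by 8.5 watts per square meter, at about 4 watts per square meter per doubling.

```python
# Extrapolating equilibrium warming for RCP 8.5 from per-doubling sensitivity.
F_RCP85 = 8.5                      # W/m^2 of radiative forcing in 2100
F_PER_DOUBLING = 4.0               # W/m^2 per CO2 doubling, as in the text
SENSITIVITY_RANGE_C = (1.5, 4.5)   # assumed warming per doubling, deg C

doublings = F_RCP85 / F_PER_DOUBLING
low = SENSITIVITY_RANGE_C[0] * doublings
high = SENSITIVITY_RANGE_C[1] * doublings
print(f"{doublings:.3f} doublings -> {low:.1f} to {high:.1f} C of warming")
```

Just over two doublings of forcing turns the sensitivity range into roughly 3.2 to 9.6°C, matching the 3–9°C figure above to the precision quoted.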
The other three RCPs assume some level of mitigation of greenhouse gas emissions and are useful for estimating how various mitigation strategies might ameliorate climate change.
The projected response of global mean surface temperature depends on both the emissions trajectory and the climate model used to make the projection. In its Fifth Assessment Report, the IPCC summarizes this response, shown in Figure 13, which extends to the year 2300. The color shading for each curve in the figure represents the scatter among the various climate models used to make the projections. Note that if nothing is done to curb emissions, and economic growth proceeds rapidly in the developing world, global mean temperature may rise by 2.5 to 4.5°C (4.5 to 8°F) by 2100, and by 4 to 13°C (7 to 23°F) by 2300.
But what are the consequences of these changes? How will they affect us in human and economic terms? We next consider the set of real risks that climate change poses and how, at least for some risks, we might go about attaching actual numbers.
- MIT Explainer: What are climate models?
- MIT Explainer: How sensitive is the Earth's temperature to greenhouse gases?
- MIT Explainer: What is the Intergovernmental Panel on Climate Change (IPCC)?
- Ask MIT Climate: Why is the ocean so important for climate change?
- En-ROADS: Climate Solutions Simulator
- Economic Risks of Climate Change: An American Prospectus