Chapter 10: Ethics

Do We Have a Moral Responsibility to Stop Global Temperature Rise?[1]

The Relevance of Ethics to the Climate Change Discussion

Scientists continue to provide overwhelming evidence that greenhouse gas pollution, environmental degradation, and consequent global climate change are profoundly dangerous to humans and to other life on Earth. A group of 500 scientists led by a team from Stanford issued this recent warning: “Unless all nations take immediate action, by the time today’s children are middle-aged, the life-support systems of the Earth will be irretrievably damaged” (Barnosky et al., 2013). But to the surprise and frustration of the scientists, all nations are not taking immediate action to slow climate change, and people are largely silent, even acquiescent, in the face of real threats to their futures and the futures of all beings that evolved under this – not another, hotter, more volatile and violent – climate.

What accounts for this disconnect between facts and actions?

An answer can be found in the logic of practical decision-making, the form of reasoning that leads from facts to sound conclusions about what course of action a person or government should take. The logic goes like this:

Any argument that reaches a conclusion about what ought to be done will have two premises. The first is a statement of fact, a descriptive statement based on empirical evidence, often grounded in observation and science: This is the way the world is; this is the way the world may become under a certain set of conditions. The second premise is a statement of value, a prescriptive statement, a moral affirmation based on cultural values and ethical norms: This is the way the world ought to be; this is good, this is just, this is a worthy goal. From this partnership of facts and values, but from neither alone, we can reason to a reliable conclusion about what we ought to do (Box 1). One might say that the first premise alone is a world without a compass. The second premise alone is a compass without a world. Only together can they point in the direction we ought to go.

Box 1. The Logic of Practical Decision-making. An Example

The factual premise: If we do not act soon, anthropogenic environmental changes will bring serious harms to the future.
The ethical premise: We have a moral obligation to avert harms to the future, so as to leave a world as rich in life and possibility as the world we inherited.
The conclusion: Therefore, we have a moral obligation to act, and act now.

Scientists have done an impressive, sometimes even heroic, job of documenting the factual premise. But the ethical premise is still under discussion – what is our obligation to the future? The stakes of this discussion are high.

Do We Have a Moral Obligation to Take Action to Protect the Future of a Planet in Peril?

A 2010 project asked one hundred of the world’s moral leaders from a wide variety of worldviews and continents if we – governments and individuals – have a moral obligation to do what we can to prevent catastrophic climate change, and if so, why (Moore and Nelson, 2010). The goal was not to find the right answer, but to find a great abundance of answers, so that no matter what views people bring to the discussion, they will find at least one reason that speaks powerfully to them. Yes, the moral leaders wrote back, we must take action, for a wide variety of reasons. Here are just seven of their answers (Box 2).

Box 2. Do we have an obligation to take action to prevent catastrophic climate change?

  1. Yes, to protect the thriving of humankind.
  2. Yes, for the sake of the children.
  3. Yes, for the sake of the Earth and all its lives.
  4. Yes, because the gifts of the Earth are freely given, and we are called to gratitude and reciprocity.
  5. Yes, because compassion requires us to reduce or prevent suffering.
  6. Yes, because justice demands it.
  7. Yes, because our moral integrity requires us to do what we believe is right.

1. We must act, to protect the thriving of humankind.

Daniel Quinn, author of Ishmael, explained our peril. “We are like people living in the penthouse of a hundred-story building. Every day we go downstairs and at random knock out 150 bricks to take upstairs to increase the size of our penthouse. Since the building below consists of millions of bricks, this seems harmless enough . . . for a single day. But for 30,000 days? Eventually—inevitably—the streams of vacancy we have created in the fabric of the walls below us must come together to produce a complete structural collapse. When this happens—if it is allowed to happen—we will join the general collapse, and our lofty position at the top of the structure will not save us.” (Quinn, 2010)

Of course, not everyone thinks that a catastrophic crash in human numbers would be a bad thing — wouldn’t the world be better off without us? But consider: for whatever purpose and by whatever process, in humans, the universe has evolved the capacity to turn and contemplate itself – to seek to understand the universe and to celebrate the mysteries of what we cannot understand. And whatever the faults of our species – and they are innumerable and tragic – we, maybe alone, have the capacity to imagine how we might be better.

This is the positive side of action to avert climate catastrophe. At this hinge point in history, we have not only the chance to escape the worst of the harms, but the chance to make a “great turning” (Macy and Johnstone, 2012) toward a healthier, more just and joyous planetary civilization.

The upshot: If severe planetary change threatens to undermine the foundations of human thriving, and if human thriving is a fundamental value, then we have an obligation to avert the degradations that threaten us. Anyone who accepts the scientific evidence about the dangers of climate change and affirms the value of human life will not be able to sit on their hands.

2. We must act, for the sake of the children.

James Speth, former Dean of the School of Forestry and Environmental Studies at Yale, writes, “All we have to do to destroy the planet’s climate and ecosystems and leave a ruined world to our children and grandchildren is to keep doing exactly what we are doing today” (Speth, 2010). If climate destabilization will be manifestly harmful to children, as Speth claims, and if we have a moral obligation to protect children, then we have an obligation to expend extraordinary effort to prevent catastrophic climate change.

Then-twelve-year-old Severn Suzuki, speaking at the UN’s 1992 Earth Summit in Rio de Janeiro, said, “Parents should be able to comfort their children by saying ‘everything’s going to be all right,’ ‘it’s not the end of the world,’ and ‘we’re doing the best we can.’ But I don’t think you can say that to us anymore.” The question then is: What must we do, in order to tell our children honestly that we’re doing the best we can for them?

It’s important to think carefully about what those extraordinary efforts are. People might say, “I don’t care about ethics. All I care about are my children. And I am going to make as much money as I can, so that they can be safe and happy all their lives.” Doesn’t everyone want a safe and happy future for their children? The irony, of course, is that we harm them even as (especially as) we try to provide for them. In the end, the amassing of material wealth in the name of our privileged children’s future is what will hurt them the most, as it exhausts the resilience of the planet’s life-supporting systems. And what our decisions will do to the children who are not privileged is not just an irony; it’s a moral wrong. These children, who will never know even the short-term benefits of misusing fossil fuels, are the ones who will suffer first as rising seas flood their homes, fires scorch cropland, diseases spread north, and famines scourge lands that had been abundant.

3. We must act for the sake of the Earth and all its lives, because the community of Earth and its lives has intrinsic and infinite value.

The failure to act on behalf of the Earth and all its creatures is, of course, a great imprudence – a cosmic cutting-off-the-limb-you’re-sitting-on stupidity. But it is also a moral failure. That is because the planetary community (this swirling blue sphere crammed with life) is not only instrumentally valuable. That is, it’s not just valuable because it is supportive of human life. Rather, the Earth, like a human being, has value in and of itself. It has what philosophers call intrinsic value. We have responsibilities to honor and protect what is of value. So we have the responsibility to honor and protect the Earth as we find it, a rare blue jewel in the solar system.

[Image: Kim Heacox, reprinted by permission.]

Philosopher Kathleen Dean Moore writes,

Premise 1. It’s not just the sun in winter, the salmon sky that lights the snow, or blue rivers through glacial ice. It’s the small things, too – the kinglet’s gold crown, the lacy skeletons of decaying leaves, and the way all these relate to one another in patterns that are beautiful and wondrous. The timeless unfurling of the universe has brought the Earth to a glorious richness that awakens in the human heart a sense of joy and wonder.

Premise 2. It is right to protect what is wondrous and wrong to destroy it. This is part of what “right” means – to enhance, rather than diminish, what is of value.

Conclusion. This is how we ought to act in the world – with respect, with deep caring and fierce protectiveness, and with a full sense of our obligation to the future, that this planetary richness shall remain.[2]

4. We must act, because the gifts of the Earth are freely given, and we are called to gratitude and reciprocity.

Begin with this fact: The gifts of the Earth (what we cravenly call “natural resources” or “ecosystem services”) are freely given — rain, sun, fresh air, rich soil, all the abundance that nourishes our lives and spirits. Perhaps they are given to us by God or the gods; maybe they are the fruits of a fecund Earth. It doesn’t matter to the argument: let that be a mystery, why we are chosen to receive such amazing gifts. What is important is that they are given. We do not earn these gifts. We have no claim on them. If they were taken away, there is nothing we could do to retrieve them. At the same time, we are utterly dependent on these gifts. Without them, we quickly die. This unequal relationship, the relationship of giver and receiver of gifts, makes all the moral difference.

We understand the ethics of gift-giving. To receive a gift requires us to be grateful. To dishonor or disregard the gift — to ruin it, or waste it, to turn it against the giver or lay greedy claim to it or sourly complain — all these violate our responsibilities as a recipient. Rather, to be grateful is to honor the gift in our words and our actions, to say, “This is a great gift,” and to protect it and use it well. In this way, gratitude calls us to attentiveness, celebration, and careful use.

Furthermore, an important part of gratitude is reciprocity, the responsibility to give in return. We give in return when we use our gifts well for the benefit of the Earth and the inhabitants who depend on its generosity. In this way, gratitude for our abundant gifts is the root of our moral obligation to the future to avert the coming climate calamities and leave a world as rich in possibilities as the world that has been given to us.

5. We must act from compassion, which requires us to reduce or prevent suffering.

Of all the virtues that a human being can possess, the greatest may be compassion. ‘Compassion,’ literally to ‘feel with,’ is to imagine ourselves in another’s place: to be frightened as they are frightened by a suddenly unstable world, to be bewildered as they wonder where to turn, to suffer their thirst and anger. Understanding the joys or sufferings of others, the compassionate person is joyous or suffers too. The truly compassionate person also acts in the world, providing conditions that bring forth joy and preventing or diminishing conditions that create pain.

Among the calamities of climate change and the resulting environmental degradation is an increase in human suffering and the suffering of other feeling beings. Climate change disrupts food supplies, reduces or contaminates drinking water, spreads disease, increases the terror of storms, floods great cities, and cracks villages into the sea. The price of the reckless use of fossil fuels will be paid in large part by human suffering.

If virtuous people are compassionate, if compassionate people act to reduce suffering, if climate change will cause suffering greater than the world has ever known, then we who call ourselves virtuous have an inescapable obligation to the future to avert the effects of the coming calamities.

6. We must act, because justice demands it.

If people have inalienable rights to life, liberty, and the pursuit of happiness, then the carbon-spewing nations are embarking on the greatest violation of human rights (Universal Declaration of Human Rights) the world has ever seen. Uprooting people from their homes, exposing them to new disease vectors, disrupting food supply chains — it’s a systematic violation of human rights. By whom, and for what? By the wealthy nations who can’t or won’t stop spewing carbon into the air. For what? For self-enrichment, the continuation of wasteful and pointless consumption of material goods. Why? Because of the failure of conscience or will to create a fairer way of living on the planet.

It’s not just a violation of rights: Those who are suffering, and will suffer, the most severe harms from climate change (at least in the short term, until it engulfs us all) are those least responsible for causing the harm. That’s not fair.

Sheila Watt-Cloutier, the former chair of the Inuit Circumpolar Council, wrote of the human rights claims of northern-latitude people: “We Inuit and other Northerners . . . are defending our right to culture, our right to lands traditionally used and occupied, our right to health, our right to physical security, our right to our own means of subsistence and our rights to residence and movement. And as our culture, again, as I say, is based on the cold, the ice and snow, we are in essence defending our right to cold.”

7. We must act, because personal integrity requires us to do what’s right.

When people are asked to rate their hope that humankind will find a way to maintain a livable climate — on a scale of one (not a snowball’s chance in hell) to ten (nothing to worry about) — they generally come in at about three to four on the hope-o-meter.[3] They speak wistfully: “Let’s face it. Our options are limited, our cities and homes and transportation systems are disgracefully designed, destructive ways of living are skillfully protected by tangles of profit and power around the world, extractive corporations are behaving like sociopaths (see “characteristics of a sociopath”), and we have run out of time. How can any reasonable person be hopeful? And if you don’t have hope, then why should you act?”

But to think there are only two options — hope and despair — is a fallacy of false dichotomy. Between hope and despair is the broad and essential expanse of moral ground, which is not acting out of hope or failing to act out of despair, but acting out of personal integrity.

Integrity is a match between what you believe and what you do: walking the talk. To act justly because you believe in justice. To live gratefully because you believe life is a gift. To act lovingly toward the Earth, because you love it. The meaning of our lives is not in what we accomplish in the end, any more than the meaning of a baseball game is in the last out. What makes our lives meaningful is the activities we engage in that embody our values, whatever happens in the world. What does integrity ask of us? First, to refuse to be made into instruments of destruction. With thoughtless decisions about what we invest in, what we buy, what we praise, what we value, what we do for a living, we volunteer to be the foot soldiers of corporate destruction. Soldiers used to say, “Hell no,” to an unjust war. Can we say the same to an unjust, far more disastrous, way of life?

Integrity calls us to live in ways that express our deepest values. As we live with integrity, we can escape the unsettled grief of lives that violate our deeply held beliefs about right and wrong. As we live with integrity, we can imagine and bring into being new ways of living on the land that are bright with art and imagination, nested into families and communities, grateful and joyous – and lasting for a very long time.

Chapter 9: Economics

Economics and the climate change challenge: Understanding incentives and policies

As the Earth warms, impacts are expected on both ecosystems and humans. Thus, understanding the impacts of climate change requires a combined understanding of how ecosystems respond to the buildup of greenhouse gases and of how humans are affected by, and thus respond to, the changes in these ecosystems. This chapter provides an economic perspective on the climate change challenge and an introduction to the role that market-based incentives and policy can play in helping us mitigate and adapt to the impacts of climate change.

To better understand the economic levers that can be used to address our climate change problem, we can think of this challenge as similar to other decisions and outcomes we encounter in our everyday lives, where incentives are used to alter behavior. For example, our society has reduced the incidence of lung cancer by implementing a variety of policies aimed at changing consumer behavior: education campaigns that communicate the link between smoking and lung cancer, taxes on cigarettes and tobacco products, bans on smoking in most public places, and prohibitions on purchases by minors. While none of these is a perfect deterrent for everyone, the collective outcome has been to greatly reduce the adverse impacts of smoking.

These same principles can be applied to climate change, water pollution, and other environmental challenges. In all of these applications, the environment (atmosphere, water) is viewed as an asset that provides a variety of services that support life and sustain our existence. As with all long-term assets, we seek to use them sustainably. And, as with all assets, there is a value associated with their services; that value declines as the asset is rendered less productive. Polluting these environmental assets decreases the level of services they can provide now and in the future. CO2 is a form of pollution of the “atmosphere asset,” and high levels of CO2 emissions cause serious and irreversible adverse impacts. The challenge of reducing CO2 emissions is magnified because these emissions accumulate in the atmosphere over time and disperse throughout the global atmosphere. If society wants to slow the rate or amount of CO2 emissions, it needs to provide incentives that discourage emitting behavior. Economic “tools” can be used to redirect behavior toward activities that generate less CO2, to evaluate the most cost-effective policy options and incentives to sustain this behavior, and to assess the long-term costs of continued delays in collective action to reduce greenhouse gas emissions. In this chapter we focus our economic lens on policy options to address the adverse impacts of emitting too many greenhouse gases into the atmosphere. These options include designing government-mandated or voluntary-style programs and regulations.

In order to influence behaviors, and thus outcomes, one must first understand the nature of the interactions between a substantially more variable climate and the potential damages and irreversibilities. We must also understand why the problem is occurring. Once the what and the why of the problem are defined, there are a variety of policies that can be used to influence production and consumption behaviors in order to reduce or mitigate the impacts of climate change.

What is the problem? As noted in earlier chapters, human-induced emissions of CO2 and other greenhouse gases are markedly adding to the naturally occurring greenhouse gases in the atmosphere. These emissions come from activities such as the burning of fossil fuels, manufacturing, and agricultural production. The accelerated accumulation of these greenhouse gases is causing our climate to change at unprecedented rates. These changes are damaging the environment and creating adverse health impacts, which collectively impose costs on society. A recent study estimated damages from climate change in the USA, for certain sectors of the economy, at about 1.2% of gross domestic product per 1°C increase in global temperatures, with larger damages in southern states. In an effort to inform decision makers about significant potential damages in the U.S. and allow the government to set priorities and manage risks, the U.S. Government Accountability Office has recently released a report on the potential economic effects of climate change. We will refer to these damages and costs as the social cost of carbon.

Why is this problem occurring? These costs to society occur because some adverse impacts of human activities are not readily apparent and are often only indirectly associated with the activity. In the case of energy production from fossil fuels, adverse impacts are caused by the generation of CO2. When a producer ignores the unintended side effects of this type of energy generation, an implicit cost is imposed upon others. Without information on the full (private plus social) cost of carbon, as reflected in the extent of the adverse impacts, and without policies that address these costs, it is too convenient to simply ignore the impacts on our ecosystems. Pavan Sukhdev’s TED Talk “Put a value on nature!” provides an excellent and quick overview of the problem.

Let’s use electricity production from coal power plants as an example of this type of “cost to society.” According to the Energy Information Administration, coal power plants in the United States emitted more than a billion metric tons of carbon dioxide in 2016. These CO2 emissions, which contribute to global warming, are the source of the negative externality associated with electricity generation from fossil fuels.

Assume you are an electricity producer and you generate electricity by burning coal. Figure 1 shows the supply and demand curves for electricity. The downward-sloping curve is the demand (willingness to pay) for electricity; it can also be viewed as a marginal benefit curve, because each point on the curve represents the benefit of one additional unit of electricity. The upward-sloping curve represents the supply of electricity, which also reflects the marginal private cost of producing each unit of electricity from this coal power plant. As the owner of the firm, you would produce electricity at the quantity where the marginal benefit of electricity production, as reflected in the price consumers are willing to pay, equals your marginal private cost of producing that amount. This is point A on the graph. At this point the quantity of electricity produced is Qp, the privately optimal amount. Note that at any quantity greater than Qp, the marginal benefit of electricity is less than the marginal private cost of producing it.

Now let’s look at it from society’s point of view. We need electricity; it is what heats and lights our homes and powers our appliances and electronic devices. However, with each unit of electricity produced from fossil fuels, a certain amount of carbon dioxide is also emitted, imposing unwanted costs in the form of negative impacts from climate change. This means that the total cost of producing each unit of electricity generated from fossil fuels is actually higher than the private cost of production alone, which ignores these adverse impacts of higher levels of carbon dioxide in the atmosphere. Thus, the true cost of producing electricity from fossil fuels looks more like the red curve in Figure 1, which reflects a higher cost of production at each level of electricity. Economists calculate this cost as the sum of the private costs to generate and deliver the electricity to your home PLUS the social cost of carbon.

Thus, in the case of electricity generated from fossil fuels (coal, natural gas), the socially optimal amount of electricity production is Qs, which is less than the privately optimal amount Qp. In the absence of awareness of these external costs and impacts, we will continue to overproduce and overconsume electricity generated from fossil fuels. Economists refer to this as a market failure, because the (private) production and consumption of electricity exceed the level that would prevail if the costs of CO2 emissions were accounted for and incorporated into the total costs of production.
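The arithmetic behind Figure 1 can be made concrete with a small numerical sketch. The curves and numbers below are hypothetical assumptions (linear demand and supply with a constant external cost per unit), chosen only to illustrate why Qp exceeds Qs:

```python
# A minimal sketch of the market-failure arithmetic, assuming linear
# demand and supply curves and a constant external cost per unit.
# All numbers are hypothetical, chosen only to illustrate Qp > Qs.

def private_optimum(a, b, c, d):
    """Quantity where marginal benefit (a - b*Q) equals marginal
    private cost (c + d*Q): point A in Figure 1."""
    return (a - c) / (b + d)

def social_optimum(a, b, c, d, e):
    """Quantity where marginal benefit equals marginal social cost
    (private cost plus external cost e per unit: the red curve)."""
    return (a - c - e) / (b + d)

a, b = 100.0, 1.0   # demand: willingness to pay = 100 - 1*Q   ($/unit)
c, d = 20.0, 1.0    # private supply: marginal cost = 20 + 1*Q ($/unit)
e = 30.0            # hypothetical social cost of carbon per unit

Qp = private_optimum(a, b, c, d)    # 40.0: the market outcome
Qs = social_optimum(a, b, c, d, e)  # 25.0: the socially optimal outcome
print(f"Qp = {Qp}, Qs = {Qs}, overproduction = {Qp - Qs}")
```

With these assumed curves the market produces 40 units while the social optimum is 25; the gap is the overproduction that the policies discussed next try to eliminate.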

How can we correct this problem?

There are many options for reducing the adverse impacts to society from CO2 and other greenhouse gas emissions, such as switching to sources of electricity that do not contribute to the CO2 problem, generating fossil-based electricity without releasing the CO2 into the atmosphere, or finding ways to reduce the demand for products whose production emits greenhouse gases. All of these options require that the signals to producers and consumers regarding the adverse impacts are sent directly, through regulations and higher prices, or indirectly, through research and development on cleaner technologies. Having the ability and information to assess the benefits and costs of the corrective actions allows decision makers to assess risks and prioritize options.

Let’s see how this works using the example of a firm that emits CO2 in the process of generating electricity and is unaware of the adverse impacts of its actions on the accumulation of greenhouse gases. We can correct this market failure by explicitly recognizing both the private production costs and the costs to society. Economists refer to these social costs as externality costs. Since the externality costs do not show up on the electricity-generating firm’s expense spreadsheets, they generally get ignored in the decision-making process. However, there are numerous ways to correct this cost omission. Here we present three common policies that can be used to internalize the costs of CO2 emissions: command-and-control regulations, pricing carbon through a carbon tax or a cap-and-trade system, and subsidies.

Command-and-control regulations specify how a producer must manage the production process that generates the CO2 pollution, establish monitoring procedures, and enforce a set of standards aimed at either the production process itself or the quantity of electricity. Let’s look at our graph again (Figure 2). Here we have an illustration of the quantity of electricity that is being produced (Qp) and the socially optimal amount when carbon emissions are considered (Qs). How can we achieve the reduction from Qp to Qs?

One way would be to impose an emission standard which dictates that the quantity of electricity produced shall not exceed Qs. If a producer generates more than this amount, a fine is imposed. When the fine is set high enough, the producer will choose to reduce production rather than incur the fine, thus achieving the socially optimal quantity of electricity (Qs). Note that the emission standard is only indirectly reflected in the graph, since emissions are tied to electricity production. This approach also assumes that we have good information: that we can determine the exact damages from CO2 emissions and can accurately measure the CO2 emissions associated with the production of electricity. That is a tall order, but with current technologies and monitoring processes it is not impossible.

Determining the precise emission standard, however, is difficult. If the standard is too restrictive and pushes production to the left of Qs, the policy is overly costly. On the other hand, if the standard is too lax, the necessary emissions reductions will not be achieved.

In the United States, command-and-control policies are often used by the Environmental Protection Agency to ensure clean air and water. For example, under the Clean Water Act, many waterways have Total Maximum Daily Load standards that limit the amount of pollutants that can occur in them. Likewise, the Clean Air Act created National Ambient Air Quality Standards (NAAQS) for many pollutants that are harmful to both public health and the environment, such as sulfur dioxide, lead, nitrogen dioxide, and particulate matter, which can cause breathing problems. These command-and-control policies have been highly successful in improving water and air quality in the United States. However, some argue that there are other, more efficient ways to obtain the same outcome, and that command-and-control regulations do not provide incentives to improve beyond the standard set (Tietenberg 1985, Stewart 1996).

Putting a price on carbon is the method many economists favor for reducing or controlling greenhouse gas emissions. Pricing carbon provides producers and consumers with a monetary incentive to reduce greenhouse gas emissions by placing a value on each unit of carbon dioxide or carbon dioxide equivalent that is emitted into the atmosphere. The carbon price can be viewed as the amount that must be paid for the right or permission to emit one unit of carbon dioxide into the atmosphere. This is a direct way to incorporate the costs to society of greenhouse gas emissions into the decision-making process of producers and consumers. Carbon pricing is usually either in the form of a tax or a combination of a cap on emissions with the ability to trade carbon dioxide emission allowances, referred to as a cap-and-trade system.

A carbon tax encourages companies and households to invest in cleaner technologies and adopt greener practices by increasing the price of items whose production contributes to the buildup of greenhouse gases in the atmosphere. If the tax on greenhouse gas emissions is set high enough, the increased price of a product produced with a technology that generates greenhouse gases as a by-product gives producers an incentive to reduce emissions, either by reducing production or by investing in technologies that emit less carbon or capture it. A proportion of this increased cost is often passed to consumers in the form of higher prices, which also gives consumers an incentive to purchase less.

In Figure 3 below we illustrate how a tax can be used to incorporate the cost to society of electricity production and reduce electricity production, and the associated CO2 emissions, from Qp to Qs. Recall from our earlier discussion that in a situation with no standard and no carbon tax, the quantity of electricity that firms produce will be Qp. When we add a carbon tax to the price of each unit of electricity, we shift the supply curve to the left, to the marginal social cost curve represented by the red line in the graph. At the new equilibrium, where marginal social cost equals marginal benefit, the quantity supplied shifts from Qp to Qs, just as in the command-and-control case. However, in this case a tax in the amount represented by the tan box is collected by the federal government. The government can then use this revenue in a variety of ways, such as returning it to the general population as a tax refund, investing in research on or construction of carbon reduction technologies, or reducing budget shortfalls.
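Continuing the hypothetical linear curves from the earlier sketch, a per-unit tax set equal to the assumed external cost (a Pigouvian tax, in economists’ terms) moves the market from Qp to Qs and raises revenue corresponding to the tan box:

```python
# Continuing the hypothetical numbers from the earlier sketch: a tax
# equal to the external cost per unit shifts supply up to the marginal
# social cost curve, moving output from Qp to Qs and raising revenue.
a, b = 100.0, 1.0   # demand: willingness to pay = 100 - Q
c, d = 20.0, 1.0    # private marginal cost = 20 + Q
tax = 30.0          # per-unit tax set at the assumed social cost of carbon

Qp = (a - c) / (b + d)          # 40.0: equilibrium with no tax
Qs = (a - c - tax) / (b + d)    # 25.0: equilibrium with the tax
revenue = tax * Qs              # 750.0: the "tan box" collected by government
print(Qp, Qs, revenue)
```

Setting the tax exactly at the marginal external cost is what makes the taxed equilibrium coincide with the social optimum; a lower tax would leave some overproduction in place.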

Carbon taxes have been implemented in many jurisdictions, including Finland, Denmark, Norway, Sweden, and British Columbia. Success is often measured as reduced emissions, but other factors, such as technology innovation and industrial efficiency gains, have also been cited as measures of success. To learn more about carbon taxes and where they have been implemented, refer to Carbontax.org. This website explains the basics of a carbon tax and provides examples of where carbon taxes have been implemented around the world.

A cap-and-trade system combines the command-and-control approach of setting a cap on emissions with a carbon pricing policy. With a cap-and-trade program, a limit or cap on emissions is set, and polluters receive or purchase emissions allowances, with the total number of allowances limited by the cap. Each pollution source (firm) can then design its own compliance strategy. A firm may choose to install pollution controls and implement efficiency measures; it also has the option to sell any excess allowances, or to purchase allowances if it finds it cannot meet the emission standard by other means. This policy is most effective when the firms within the industry are not identical and have different costs of reducing pollution. The key difference from a command-and-control policy is that the cost of compliance for each firm may be lower, because firms are allowed to trade allowances: under a market-based trading system, firms that can abate at a lower cost will choose to sell some of their allowances to firms with higher abatement costs.

The Environmental Protection Agency has successfully used cap-and-trade systems in the past to set up several clean air markets. Perhaps the most well-known is the Acid Rain program. This program has substantially reduced SO2 and NOx released into the atmosphere. These are compounds that can readily mix with water and oxygen in the atmosphere to form sulfuric and nitric acids, which then fall to the ground as acid rain. Acid rain is detrimental to plants, wildlife, fish, buildings, and humans. More recently cap-and-trade systems have been used to reduce carbon emissions in the Northeastern United States (Regional Greenhouse Gas Initiative) and California. For answers to some common questions about emissions trading, see these short videos about the Emissions Trading Scheme in New Zealand by Motu Research.

To better understand how a cap-and-trade program works and how it could be more efficient than command-and-control policies, let’s look at an example (Figure 4). Suppose we have two plants that are each currently emitting 100 tons of carbon dioxide, for a total of 200 tons. The government wants to cap emissions at 100 tons. This requires a 100-ton reduction in emissions, so it establishes a command-and-control policy requiring each plant to reduce emissions by 50 tons. Plant A can reduce emissions at a cost of $10 per ton, but Plant B is less efficient, and it costs Plant B $20 per ton to reduce emissions. When each plant has to reduce emissions by 50 tons, the total cost of emission reductions is 50 × $10 + 50 × $20 = $1,500.

Now let’s look at how the costs would differ under a cap-and-trade policy (Figure 5). In this example each firm is given 50 allowances of 1 ton each. Since it is less expensive for Plant A to reduce emissions, they would benefit from reducing emissions and selling some or all of their allowances to Plant B.

Let’s assume that the price of allowances is set at $15. Just as in the previous example, it costs Plant A $10/ton to reduce emissions and Plant B $20/ton. Because it only costs Plant A $10/ton to reduce its emissions, it will decide to reduce its emissions by 100 tons (bringing its emissions to 0). It can then sell all 50 of its allowances and receive $750. When this is subtracted from its emission-reduction cost of $1,000, its cost after trading is only $250.

At $15/ton, Plant B would choose not to reduce emissions at all, but to purchase 50 allowances from Plant A for $750, because that costs $5/ton less than reducing its own emissions. Thus, instead of paying $1,000 to reduce its emissions by 50 tons, it pays $750 to purchase 50 allowances.

Even though Plant B does not reduce its emissions, the goal of 100 tons of emission reductions is still met, because Plant A chose to eliminate all of its emissions. In this case the goal is met at a total cost of only $1,000 (Plant A’s abatement bill; the allowance payment is a transfer between the plants), a savings of $500 over the command-and-control option.

Plant A benefits financially from being more efficient, and Plant B has an incentive to find cheaper ways to reduce emissions in the future, so that it could benefit from selling its allowances, or at least not have to purchase as many.
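The bookkeeping in this two-plant example is easy to verify with a short sketch; all the numbers come from the text above, with the $15 allowance price as the assumed market price:

```python
# The two-plant example of Figures 4 and 5; each plant starts at 100
# tons of emissions, and the cap is 100 tons total.
cost_a, cost_b = 10, 20     # abatement cost, $/ton
allowances = 50             # allowances granted to each plant, tons
price = 15                  # assumed allowance price, $/ton

# Command and control: each plant must cut 50 tons.
cc_total = 50 * cost_a + 50 * cost_b             # $1,500

# Cap and trade: Plant A (the cheap abater) cuts all 100 tons and sells
# its 50 allowances; Plant B buys them instead of abating.
a_net = 100 * cost_a - allowances * price        # $1,000 - $750 = $250
b_net = allowances * price                       # $750 paid to Plant A
# The allowance payment is a transfer between the plants, so society's
# real cost is just Plant A's abatement bill.
ct_total = 100 * cost_a                          # $1,000
print(cc_total, a_net, b_net, ct_total)          # 1500 250 750 1000
```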

Another common method used to correct this problem is the subsidy. Subsidies are used to encourage desired behaviors; they represent the carrot approach, as opposed to the stick approach of command-and-control policies. Subsidies come in various forms offered to consumers and producers, such as cash grants, low-interest or interest-free loans, tax breaks, and rebates. For example, to get producers and consumers to invest in energy-efficient products and energy systems, Oregon has offered renewable energy development grants to organizations that plan to install renewable energy systems, as well as personal income tax credits to homeowners and renters for purchasing energy-efficient products and energy systems for their homes. Subsidies are often offered for only a short period of time to initiate changes in behavior, with the aim that these changes will encourage and enhance emerging markets until they can work without government intervention.

Discussion

There is no one climate policy that is the perfect solution. To make significant progress in combating climate change, a combination of several policies that influence both producer and consumer behaviors will be required. For example, as mentioned above, Oregon has offered subsidies in the form of tax rebates and grants. The Oregon legislature has also been considering several carbon pricing bills, which include cap-and-trade or cap-and-invest systems as well as a carbon tax-and-shift program. In 1996, Oregon also created a CO2 emission standard, which required new energy facilities to meet the CO2 standards or pay a carbon tax per metric ton of excess CO2. Facilities also have the option to provide cogeneration that offsets fossil fuels or to invest in projects that offset CO2 emissions. Simultaneously, the (Oregon) Climate Trust was established as a nonprofit organization and given the authority to purchase and retire CO2 offsets with the taxes collected from excess CO2 emissions.

Attempts to gain support for a national carbon tax or carbon cap-and-trade system have not yet been successful, but regional cap-and-trade systems such as the Regional Greenhouse Gas Initiative (RGGI) and California’s cap-and-trade program have been showing signs of successfully controlling carbon emissions. As an added benefit, revenues from these programs have been used to offset rate increases and support investments in carbon-saving technologies. For example, RGGI’s 2015 report states that 64% of RGGI 2015 investments were used to support energy efficiency programs in the region, 16% were used to fund clean and renewable energy programs, and 4% funded GHG abatement programs. These programs have also spurred local economic growth and job creation.

As knowledge of the impacts of climate change grows, consumer demand for alternative sources of energy is also growing, creating new markets and new jobs. For instance, several companies have emerged in recent years that build and/or install wind turbines and solar panels. In addition to creating clean energy sources, these companies create new employment opportunities and demand for the raw materials needed to build these systems. The increased demand for these products and the growth of alternative energy facilities have contributed significantly to new electricity generation capacity in the US in recent years. For example, about 60% of the electricity generation capacity added to the U.S. grid in 2016 came from wind and solar (Cusick 2017). As these new energy markets become more stable, state and local governments will be eliminating many of the subsidy programs that helped establish them. The sunsetting of many Oregon Department of Energy tax credits at the end of 2017 is a prime example.

Chapter 8: Impacts

Chapter 7: Models

Climate models are tools used in climate research. They are attempts to synthesize our theoretical and empirical knowledge of the climate system in computer code. This chapter describes how climate models are constructed and how they are evaluated, and it discusses some applications.

a) Construction

Climate models solve budget equations numerically on a computer. The equations are based on the conservation of energy, momentum, and mass (air, water, carbon, and other relevant elements, substances, and tracers). Typically they are solved in separate boxes that represent specific regions of Earth’s climate system components (Fig. 1). Along their boundaries the boxes exchange energy, momentum, and mass. Exchange by the flow of water or air from one box to another is called advection. Prognostic variables such as temperature, specific humidity in the atmosphere, or salinity in the ocean, and three velocity components (zonal, meridional, and vertical) are calculated in each box. The momentum equations, which are used to calculate the velocities, are based on Newton’s laws of motion, and they include effects of the rotating Earth such as the Coriolis force. The temperature equations are based on the laws of thermodynamics. Thus, climate models represent the fundamental laws of physics as applied to Earth’s climate system.

The evolution of the prognostic variables in the interior boxes is solved one time step at a time (see chapter 4, equation B1.4). After the prognostic variables have been updated, the fluxes between boxes (I and O) are calculated and used for the next time step. Then the prognostic variables are updated again using the fluxes, and so on. This procedure is called forward modeling, because all model variables at the next time step are calculated only from the model variables at the previous time step and the boundary conditions, without the use of observations. Boundary conditions such as the incident solar radiation at the top-of-the-atmosphere or concentrations of greenhouse gases are usually required as input to the calculations. These are also called radiative forcings. Other boundary conditions need to be applied at the lower boundary: the topography (mountains) and bathymetry (sea floor). To start a forward model simulation, initial conditions also need to be provided. Those can be taken from observations or idealized distributions.
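As a concrete illustration of forward modeling, here is a minimal sketch that time-steps the zero-dimensional energy balance model of chapter 4. The parameter values (albedo, effective emissivity, effective heat capacity) are illustrative assumptions, not values taken from the text:

```python
# A minimal forward-modeling sketch, assuming the 0D energy balance
# model of chapter 4: C dT/dt = S(1 - alpha)/4 - eps * sigma * T^4.
# Parameter values below are illustrative assumptions.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def step(T, dt, S=1361.0, alpha=0.3, eps=0.61, C=4.2e8):
    """Advance temperature T (K) by one time step dt (s).
    C is an effective heat capacity per unit area (J m^-2 K^-1),
    roughly a 100 m ocean mixed layer."""
    imbalance = S * (1 - alpha) / 4 - eps * SIGMA * T**4  # flux in - flux out
    return T + dt * imbalance / C

T = 255.0                       # initial condition (K)
dt = 86400.0                    # one-day time step
for _ in range(365 * 50):       # integrate forward 50 years
    T = step(T, dt)
print(f"Equilibrium temperature ~ {T:.1f} K")   # ~288 K with these values
```

Each pass through the loop uses only the previous state and the boundary conditions (S, alpha, eps), which is exactly the forward-modeling procedure described above, just without the spatial boxes and fluxes of a full GCM.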

Climate models range from the simplest zero-dimensional (0D) Energy Balance Model (EBM) discussed in chapter 4 to the most complex three-dimensional General Circulation Models (GCMs). The range of models ordered with respect to complexity is called the hierarchy of climate models. The 0D-EBM can be expanded by solving the energy budget equation separately at different latitudinal bands. This is called the one-dimensional EBM (1D-EBM). The 1D-EBM is still vertically averaged but it includes energy exchange between latitudinal bands as discussed in chapter 6. Meridional energy transport in 1D-EBMs is typically treated as a diffusive process proportional to the temperature gradient, such that heat flows from warm to cold regions. 1D-EBMs typically treat the ocean and atmosphere as one box, so that one cannot distinguish between heat transport in the ocean and atmosphere.

One-dimensional models are also used for vertical energy transfer in the atmosphere. These are called radiative-convective models, and they work similarly to the models used to produce Fig. (6) in chapter 4. Radiative-convective models themselves range in complexity from line-by-line models, which calculate radiative transfer in the atmosphere at individual wavelengths, to models that average over a range of frequencies (band models). Line-by-line models are computationally expensive and cannot be used in three-dimensional GCMs, which use band models instead. Band models are calibrated and tested by comparison to line-by-line models. As discussed in chapter 4, radiative fluxes alone would cause a much warmer surface and a much colder upper troposphere than observed. Therefore, radiative-convective models include convection, mostly by limiting the lapse rate to the observed or moist adiabatic rate.

Intermediate complexity models include 2D-EBMs, which are still vertically averaged but include zonal transport of energy and moisture (in this case they are called Energy-Moisture-Balance Models or EMBMs); zonally averaged ocean models coupled to a 1D-EBM; and zonally averaged dynamical atmospheric models (resolving the Hadley circulation). Intermediate complexity models also often include biogeochemistry and land ice components. They are computationally relatively inexpensive and can be run for many thousands or even millions of years.

The first climate models developed in the 1960s were 1D-EBMs and simple GCMs of the atmosphere and ocean at very coarse resolution. Initially ocean and atmospheric models were developed separately and only later they were coupled. Coupling involves the exchange of heat and water fluxes and momentum at the surface. Current state-of-the-science coupled, three-dimensional GCMs also include sea ice and land surface processes such as snow cover, soil moisture and runoff of water through river drainage basins into the ocean. Many models also include dynamic vegetation with separate plant functional types such as trees and grasses. However, most current coupled GCMs that are used for future projections do not include interactive ice sheet components. This is because ice sheets have long equilibration (response) times of tens of thousands of years and therefore they need to be run for a much longer time than the other climate system components, which is currently not possible for most climate models.

The deep ocean has an equilibration time of about a thousand years. For future projections, people are mostly interested in the next hundred or perhaps few hundred years. For the most reliable and detailed projections on these timescales, global climate modeling groups try to configure their models at the finest possible resolution. Currently the typical resolution is a few degrees (~200 km) in the horizontal directions and 20 to 30 vertical levels each in the atmosphere and ocean components. Finer resolution global models are being developed at various climate modeling centers around the world, but currently most climate projections on centennial and longer time scales are based on coarser resolution models (Fig. 2).

Figure 2: Illustrations of land surfaces (green and brown colors) and sea floor (blue) at fine (top; ~10 km) and coarse (bottom; ~100 km) resolution. Note that the elevations and depths are strongly exaggerated with respect to horizontal distances. From ucar.edu.
 

Ice sheet models have finer spatial resolution (tens of kilometers) in order to resolve the narrow and steep ice sheet margin, but they have much larger time steps (1 year) than atmospheric (seconds) and ocean (minutes to hours) models, because of the slow ice velocities (10-100 m/yr) compared to the velocities of ocean currents (1-10 cm/s) or winds (1-10 m/s). The time step in a model depends on the velocity of the fluid and the grid-box size: the higher the velocity and the smaller the grid box, the smaller the time step has to be to guarantee numerical stability. Finer resolution models therefore have to use smaller time steps, which is an additional burden on computational resources. Another obstacle to moving to higher resolution is the amount of data that accumulates. The highest resolution ocean model simulations currently require petabytes (10^15 bytes = 1,000 terabytes) of storage for all the model output. Processing these huge amounts of data is a challenge.
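The link between velocity, grid size, and the largest stable time step described above is the advective CFL condition, roughly dt ≤ dx/u. The sketch below evaluates this bound for the velocities and grid sizes quoted in the text; note that this is only the simplest, advection-based form of the condition, and real models are further constrained by faster wave speeds:

```python
# A sketch of the advective stability constraint (CFL condition,
# dt <= dx / u), evaluated for the velocities quoted in the text.
# This is the simplest form only; fast waves tighten the bound further.
YEAR = 3.156e7   # seconds per year

def max_stable_dt(dx_m, u_m_per_s):
    """Largest advective time step (s) for grid spacing dx and speed u."""
    return dx_m / u_m_per_s

print(max_stable_dt(10_000, 100 / YEAR) / YEAR)  # ice sheet, 10 km grid: ~100 yr
print(max_stable_dt(100_000, 0.10))              # ocean current, 100 km grid: ~1e6 s (~12 days)
print(max_stable_dt(200_000, 10.0))              # wind, 200 km grid: 2e4 s (~5.5 hours)
```

The ordering matches the time steps quoted in the text: slow ice tolerates steps of a year or more, while the faster atmosphere and ocean require much shorter ones.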

One way to avoid increasing computational resources at higher resolution is to construct regional climate models for a specific region of interest, e.g. North America. However, the disadvantage of regional climate models is that boundary conditions at the margins of the model domain have to be prescribed. The resulting solution in the interior depends strongly on those boundary conditions, which are often taken from global climate models. Therefore, any bias from the global climate model at the boundary will be propagated by the regional model into the interior of the model domain. However, in the interior the regional climate model can account for details, e.g. of the topography, that a global model cannot. Thus, although not a silver bullet, regional climate models are useful for simulating climate in more spatial detail than is possible with global models.

Typically the resolution and grids of the atmospheric and ocean components are different. Therefore, surface fluxes and variables needed to calculate the fluxes need to be mapped from one grid to the other. This is often accomplished by a coupler, which is software that does interpolation, extrapolation, and conservative mapping. All calculations need to be numerically sound such that energy, water, and other properties are conserved. However, numerical schemes, e.g. for the transport from one box to the next, are associated with errors and artifacts.

Models that include biogeochemistry, such as the carbon cycle, and/or ice sheets are called Earth System Models. Earth System Models calculate atmospheric CO2 concentrations interactively based on changes in land and ocean carbon stocks. They can be forced directly with emissions of anthropogenic carbon, whereas models without carbon cycles need to be forced with prescribed atmospheric CO2 concentrations.

Due to the limited resolution of the models, processes at spatial scales below the grid box size cannot be directly simulated. For example, individual clouds or convective updrafts in the atmosphere are often only a few tens or hundreds of meters in size and therefore cannot be resolved in global atmospheric models. Similarly, turbulence and eddies in the ocean, which are important for the transport of heat and other properties, cannot be resolved by global ocean models. These processes must be parameterized. A parameterization is a mathematical description of the process that depends on the resolved variables, e.g. the mean temperature in the grid box, and on one or more parameters. A simple example is the meridional heat flux in a 1D-EBM, which can be parameterized as a diffusive process, Fm = -K ∂T/∂y, where K > 0 is the diffusivity, ∂T/∂y is the meridional temperature gradient, and y represents latitude. This parameterization transports heat down-gradient (note the minus sign), which means from warmer to colder regions. In this case the parameter K can be determined from observations of the meridional heat flux (Chapter 6, Fig. 4) and of ∂T/∂y (Chapter 6, Fig. 1). Parameterizations can be derived from empirical relationships based on detailed measurements or on high-resolution model results. The parameter values are usually not precisely known, but they influence the results of the climate model. Therefore, parameterizations are a source of error and uncertainty in climate models. The parameters in a model’s cloud parameterization, for example, will impact its cloud feedback and therefore its climate sensitivity.
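As a sketch of how such a parameter can be "determined from observations," the following fits K by least squares to flux and gradient values. The numbers are invented placeholders, not the data of chapter 6; only the procedure is the point:

```python
# A sketch of estimating the diffusivity K in Fm = -K dT/dy from
# "observed" values; the numbers below are invented placeholders.
import numpy as np

dTdy = np.array([-1.2e-6, -2.0e-6, -1.5e-6])  # K/m at three latitudes
Fm   = np.array([ 1.8e9,   3.1e9,   2.2e9 ])  # northward heat flux

# Least-squares estimate of K in Fm = -K * dT/dy (fit through the origin):
K = -np.sum(Fm * dTdy) / np.sum(dTdy * dTdy)

# The fitted parameterization then predicts the flux from the gradient:
Fm_hat = -K * dTdy
print(K, Fm_hat)   # K > 0, so heat flows down-gradient, warm to cold
```

In a real model the same idea applies at larger scale: measurements (or high-resolution simulations) constrain the parameter, and the parameterization then stands in for the unresolved process at every time step.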

b) Evaluation

Climate models are evaluated by comparing their output to observations. Fig. 3 shows the multi-model mean (the average of all models) from the most recent IPCC report (Flato et al., 2013). The model-simulated surface temperature distribution is similar to the observations (Chapter 6, Fig. 1), with warm (20-30°C) temperatures in the tropics and cold (<0°C) temperatures near the poles and at high altitudes (Himalayas). The models also reproduce some of the observed zonal gradients, such as the cooler temperatures in the eastern equatorial Pacific compared to the western Pacific warm pool, and the warmer temperatures in the northeast Atlantic compared to the northwest Atlantic, which are caused by the upper ocean circulation. However, the models are not perfect, as indicated by biases such as temperatures that are too cold in the northern North Atlantic and too warm in the southeast Atlantic and Pacific. The warm biases in the southeast Atlantic and Pacific are most likely caused by coarse resolution ocean models that do not resolve well the narrow upwelling in these regions. A similar bias is seen in the California Current in the northeast Pacific. Despite these biases, the multi-model mean agrees with the observed temperatures to within plus or minus one degree Celsius in most regions. Even the larger regional biases, such as those mentioned above, are relatively small compared to the ~60°C range of temperature variations on Earth. This indicates that the models reproduce observed surface temperatures relatively well.

Fig. 4 shows the multi-model mean precipitation. The models reproduce the general pattern of the observations (Chapter 6, Fig. 8), such as more precipitation in the tropics and at mid-latitudes compared with less precipitation in the subtropics and at the poles. They also reproduce some of the observed zonal differences, such as dryer conditions over the eastern parts of the subtropical ocean basins compared with wetter conditions further west. However, the models also display systematic biases, such as a double Intertropical Convergence Zone (ITCZ) over the East Pacific and too-dry conditions over the Amazon. The relative errors in precipitation are generally larger than those for temperature. This indicates that the models are better at simulating temperature than precipitation, which may not be surprising given that the simulation of precipitation depends strongly on parameterized processes such as convection and clouds.

Fig. 5 compares correlation coefficients for different variables. It confirms our previous conclusion that the models are better at simulating temperature than precipitation. It also shows that the models have very good skill in simulating Emitted Terrestrial Radiation at the top-of-the-atmosphere, whereas they are less good at simulating clouds. The current generation of climate models (CMIP5) is improved compared with the previous generation (CMIP3), particularly for precipitation. Another interesting feature apparent in Fig. 5 is that the multi-model mean is almost always in better agreement with the observations than any one particular model. A similar phenomenon, which has been called the wisdom of the crowd, was noted by Sir Francis Galton (1907), who analyzed villagers’ guesses of the weight of an ox at an English livestock fair. He found that many guesses were too high or too low, but the mean of all guesses was almost exactly the correct weight of the animal.
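The wisdom-of-the-crowd effect is easy to reproduce with synthetic numbers: if individual model errors are roughly independent, they partially cancel in the average, so the ensemble mean lands closer to the truth than a typical single model. A toy sketch (entirely made-up values, for intuition only):

```python
# A toy illustration of why the multi-model mean often beats any single
# model: independent errors partially cancel in the average.
import numpy as np

rng = np.random.default_rng(0)
truth = 14.0                                  # "observed" value, e.g. degC
models = truth + rng.normal(0, 1.0, size=20)  # 20 models with independent errors

typical_model_error = np.sqrt(np.mean((models - truth) ** 2))  # ~1.0
ensemble_mean_error = abs(models.mean() - truth)               # much smaller
print(typical_model_error, ensemble_mean_error)
```

The cancellation only works to the extent that model errors are independent and unbiased; errors shared by all models (such as the double ITCZ) survive the averaging.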

Fig. 6 shows that most models overestimate temperatures in the thermocline by about 1°C, presumably due to too much vertical diffusion. Again, these errors are relatively small given the large range (~20°C) of deep ocean temperature variations. Model errors in salinity are larger near the surface than in the deep ocean. In the southern hemisphere subtropics, near-surface waters are too fresh, perhaps related to the double ITCZ bias and the associated too-wet conditions in the atmosphere there (Fig. 5).

All climate models simulate an increasing ocean heat content over the last 40 years, consistent with observations. Some models simulate more and others less heat uptake; the multi-model mean, however, is in good agreement with the observations. This indicates that the models are skillful not only in simulating the mean state of ocean temperatures but also in simulating their recent temporal evolution.

Historical and paleoclimate variations are also used to test and evaluate climate models as will be discussed next.

c) Applications

Some of the main applications of climate models are paleoclimate studies, detection and attribution studies, and future projections. Future projections will be discussed in the next chapter.

Paleoclimate model studies are not only useful for a better understanding of past climate changes and their impacts; they can also be used to evaluate the models. For example, model simulations of the Last Glacial Maximum (LGM) are broadly consistent with temperature reconstructions (see Chapter 3) that show global cooling of 4-5°C, polar amplification, and larger changes over land than over the ocean. The models reproduce these basic features of the reconstructions, which indicates that they have skill simulating climates different from the present (Masson-Delmotte et al., 2013). On the other hand, there are also differences between model results and observations, e.g. in the deep ocean circulation (Muglia et al., 2015), which indicates that the models' skill in those aspects remains questionable.

Detection and attribution studies attempt to determine which observed climate changes are unusual (detection) and what their causes are (attribution). Climate models driven with only natural forcings show variations from one year to the next due to internal climate variability (e.g. El Niño) and short-term cooling in response to large volcanic eruptions (Fig. 8). However, they do not show a long-term warming trend over the past century, in contrast to the observations. This suggests that the global warming observed since about the 1970s is highly unusual and cannot be explained by internal climate variability (as represented in the models) nor by natural drivers. However, if anthropogenic forcings are included, the models reproduce the observed long-term trend very well. The multi-model mean also reproduces the observed short-term cooling associated with the large volcanic eruptions of the past 50 years.

Models driven with both natural and anthropogenic forcings reproduce well not only the observed global mean temperature changes but also their spatial distribution, such as larger warming at high northern latitudes (polar amplification) and over land (land-sea contrast). These results represent evidence that human activities are the main cause of the observed warming during the past 50-60 years. The IPCC’s AR5 concludes that “it is extremely likely that human activities caused more than half of the observed increase in GMST from 1951 to 2010” (Bindoff et al., 2013), where GMST stands for global mean surface temperature.

Chapter 5: Carbon

Due to the importance of CO2 as a greenhouse gas, the carbon cycle is a crucial part of the climate system. Since carbon is exchanged with the biosphere, biological processes need to be considered in climate science. The carbon cycle is part of the broader biogeochemical cycles, which include other biologically important chemical elements such as nitrogen and oxygen.

a) The Natural Carbon Cycle

Carbon exchanges relatively rapidly between three large reservoirs: the ocean, the atmosphere, and the land (Fig. 1). Of those, the ocean contains the most carbon: almost 40,000 Pg. Most of the carbon in the ocean is in the form of Dissolved Inorganic Carbon (DIC), and most DIC resides in the intermediate and deep layers because those depths make up most of the ocean's volume. Marine biota are important in transferring carbon from the surface to the deep ocean, but their biomass is very small because they consist mainly of microscopic algae called phytoplankton. Phytoplankton form the base of the ocean's food web through photosynthesis. They have adapted to be tiny and light so as not to sink to the sea floor, because they need to stay near the sunlit surface to photosynthesize. Below about 100 m depth, light levels become too low due to absorption by sea water. The deep ocean is therefore dark, but organic matter sinks into it from the surface in various forms, e.g. as fecal pellets of zooplankton. Below the surface, the sinking dead organic matter is respired by bacteria and returned to the inorganic carbon pool. This is called the biological pump because it removes carbon from the surface and atmosphere and sequesters it in the deep ocean, where it can stay for hundreds to thousands of years. Dissolved CO2 gas in sea water is part of the DIC pool; about 80 Pg of carbon per year are exchanged with the atmosphere. Ocean-atmosphere gas exchange depends on the difference between surface ocean and atmospheric partial pressures (pCO2; in this book we use pCO2 and CO2 concentration synonymously, in units of parts per million or ppm) and therefore leads to a strong and relatively rapid coupling of the atmospheric CO2 concentration to the surface ocean.

The second biggest of the three rapidly exchanging carbon reservoirs is the land, which contains about 4,000 Pg of carbon. On land, carbon is stored in living vegetation, in soils, and in permafrost. Since land plants don’t have the problem of sinking out of the light, they can grow large, like trees, and store large amounts of carbon. Therefore, much more carbon is stored in living biomass on land (~500 Pg) than in the ocean (~3 Pg). However, even more carbon is stored in soils and permafrost.

More than 100 Pg of carbon are removed from the atmosphere each year by photosynthesis of land plants and turned into organic matter. Organic matter cycles through the land food web and eventually enters the soil carbon pool, where it decomposes. As in the ocean, bacteria and other heterotrophic organisms on land respire organic carbon and turn it back into inorganic CO2. Land carbon uptake and release do not depend strongly on atmospheric CO2 concentrations; they depend more on water availability and temperature, respectively. Plant growth on land is strongly water limited, and respiration rates depend strongly on temperature. However, CO2 increases the water use efficiency of land plants because at higher CO2 concentrations they don’t need to open their stomata as much as at lower concentrations. Stomata are small openings in leaf cells that allow CO2 to enter, but they also allow water to leave in a process called transpiration (see figure in box below). Thus, at higher CO2 levels plants can grow more for the same amount of water.

The atmospheric carbon reservoir is relatively small compared to the ocean, which is ~40 times bigger, and the land, which is ~10 times bigger. However, the atmosphere is crucial in linking land and ocean through rapid exchanges.

Box 1: Photosynthesis and Respiration

Photosynthesis (Fig. B1.1) is the process by which autotrophic organisms (plants, algae, and many bacteria) produce organic matter and oxygen from CO2 and water using light as an energy source.

Respiration (Fig. B1.2) is the reverse process by which heterotrophic organisms (bacteria, fungi, animals, and humans) oxidize organic carbohydrates to derive their energy resulting in CO2 and water.

In order to photosynthesize, land plants have to take up CO2 from the air. They do this by opening little pores called stomata, through which not only CO2 can enter, but also water and oxygen can leave the cell (Fig. B1.3).

b) Anthropogenic Carbon

Human effects on the global carbon cycle were relatively limited before the industrial revolution, although some emissions from land-use change such as deforestation may have been going on for hundreds or thousands of years. During the last 100 years or so, however, rapid burning of fossil fuels such as coal, gas, and oil has caused a massive perturbation (Fig. 2). This perturbation is perhaps most evident in the atmosphere, where CO2 concentrations have increased by more than 40 %. In chapter 3 we have seen that current levels of atmospheric CO2 are unprecedented for the last 800,000 years, but reconstructions going back further in time indicate that the last time Earth’s atmosphere had about 400 ppm CO2 was about 3 million years ago.

Currently humans emit about 10 billion tons of carbon into the atmosphere per year, mostly from fossil fuel burning (~90 %); deforestation contributes the remainder (~10 %). Anthropogenic carbon emissions from fossil fuel burning have increased rapidly since World War Two. The ocean has taken up about 40 % (155/395) of all anthropogenic carbon emissions to date, whereas 60 % (240/395) have stayed in the atmosphere. The land is roughly neutral, its losses from deforestation balanced by gains from recent regrowth.

Anthropogenic carbon emissions increased rapidly during the early 2000s, but in recent years they have flattened out (Fig. 3), mostly because emissions from China showed a similar behavior (Fig. 4). About half of all carbon put into the atmosphere by humans since the industrial revolution (cumulative emissions) was emitted in the last 30 years. Cumulative emissions are the grey and brown areas in Fig. 2. Together they amount to about 500 GtC, or half a trillion metric tons. As we will see later, cumulative carbon emissions determine the global temperature increase.

The effect of the financial and economic crisis is seen in the dip in global carbon emissions in 2009, caused by emission reductions in the US and Europe (Fig. 4), whereas emissions continued to increase in China until 2013, after which they stayed constant. A similar dip can be expected from the current economic slowdown due to the coronavirus pandemic.

Human-caused carbon emissions are mainly from the burning of fossil fuels, whereas cement production contributes only about 6 % (Fig. 5). Burning of coal, oil, and gas has increased substantially during the last 50 years. The increase in emissions from China during the first decade of the 21st century was fueled mainly by coal burning.

Among the four top emitters, the US has the largest emissions per person (Fig. 6). The average US American emits more than 4 metric tons of carbon into the air each year. This is more than twice the emissions per person in Europe or China, more than three times the world-wide average, and about ten times the emissions of a person in India. The US is responsible for 25 % of all carbon emitted in the past (cumulative emissions), although it makes up only 4 % of the world’s population. Europe, which accounts for about 10 % of the world’s population, has emitted more than 22 % of all carbon. Fig. 6 also shows values for other countries.

How do we know that the rising CO2 concentrations in the atmosphere are from human activities? There are several independent lines of evidence. The first comes from economic data. Since fossil fuels are traded internationally, we know how much oil, coal, and gas a country imports and uses. The data shown in Figs. (2) through (6) are based on these estimates. The shaded area in Fig. (3) indicates the error bars of those estimates. Not all countries publish and make their data available, which leads to these uncertainties. However, they are relatively small, such that emissions are known to within an error margin of about 5 %.

Box 2: Carbon Isotopes

Carbon exists as three isotopes: the most common, carbon-12 (12C), with 6 protons and 6 neutrons; the rarer carbon-13 (13C), with an additional neutron; and carbon-14 (14C), or radiocarbon, with two additional neutrons. 14C is radioactive and decays with a half-life of 5,730 years.

Plants and algae fractionate carbon isotopes by about 20 ‰ during photosynthesis, preferentially taking up the light 12C. Pre-industrial δ13C values of atmospheric CO2 were about -6.5 ‰; thus plant and soil carbon have δ13C values of around -27 ‰.

The delta notation is analogous to that of the oxygen isotopes discussed in chapter 3: δ13C = (R/Rstd − 1), conventionally expressed in per mil (‰), where R = 13C/12C is the ratio of the heavy to the light isotope and Rstd is that of a standard.

The second line of evidence is based on carbon isotope measurements. Fractionation during photosynthesis leads to plants and algae having very depleted δ13C values (see box Carbon Isotopes). Since fossil fuels are derived from ancient plants, they are depleted in 13C as well. Thus, the addition of carbon with a very depleted 13C signature to the atmosphere leads to a decrease in the δ13C values of atmospheric CO2. This is observed in measurements both from ambient air and from air extracted from ice cores (Fig. 7).
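The direction of this 13C dilution can be illustrated with a simple two-reservoir mass balance. The following Python sketch uses round, illustrative numbers (not measurements) and ignores the exchange with the ocean and land biosphere, which damps the real atmospheric signal:

```python
# Isotopic mass-balance sketch of the atmospheric 13C dilution.
# All numbers are illustrative round values.
M_atm  = 600.0    # pre-industrial atmospheric carbon (PgC, roughly 280 ppm)
d_atm  = -6.5     # pre-industrial d13C of atmospheric CO2 (permil)
M_foss = 100.0    # hypothetical fossil carbon added and retained in the air (PgC)
d_foss = -27.0    # d13C of fossil-fuel carbon (plant-derived, permil)

# Mixing two carbon pools: the new d13C is the carbon-weighted average.
d_new = (M_atm * d_atm + M_foss * d_foss) / (M_atm + M_foss)
print(f"atmospheric d13C after addition: {d_new:.2f} permil")  # more negative
```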

The third line of evidence is based on oxygen measurements in air. Burning of fossil fuels follows a similar chemical reaction equation to that of respiration: carbohydrates react with oxygen to form CO2 and water, releasing energy. Thus, burning of carbohydrates consumes oxygen. By measuring the oxygen-to-nitrogen ratio in air, changes in atmospheric oxygen concentration can be detected even though they are small compared to the absolute oxygen concentration (Fig. 8). These measurements are evidence of a massive combustion process happening on Earth right now.

We conclude that humans have caused a large perturbation to the natural carbon cycle, mostly by the burning of fossil fuels, which has increased atmospheric CO2 concentrations from 280 ppm to more than 400 ppm, levels unprecedented for about 3 million years. About 40 % of the anthropogenic carbon emitted so far has been taken up by the ocean, thus reducing the accumulation of CO2 in the atmosphere.

Box 3: Residence Time

The residence time τ of a substance in a reservoir is the time required to completely replace the reservoir with its input: τ = X/I = X/O (at equilibrium I = O), where X is the reservoir size (a.k.a. stock, amount, or inventory) and I (O) is the input (output) flux. See the Budget Equation box in Chapter 4. A small code sketch of this formula follows the exercise below.

Exercise: Use Fig. 1 to calculate

  • for the pre-industrial period the residence times of carbon (tip: sum up all the inputs or outputs to calculate I or O) in the
    • atmosphere,
    • ocean,
    • land,
    • combined ocean-atmosphere-land system, and
  • the residence time of anthropogenic carbon in the combined ocean-atmosphere-land system.
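Here is a minimal Python sketch of the τ = X/I calculation, using made-up round numbers; the actual reservoir sizes and fluxes needed for the exercise should be read off Fig. 1.

```python
# Sketch of the residence-time formula tau = X / I.
def residence_time(stock_pg, input_pg_per_yr):
    """Residence time in years: reservoir size divided by total input flux."""
    return stock_pg / input_pg_per_yr

# Hypothetical example: a 600 PgC reservoir fed by 180 PgC/yr of total inputs.
print(f"{residence_time(600.0, 180.0):.1f} years")  # -> 3.3 years
```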

c) Carbonate Chemistry and Ocean Acidification

CO2 enters the ocean from the atmosphere through gas exchange if the partial pressure in the atmosphere is larger than that in the ocean. It dissolves as CO2 gas in water, just like it is dissolved in your soda drink. If you look at an unopened bottle of soda, you do not see bubbles: the CO2 molecules are immersed among a vast number of water molecules. The drink was bottled under pressure or at cold temperatures. The solubility of CO2, like that of other gases such as oxygen, depends on temperature: more gas can be dissolved in colder water (Fig. 9). This is why a soda drink loses CO2 as it warms up. Because dissolved CO2 is not a liquid, it does not ‘evaporate’ into the air; it outgasses. Evaporation implies a phase change, which does not occur in this case.

In the ocean CO2 reacts with seawater to form carbonic acid (H2CO3), which dissociates into bicarbonate (HCO3-) and carbonate (CO32-) ions:

(1)   \begin{equation*} \ce{CO_2 + H_2O \Longleftrightarrow H_2CO_3 \Longleftrightarrow HCO_3^- + H^+ \Longleftrightarrow CO_3^{2-} + 2H^+.} \end{equation*}

The sum of these three carbon species is called dissolved inorganic carbon (DIC = CO2 + HCO3- + CO32-) or total carbon. The equilibrium between the species depends on the pH. In the current ocean the pH is about 8.1, which leads to about 86.5 % of DIC being in the form of bicarbonate ions, 13.0 % in the form of carbonate ions, and only 0.5 % in the form of aqueous CO2 (Fig. 10; Zeebe and Wolf-Gladrow, 2001).
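These percentages follow directly from the dissociation constants of the carbonate system. The Python sketch below assumes typical stoichiometric constants for warm surface seawater (pK1* ≈ 5.86, pK2* ≈ 8.92; see Zeebe and Wolf-Gladrow, 2001; the exact values depend on temperature, salinity, and pressure) and reproduces the quoted fractions at pH 8.1:

```python
# Sketch: DIC speciation as a function of pH from the two dissociation constants.
pH = 8.1
H  = 10.0 ** (-pH)      # hydrogen ion concentration (mol/kg)
K1 = 10.0 ** (-5.86)    # H2CO3* <-> HCO3- + H+   (assumed seawater value)
K2 = 10.0 ** (-8.92)    # HCO3-  <-> CO3^2- + H+  (assumed seawater value)

denom  = 1.0 + K1 / H + K1 * K2 / H**2
f_co2  = 1.0 / denom                  # aqueous CO2
f_hco3 = (K1 / H) / denom             # bicarbonate
f_co3  = (K1 * K2 / H**2) / denom     # carbonate

print(f"CO2: {f_co2:.1%}  HCO3-: {f_hco3:.1%}  CO3^2-: {f_co3:.1%}")
# -> roughly 0.5 %, 86 %, 13 %, close to the numbers quoted in the text
```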

Dissociation of carbonic acid into bicarbonate (the ion in baking soda) produces a hydrogen ion H+, which decreases the pH of the water. Most hydrogen ions, however, re-combine with carbonate ions to form additional bicarbonate ions. Nevertheless, adding CO2 to seawater increases its hydrogen ion concentration (decreases its pH) and decreases the carbonate ion concentration. This process is called ocean acidification.

Observations show that the CO2 partial pressure of surface ocean water closely follows the trend in atmospheric CO2 (Fig. 11), indicating the uptake of anthropogenic carbon. The measurements also demonstrate that the ocean’s pH is decreasing. Data from near Hawaii show that the pH has decreased by about 0.05 units from 1988 to 2011. Global estimates suggest a decrease of about 0.1 units since preindustrial times, which corresponds to a ~30 % increase in hydrogen ion concentration.
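The conversion from a pH change to a change in hydrogen ion concentration is a one-line calculation. Since pH is the negative base-10 logarithm of the hydrogen ion concentration, a decrease of 0.1 pH units corresponds to

\begin{equation*} \frac{[\mathrm{H^+}]_{\mathrm{now}}}{[\mathrm{H^+}]_{\mathrm{pre}}} = 10^{\Delta \mathrm{pH}} = 10^{0.1} \approx 1.26, \end{equation*}

i.e. an increase of roughly 30 %.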

The penetration of anthropogenic carbon into the ocean is largest in the North Atlantic, at mid-latitudes in the Southern Ocean and in the subtropical North Pacific (Fig. 12). As we will see below these are regions of convergence, subduction, or deep water formation.

Anthropogenic CO2 enters the ocean at the surface. Therefore most anthropogenic carbon is in the surface layers (Fig. 13). However, measurable amounts have penetrated most of the upper kilometer of the ocean. In some regions such as the North Atlantic and in the Southern Ocean anthropogenic carbon has entered levels below 2 km depth. These are regions in the ocean where surface waters sink to great depths taking anthropogenic carbon with them.

Calcifying organisms such as corals, coccolithophores, foraminifera, and pteropods build shells and other body parts out of calcium carbonate (CaCO3) by using calcium Ca2+ and carbonate CO32- ions (Fig. 14).

Figure 14: Examples of calcifying organisms. Top left: coccolithophores (phytoplankton). Top right: a live foraminifera (zooplankton). Bottom left: corals (animals). Bottom right: a pteropod, a.k.a. sea butterfly.

Decreasing carbonate ion concentrations and pH lower the saturation state of calcium carbonate, which will make it more difficult for organisms to build calcium carbonate shells and will make existing calcium carbonate dissolve more easily. Many scientists are concerned that the ongoing changes in the carbonate chemistry of the ocean, and those expected for the future if anthropogenic carbon emissions continue, may have adverse consequences for ocean ecosystems. The rates of change are likely much larger than anything experienced in the last millions of years, with unknown risks.

Experiments show that increased CO2 or decreased pH can lead to malformed or partially dissolved coccoliths or pteropod shells. However, ocean acidification research is still in its infancy, and the consequences for many species and ecosystems are currently not known.

Chapter 4: Theory

We have seen how global climate has changed and we’ve learned that some of these changes have been related to forcings and feedbacks such as atmospheric CO2 concentrations and the seasonal distribution of solar irradiance. Now we want to proceed to understand quantitatively why climate is changing. To do this we will consider Earth’s energy budget, review what electromagnetic radiation is, how it interacts with matter, how it passes through the atmosphere, and how this creates the greenhouse effect. But first, we will briefly discuss the general budget equation, because it is widely used by scientists and will be used on various occasions throughout this book.

Box 1: Budget Equation

Scientists like to keep track of things: energy, water, carbon, anything really, because it allows them to exploit conservation laws. In physics, for example, we have the law of energy conservation. It is the first law of thermodynamics and states that energy cannot be destroyed or created; it can only change between different forms or flow from one object to another. Similarly, the total amounts of water and carbon on Earth are conserved, although they may change forms or flow from one component to another. To start, we need a well-defined quantity of interest. Let’s call that quantity X. It could be energy, water, carbon, or something else that obeys a conservation law. It can be restricted to a specific part or component of the climate system, e.g. water in the cryosphere. Mathematically, a budget equation can be written as

(B1.1)   \begin{equation*} \frac{\partial X}{\partial t} = I - O\ , \end{equation*}

where the differentials ∂ denote an infinitesimally small change, t is time, I is the input, and O is the output. The left-hand-side of this equation is the rate-of-change of X. In practice the differentials can be replaced by finite differences (Δ) such that we get

(B1.2)   \begin{equation*} \frac{\Delta X}{\Delta t} = I - O\ . \end{equation*}

Finite differences can be easily calculated: ΔX = X2 − X1 and Δt = t2 − t1. Here X1 corresponds to the quantity X at time t1 and X2 corresponds to the quantity X at time t2. Note that the inputs and outputs have units of the quantity X divided by time. They are often called fluxes.

Different boxes can be connected such that the output of one box becomes the input of another box.

A budget is in balance if the quantity X does not change in time. In this case ΔX = 0 and the input equals the output:

(B1.3)   \begin{equation*} I = O . \end{equation*}

Let’s do a little example. Assume a student gets a monthly stipend of $1,000 and $400 from his/her parents. Those are the inputs to the student’s bank account in units of dollars per month I = $1,000/month + $400/month = $1,400/month. The outputs would be the student’s monthly expenses. Let’s say he/she pays $400 for tuition, $420 for rent, $390 for food, and $100 for books (not for this one though) such that O = $400/month + $420/month + $390/month + $100/month = $1,310/month. The rate-of-change of his/her bank account is ΔX/Δt = I – O = $1,400/month – $1,310/month = $90/month. The student saves $90 per month.

Equation B1.2 can be used to predict the quantity X at time t2

(B1.4)   \begin{equation*} X_{2} = X_{1} + (I - O) \Delta t \end{equation*}

from its value X1 at time t1 if the inputs and outputs are known. In climate modeling this method, called forward modeling, is used to predict quantities into the future one time step Δt at a time.

In our example, if the student starts at time t1, let’s say in January, with X1= $330 in his/her bank account then we can predict that in February he/she will have X2 = $330 + ($1,400/month – $1,310/month)×(1 month) = $330 + $90 = $420. Note that in this case the time step Δt = 1 month.
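As a simple illustration of forward modeling, the following Python sketch steps the bank-account example forward one month at a time using equation B1.4:

```python
# Minimal sketch of "forward modeling" with the budget equation:
# step the quantity X forward in time via X2 = X1 + (I - O) * dt.
def step_forward(x, inflow, outflow, dt):
    """Advance X by one time step using the finite-difference budget equation."""
    return x + (inflow - outflow) * dt

x = 330.0          # dollars in January
I = 1400.0         # dollars per month in
O = 1310.0         # dollars per month out
for month in ["February", "March", "April"]:
    x = step_forward(x, I, O, dt=1.0)   # dt = 1 month
    print(f"{month}: ${x:.0f}")
# -> February: $420, March: $510, April: $600
```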

a) Electromagnetic Radiation

Earth’s energy budget is determined by energy input from the sun (solar radiation) and energy loss to space by thermal or terrestrial radiation, which is emitted by Earth itself. Solar radiation has shorter wavelengths than terrestrial radiation because the sun is hotter than Earth. To understand this, let’s consider the electromagnetic radiation spectrum (Fig. 1). Electromagnetic radiation consists of waves of electric and magnetic fields that can travel through vacuum and matter (e.g. air) at the speed of light (c). It is one way that energy can be transferred from one place to another. The wavelength of electromagnetic radiation (λ), which is the distance from one peak to the next, varies by more than 16 orders of magnitude. Visible light, which has wavelengths from about 400 nm (nanometers, 1 nm = 10^-9 m = one millionth of a millimeter) to about 700 nm, occupies only a small part of the entire spectrum. The frequency (ν) times the wavelength equals the speed of light: c = νλ.

Albert Einstein showed in 1905 that electromagnetic radiation has particle properties. In modern quantum physics the light particle is called a photon. Each photon has a discrete amount of energy E = hν = hc/λ that corresponds to its wavelength, where h = 6.63×10-34 Js is Planck’s constant. The shorter the wavelength, the higher the energy. High-energy photons at ultraviolet, X-ray, and gamma ray wavelengths can be harmful to biological organisms because they can destroy organic molecules such as DNA.

Interaction of electromagnetic radiation with matter depends on the wavelength of the radiation. Molecules have different discrete energy states, and they can transition from one state to another by absorbing or emitting a photon at a wavelength that corresponds to that energy difference (Fig. 2). Absorption (capture) of a photon leads to a transition from a lower to a higher energy state. Note that once absorbed, the photon is gone and its energy has been added to the molecule. Emission (release) of a photon leads to a transition from a higher to a lower state (reversing the direction of the arrows in the top panel of Fig. 2). Note that the emitted photon can have a different wavelength than the absorbed photon. If, e.g., a UV photon was absorbed and caused the energy of the molecule to increase from the ground electronic state to the second excited state, the molecule can emit two visible photons: first one that leads to a transition from the second to the first excited electronic state, and then another that leads to a transition to the ground state.

In physics, a blackbody is an idealized object that absorbs and emits radiation at all frequencies. A blackbody emits radiation according to Planck’s law (Fig. 3). In classical physics experiments, a closed box covered on the inside with graphite is used to study its properties; it has only a small hole as an opening through which the emitted radiation is measured. Although a blackbody is an idealization, many objects behave approximately like one. Even fresh snow. Or the sun.

Integration of the Planck curve over all frequencies results in the Stefan-Boltzmann law

(1)   \begin{equation*} F = \epsilon \sigma T^{4}\ , \end{equation*}

which states that the total energy flux F in units of watts per square meter (Wm-2) emitted from an object is proportional to the fourth power of the absolute temperature of the object T in units of Kelvin (K). The Stefan-Boltzmann constant is σ = 5.67×10-8 Wm-2K-4, and ε is the emissivity (0 < ε < 1), a material-specific constant that allows for deviations from the ideal blackbody behavior (for which ε = 1). For ε = 1, F represents the area under the Planck curve. The emissivity of ice is 0.97, that of water is 0.96, and that of snow is between 0.8 and 0.9. Thus, water and ice are almost perfect blackbodies, whereas for snow the approximation is less perfect but still good. Highly reflective materials such as polished silver (ε = 0.02) and aluminum foil (ε = 0.03) have low emissivities.

Equation (1) states that any object at a temperature above absolute zero emits energy, and that the energy emitted increases rapidly with temperature. E.g. a doubling of temperature will cause the radiative energy output to increase by a factor of 2^4 = 16.
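A quick numerical check of equation (1) in Python, with illustrative values:

```python
# Stefan-Boltzmann law: F = epsilon * sigma * T^4.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant (W m-2 K-4)

def emitted_flux(T_kelvin, emissivity=1.0):
    """Total emitted energy flux (W m-2) of a body at temperature T."""
    return emissivity * SIGMA * T_kelvin**4

print(f"{emitted_flux(288.0):.0f} W m-2")                 # Earth's surface (~15 C): ~390
print(f"{emitted_flux(576.0)/emitted_flux(288.0):.0f}x")  # doubled temperature -> 16x
```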

Box 2: Earth’s Energy Balance Model 1 (Bare Rock)

A video element has been excluded from this version of the text. You can watch it online here: https://open.oregonstate.education/climatechange/?p=104

Figure B2.1: Illustration of the ‘Bare Rock’ Energy Balance Model. Yellow arrows indicate solar radiation. The red arrow represents terrestrial radiation.

We can now attempt to construct a simple model of the Earth’s energy budget in balance. The energy input is the absorbed solar radiation (ASR). The energy output is the emitted terrestrial radiation (ETR). Thus, equation B1.3 becomes

(B2.1)   \begin{equation*} ASR = ETR \end{equation*}

The absorbed solar radiation can be calculated from the total solar irradiance (TSI = 1,370 Wm-2), which is the flux of solar radiation through a plane perpendicular to the sun’s rays. (TSI is also sometimes called the solar constant, although it is not constant but varies slightly, as we’ll see below.) Since Earth is a rotating sphere, the amount of radiation received per unit area is S = TSI/4 = 342 Wm-2, because the surface area of a sphere is 4 times the area of a disc with the same radius.

Part of the incident solar radiation is reflected back to space by bright surfaces such as clouds or snow. This fraction is called the albedo (a) or reflectivity. Earth’s average albedo is about a = 0.3, meaning that about 30 % of the incident solar radiation is reflected back to space and does not contribute to heating the climate system. Therefore ASR = (1 – a) S = 240 Wm-2. Assuming Earth is a perfect blackbody, ETR = σT^4. With this, equation B2.1 becomes

(B2.2)   \begin{equation*} (1 - a) S = \sigma T^{4} \end{equation*}

and we can solve for T = ((1 – a) S / σ)^(1/4). Inserting the above values for a, S, and σ gives T = 255 K or T = -18°C, suggesting Earth would completely freeze over, as illustrated by ice sheets moving from the poles to the equator in the above animation. This is of course not what we observe: Earth’s actual average surface temperature, about 15°C, is much warmer. What’s wrong with this model? It is bare rock without an atmosphere! The model works well for planets without an atmosphere or with a very thin one, like Mars, but it fails for planets with thick atmospheres containing gases that absorb infrared radiation, such as Venus or Earth.
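The 'Bare Rock' calculation is short enough to verify directly; here is a minimal Python sketch using the values given above:

```python
# 'Bare Rock' energy balance: solve (1 - a) S = sigma T^4 for T.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant (W m-2 K-4)
TSI   = 1370.0       # total solar irradiance (W m-2)
S     = TSI / 4.0    # averaged over the sphere: ~342 W m-2
a     = 0.3          # planetary albedo

T = ((1.0 - a) * S / SIGMA) ** 0.25
print(f"T = {T:.0f} K = {T - 273.15:.0f} C")   # -> ~255 K, about -18 C
```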

The concept of Earth’s energy balance goes back 200 years to the French scientist Jean-Baptiste Fourier, as explained in this (1.5 h) documentary (the discussion of Fourier’s contributions starts at 9:52).

Temperature is the macroscopic expression of the molecular motions in a substance. In any substance, such as the ideal gas depicted in Fig. 4, molecules are constantly in motion. They bump into each other and thus exchange energy. A single molecule is sometimes slow and at other times fast, but it is the average velocity that determines the temperature of a gas. More precisely, the temperature of an ideal gas is proportional to the average kinetic energy E = ½mv² of its molecules: T ~ E. The faster they move, the higher the temperature. At absolute zero temperature, T = 0 K, all motions would cease.

The lower panel in Fig. (3) shows blackbody curves for temperatures representative of the sun and Earth. Due to Earth’s lower temperature, the peak of its radiation occurs at longer wavelengths, around 10 μm in the infrared part of the spectrum. The sun’s radiation peaks around 0.5 μm in the visible part of the spectrum, but it also emits radiation at ultraviolet and near-infrared wavelengths. Sunlight at the top-of-the-atmosphere is almost perfectly described by a blackbody curve (Fig. 5). Some solar radiation is absorbed by gases in the atmosphere, but most is transmitted.

b) The Greenhouse Effect

Absorption by water vapor in the infrared and by ozone (O3) in the ultraviolet, together with the scattering of light, removes 25-30 % of solar radiation before it hits the surface (Fig. 6). For Earth’s radiation, on the other hand, total absorption is much larger, at 70-85 %. The most important absorbers in the infrared are water vapor and CO2, whereas oxygen/ozone, methane, and nitrous oxide absorb smaller amounts. Gases that absorb infrared radiation are called greenhouse gases. There is only a relatively narrow window around 10 μm through which Earth’s atmosphere allows radiation to pass without much absorption. Thus, Earth’s atmosphere is mostly transparent to solar radiation, whereas it is mostly opaque to terrestrial radiation.

Why is it that only certain gases in the atmosphere absorb infrared radiation? After all, there is much more nitrogen (N2) and oxygen (O2) gas in the atmosphere than water vapor and CO2 (Fig. 7). However, nitrogen and oxygen molecules both consist of two atoms of the same element. Therefore, they do not have an electric dipole moment, which is critical for interaction with electromagnetic radiation. Gas molecules that consist of different elements, like water or CO2, on the other hand, do have dipole moments and can interact with electromagnetic radiation. Since CO2 is a linear and symmetric molecule, it does not have a permanent dipole moment; however, during certain vibrational modes (Fig. 7) it attains a dipole moment and can absorb and emit infrared radiation. Detailed spectroscopic measurements of absorption coefficients show thousands of individual peaks in the spectra of water vapor and CO2, caused by the interaction of vibrational with rotational modes and the broadening of lines by collisions (e.g. Pierrehumbert, 2011). These data are used by detailed, line-by-line radiative transfer models to simulate atmospheric transmission, absorption, and emission of radiation at individual wavelengths.

Figure 7:

Left: Composition of the dry atmosphere. Water vapor, which is not included in the image, varies widely but on average makes up about 1 % of the troposphere.

Top right: Vibrational modes of CO2. The black circles in the center represent the carbon atom, which carries a positive charge, whereas the oxygen atoms (white) carry negative charges. The asymmetric stretch mode (b) and the bend mode (c) lead to an electrical dipole moment, whereas the symmetrical stretch (a) does not. Modes (b) and (c) correspond to absorption peaks around 4 and 15 μm, respectively (Fig. 6).

Bottom right: Vibrational modes of H2O. The red balls in the center represent the negatively charged oxygen atom, whereas the white balls represent the positively charged hydrogen atoms. Due to its angle, it has a permanent dipole moment and various modes of vibration and rotation.


Absorption (emission) of radiation by the atmosphere tends to increase (decrease) its temperature. At equilibrium the atmosphere will emit just as much energy as it absorbs, but it will emit radiation in all directions, half of which goes downward and increases the heat flux to the surface. This additional heat flux from the atmosphere warms the surface. This is the greenhouse effect.

An atmosphere in which only radiative heat fluxes are considered, and which is perfectly transparent in the visible and perfectly absorbing in the infrared, would result in a much warmer surface temperature than our current Earth (see Perfect Greenhouse Model box below). It can also be shown that adding more absorbing layers would further increase surface temperatures to Ts = (n + 1)^(1/4) T1, where T1 = 255 K is the temperature of the top-most of n layers. For two layers, Ts = 335 K and the intermediate atmospheric layer’s temperature is T2 = 303 K. This could be called the ‘Super Greenhouse Model’. Thus, even though with one perfectly absorbing layer no infrared radiation from the surface can escape to space, adding more absorbing layers further increases surface temperatures, because it insulates the surface further from the top, which will always be at 255 K. In atmospheric sciences this is described as increasing the optical thickness of the atmosphere.

Box 3: Earth’s Energy Balance Model 2 (Perfect Greenhouse)

Since Earth’s atmosphere absorbs most terrestrial radiation emitted from the surface, we may want to modify our ‘Bare Rock’ Energy Balance Model by adding a perfectly absorbing atmosphere. As in the ‘Bare Rock’ model, the energy balance at the top-of-the-atmosphere gives us the emission temperature of the planet, which we now interpret as the atmospheric temperature Ta = 255 K. We now have an additional equation for the atmospheric energy balance. At equilibrium, the total terrestrial radiation emitted by the atmosphere (two times ETRa = σTa^4, since one ETRa goes downward and one goes upward) must equal the absorbed radiation coming from the surface (ETRs = σTs^4). This gives a surface temperature of Ts = 2^(1/4) Ta = 303 K, which is too warm compared with the real world.
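A small Python sketch of the n-layer generalization discussed above (the ‘Super Greenhouse Model’; n = 0 recovers the bare rock, n = 1 the perfect greenhouse):

```python
# n-layer greenhouse: with n perfectly absorbing layers,
# Ts = (n + 1)**(1/4) * T1, where T1 = 255 K is the top-layer temperature.
T1 = 255.0   # emission temperature of the top layer (K)

for n in range(4):
    Ts = (n + 1) ** 0.25 * T1
    print(f"{n} layer(s): surface temperature = {Ts:.0f} K")
# -> 255 K, 303 K, 336 K, 361 K (the text's 335 K differs only by rounding)
```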

c) Earth’s Energy Budget

In contrast to the ‘Perfect Greenhouse’ model, Earth’s atmosphere does absorb some solar radiation, it does transmit some infrared radiation, and, importantly, it is heated by non-radiative fluxes from the surface (Fig. 8). In fact, if only radiative fluxes (solar and terrestrial) are considered, surface temperatures turn out to be much warmer than they currently are and upper tropospheric temperatures are too cold (Manabe and Strickler, 1964). However, warming of the surface by absorbed solar and terrestrial radiation causes the air near the surface to warm and rise, causing convection. Convective motions cause both sensible and latent heat transfer from the surface to higher levels in the atmosphere. Most of this non-radiative heat transfer is in the form of latent heat. Evaporation cools the surface, whereas condensation warms the atmosphere aloft. Thus, the energy and water cycles on Earth are coupled.

The downward terrestrial radiation from the atmosphere is the largest input of heat to the surface. In fact, it is more than twice as large as the absorbed solar radiation. This illustrates the important effect of greenhouse gases and clouds on the surface energy budget. The greenhouse effect is like a blanket that keeps us warm at night by reducing the heat loss. Similarly, the glass of a greenhouse keeps temperatures from dropping at night.

Clouds are almost perfect absorbers and emitters of infrared radiation. Therefore, cloudy nights are usually warmer than clear-sky nights. The important effect of water vapor on the greenhouse effect can be experienced by camping in the desert. Night-time temperatures there often get very cold due to the reduced greenhouse effect in the dry, clear desert air.

d) Radiative Forcings, Feedback Processes, and Climate Sensitivity

We’ve seen how adding greenhouse gases to the atmosphere increases its optical thickness and further insulates the surface from the top, which will lead to warming of the surface. But how much will it warm for a given increase in CO2 or another greenhouse gas? To answer this question, and because we also want to consider other drivers of climate change, we introduce the concepts of radiative forcing and feedbacks. These concepts are a way to separate different mechanisms that result in climate change. Radiative forcing is the initial response of radiative fluxes at the top-of-the-atmosphere. It can be defined as the change in the radiative balance at the top-of-the-atmosphere (the tropopause) for a given change in one specific process that affects those fluxes, with everything else held constant. Examples of such processes are changes in greenhouse gas concentrations, aerosols, or solar irradiance.

A change in the radiative balance at the top-of-the-atmosphere will cause warming if the forcing is positive (more absorbed solar radiation or less emitted terrestrial radiation), and it will cause cooling if the forcing is negative (less absorbed solar or more emitted terrestrial to space). The amount of the resulting warming or cooling will depend not only on the strength of the forcing but also on feedback processes within the climate system. A climate feedback is a process that amplifies (positive feedback) or dampens (negative feedback) the initial temperature response to a given forcing. E.g. as a response to increasing CO2 concentrations, surface temperatures will warm, which will cause more evaporation and increased water vapor in the atmosphere. Since water vapor is also a greenhouse gas, this will lead to additional warming. Thus, the water vapor feedback is positive. The warming or cooling resulting from one specific forcing and all feedback processes is called climate sensitivity. Let’s discuss some of the known radiative forcings and feedback processes in more detail.

Radiative Forcings

Detailed radiative transfer models can be used to calculate the radiative forcing for changes in atmospheric greenhouse gas concentrations. As shown in Fig. (9) for CO2, the forcing turns out to depend logarithmically on its concentration (Ramaswamy et al., 2001)

(2)   \begin{equation*} \Delta F = 5.35 [\mathrm{Wm^{-2}}] \ln (C / C_{0}), \end{equation*}

where C is the CO2 concentration and C0 is the CO2 concentration of a reference state (e.g. the pre-industrial).

This means that the radiative effect of adding a certain amount of CO2 to the atmosphere will be smaller the more CO2 is already in the atmosphere. The reason for this is the saturation of peaks in the absorption spectrum (Fig. 6). E.g. in the center of the peak at 15 μm, all radiation from the surface is already fully absorbed; increasing CO2 further only broadens the width of the peak.
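Equation (2) is easy to evaluate numerically. The Python sketch below uses C0 = 280 ppm as the pre-industrial reference; the 410 ppm present-day value is an illustrative round number:

```python
import math

# Logarithmic CO2 radiative forcing (equation 2).
def co2_forcing(C_ppm, C0_ppm=280.0):
    """Radiative forcing (W m-2) relative to a reference concentration C0."""
    return 5.35 * math.log(C_ppm / C0_ppm)

print(f"doubling (560 ppm): {co2_forcing(560.0):.2f} W m-2")   # -> ~3.7
print(f"present (~410 ppm): {co2_forcing(410.0):.2f} W m-2")   # -> ~2.0
# Each successive doubling adds the same ~3.7 W m-2, illustrating saturation.
```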

Figure 10: Atmospheric methane concentrations as a function of time. Top: recent air measurements from Mauna Loa. Bottom: ice core, firn, and air measurements from Antarctica. Note that most methane sources are in the northern hemisphere, which leads to higher concentrations there compared with the southern hemisphere.

Methane (CH4) is produced naturally in wetlands and by various human activities, such as in the energy industry, in rice paddies, and in agriculture (e.g. cows). It is removed by chemical reaction with OH radicals and has a lifetime of about 10 years. Anthropogenic activities have increased atmospheric methane concentrations since the industrial revolution by more than a factor of two, from around 700 ppb to more than 1600 ppb currently (Fig. 10). On a per-molecule basis, methane is a much more potent greenhouse gas than CO2, perhaps in part because its main absorption peak around 8 μm is less saturated than that of CO2 (Fig. 6). However, since methane concentrations are more than two orders of magnitude smaller than CO2 concentrations, its radiative forcing since the industrial revolution is 0.5 Wm-2, less than that of CO2. As we will see below, CO2 also has a much longer lifetime than methane and can therefore accumulate over long timescales. Indeed, while recent measurements indicate a slowdown of methane growth rates in the atmosphere, CO2 increases at ever higher rates (Fig. 8 in Chapter 2).

Aerosols are small particles suspended in the air. Natural processes that deliver aerosols into the atmosphere are dust storms and volcanic eruptions. Burning of oil and coal by humans also releases aerosols into the atmosphere. Aerosols have two main effects on Earth’s radiative balance. They directly reflect sunlight back to space (direct effect). They also act as cloud condensation nuclei such that they can cause more or brighter clouds, which also reflect more solar radiation back to space. Thus, both the direct and indirect effects of increased aerosols lead to cooling of the surface. Therefore, aerosol forcing is negative.

Large explosive volcanic eruptions can eject ash particles and gases such as sulfur dioxide (SO2) into the stratosphere, where they can be rapidly distributed over large areas (Fig. 11). In the stratosphere, SO2 is oxidized to form sulfuric acid aerosols. Stratospheric aerosols eventually get mixed back into the troposphere and are removed through precipitation or dry deposition; the lifetime of volcanic aerosols in the stratosphere is on the order of months to a few years. Estimates of the radiative forcing from volcanic eruptions depend on the eruption and vary from a few negative tenths of a watt per square meter to -3 or -4 Wm-2 for the largest eruptions of the last 100 years (Fig. 12).

Figure 11: Effects of volcanic eruptions.

Top: Measurements of solar radiation transmitted at Hawaii’s Mauna Loa Observatory.

Center: Photograph of a rising Pinatubo ash plume.

Bottom: Photograph from Space Shuttle over South America taken on Aug. 8, 1991 showing the dispersion of the aerosols from the Pinatubo eruption in two layers in the stratosphere above the top of cumulonimbus clouds.

Anthropogenic aerosols are released from the burning of tropical forests and fossil fuels. The latter is the major source, currently producing more sulfate aerosols than natural processes. Aerosol concentrations are higher in the northern hemisphere, where most industrial activity is located. Radiative forcing estimates are uncertain but vary between about -0.5 and -1.5 Wm-2 for the direct and indirect effects combined (Figs. 12, 13). Generally, estimates of aerosol forcings are more uncertain than those for greenhouse gases.

Solar irradiance varies with the 11-year sunspot cycle. Direct, satellite-based observations of total solar irradiance show variations of about 1 Wm-2 between sunspot maxima and minima (Fig. 14). To estimate radiative forcing TSI needs to be divided by four, which results in about 0.25 Wm-2. Longer-term estimates of TSI variations based on sunspot cycles indicate an increase from the Maunder Minimum (1645-1715) to the present by about 1 Wm-2. The resulting forcing is again about 0.25 Wm-2.

Comparisons of the different forcings indicate that the long-term trends of the last 100 years or so are dominated by anthropogenic forcings. The negative forcings from increases in aerosols partly compensate for the positive forcings from the increase in greenhouse gases, but the net effect is still a positive forcing of about 2 Wm-2. Volcanic forcings are episodic, and estimates of the solar forcing are much smaller than those of anthropogenic forcings.

Figure 14: Total solar irradiance variations.

Top: measurements based on various satellites as a function of time.

Bottom: longer term reconstructions.

Feedback Processes

A feedback process is a modifier of climate change. It can be defined as a process that can amplify or dampen the response to the initial radiative forcing through a feedback loop. In a feedback loop the output of a process is used to modify the input (Fig. 15). By definition, a positive feedback amplifies and a negative feedback dampens the response. In our case, the input is the radiative forcing (ΔF) and the output is the global average temperature change (ΔT).

Figure 15: Schematic illustration of fast climate feedback loops (left) and their effects on the vertical temperature distribution in the troposphere (right).
A video element has been excluded from this version of the text. You can watch it online here: https://open.oregonstate.education/climatechange/?p=104

Imagine talking into a microphone. A positive feedback process works like the amplifier that makes your voice louder. It can lead to a runaway effect if no or only weak negative feedback processes are present. If you hold the microphone too close to the speaker, the runaway effect can result in a loud noise. Early in Earth’s history, between about 500 million and 1 billion years ago, Earth may have experienced a runaway effect into a completely ice and snow covered planet called ‘Snowball Earth’ caused by the ice-albedo feedback (see below). An example of a negative feedback process would be talking into a pillow. This makes your voice quieter. A negative feedback is stabilizing. It prevents a runaway effect. Both positive and negative feedback processes operate in the climate system. (Try to imagine talking into multiple pillows and microphones.) In the following we will discuss the most important ones.

Let’s assume we have an initial positive forcing (ΔF > 0), as illustrated in Fig. 15. As a response, temperatures in the troposphere will warm. Since the troposphere is well mixed, we can assume that the warming is uniform (ΔTs = ΔTa > 0). Thus, the upper troposphere will warm, which will lead to increased emitted terrestrial radiation (ΔETRa > 0) to space. Increased heat loss opposes the forcing and leads to cooling. This is the Planck feedback, and it is negative. Equilibrium is achieved when ΔETRa = ΔF. Since ΔETRa = ETRa,f − ETRa,i is the difference between the final ETRa,f = σ(Ta + ΔTa)^4 and the initial ETRa,i = 240 Wm-2 as calculated above (e.g. Fig. B2.1), we can calculate the surface temperature change due to the forcing and the Planck feedback: ΔTpl = ΔTa = [(ETRa,i + ΔF)/σ]^(1/4) – Ta. For a doubling of atmospheric CO2 (ΔF = 3.7 Wm-2), this results in ΔTpl ≈ 1 K.
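The Planck-only temperature response quoted above can be verified with a few lines of Python:

```python
# Planck feedback: dT_pl = [(ETR_i + dF) / sigma]**(1/4) - Ta
SIGMA = 5.67e-8     # Stefan-Boltzmann constant (W m-2 K-4)
Ta    = 255.0       # initial emission temperature (K)
ETR_i = 240.0       # initial emitted terrestrial radiation (W m-2)
dF    = 3.7         # forcing for a doubling of CO2 (W m-2)

dT_pl = ((ETR_i + dF) / SIGMA) ** 0.25 - Ta
print(f"Planck-only warming: {dT_pl:.2f} K")   # -> ~1 K
```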

Thus, if only the Planck feedback were operating and everything else were fixed, a doubling of CO2 would result in a warming of about 1 K. However, warmer air and surface ocean temperatures will also lead to more evaporation. The amount of water vapor an air parcel can hold depends exponentially on its temperature. This relationship, which can be derived from classical thermodynamics, is called the Clausius-Clapeyron relation (Fig. 16). Since most of Earth is covered in oceans, there is no lack of water available for evaporation. Therefore, it is likely that warmer air temperatures will lead to more water vapor in the atmosphere. Because water vapor is a strong greenhouse gas, this will lead to an additional reduction in the amount of outgoing longwave radiation and therefore to more warming. Thus, the water vapor feedback is positive. If we assume again that the temperature change is uniform with height, the troposphere will warm by an additional amount ΔTwv due to the water vapor feedback (red line in Fig. 15).

Increased amounts of water vapor in the atmosphere also imply increased vertical transport of water vapor and thus more latent heat release at higher altitudes where condensation occurs. This warms the air aloft ΔTlr > 0. In contrast, at the surface increased evaporation leads to cooling ΔTlr < 0. Thus, the lapse-rate, Γ = ΔT/Δz, which is the change in temperature with height in the atmosphere, is expected to decrease. Warming of the upper atmosphere will increase outgoing longwave radiation. Therefore, similarly to the Planck feedback, the lapse rate feedback is negative.

Since both the water vapor and the lapse rate feedback are caused by changes in the hydrologic cycle, they are coupled. This results in reduced uncertainties in climate models if the combined water vapor plus lapse rate feedback is considered rather than each feedback individually (Soden and Held, 2006).

Warming surface temperatures will also cause melting of snow and ice. This decreases the albedo and thus it increases the amount of absorbed solar radiation, which will lead to more warming. Thus, the ice-albedo feedback is positive. Our simple energy balance model 2 from above can be modified to include a temperature dependency of the albedo, which can exhibit a runaway transition to a snowball Earth and interesting hysteresis behavior. Hysteresis means that the state of a system does not only depend on its parameters but also on its history. Transitions between states can be rapid even if the forcing changes slowly.

Warming will also likely change clouds. However, no clearly understood mechanism has been identified so far that would allow an unambiguous prediction of how clouds will change in a warmer climate. Comprehensive climate models predict a large range of cloud feedbacks. Most of them are positive, but a negative feedback cannot be excluded at this point. The cloud feedback is the least well understood and the most uncertain element in climate models. It is also the source of the largest uncertainty in future climate projections.

Climate models can be used to quantify individual feedback parameters γi. They are calculated as the change in the radiative flux at the top-of-the-atmosphere ΔRi divided by the change in the controlling variable Δxi: γi = ΔRi/Δxi. E.g. to quantify the Planck feedback, the controlling variable Δxi = ΔT is the atmospheric temperature. The atmospheric temperature is increased everywhere by ΔT = 1 K, the radiative transfer model calculates ΔR at every grid point of the model, that is at all latitudes and longitudes, and the result is averaged over the whole globe. This yields ΔRpl and γpl = ΔRpl/ΔT. All individual feedback parameters can be added to yield the total feedback γ = γpl + γwv + γlr + γia + γcl. The total feedback has to be negative to avoid a runaway effect. The strongest and most precisely known feedback is the Planck feedback, which is about γpl = −3.2 Wm-2K-1. Estimates for the other feedbacks are about γwv + γlr ≅ +1 Wm-2K-1 for the combined water vapor / lapse rate feedbacks, γia = +0.3 Wm-2K-1 for the ice-albedo feedback, and γcl = +0.8 Wm-2K-1 for the cloud feedback. This gives total feedback values of about −0.8 to −1.6 Wm-2K-1. The total feedback parameter can be used to calculate the climate sensitivity.

Climate Sensitivity

The climate sensitivity ΔT2× is usually defined as the global surface temperature change for a doubling of atmospheric CO2 at equilibrium, and it includes all fast feedbacks discussed above. Current best estimates are ΔT2× ≅ 3 K, with a range from about 1.5 to about 4.5 K. This large uncertainty is mostly due to the large uncertainty of the cloud feedback. Sometimes the climate sensitivity SC = −1/γ is reported in units of K/(Wm-2). Since we know the forcing for a doubling of CO2, ΔF2× = 3.7 Wm-2, quite well, one can be calculated from the other using SC = ΔT2×/ΔF2×. For ΔT2× ≅ 3 K, SC ≅ 0.8 K/(Wm-2), and γ ≅ −1.2 Wm-2K-1. Note that slow feedbacks associated with the growth and melting of ice sheets or changes in the carbon cycle are not included in these numbers. Since we would expect those feedbacks to also be positive, we can expect an even higher climate sensitivity on longer timescales (hundreds to thousands of years).
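Putting the numbers from the previous paragraphs together, a short Python sketch computes the total feedback and the implied sensitivity (the individual γi values are the rough estimates quoted above, so the result is only indicative):

```python
# Combine the feedback parameters quoted in the text into a total feedback,
# then convert to a climate sensitivity.
gamma_planck     = -3.2   # W m-2 K-1
gamma_wv_lapse   = +1.0   # combined water vapor + lapse rate
gamma_ice_albedo = +0.3
gamma_cloud      = +0.8   # most uncertain term

gamma = gamma_planck + gamma_wv_lapse + gamma_ice_albedo + gamma_cloud
S_C   = -1.0 / gamma               # K per (W m-2)
dT2x  = S_C * 3.7                  # warming for a CO2 doubling (dF = 3.7 W m-2)

print(f"total feedback: {gamma:.1f} W m-2 K-1")   # -> -1.1
print(f"sensitivity:    {S_C:.2f} K/(W m-2)")     # -> ~0.9
print(f"dT for 2xCO2:   {dT2x:.1f} K")            # -> ~3.4 K, near the best estimate
```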

Our definitions of radiative forcings and feedbacks above are not clear-cut. They are based on existing climate models and the processes included in them. For example, changes in atmospheric CO2 concentrations over long paleoclimate timescales can be thought of as a feedback rather than a forcing, since the ultimate forcing of the ice age cycles is changes in Earth’s orbital parameters and hence in the seasonal distribution of solar radiation.

Chapter 3: Paleoclimate

Measurements with modern instruments (the instrumental record) are available only for roughly the past century. This is insufficient to describe the full natural variability of the climate system, which makes attribution of observed changes difficult. We want to know if the changes observed in the recent past are unusual compared to pre-industrial climate variability. If they are, it is more likely that they are anthropogenic; if not, they could well be natural. Paleoclimate research is also important for a fundamental understanding of how the climate system works. Some paleoclimate changes, e.g. the ice age cycles, were much larger than those during the instrumental record. Thus, we can learn much from paleoclimate data about the impacts of large climate changes.

a) Methods

Paleoclimate research extends the climate record back in time much further than the instrumental record and has delivered a fascinating history of past climate changes. Most paleoclimate evidence is indirect and based on proxies for climate variables. This evidence is less precise than measurements with modern instruments because of the additional uncertainty in the relation between the proxy and the climate variable. Examples of proxies are pollen (Fig. 1) found in lake sediments, which can be used to reconstruct past vegetation cover, which in turn can be related to temperature and precipitation. Similarly, different species of planktic foraminifera prefer different temperatures: some live in colder waters, others prefer warmer waters. Their fossil shells accumulate in sediments, which can be retrieved with a coring device deployed from a research vessel. Shells deeper in the sediment are older. If shells of cold-loving foraminifera are found at a site where warm-loving species live today, it suggests that near-surface temperatures were colder in the past. Mathematical methods have been developed to quantify the temperature changes from the species composition. Other proxies are chemical, such as the ratio of magnesium to calcium (Mg/Ca), which is related to temperature, or isotopes of oxygen or carbon in the calcium carbonate shells of foraminifera, which can be used to reconstruct temperature, salinity, ice volume, and carbon cycling. Benthic foraminifera live on or in the ocean’s sediments and thus provide useful information on deep ocean properties. Here is an excellent interactive post about paleoclimate proxies.

Figure 1: Electron microscope pictures of fossil foraminifera (top; two planktic foraminifera above and one benthic foraminifera below) and pollen (bottom).

Proxies are found in different archives such as tree-rings, ice-cores, corals, and ocean or lake sediment cores, which cover different time periods at a range of temporal resolutions (Fig. 2). The resolution of a record can be quantified as the time difference Δt = t2 − t1 between two adjacent samples t1 and t2: the smaller Δt, the higher the resolution. Written historical accounts can be used to reconstruct past climatic conditions at very high temporal resolution (some ancient documents contain daily weather entries) back to about 1,000 years ago, but only a limited number of such records are available. Tree-rings, corals, and speleothems (cave deposits such as stalactites and stalagmites) provide reconstructions at annual to decadal resolution (Δt ~ years to decades) back many thousands of years. Ice cores typically have decadal to centennial resolution, going back almost a million years in Antarctica and about 100,000 years in Greenland. Ocean sediment cores cover millions of years in the past, but usually at low temporal resolution of centennial to millennial timescales (Δt ~ 100s to 1,000s of years).


Figure 2: Layered paleoclimate archives. Varves are layered sea or lake sediments. From geo.arizona.edu.

Several methods are used to date samples and construct chronologies for paleoclimate records. Tree-rings are annual layers that can be counted. Patterns of thin and thick rings can be matched from one tree to another, older one (Fig. 3); in this way a large number of trees can be combined into a long, layer-counted chronology. Layer counting can also be used in other archives with annual layers, such as ice cores or lake sediments. Most ocean sediments do not have annual layers because of bioturbation, the mixing of sediments by worms and other organisms that live in the sediment. When organic material is present, radiocarbon dating can be used to determine the age of a sample. Radiocarbon (14C) decays exponentially with a half-life of 5,730 years. Thus, the lower the ratio of radiocarbon to regular carbon (14C/12C) in a sample, the older it is. This ratio can be measured precisely with a mass spectrometer. However, the method can only be used back to about 40,000 years before present, because older material contains unmeasurably small amounts of 14C.
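To make the age calculation concrete, here is a minimal Python sketch of radiocarbon dating based on the decay law above (the sample ratio in the example is a made-up illustration, not a real measurement):

import math

HALF_LIFE = 5730.0  # half-life of radiocarbon (14C) in years

def radiocarbon_age(ratio_sample, ratio_initial):
    """Age in years from the measured 14C/12C ratio of a sample.
    Exponential decay N(t) = N0 * (1/2)**(t / HALF_LIFE) inverts to
    t = -HALF_LIFE * log2(N / N0)."""
    return -HALF_LIFE * math.log2(ratio_sample / ratio_initial)

# A sample retaining 25% of its original radiocarbon is two half-lives old:
print(radiocarbon_age(0.25, 1.0))  # 11460.0 years

At 40,000 years, less than 1% of the original 14C remains, which is why the method fails for older material.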

b) The Last Two Millennia

Historical accounts such as pictures of the frozen Thames (Fig. 4) document a period of relatively cold conditions during the 16th to 19th centuries in Europe called the Little Ice Age. Conversely, relatively warm conditions during the 9th to 13th centuries, called the Medieval Warm Period, may have allowed Vikings to colonize Greenland and travel to North America.

Figure 4: Left: Picture of the frozen Thames from 1683-84 by Thomas Wyke. Right: Ruins of Hvalsey Church from the Greenland settlements of the Norse.

Two recent reconstructions of global temperatures, however, indicate that the Medieval Warm Period was not a global phenomenon (Fig. 5). These reconstructions also suggest that there was a long-term cooling trend during the past 2,000 years that culminated in the Little Ice Age, which was terminated by a relatively rapid warming during the 20th century. According to the PAGES 2k reconstruction, global average temperature during the three decades from 1971 to 2000 was warmer than during any other 30-year period in the last 1,400 years. This suggests that the recent warming is unusual. The rate of change during the last ~100 years also appears to be unusually fast compared with the previous 2,000 years. The two independent reconstructions agree well on the cooling trend over the past 1,000 years, but the PAGES 2k reconstruction suggests slightly warmer conditions during the first millennium CE (Common Era). The Marcott et al. (2013) dataset is based mostly on lower-resolution ocean sediment cores and is therefore smoothed compared with the higher-resolution PAGES 2k dataset, which includes mostly land data such as pollen and tree rings.

c) The Holocene

Fig. 6 shows the full Holocene (the last 10,000 years) reconstruction of global average temperatures from Marcott et al. (2013). It suggests that the long-term cooling trend of the last 2,000 years is part of a longer trend extending back to the middle Holocene, around 4,000 BC. The early Holocene, from around 8,000 BC to 4,000 BC, was relatively warm, similar to recent decades. (This is debated in the scientific community; a recent paper suggests that it was not warmer during the early Holocene and that seasonal biases in the proxies are to blame. If this is true, the current warming is unprecedented in more than 10,000 years, perhaps in more than 100,000 years.) The rate of temperature change appears to be much smaller than during the last 100 years, but the relatively low resolution of the reconstruction leads to smoothing and does not allow a fair comparison with the instrumental record on 100-year timescales.

Now let’s have a look at CO2. Is the observed increase in atmospheric CO2 during the last 60 years unusual compared with the pre-industrial Holocene? Ice cores can be used to answer this question. When snow accumulates on an ice sheet it is compressed, first to firn and later to ice, by the pressure of the overlying snow (Fig. 7). During this compaction small bubbles of air are trapped within the ice. In the lab the air can be extracted, e.g. by mechanically crushing the ice, and the concentrations of CO2 and other greenhouse gases can be measured.

Figure 7: Ice cores have been drilled in different locations in Antarctica (top left) using drilling devices like the one depicted here (top center). Air gets trapped in the ice through compaction of snow and firn (top right). These air bubbles in the ice are visible by eye (bottom left) and in the microscope (bottom center). Sometimes dark ash layers are found in ice cores, which can help to date the ice (bottom right).

Ice cores from Greenland are not suitable for CO2 reconstructions because they are contaminated by impurities (e.g. dust) that can lead to CO2 production in the ice. However, Antarctic ice is so pure that it provides excellent records of past atmospheric CO2 concentrations. Several ice cores have been drilled in Antarctica (Fig. 7). The measurements from the youngest ice and firn match the direct measurements of modern air from Mauna Loa very well (Fig. 8), and measurements from different ice cores agree with each other (different colored symbols in Fig. 8). This indicates that Antarctic ice cores faithfully record past atmospheric CO2 concentrations. The results show that atmospheric CO2 concentrations were relatively constant, between about 260 and 280 ppm, during the Holocene (the last 10,000 years). Only during the last 200 years did CO2 concentrations start to increase. Thus we can answer the question posed above: the CO2 increase of the last 200 years is very unusual and has no precedent in the last 10,000 years. We also know that the burning of fossil fuels increased dramatically after the industrial revolution (1760-1840). In the carbon cycle chapter, more evidence will be presented demonstrating that the observed CO2 increase was indeed due to human activities such as the burning of fossil fuels.

Other greenhouse gases have also been measured in air extracted from ice cores. Methane (CH4) and nitrous oxide (N2O) show very similar behavior to CO2: their concentrations were relatively constant throughout the Holocene, at around 700 ppb and 260 ppb, respectively, and increased dramatically during the last 200 years to about 1,700 and 310 ppb, respectively (IPCC, 2007).

d) The Ice Ages

Fig. 6 already hints at a cold period before the Holocene. Indeed, we now know that Earth had long been in an ice age, or glacial state, before the current warm period of the Holocene began. But it was only in the 19th century that scientists realized that Earth has experienced ice ages on a global scale. This discovery was made by Louis Agassiz, a Swiss geologist, who hypothesized that not only were Alpine glaciers once more advanced, but large ice sheets also moved south over northern Europe and America, leaving glacial landforms behind (Fig. 9). For a fascinating and more detailed account of this discovery the reader is referred to Imbrie and Imbrie (1979).

Figure 9: Glacial landforms. Top: Polished bedrock with striations indicates that a glacier moved over it; the glacier incorporates rocks into its base and, by pushing them over the underlying bedrock, creates grooves. This example is from Mount Rainier National Park. Bottom left: Moraines (this example is from Svalbard) are glacial deposits formed at the side (lateral moraines) or end (terminal moraines) of a moving glacier. Bottom right: Erratic boulders like this one from Scotland, many miles from a possible bedrock source, were attributed by Louis Agassiz to the action of ice-age glaciers.

During the height of the last ice age, the Last Glacial Maximum (LGM) roughly 20,000 years ago, large additional ice sheets covered parts of North America and northern Europe (Fig. 10). The Laurentide Ice Sheet was more than 3 km thick and covered all of what is now Canada and part of the northern United States, reaching as far south as New York City, Chicago, and Seattle. The Eurasian (or Fennoscandian) Ice Sheet covered all of Scandinavia, much of the British Isles, the Baltic Sea, and surrounding land areas from northeastern Germany to northwestern Russia. Mountain glaciers also descended further down their valleys, often into the lowlands.

Because so much more water was locked up as ice on land, sea level was 120 m lower during the LGM than it is today. Imagine your favorite beach: at the LGM there was no water there. Explore with NOAA’s interactive bathymetry viewer how much further away the water would have been at your favorite beach during the LGM.

The LGM is a well-studied period in paleoclimate research, and a wealth of data is available. Ice cores show lower concentrations of atmospheric greenhouse gases such as CO2 (180 ppm vs 280 ppm during the late pre-anthropogenic Holocene; Fig. 11) and methane. Vegetation reconstructions show that forests were replaced by tundra and grasslands over large parts of the mid- to high latitudes (Prentice et al., 2011). We also know from ice and ocean sediment cores that the air was dustier. Temperature proxies show colder temperatures almost everywhere (Fig. 10), but the changes were not uniform. Temperature changes over large parts of the tropical and subtropical oceans were rather small; globally averaged sea surface temperatures have been estimated at only 2°C cooler than present (MARGO, 2009). Land areas in the tropics experienced moderate cooling of about 3°C (Bartlein et al., 2011). The largest cooling, of more than 8°C, occurred over land at mid-to-high latitudes and over Antarctica (Fig. 11). Globally averaged surface air temperature has been estimated to be 4°C colder during the LGM (Annan and Hargreaves, 2013). More recent, as yet unpublished studies suggest 5°C, indicating some uncertainty in these estimates. These authors also suggest that, on average, the cooling over land was three times larger than over the oceans. The land-sea contrast and polar amplification are similar to those seen in the observed warming over the past century (Fig. 2 in Chapter 2), suggesting that they are robust properties of the climate system.

Box 1: Oxygen Isotopes

Isotopes are variations of the same element with a different number of neutrons, which leads to a different mass (Fig. B1). Since different isotopes of the same element have the same number of electrons (yellow circles in Fig. B1), they react chemically identically, or very similarly.

Water molecules (H2O) containing 18O are (20 − 18)/18 ≈ 11% heavier than those containing 16O. The mass of a molecule affects how likely it is to participate in a phase change such as evaporation or condensation. Water molecules at a certain temperature in the liquid phase have a distribution of kinetic energies \frac{1}{2} mv^{2}: some are a little faster, others a little slower. Only the fastest are able to leave the liquid and make it into the vapor phase (air). (Here is a nice youtube video explaining this in a little more detail.) Because a heavy water molecule has a larger mass, its velocity must, on average, be smaller for it to have the same kinetic energy. Therefore the heavier isotopes remain in the liquid phase more often than the lighter isotopes. This process is called fractionation. It leads to an accumulation of heavy water isotopes in the ocean and relatively more light water isotopes in the air.

Here is an analogy. Imagine a number of black and white soccer balls lined up at the center line of a soccer field. The black balls are slightly heavier than the white balls. A player now shoots the balls, one after another, alternating black and white, towards the goal. When he or she is done, you count the balls that made it across the goal line. Will there be more black or white balls? More white balls: because they are lighter, they fly farther at the larger velocities imparted by the player, who puts about the same amount of energy into each shot. In this analogy the white balls are the light isotopes.

Isotope ratios are usually expressed as delta values such as \delta^{18}O = (R - R_{std}) / R_{std}, where R = 18O/16O is the heavy-over-light ratio of a sample and R_{std} that of a standard (delta values are typically multiplied by 1,000 and reported in permil, ‰). Fig. B2 illustrates how fractionation during evaporation and condensation affects the isotope values of water, vapor, and ice in the global hydrological cycle.

Figure B2: Typical δ18O values (in permil). Surface ocean water has δ18O values of around zero. Due to fractionation during evaporation, fewer heavy isotopes make it into the air, which leads to negative delta values of around -10 ‰ in the vapor. Condensation prefers the heavy isotopes for reasons analogous to evaporation. In this example the first precipitation thus has a δ18O value of about -2 ‰, while the remaining water vapor is further depleted in 18O relative to 16O, so that its δ18O value is approximately -20 ‰. Each subsequent precipitation event further depletes the vapor in 18O. This process is known as Rayleigh distillation and leads to very low δ18O values of less than -30 ‰ for snow falling onto ice sheets. Thus, ice has very negative δ18O of between -30 and -55 ‰. Deep ocean values today are about +3 to +4 ‰. During the LGM, as more water was locked up in ice sheets, the remaining ocean water became heavier in δ18O by about 2 ‰. We know this because foraminifera build their calcium carbonate (CaCO3) shells from the surrounding sea water; they incorporate the oxygen isotopic composition of the water into their shells, which are preserved in the sediments and can be measured in the lab.
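The delta notation and the Rayleigh distillation described in the caption can be made concrete with a short Python sketch. This is a toy model: the fractionation factor alpha and the initial vapor value are assumed round numbers chosen to roughly reproduce the values in Fig. B2, not measured ones.

R_STD = 2.005e-3  # approximate 18O/16O ratio of the ocean-water standard

def delta18O(R):
    """Delta value in permil: (R - R_std) / R_std * 1000."""
    return (R / R_STD - 1.0) * 1000.0

# Rayleigh distillation: as an air mass rains out and a fraction f of the
# initial vapor remains, the vapor ratio follows R = R0 * f**(alpha - 1).
alpha = 1.010               # assumed liquid-vapor fractionation factor
R0 = R_STD * (1.0 - 0.010)  # vapor starting near -10 permil
for f in (1.0, 0.5, 0.1, 0.01):
    R_vapor = R0 * f ** (alpha - 1.0)
    print(f"fraction remaining {f:5.2f}: vapor d18O = {delta18O(R_vapor):6.1f} permil")

With these numbers the vapor evolves from about -10 ‰ to below -50 ‰ as almost all of it rains out, matching the range quoted above for snow falling on ice sheets.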

We conclude that paleoclimate data from the LGM show that Earth was dramatically different from today, with large ice sheets, low sea level, and different vegetation. These changes happened even though global average temperature changed by only 4-5°C, which is comparable to the changes projected for some future scenarios.

Geologic evidence such as that shown in Fig. 9 is abundant only for the last major glaciation, because each glacial advance erases the evidence of previous ones. Nevertheless, thanks to our friends the foraminifera, we know many details of earlier glaciations. How can that be, given that foraminifera live in the ocean, not on ice? As explained in the box, the δ18O of sea water records the amount of ice volume and thus sea level. Since foraminifera record the δ18O of sea water in their shells, we can, remarkably, reconstruct past ice volume from tiny shells found in mud on the ocean floor.

Fig. 11 shows that there were about 9 glacial-interglacial cycles during the past 800,000 years. Most of the time sea level was lower than today, during some glacial maxima by more than 120 m. Ice core records show that glacial periods were always associated with low atmospheric CO2 concentrations and low temperatures in Antarctica. CO2 concentrations varied between about 180 ppm during glacial maxima and 280 ppm during interglacials. The correlation between these completely independent datasets, one from Antarctic ice cores, the other from deep-sea foraminifera, is astounding. It demonstrates that climate and the carbon cycle are tightly interlinked. High CO2 concentrations are always associated with warm temperatures, high sea level, and low ice volume. This indicates the importance of atmospheric CO2 for climate, but it also suggests that climate in turn impacts the carbon cycle and causes changes in CO2.

Closer inspection of leads and lags shows that during the last deglaciation CO2 and Antarctic temperature led global temperature, which suggests that CO2 was an important forcing of the warming (Shakun et al., 2012). But we also know that climate affects the carbon cycle. For example, CO2 is more soluble in colder water, so the ice age ocean took up additional carbon from the atmosphere. Another likely reason for the lower glacial CO2 concentrations in the atmosphere is iron fertilization. The colder glacial atmosphere was also dustier, and dust contains iron, so iron delivery to currently iron-limited regions such as the Southern Ocean was increased during the ice age. This intensified phytoplankton growth and the biological pump, the set of processes that sink and sequester carbon in the deep ocean. Our recent research indicates that about half (~45 ppm) of the glacial-interglacial CO2 variations can be explained by temperature and another 25-35 ppm by iron fertilization (Khatiwala et al., 2019). However, this topic is not settled and remains subject to ongoing research. More about how climate can affect CO2 will be discussed in the carbon cycle chapter. For now let’s just conclude that climate and the carbon cycle are tightly linked.

But what causes glacial-interglacial cycles? They are caused by changes in Earth’s orbit around the sun, which affect the seasonal distribution of incoming solar radiation. This theory was first proposed in 1938 by the Serbian astronomer Milutin Milankovitch, who calculated variations in Earth’s orbital parameters and linked them to past ice ages. Earth’s orbit can be described by three parameters (Fig. 12). The eccentricity E is the deviation from a perfectly circular orbit: Earth’s orbit is a slight ellipse, although it is close to circular. E varies between zero and 0.06 on ~100,000-year cycles. The tilt T, or obliquity, is the angle between Earth’s axis of rotation and the ecliptic, the plane of Earth’s orbit around the sun. It is currently T = 23.5°, but it varies between 24.5° and 22.5° on a 40,000-year cycle. Precession P is the wobble of Earth’s axis, like the wobble of a spinning top. Currently we are closest to the sun in January, which corresponds to the axis tilted towards the left in Fig. 12. P varies on a 23,000-year cycle and is strongly modulated by eccentricity. These variations are caused by gravitational forces from the other planets, particularly Jupiter and Saturn.

Figure 12: Milankovitch Cycles. Top: Earth’s orbit around the sun is determined by eccentricity (E), tilt (T) or obliquity, and precession (P). Bottom: Variation of Earth’s orbital parameters through time. Negative numbers towards the left show the past and positive numbers show the future.

Milankovitch’s theory is that summer insolation in the northern hemisphere controls the waxing and waning of the ice sheets. When summer insolation is high, all snow from the previous winter melts away. When summer insolation is low, some snow survives, and during the next winter more snow accumulates; in this way an ice sheet can grow. The northern hemisphere is important because that is where the major land masses are, on which additional ice sheets can grow. The Antarctic ice sheet did not change fundamentally during the ice ages, although it grew somewhat bigger during glacials and shrank a bit during interglacials, and the Patagonian ice sheet remained much smaller than those in the north.

The Milankovitch theory was essentially confirmed by spectral analysis of deep-sea δ18O data, which shows all the predicted periodicities (Hays et al., 1976). However, exactly when and why ice ages start and end remains an active topic of research.
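The idea behind such a spectral analysis can be illustrated with a toy example in Python: build a synthetic record containing the three orbital periods plus noise, and recover the periods from its periodogram. All amplitudes and the noise level below are arbitrary choices for illustration, not properties of real δ18O data.

import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)

# Synthetic record: 800 samples, one per 1,000 years, with power at the
# ~100 kyr (eccentricity), ~41 kyr (obliquity) and ~23 kyr (precession)
# periods, plus noise.
dt = 1_000                   # years between samples
t = np.arange(800) * dt
signal = (1.0 * np.sin(2 * np.pi * t / 100_000)
          + 0.5 * np.sin(2 * np.pi * t / 41_000)
          + 0.3 * np.sin(2 * np.pi * t / 23_000)
          + 0.3 * rng.standard_normal(t.size))

# Periodogram: power as a function of frequency (cycles per year).
power = np.abs(np.fft.rfft(signal)) ** 2
freq = np.fft.rfftfreq(t.size, d=dt)

# The three strongest spectral peaks recover the three orbital periods.
peaks, _ = find_peaks(power)
top = peaks[np.argsort(power[peaks])[-3:]]
print(np.sort(1.0 / freq[top]))  # roughly 23,000, 41,000 and 100,000 years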

While the Milankovitch theory explains the cyclicity and timing of glacial-interglacial cycles, it does not explain their amplitudes (how much global average temperatures changed). Simulations with global climate models show that the amplitude of glacial-interglacial temperature changes can only be reproduced if CO2 changes are accounted for (e.g. Shakun et al., 2012). This leads us to conclude that CO2 changes are an important (feedback) factor in determining glacial-interglacial temperature changes, although the ultimate cause of the ice age cycles is Earth’s orbital cycles.

Chapter 2: Observations

Although Earth’s climate is currently changing rapidly relative to past changes, in most regions where we live changes are slow enough that we do not notice them directly during our daily lives. However, older people may have noticed changes during their lifetimes and in some regions changes are larger and more obvious than elsewhere. In this chapter we will discuss some observations from the past 100 years and data from regions that are particularly sensitive to climate change, where the most dramatic effects have occurred. This will not be a comprehensive documentation of existing observations. Additional observations will be discussed throughout the remainder of this book.

a) Atmosphere

Observations show that climate is changing on a global scale. Surface air temperature data, averaged over the whole Earth, indicate warming of about 1°C over the last ~100 years (Fig. 1). But that warming was neither steady nor smooth. From 1880 to about 1910 there was cooling, followed by warming until about 1940. After this, slight cooling or approximately constant temperatures were observed until the 1970s, followed by rapid warming until the present. Each year’s temperature is somewhat different from the next. Not all of these year-to-year changes are currently understood, but natural variations, which are not caused by humans, play a role. For instance, strong El Niño years such as 1997-98 or 2015 show up as particularly warm years, whereas La Niña years such as 1999-2000 or 2011 show up as relatively cool. El Niño and La Niña, together known as ENSO (El Niño/Southern Oscillation), are phases of a mode of climate variability in the tropical Pacific that impacts many other regions of the Earth.

Explore Temperature Data

Go to NOAA’s Climate at a Glance website to explore their global temperature data. Select “annual” to get the annual averages. Now list the 20 warmest years by clicking on the ANOMALY column in the table below until you have the warmest year listed on top.

  • Which is the record warmest year?
  • How many of the 20 warmest years have occurred since the year 2000?

Click on the “Display Trend” button and enter different start and end dates.

  • What is the trend over the last 50 (last 100) years?

Interested in how temperature measurements from long ago have been recorded? Have a look at this fascinating interactive article.

The increase in atmospheric temperatures over the last 100 years has not been uniform everywhere. Fig. 2 shows that temperatures over land changed more than over the oceans and the Arctic warmed more than the tropics. These patterns are called land-sea contrast and polar amplification and are quite well understood and simulated in climate models as we will see later. The warming is indeed almost global. The only exception is the northern North Atlantic, which has been slightly cooling for reasons we will discuss later.

Combining thousands of temperature records into an estimate of global mean temperature change such as Fig. 1 is not trivial. Issues such as changes in station locations, instrumentation, and data coverage have to be taken into account. The fact that five different groups analyzing the data with different methods come to the same conclusions suggests that the results are robust. The reliability of the data has been demonstrated, and it has been shown that station siting (e.g. urban versus rural) does not bias the result. See this blog for more discussion.

The global warming trend has made warm and extremely warm temperatures more likely and cold and extremely cold temperatures less likely. For example, extremely warm (more than 3°C warmer than the 1951-1980 average) summer temperatures over Northern Hemisphere land areas had only a 0.1% chance of occurring during 1951-1980, whereas during the decade from 2005 to 2015 their chance of occurring was 14.5%. Conversely, relatively cool summer temperatures (less than 0.5°C cooler than the 1951-1980 average), which occurred about every third year from 1951 to 1980, occurred only 5% of the time during 2005 to 2015. See this article and this blog for additional information and graphs.
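Why a modest average warming changes the odds of extremes so strongly can be seen with a toy calculation that assumes normally distributed summer temperature anomalies. The shift of 1.5 standard deviations below is an arbitrary illustration, not a value fitted to the data discussed above.

from statistics import NormalDist

# Anomalies in units of the 1951-1980 standard deviation.
base = NormalDist(mu=0.0, sigma=1.0)     # 1951-1980 climate
shifted = NormalDist(mu=1.5, sigma=1.0)  # assumed warmed climate

threshold = 3.0                          # "extremely warm" cutoff, in sigma
p_before = 1.0 - base.cdf(threshold)
p_after = 1.0 - shifted.cdf(threshold)
print(f"P(extreme heat) before: {p_before:.2%}")  # about 0.13%
print(f"P(extreme heat) after:  {p_after:.2%}")   # about 6.7%
print(f"That is roughly a {p_after / p_before:.0f}-fold increase.")

Shifting the whole distribution by 1.5 standard deviations multiplies the probability of exceeding the +3 sigma threshold by a factor of about 50: small shifts of the mean have outsized effects on the tails.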

b) Cryosphere

One of the regions most sensitive to climate change is the Arctic. Sea ice cover there has decreased dramatically over the past 40 years, particularly in late summer (Fig. 3). Arctic sea ice has a strong seasonal cycle: in late winter it covers about 15.4 million km2, decreasing to about 6.4 million km2 in late summer (Fig. 4). The relative changes are therefore larger in late summer, with a reduction of about 46%, or 2.9 million km2, from 1980 to 2015. In winter the reduction was only about 9%, or 1.5 million km2. The lost area of summer Arctic sea ice is more than 10 times the size of Oregon (255,000 km2), or four times the size of France (~650,000 km2).

Figure 3: Arctic sea ice extent in September 1979 (left) and 2020 (right) from satellite observations. The purple line denotes the median ice extent from 1981-2010. In 1979 the ice extent was 7.1 million sq km, in 2020 it was 3.9 million sq km. Images from the National Snow and Ice Data Center (NSIDC).

In the Antarctic, sea ice has an even larger seasonal cycle, but it has changed much less over the last 40 years than in the Arctic; southern hemisphere sea ice has even slightly increased. Note that year-to-year fluctuations are larger in the Antarctic than in the Arctic, whereas the long-term trends are much smaller. This combination of larger short-term fluctuations and a smaller long-term trend makes Antarctic sea ice trends less statistically significant. A trend is statistically significant if it is larger than the uncertainty arising from short-term fluctuations.
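A rough sketch of such a significance test in Python, using scipy’s linear regression: the two synthetic series below are invented stand-ins for an Arctic-like record (strong trend, modest noise) and an Antarctic-like record (weak trend, large noise), not real sea ice data.

import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)
years = np.arange(1980, 2021)

# Invented example series (in million km2, say):
arctic = -0.08 * (years - 1980) + 0.3 * rng.standard_normal(years.size)
antarctic = 0.01 * (years - 1980) + 0.6 * rng.standard_normal(years.size)

for name, series in (("Arctic", arctic), ("Antarctic", antarctic)):
    fit = linregress(years, series)
    # A trend is significant when it is large compared with its
    # standard error (equivalently, when the p-value is small).
    print(f"{name:9s}: trend = {fit.slope:+.3f} +/- {fit.stderr:.3f} per year, "
          f"p = {fit.pvalue:.2g}")

With these settings the Arctic-like trend is many times its standard error, while the Antarctic-like trend is of the same order as its standard error and therefore far less significant.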

Globally, Earth lost about 2 million square kilometers of sea ice from 1980 to 2015, about 10% of the total. Compare that area to your favorite state or country.

Explore Sea Ice Changes

Go to the National Snow and Ice Data Center’s (NSIDC) website and click on a few years to see how Arctic sea ice cover has changed over the year. To get a time series for the current month go to their sea ice index site and click on the Monthly Sea Ice Extent Anomaly Graph in the lower right corner.

  • By how much has the sea ice cover decreased since the 1980s? Estimate the decrease both in relative terms (percentage) and in absolute terms (million square kilometers).

An animation is available here.

Mountain glaciers are also sensitive to climate change. Fig. 5 shows an example, Muir Glacier, which has retreated dramatically since 1941. This is typical of most glaciers around the world. In fact, only a small number of glaciers are advancing; the vast majority are melting and retreating from the valleys up into higher elevations.

The World Glacier Monitoring Service (WGMS) has compiled information on hundreds of glaciers world-wide. Fig. 6 shows that since 1980 glaciers in all regions have been losing mass with an acceleration of loss in recent years. Watch this video of glacier changes in Iceland.

Explore Glacier and Ice Sheet Change

Go to the Glacier Browser and select a glacier of your choice.

  • What do you observe?

Explore Greenland ice sheet change with this interactive chart, which shows the surface area experiencing melting. Click on a few years in the early part of the record (e.g. 1980s) and in the more recent part (e.g. 2010s).

  • How has the melt area changed?

Ice sheets are also melting. Observations from the Gravity Recovery and Climate Experiment (GRACE) satellites, which measure Earth’s gravity field very precisely and can detect changes in mass, show that since 2002 the Greenland ice sheet has lost about 4,000 Gt of mass and the Antarctic ice sheet about 2,500 Gt (Fig. 7). Here is a presentation about Greenland melting. Melting of the Greenland ice sheet currently contributes about 0.8 mm/yr to global sea level rise; Antarctica contributes 0.4 mm/yr and mountain glaciers about 0.6 mm/yr, for a total of 1.8 mm/yr of sea level rise from melting ice (IPCC 2019).
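These mass losses can be translated into sea level equivalent by spreading the meltwater over the ocean surface. A back-of-the-envelope Python sketch; the ocean area is an assumed round value:

OCEAN_AREA = 3.6e14   # m2, approximate area of the global ocean
RHO_WATER = 1000.0    # kg/m3, density of meltwater

def gigatonnes_to_mm_sea_level(mass_gt):
    """Sea level rise (mm) from adding a mass of meltwater to the ocean."""
    volume = mass_gt * 1e12 / RHO_WATER   # Gt -> kg -> m3
    return volume / OCEAN_AREA * 1000.0   # m -> mm

# ~4,000 Gt lost from Greenland since 2002, i.e. roughly two decades:
total = gigatonnes_to_mm_sea_level(4000.0)
print(f"{total:.0f} mm total, about {total / 20.0:.1f} mm/yr on average")

This gives about 11 mm in total, or roughly 0.6 mm/yr averaged over the period, consistent with the ~0.8 mm/yr current rate quoted above given that the mass loss has been accelerating.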

Box 1: Rates of Change

The rate of change of a variable X between two points in time t1 and t2 can be calculated as the difference (denoted by the Greek letter delta, Δ) of the value of the variable at time t2 minus its value at time t1, ΔX = X2 − X1, divided by the difference in time, Δt = t2 − t1:

(B1.1)   \begin{equation*} \frac{\Delta X}{\Delta t} = \frac{X_2 - X_1}{t_2-t_1}\ . \end{equation*}

Thus the units of the rate of change are the units of the variable divided by time. You can determine the rate of change from a time series graph such as Fig. 1 or Fig. 6 by selecting two points in time on the horizontal axis and reading the corresponding values X1 and X2 from the vertical axis. Using Fig. 1 as an example, our variable is the temperature anomaly T. Choosing t1 = 1940 and t2 = 2010 we read off T1 = 0°C and T2 = 0.7°C. Thus ΔT = 0.7°C, Δt = 70 years, and the rate of change ΔT/Δt = 0.01°C/yr = 0.1°C/decade.

Obviously, when the rate of change is calculated this way, the resulting value depends on the two times picked. The rate of change of a whole set of data points can instead be calculated by assuming a linear relationship and minimizing the distance of all points from a straight line X = S×t + I, where S is the slope and I the intercept with the vertical axis. This is called linear regression. Simple formulae to calculate S and I can be found here. Linear regressions are commonly used to estimate rates of change; e.g. the straight lines in Fig. 4 in this chapter and Figs. 7-12 in chapter 1 have been calculated using the formulae for linear regression. Regression lines are also often referred to as trend lines.
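The least-squares formulae for S and I are short enough to implement directly. A minimal Python sketch, checked against the worked example in this box:

import numpy as np

def linear_regression(t, x):
    """Least-squares slope S and intercept I of the line x = S*t + I."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    S = np.sum((t - t.mean()) * (x - x.mean())) / np.sum((t - t.mean()) ** 2)
    I = x.mean() - S * t.mean()
    return S, I

# Two points reproduce the hand calculation above exactly:
S, I = linear_regression([1940, 2010], [0.0, 0.7])
print(f"trend = {S:.3f} C/yr = {10 * S:.2f} C/decade")  # 0.010 C/yr = 0.10 C/decade

With only two data points the regression line simply passes through both; its real value is that the same function works for a whole noisy time series.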

c) Ocean

Subsurface temperature measurements in the oceans document warming over the last 60 years (Fig. 8). The ocean’s heat content has increased by about 30×10^22 Joules during that time. (The heat content of a mass m of sea water is m times its temperature T times the heat capacity of water, cp = 4.2 J/(g K).) Prior to 2005, subsurface temperature measurements were limited in space and time because they were taken from ships by lowering CTD (conductivity, temperature, depth) instruments on a cable into the ocean. Since 2005, autonomous, free-drifting Argo floats have measured temperature, salinity, pressure, and velocity over the upper 2 km of the water column. Currently about 4,000 floats are in operation, providing much better spatial and temporal coverage than the earlier ship-based measurements.
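As a plausibility check, the quoted heat content increase can be converted into an average temperature change of the upper ocean. In this rough Python sketch the ocean area, layer depth, and sea water density are assumed round values:

CP = 4.2           # J/(g K), heat capacity of water
DELTA_Q = 30e22    # J, ocean heat content increase over ~60 years
AREA = 3.6e14      # m2, assumed global ocean surface area
DEPTH = 2000.0     # m, upper-ocean layer sampled by Argo floats
RHO = 1.025e6      # g/m3, density of sea water

mass = AREA * DEPTH * RHO         # grams of water in the layer
delta_T = DELTA_Q / (mass * CP)   # from Q = m * cp * delta_T
print(f"average warming of the upper 2 km: {delta_T:.2f} K")  # about 0.1 K

A heat input that sounds enormous in Joules thus corresponds to an average warming of only about 0.1°C over the upper 2 km; the observed warming is concentrated near the surface.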

The melting of mountain glaciers and ice sheets leads to increased runoff into the ocean, which contributes to sea level rise (Fig. 9). Sea level rise is also caused by the warming of sea water, which expands as it warms, and by increased runoff from the pumping of groundwater out of aquifers. Estimates based on tide gauge records indicate that sea level rose by about 20 cm from the 1870s to the year 2000, and by another 6 cm since. Mountain glaciers and ice sheets currently contribute roughly equally to sea level rise, but if current trends continue, many mountain glaciers will disappear completely and the large ice sheets will contribute more and more to global sea level rise. Note that sea level rise is not spatially uniform.

Explore Sea Level Changes

Go to noaa.gov and explore local sea level changes from tide gauges.

  • Where is sea level increasing?
  • Where is it decreasing?
  • Select a tide gauge. What is the time period covered?

Here is a map of sea level measured from satellites.

  • What time period is covered by the satellite data?
  • Where is sea level increasing?
  • Where is it decreasing?

d) Biosphere and Carbon Cycle

Plant and animal species have been observed to move poleward and, in the mountains, upward (Parmesan and Yohe, 2003). This response is consistent with global warming and the tendency of organisms to stay within the temperature range to which they are adapted. The flowering dates of many plants, such as cherry blossoms, have also shifted earlier in the spring.

Carbon dioxide (CO2) has been measured in the atmosphere since 1958 at Mauna Loa Observatory in Hawaii (Fig. 10). At that time concentrations were just below 320 parts per million (ppm); they have since increased to just over 400 ppm today, a 25% increase. Overlaid on the long-term trend is a seasonal cycle: growth of the terrestrial biosphere in northern hemisphere spring draws CO2 down, while decay of organic matter such as fallen leaves increases CO2 in the fall. As we will see later, CO2 is an important greenhouse gas, and its increase over the past decades is the main cause of the recent global warming.

Figure 10: Atmospheric CO2 measured at Hawaii’s Mauna Loa Observatory. The measurements were pioneered by Charles Keeling from Scripps Institution of Oceanography in 1958. From noaa.gov.

Chapter 1: Weather