Judith Curry: Decision Making Under Climate Uncertainty: Part I

  • Date: 01/11/10

Based upon the precautionary principle, the United Nations Framework Convention on Climate Change (UNFCCC) established a qualitative long-term climate goal: stabilization of the concentrations of greenhouse gases in the atmosphere. The view of climate change held by the UNFCCC regards both the problem and the solution as irreducibly global. This view of the problem has framed the IPCC’s assessments and national funding priorities on the subject of climate science.

The UNFCCC’s policy solution places us all between the proverbial rock and a hard place, as evidenced by the international and national debates being conducted on the topic.  The dilemma is aptly  described by Obersteiner et al. (2001):

The key issue is whether “betting big today” with a comprehensive global climate policy targeted at stabilization “will fundamentally reshape our common future on a global scale to our advantage or quickly produce losses that can throw mankind into economic, social, and environmental bankruptcy”.

 

Weitzman (2009) characterizes the decision making environment surrounding climate change in the following way:

“Much more unsettling for an application of expected utility analysis is deep structural uncertainty in the science of global warming coupled with an economic inability to place a meaningful upper bound on catastrophic losses from disastrous temperature changes. The climate science seems to be saying that the probability of a system-wide disastrous collapse is non-negligible even while this tiny probability is not known precisely and necessarily involves subjective judgments.”

 

The question needs to be asked whether the early articulation of a preferred policy option by the UNFCCC has stimulated a positive feedback loop between politics, science, and science funding that has accelerated the science (and its assessment by the IPCC) towards the policy option (CO2 stabilization) codified by the UNFCCC. This feedback loop marginalizes research on natural climate variability (forced and unforced) on regional and global scales, focuses research on model development rather than observations (particularly paleoclimate), and values model agreement over a full exploration of model uncertainty (including model structure).  The net result of such a feedback loop is an overconfident assessment of the importance of greenhouse gases in future climate change, which has brought us to our current position between a rock and a hard place, where we lack the kinds of information that we need to understand climate change more broadly and to develop and evaluate a broad range of policy options.

My particular interest in this situation is to understand the dynamics of uncertainty at the climate science-policy interface.  I am questioning whether these dynamics are operating in a manner that is healthy for the science and for the policy making process.  The IPCC’s efforts to consider uncertainty focus on communicating uncertainty (apparently motivated by building the political will to act), rather than on characterizing uncertainty in a way that would be useful for risk managers and resource managers, not to mention the scientists and the institutions that fund science.

While I am a novice (in academic terms) at considering these issues, I would like to raise them here at Climate Etc. for discussion.  We need additional perspectives and more discussion on these issues, and I look forward to your input and ideas.

 

Overall framework of the series

I am envisioning this series in three parts, organized around three different decision making strategies:

I.  The decision making strategy associated with the UNFCCC global emissions stabilization targets

II.  Robust decision making and the “fat tail” issue raised by Weitzman

III.   Regional adaptation strategies that focus on reducing vulnerability to extreme events and on resource management.

Background on decision making under uncertainty

The most useful overviews that I have found on decision making under uncertainty in the context of the climate problem are Obersteiner et al. (2001), Morgan et al. (2009) and van der Sluijs et al. (2010).  Note to the plagiarism police: none of these are my original ideas and I am claiming no academic credit for them; I have done my best to synthesize my knowledge into clear statements and to attribute specific ideas to their sources.

Decision making identifies and chooses among alternatives based on the values and preferences of the decision maker. Circumstances of relatively low uncertainty and low stakes are a comfortable domain for applied science and engineering, where the application of cost-benefit analysis and expected utility is straightforward. However, if the uncertainty is high and/or the stakes are high, the decision making environment is much more volatile.

Classical decision analysis identifies an optimal choice among actions based upon the probability of occurrence of possible outcomes and the decision maker’s utility functions. Uncertainty in the input parameters is propagated through a model to generate the expected utility of the different options.  Decision rules are then applied (e.g. maximize expected utility). While probability theory has been the foundation of classical decision analysis, other alternatives have been employed, including possibility theory and Dempster-Shafer theory. An example of a nonprobabilistic decision rule is minimax, which minimizes the maximum possible loss.
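To make the contrast between these decision rules concrete, here is a minimal sketch in Python; the options, payoff table, and probabilities are invented for illustration and are not drawn from any of the cited papers.

```python
# Toy comparison of two classical decision rules over a hypothetical
# payoff table (options x possible states of the world). Values are invented.
import numpy as np

options = ["option A", "option B", "option C"]

# Utility of each option (rows) under three states of the world
# (columns: low, medium, high impact).
utility = np.array([
    [-2.0, -1.0,  1.0],   # option A
    [-1.0,  0.5,  0.5],   # option B
    [ 3.0,  1.0, -5.0],   # option C
])

# Assumed probabilities of the three states.
p = np.array([0.4, 0.4, 0.2])

# Rule 1: maximize expected utility (requires trusting the probabilities).
expected = utility @ p
eu_choice = options[int(np.argmax(expected))]

# Rule 2: maximin over utilities (equivalently, minimax over losses),
# a nonprobabilistic rule that picks the option whose worst case is least bad.
worst_case = utility.min(axis=1)
maximin_choice = options[int(np.argmax(worst_case))]

print("Expected utilities:", dict(zip(options, np.round(expected, 2))))
print("Expected-utility choice:", eu_choice)            # option C in this example
print("Worst-case (maximin) choice:", maximin_choice)   # option B in this example
```

The point of the toy example is simply that the two rules can select different options from the same table, which is why the choice of decision rule matters when the probabilities themselves are contested.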

The classical linear technocratic model of decision making assumes that more scientific research leads to more reliable knowledge and less uncertainty, and that the scientific knowledge forms the basis for a political consensus leading to meaningful action (van der Sluijs et al.). When uncertainty is well characterized and there is confidence in the model structure, classical decision analysis can provide statistically optimal strategies for decision makers.

Classical decision making theory involves reducing the uncertainties before acting.  In the face of irreducible uncertainties and substantial ignorance, reducing the uncertainty isn’t viable, but not acting could be associated with catastrophic impacts.  While a higher level of confidence can make decision makers more willing to act,  overestimating the confidence can result in discounting the value of information in the decision making process if the confidence later proves to be unwarranted.

Under conditions of deep uncertainty, optimal decisions based upon a consensus can carry considerable risk. Obersteiner et al. describes the uncertainty surrounding climate change science as a two-edged sword that cuts both ways:  what is considered to be a serious problem could turn out to be less of a threat, whereas unanticipated and unforeseen surprises could be catastrophic. Obersteiner et al. argues that the strategy of assuming that climate models can predict the future of climate change accurately enough to choose a clear strategic direction might be at best marginally helpful and at worst downright dangerous: underestimating uncertainty can lead to strategies that do not defend the world against unexpected and sometimes even catastrophic threats. Obersteiner et al. notes that another danger lies on the other side of the sword: if uncertainties are deemed too large, analytic planning processes may be abandoned altogether.

Resilient and adaptive decision making strategies are used in the face of high uncertainty.  Resilient strategies seek to identify approaches that will work reasonably well across the range of circumstances that might arise.  Adaptive strategies can be modified to achieve better performance as more information becomes available.   Adaptive strategies work best in situations where large nonlinearities are not present and in which the decision time scale is well matched to the actual changes. (Morgan et al.)

Robustness is a strategy that formally considers uncertainty, whereby decision makers seek to reduce the range of possible scenarios over which the strategy performs poorly.  As an example, info-gap decision theory sacrifices a small amount of optimal performance to reduce sensitivity to what may turn out to be incorrect assumptions. A robustness criterion helps decision makers to distinguish reasonable from unreasonable choices. Robustness suggests decision options that lie between an optimal and a minimax solution. Relative to optimal strategies that focus on the best estimate, robustness considers unlikely but not impossible scenarios without letting them completely dominate the decision.
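As a loose illustration of a robustness criterion (a sketch in the spirit of, but not a faithful implementation of, info-gap theory; all numbers are hypothetical), the code below scores each candidate option by how many scenarios it still performs acceptably in, and contrasts that with the option that is optimal under the best-estimate scenario.

```python
# Toy robustness (satisficing) criterion: prefer the option that clears a
# minimum acceptable outcome across the widest range of scenarios, even if
# it is not optimal under the best-estimate scenario. All values are invented.
import numpy as np

options = ["A", "B", "C"]

# Outcome of each option (rows) under seven hypothetical scenarios (columns);
# the first column is taken to be the "best estimate" scenario.
outcome = np.array([
    [3.0, 2.8, 2.5, 2.2, 2.0, 1.8, 1.5],    # A: good, degrades steadily
    [5.0, 4.0, 3.0, 1.0, 0.0, -1.0, -3.0],  # B: best under favorable scenarios
    [2.5, 2.5, 2.4, 2.4, 2.3, 2.2, 2.1],    # C: nearly scenario-independent
])

threshold = 2.0  # minimum acceptable performance ("good enough")

# Robustness score: number of scenarios in which the option still clears the
# threshold; a higher score means more graceful degradation as assumptions
# turn out to be wrong.
robustness = (outcome >= threshold).sum(axis=1)

best_estimate_choice = options[int(np.argmax(outcome[:, 0]))]  # B in this example
robust_choice = options[int(np.argmax(robustness))]            # C in this example

print("Scenarios meeting the threshold:", dict(zip(options, robustness)))
print("Optimal under the best-estimate scenario:", best_estimate_choice)
print("Most robust choice:", robust_choice)
```

Option C gives up some performance in the favorable scenarios but stays acceptable across all of them, which is the trade-off the robustness literature emphasizes.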

The precautionary principle is a decision strategy often proposed for use in the face of high uncertainty, which consists of precaution practiced in the context of uncertainty.  While there are many different notions of what the precautionary principle does and does not entail, there is a very clear meaning of the principle in the context of climate change.  Principle #15 of the Rio Declaration from the 1992 Earth Summit states (UNEP, 1992):

“In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”

This statement clearly explains the idea that scientific uncertainty should not preclude preventative measures to protect the environment. However, the precautionary principle implies the need for a minimal threshold of scientific certainty or plausibility before undertaking precautions.

Obersteiner et al. argues that stabilization of greenhouse gas concentrations at a target level is a non-robust strategy in an environment that is extremely uncertain and most likely nonlinear.  The UNFCCC strategy has dominated the framing of the IPCC assessment by focusing its efforts on the documentation of dangerous climate change in the context of specific levels of warming associated with varying amounts of atmospheric carbon dioxide.  Embellishing the IPCC’s climate paradigm has come to dominate national and international climate programs and their funding.

Using climate models to optimize stabilization targets

Given the inadequacies of current climate models, how should we interpret the IPCC’s multi-model ensemble simulations of the 21st century climate?  This ensemble of opportunity comprises models with similar structures but different parameter choices and calibration histories, running simulations under different emissions scenarios, with a few models conducting multiple simulations for individual scenarios.

There are several different viewpoints regarding the creation of meaningful probability density functions (PDFs) from climate model simulations. Stainforth et al. (2007) argue that model inadequacy and an inadequate number of simulations in the ensemble preclude producing meaningful PDFs from the frequency of model outcomes of future climate. However, Stainforth et al. emphasize that models can provide useful insights without being able to provide probabilities, by providing a lower bound on the maximum range of uncertainty and a range of possibilities to be considered.  Knutti et al. (2008) argues that the real challenge lies more in how to interpret the PDFs rather than in whether they should be constructed in the first place.   There is a growing emphasis on trying to assign probabilities to the distribution of simulations from the ensemble of opportunity.  Stainforth et al.’s position is consistent with Betz (2007), who views climate model simulations as modal statements of possibilities.
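The difference between these two interpretations can be illustrated with a toy example; the end-of-century warming values below are invented, not actual model output, and the density estimate is only meaningful if one is willing to treat the ensemble as a random sample.

```python
# Two readings of the same small "ensemble of opportunity":
# (1) convert the frequency of outcomes into an estimated PDF, or
# (2) report only a range of possibilities (a lower bound on uncertainty).
# The projection values are invented for illustration.
import numpy as np
from scipy.stats import gaussian_kde

projections = np.array([2.1, 2.4, 2.6, 2.9, 3.1, 3.3, 3.8, 4.4])  # deg C, hypothetical

# Interpretation 1: treat the ensemble as if it were a random sample and
# fit a probability density to it.
kde = gaussian_kde(projections)
grid = np.linspace(1.0, 6.0, 101)
density = kde(grid)
print("Mode of the fitted density: %.2f C" % grid[np.argmax(density)])

# Interpretation 2 (Stainforth/Betz): report only the range spanned by the
# models, read as a set of possibilities and a lower bound on the true
# uncertainty, without assigning probabilities.
print("Range of possibilities: %.1f to %.1f C (a lower bound on the uncertainty)"
      % (projections.min(), projections.max()))
```

Nothing in the code resolves the disagreement; it only makes explicit that the extra assumption in the first reading is statistical (that the ensemble samples the space of plausible models), not physical.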

Inferring an actual prediction/projection/forecast of 21st century climate from these simulations assumes that the natural forcing (e.g. solar, volcanoes) will be essentially the same as for the 20th century.  Apart from the uncertainty in forcings, the models have substantially varying sensitivities to CO2 and marginal capabilities for simulating multidecadal natural internal variability, and there is also the issue of scenario uncertainty.

If we assume that CO2 sensitivity dominates any conceivable combination of natural (forced and unforced) variability, what do the simulations actually say about 21st century climate?  Well, the sensitivity range for the IPCC calculations (1.5-4.5C) is essentially the same range that was estimated in the 1979 Charney report.  And the calculations show that the warming proceeds until about 2060 in a manner that is independent of the emissions scenario.

So exactly what have we learnt about possible 21st century climate from the AR4 relative to the TAR (and even relative to the 1979 Charney report) that refines our ability to set an optimal emissions target?  I suspect that we are probably at the point of diminishing returns from learning much more in the next few years (e.g. AR5) from additional simulations by the large climate models of the current structural form.

Where is all this heading?

There are three overall thrusts in climate model development that I am aware of, which are driven by the needs of decision makers.

The big new push in the climate modeling enterprise is for Earth System Models. These models are beginning to include biogeochemistry (including a carbon cycle) and ecosystem interactions.  Some are proposing to incorporate human dimensions, including economics models and energy consumption models.  Such models could in principle generate their own scenarios of CO2, and so reduce the scenario uncertainty that is believed to become significant towards the end of the 21st century.

There is also a push for higher resolution global models to provide regional information, particularly on water resources.  There is currently no evidence that global models can provide useful simulations on regional scales, particularly of precipitation.

Another push is for credible predictions on a time scale of decades (out to 20 years in advance).  This necessitates getting the natural variability correct: both the external forcing and the decadal scale ocean oscillations.  I don’t expect the models to do much of use in this regard in the short term, but a focus on the natural variability component is certainly needed.

So it seems that we are gearing up for much more model development in terms of higher resolution and additional complexity. Yes, we will learn more about the climate models and possibly something new about how the climate system works.  But it is not clear that any of this will provide useful information for decision makers on a time scale of less than 10 years to support decision making on stabilization targets, beyond the information presented in the AR4.

Conclusions

The current decision making framework based on the UNFCCC/IPCC has led to a situation where we are between a rock and a hard place in terms of decision making.  The strategy (primarily model based) has provided some increased understanding and a scenario with about 3C sensitivity that is unlikely to budge much with the current modeling framework.  A great deal of uncertainty exists, and emissions target policies based on such uncertain model simulations are not robust policies.

It seems that we have reached the point of diminishing returns for the science/decision making strategy reflected by the UNFCCC/IPCC. It’s time to consider some new decision making frameworks and new scientific ideas. In Part II, we will explore some robust decision making strategies that consider Weitzman’s “fat tails” and ponder the kinds of scientific information needed to support such strategies.

Climate Etc, 31 October 2010
