This report elaborates on four issues – technological innovation, the behavior of firms, intergenerational equity, and climate “surprises” – that have profound implications for the modelers and makers of climate policy. Computer models that integrate climate science, policy, and economic research have become essential to climate change policy discussions. These “integrated assessment” (IA) models are extremely useful for several reasons: they assess specific climate change policies, organize multiple issues within a systematic framework, and provide an analytical method for comparing climate policies to other, non-climate related policies. However, most IA modeling is based largely on economic theories whose simplifications are not always applicable to climate change policy. This paper examines four kinds of assumptions that underlie most IA models, and shows how different approaches more in line with the latest research might change our view of the economics of the climate problem.
The first paper, by Alan Sanstad, focuses on technological innovation and its treatment in IA models. Most models do not incorporate a realistic assessment of how market forces drive innovation. While innovation would clearly lower the costs of addressing climate change, many modelers focus on the opportunity cost of encouraging technological progress on climate-friendly technology. The fear is that climate-related R&D will “crowd out” other kinds of R&D. Sanstad’s work examines this question, taking into account that the economy systematically underinvests in R&D, and shows that policies promoting climate-related R&D may simultaneously encourage, not discourage, R&D in other sectors.
The second paper, by Stephen DeCanio, discusses how IA models characterize the behavior of firms by assuming that they do no more than maximize profits, and that they always succeed perfectly in doing so. This often leads to misunderstandings about: (1) how firms innovate, and (2) the trade-offs firms must make between environmental and economic performance. DeCanio’s model describes firms as information networks with multiple objectives, which leads to a more complete picture of how firms innovate. The model also shows that superior economic and environmental performance can both be achieved through technological and organizational innovation.
The third paper, by Richard Howarth, addresses how future generations are depicted in most IA models. Models typically use a single, simple discount rate to make intertemporal comparisons anywhere from 50 to as much as 300 years into the future. But over such long periods, these comparisons span different generations of people. Howarth accounts for these differences using a so-called “overlapping generations” model – a framework that incorporates the detail of IA models while providing a more realistic assessment of each generation’s spending and saving behavior. This work indicates that policies aimed at climate stabilization provide an “insurance” policy that protects future generations against potentially catastrophic costs. Even if damage costs turn out to be moderate, Howarth finds, emissions control is still consistent with maintaining long-term economic well-being.
Stephen Schneider and Starley Thompson, in the final paper, provide a new model to explore the causes and consequences of one major type of “climate surprise” – the collapse of the “conveyor belt” circulation of the North Atlantic Ocean. Climate “surprises” are the low-probability but high-consequence scenarios driving much of the international concern about climate change. Currently, most IA models assume the climate responds slowly and predictably. The authors find that IA models ignoring the implications of rapid, non-linear climatic changes or surprises are likely to overestimate the capacity of humans to adapt to climatic change and to underestimate the optimal control rate for GHG emissions. They conclude that it is critical for the full range of plausible climatic states to become part of IA policy analysis.
This report is the latest in the Pew Center’s economics series. As with the rest of the series, these reports will help to demystify the models and explain what type of questions they can (and cannot) answer. But whereas until now we have focused on what has been done in the past, we now begin to focus on what needs to be done in the future. This report includes four critiques of the assumptions underlying IA, and suggests ways in which new and improved models could provide greater insights into what policies would be most efficient and effective in reducing greenhouse gas emissions:
- IA models that more accurately portray innovation will help policy-makers answer questions such as the following: Should the government subsidize climate-friendly R&D? Will increasing carbon prices alone drive sufficient innovation to solve the GHG problem? How should we time and phase emission reductions to take maximal advantage of technological progress?
- IA models that more realistically portray businesses will make it clear that the challenge for policy-makers is to find ways to encourage businesses to innovate in multiple dimensions to meet multiple objectives.
- IA models that take into account the standpoint of future generations will enable policy-makers to explicitly consider the implications of policy for equity as well as efficiency.
- IA models that can explore the causes and consequences of “climate surprises” will help policy-makers to understand the implications of speeding up or slowing down the rate of greenhouse gas build-up, which may turn out to be as important as the size of the build-up.
Earlier versions of the papers in this report were first presented during the Pew Center’s July 1999 economics workshop, which convened leading experts to discuss potential improvements to current IA modeling methods. The insights of participants in that workshop were invaluable.
This report benefited greatly from the comments and input of several individuals. The Pew Center and the authors would like to thank Kenneth Arrow, Larry Goulder, Robert Lind, Klaus Hasselmann, and Bruce Haddad. Special thanks are also due to Ev Ehrlich and Judi Greenwald for serving as consultants on this project.
Our knowledge of the global climate system, and of how human actions may be changing it, is the product of a large and expanding body of scientific research. Translation of this knowledge into policies for dealing with the possibility of global climate change, however, has been largely carried out using the concepts and methods of economics. Unique among the social sciences, modern economics provides a set of powerful analytical and computational tools that support quantitative modeling of economy- and society-wide policies over the long run. The formidable challenges posed by the complexity of climate policy have made economic modeling an especially attractive means of organizing and applying a range of scientific, economic, and social research to analyzing how we should respond to the threat of climate change.
In practice, such analysis is typically carried out through the construction and application of large-scale computer models that combine scientific and economic theories and data into unified quantitative frameworks. These “integrated assessment” models have emerged as decision-makers’ primary tool for quantitative climate policy analysis.
In keeping with their origins, integrated assessment models (IAMs) are commonly built on the principles of what is often referred to as “standard” or “conventional” economic theory. The papers in this volume deal with four of the key assumptions underlying this theory as it has typically been applied to climate economics and integrated assessment. The first assumption is that technological change — increases in outputs of goods and services without increases in productive inputs — originates outside of the economy itself; in other words, technological progress is “exogenous” with respect to the market economy. The second is rational behavior on the part of consumers and firms. Colloquially, this is usually thought to mean no more than “enlightened self-interest.” In the theory and its applications, however, “rationality” is a considerably stronger assumption. It means complete optimization by economic agents over all possibilities open to them in the choice of commodities and actions: nothing is ignored or misunderstood, and no mistakes are made. The third assumption is that economic rationality takes into account all future as well as present possibilities: agents have perfect foresight infinitely far into the future. In practice, this assumption is represented by an infinitely-lived decision-maker, a representative consumer, or a social planner, who optimizes over a completely foreseen infinite horizon.
The final assumption has to do with the representation of the “externalities” or deleterious effects that could arise from climate change. The common approach in integrated assessment is to represent climate-related externalities as a function of the total stock of greenhouse gases (GHGs) in the atmosphere. A key conclusion of this method is that the climate problem is fundamentally “slow-moving,” and that even “large” anthropogenic emissions constitute only “small” additions to the global GHG stock at any given time, so the total stock changes slowly relative to the time-scales on which policies are usually formulated.
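The stock-externality logic can be made concrete with a small numerical sketch. The figures below (an 800 GtC stock, 8 GtC per year of emissions, and a 0.5 percent annual decay rate) are illustrative round numbers, not parameters drawn from any particular IAM.

```python
# Toy stock-externality model: atmospheric GHG stock with slow natural decay.
# All parameter values are illustrative, not taken from any specific IAM.

def simulate_stock(initial_stock, annual_emissions, decay_rate, years):
    """Advance the GHG stock one year at a time:
    S_{t+1} = (1 - decay_rate) * S_t + E_t."""
    stock = initial_stock
    path = [stock]
    for _ in range(years):
        stock = (1 - decay_rate) * stock + annual_emissions
        path.append(stock)
    return path

# Even annual emissions equal to 1% of the stock move the total stock slowly,
# which is the sense in which the problem is "slow-moving" in this framing.
path = simulate_stock(initial_stock=800.0, annual_emissions=8.0,
                      decay_rate=0.005, years=10)
growth = (path[-1] - path[0]) / path[0]
print(f"Stock grows {growth:.1%} over a decade")  # → Stock grows 4.9% over a decade
```

Under this representation, a decade of sizable emissions changes the stock by only a few percent, which is why the standard stock-externality framing makes the climate problem appear slow relative to policy time-scales.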
These assumptions — exogenous technological change, rational behavior, the infinitely-lived agent, and the basic stock externality model of GHGs — are fundamental design principles underlying standard climate economics and almost all integrated models. The papers here report on the results of research in which these fundamental elements are altered and the resulting implications for climate policy modeling are analyzed. The first paper, by Alan Sanstad, considers the consequences of recognizing that technological change is not typically “exogenous” but rather is strongly influenced by market incentives. In the second paper, Stephen DeCanio explores what happens when the basic rationality assumption as it applies to firms is replaced by a model in which firms are viewed as complex communication networks that do not engage in the fully-informed, optimal decision-making posited in the neoclassical model. In the third paper, by Richard Howarth, the infinitely-lived decision-maker is replaced by a series of distinct demographic generations. In the concluding paper, Stephen Schneider and Starley Thompson describe a model that can display abrupt, non-linear changes in the ocean-atmosphere system as a result of increased carbon dioxide (CO2) concentrations. These particular ideas constitute a sampling, in effect, of important recent developments in economics and climate science that warrant application to climate policy and integrated assessment modeling. The aim is to indicate several directions in which integrated assessment can and should develop in order to better enable policy-makers and citizens to grapple with the daunting risks and challenges posed by global climate change. The sections below provide a brief introduction to these topics.
A. Endogenous Technological Change
The standard models rule out the possibility of entrepreneurial responses to climate policy — the new innovations aimed at carbon reduction that would arise in response to new incentives. Such innovation would be a form of “endogenous” technological change, in that it would occur within the economy in response to market forces. This omission raises the possibility that the models as currently structured systematically overestimate the costs of carbon abatement, because they do not account for the accelerated carbon- or energy-saving innovation that would result from price-based carbon reduction policies.
In the past two decades, economists have made considerable strides in modeling the underlying processes of technological change and economic growth, focusing on how technical innovation arises within a market economy in response to economic incentives. This work — the “new growth theory” or theory of “endogenous technological change” — has been recognized as potentially significant for climate policy, and in recent years several initial applications have appeared. Sanstad discusses the key ideas of this theory and several of its applications to climate policy in the first paper.
As Sanstad describes, economists acknowledge (and partially confirm) the cost-saving potential of endogenous technological change. However, they have also emphasized the losses that would arise from reallocating resources such as human expertise to new carbon- or energy-saving innovation and away from other applications. For example, as engineers turn their attention to energy efficiency and away from other activities, there could be a slow-down of technical innovation in other sectors. Alternatively, there would be costs associated with training new engineers. It has been suggested that such opportunity costs of stimulating new “climate-friendly” technical change would be sufficiently large to nearly or completely offset the benefits.
Sanstad notes, however, that the modeling of technological change as an endogenous phenomenon is closely linked with the finding that the market system may systematically under-invest in innovation. This effect results from the “public good” character of knowledge as an economic commodity: the use of an idea by one does not preclude its use by another. The new growth theory provides tools for the rigorous analysis of this phenomenon in the general equilibrium setting necessary for applications to integrated assessment. Sanstad shows that, when this finding is taken into account, the opportunity cost problem may be substantially mitigated. In fact, it may be the case that policies to speed up one form of innovation would actually also speed up competing forms. These results follow from the fact that the economy’s initial equilibrium may allocate too few resources to innovation overall, so that policies that encourage a specific form of innovation may improve overall economic efficiency. As he discusses, this conclusion rests on the empirical question of the degree to which the new growth theory’s prediction of under-investment in research and development (R&D) is borne out in practice. This question is thus a key priority for further research.
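The under-investment argument turns on simple arithmetic: because knowledge spills over to firms that did not pay for it, the social return to R&D exceeds the return the innovating firm can capture. The sketch below uses hypothetical rates of return to show how a project can fail a firm's private hurdle rate while clearing it socially.

```python
# Illustrative private vs. social returns to R&D with knowledge spillovers.
# All rates are hypothetical, chosen only to show why markets may under-invest.

private_return = 0.10   # return the innovating firm captures for itself
spillover = 0.15        # additional return accruing to other firms (public-good effect)
social_return = private_return + spillover

hurdle_rate = 0.12      # the firm invests only if its own return clears this rate

invests = private_return >= hurdle_rate          # False: firm walks away
socially_worthwhile = social_return >= hurdle_rate  # True: society loses the project
print(f"Firm invests: {invests}; socially worthwhile: {socially_worthwhile}")
```

In this setting a policy that raises the private return to climate-related R&D (a subsidy, say) can correct an existing inefficiency rather than merely crowding out other research, which is the mechanism behind Sanstad's result.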
B. The Theory of the Firm
Within the economics community there has been a lively and long-running debate on the nature of the firm and assumptions regarding the degree to which the typical firm’s behavior can be characterized as “rational.” Beginning with the work of Herbert Simon in the 1940s and 1950s, there has been a steady expansion of theoretical and empirical efforts to open up the “black box” of the profit-seeking private sector firm to better understand how companies actually behave in a market economy. In the second paper, DeCanio summarizes several aspects of the modern critique of the neoclassical theory of the firm that have a bearing on integrated assessment issues. The questionable elements of neoclassical theory include: (1) the assumption that firms have a unitary objective — profit maximization — rather than the multiple objectives they are known to pursue; (2) the exclusive focus on the firm’s selection of how much of each aggregate “factor of production” (land, labor, capital, materials) to employ, when these choices actually occupy only a small portion of managers’ time and attention; (3) the assumption that technological change arises from “exogenous” factors, independent of the activity of the firm, instead of its being in large part a product of the procedures and decisions of the firm; and (4) the premise that firms always make optimal decisions, rather than, as in reality, searching for improvements in an environment too complex to allow full optimization.
DeCanio goes on to describe modern advances in the theory of the firm from fields such as the new institutional economics and management science, showing how these ideas could improve the treatment of firms in integrated assessment. He describes how these alternative frameworks call into question the conventionally assumed trade-off between environmental quality and the production of other goods. Instead, he argues for a perspective in which these two objectives are complementary.
DeCanio next presents results from the application of a mathematical “network” model of organizational structure and evolution that contrasts sharply with the neoclassical model. The premise of the network model is that patterns of communication and control within the firm are fundamental to understanding the dynamics of decision-making. Accordingly, the focus is on the behavior of the firm as an information processing system that is capable of “learning” over time in the sense of establishing new internal patterns of communication links. The model is explicitly economic in that it includes the costs associated with establishing and maintaining communications within the firm. This richer representation makes it possible to analyze rigorously phenomena that are essentially ignored in the neoclassical framework.
Among the most important of these phenomena is the manner in which the firm evolves in order to improve its performance on specific tasks — such as adopting a profitable technological innovation (e.g., in energy efficiency). All else being equal, increasing the density of communication links yields an economic gain to the firm; at the same time, however, it carries a commensurate cost. Thus, the organizational structure arrived at by an evolutionary process will depend on the particular form and parameters of the cost and reward functions. As a result, there will in general be no single “optimal” internal organization for the firm that prevails under all circumstances: the result of evolutionary learning will depend on the changeable nature of the firm’s tasks and opportunities. In addition, the evolutionary course of a firm’s development is likely to depend on the path it takes, with multiple outcomes — having roughly equal profitability but different organizational structures — possible. One corollary of these findings with particular significance for environmental policy is that different organizations may be comparable in profitability but can exhibit very different environmental behaviors and impacts. This means that improvement in environmental performance is possible without sacrificing overall profitability. In essence, the trade-off between profitability and environmental protection dissolves.
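The flavor of this evolutionary, path-dependent behavior can be conveyed with a toy sketch (ours, not DeCanio's actual model): a firm is a set of costly communication links among agents, the reward depends on which agents can reach one another directly or indirectly, and the organization "learns" by local trial and error, keeping changes that do not hurt its payoff.

```python
import itertools
import random

# Toy "network firm": payoff rewards pairs of agents that can communicate
# (directly or through intermediaries) and charges a cost per link maintained.
# A caricature in the spirit of the information-network framing above.

def payoff(links, n, benefit=1.0, cost=0.6):
    """benefit * (connected pairs) - cost * (number of links)."""
    parent = list(range(n))          # union-find over the link set
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in links:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    connected_pairs = sum(s * (s - 1) // 2 for s in sizes.values())
    return benefit * connected_pairs - cost * len(links)

def evolve(n, seed, steps=300):
    """Hill-climbing search: toggle one random link per step, keep the
    change if payoff does not fall. The path taken is seed-dependent."""
    rng = random.Random(seed)
    all_pairs = list(itertools.combinations(range(n), 2))
    links = set()
    for _ in range(steps):
        flip = rng.choice(all_pairs)
        trial = links ^ {flip}       # add the link if absent, drop it if present
        if payoff(trial, n) >= payoff(links, n):
            links = trial
    return links

n = 6
a, b = evolve(n, seed=1), evolve(n, seed=2)
print(sorted(a), round(payoff(a, n), 2))
print(sorted(b), round(payoff(b, n), 2))
```

With these parameters the search tends toward a minimally connected structure (extra links cost more than they add), but different random paths typically settle on different link patterns of similar profitability, which is the multiple-outcomes point made above.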
C. Intergenerational Fairness and Efficiency
One of the most basic features of global climate change is that while the present generation is deciding what if anything to do about it, the impacts of climate change (and hence the consequences of today’s actions or inaction) are likely to be borne by future generations. Cost-benefit analysis that ignores the standpoint of future generations sidesteps some of the issues of fairness and equity associated with climate change, notably including the risks that today’s lifestyles and technologies may be imposing on posterity through GHG emissions.
In the third paper, Howarth conducts a quantitative analysis that emphasizes the differential impacts that climate change response strategies would have on the welfare of present and future generations. This analysis employs a so-called “overlapping generations” (OLG) model, which posits (as the name suggests) a succession of generations. OLG models were pioneered in the 1950s by Paul Samuelson, and have since become a mainstay in the field of public finance, where they are used to study the impacts of taxation and government debt on the distribution of income between generations. This framework, however, has not been widely used in climate policy modeling.
Howarth uses an OLG-based IAM to compare the impacts of three policy regimes on the welfare of present and future generations. In the first scenario — the laissez faire base-case — the economy is managed according to free-market political precepts, and no steps are taken to reduce GHG emissions. Over the long-term future, this scenario yields an increase in mean global temperature of 8.0 °C relative to the pre-industrial norm, which imposes costs on future generations equivalent to 9 percent of economic output. In the second scenario — cost-benefit analysis — conventional economic criteria are used to balance the present costs and expected future benefits of climate change mitigation measures. In this scenario, future environmental benefits are discounted relative to the present, so that only modest steps are taken to reduce GHG emissions. Relative to the laissez faire baseline, the emissions control rate rises from 15 to 23 percent between the years 2000 and 2105. These reductions provide relatively small environmental benefits to future generations.
In the third policy scenario — climate stabilization — GHG emissions are reduced to the levels required to maintain mean global temperature at its current level, which requires a GHG emissions tax that rises from $560 per metric ton of carbon in the year 2000 to $1,081 in the long-term future. Although critics claim that such aggressive policies might “lock up” the resources required to sustain a productive economy to the detriment of both present and future society, Howarth’s analysis reaches a rather different conclusion. In comparison with the laissez faire and cost-benefit scenarios, climate stabilization reduces short-term consumption by 7 percent. In the long run, however, climate stabilization confers welfare gains of $6.4 trillion per year on members of future generations in comparison with the laissez faire baseline, or $2.4 trillion per year relative to the cost-benefit scenario.
This analysis suggests that although GHG emissions are an important contributor to short-term economic welfare, sustained climatic stability may be viewed as an economic asset that would contribute strongly to the welfare of future generations. The results highlight the importance of moral considerations in the identification of “optimal” policies, finding that conventional cost-benefit analysis tends to favor the interests of present producers and consumers at the expense of future society.
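The role of discounting in these intergenerational comparisons is easy to see with a back-of-the-envelope present-value calculation. The damage figure and horizon below are illustrative only, not taken from Howarth's model.

```python
# How the discount rate shrinks far-future climate damages to small present
# values. The damage figure and horizon are purely illustrative.

def present_value(amount, rate, years):
    """Standard exponential discounting: amount / (1 + rate)^years."""
    return amount / (1 + rate) ** years

damage = 1_000_000_000_000   # $1 trillion of climate damages
horizon = 200                # borne two centuries from now

for rate in (0.01, 0.03, 0.07):
    pv = present_value(damage, rate, horizon)
    print(f"discount rate {rate:.0%}: present value ${pv:,.0f}")
```

At a 7 percent discount rate, a trillion dollars of damages two centuries out is worth on the order of a million dollars today; at 1 percent it is worth over a hundred billion. This sensitivity is why conventional cost-benefit analysis recommends only modest near-term abatement, and why the treatment of generations matters so much to the policy conclusion.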
D. Climatic Nonlinearities
The standard assumption in most IAMs is that the climate responds slowly and predictably, gradually warming as atmospheric GHG concentrations increase. Recent research on the long run behavior of the climate, however, has focused attention on the possibility of quite different climate dynamics. It is possible that, in fact, the climate may be subject to very rapid changes or “nonlinearities.” An important example of this kind of behavior has to do with the Atlantic thermohaline circulation, or “conveyor belt.” This is the natural process by which warm water moving northward from the Gulf Stream into the Atlantic Ocean transports heat from more southerly latitudes, thereby increasing the temperature of the North Atlantic region. It is now thought possible that this conveyor belt might collapse under certain scenarios of anthropogenic CO2 emissions, rapidly altering the global climate and profoundly changing the climate in western Europe.
Determining how climate policies should take into account this possibility is clearly a high priority for integrated assessment modeling. Full computer models of the global climate system are far too large and complex to be embedded in IAMs containing economic detail. Indeed, the trend in climate modeling is toward super-computer-run models with integrated atmosphere, land, and ocean sub-models. Thus, economic IAMs have generally incorporated highly simplified representations of the global climate. The immediate challenge is therefore to capture these more complicated dynamics in a simplified form that is amenable to linkages with economic models. In the fourth paper, Schneider and Thompson describe the results of such an effort, a “Simple Climate Demonstrator” (SCD) model. Technically, SCD is a simplified model of the northern hemisphere atmosphere-land-ocean system. Overall, the model replicates the behavior of more elaborate climate models. Schneider and Thompson study the conditions under which a conveyor belt collapse would occur, and find that the probability of this event is increased by: (1) greater CO2 concentrations, (2) higher rates of increase in CO2 concentrations, (3) greater sensitivity of the climate to CO2 concentrations, and (4) assumption of a weaker initial circulation. These findings confirm that IAMs with simpler representations of the climate may not be appropriate for studying the policy implications of rapid climate shifts. The SCD also provides an alternative means of representing such shifts, one rich enough to capture the behavior of more elaborate climate models while remaining simple enough for application to integrated assessment.
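The rate-dependence findings, in particular (1) and (2), can be caricatured with a deliberately crude threshold model. This cartoon is our own illustration, not the SCD: circulation strength is depressed in proportion to the fractional growth rate of CO2, recovers slowly toward its normal state, and collapses irreversibly if it ever falls below a threshold.

```python
# Toy rate-dependent collapse: two CO2 paths reach the same final level, but
# the faster path pushes a hypothetical circulation below its collapse
# threshold. All parameters are invented for illustration (not the SCD).

def run(co2_path, strength=1.0, recovery=0.05, sensitivity=8.0, threshold=0.2):
    """Circulation weakens with the fractional CO2 growth rate each step and
    slowly recovers; below `threshold` it collapses irreversibly to 0."""
    prev = co2_path[0]
    for c in co2_path[1:]:
        growth = (c - prev) / prev                       # fractional CO2 rise
        strength += recovery * (1.0 - strength) - sensitivity * growth
        prev = c
        if strength < threshold:
            return 0.0                                   # irreversible collapse
    return strength

def ramp(start, end, steps):
    """Linear CO2 path from `start` to `end` in `steps` increments."""
    return [start + (end - start) * i / steps for i in range(steps + 1)]

slow = run(ramp(280, 560, 280))   # CO2 doubling spread over 280 steps
fast = run(ramp(280, 560, 20))    # the same doubling in only 20 steps
print(f"slow ramp: {slow:.2f}, fast ramp: {fast:.2f}")  # fast ramp collapses to 0.00
```

Two paths to the same doubled CO2 level end very differently: the slow ramp lets the circulation recover as it goes, while the fast ramp drives it through the threshold. Raising `sensitivity` or starting from a weaker initial `strength` triggers collapse in the same way, echoing findings (3) and (4).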
Preliminary analyses coupling the SCD model to the Nordhaus 1992 Dynamic Integrated Climate Economy (DICE) model demonstrate that the potential for severe climatic damages as a result of non-linear climatic behavior in the twenty-second century and beyond can have a substantial influence on present climate policy decisions if discount rates are below 2 percent (Mastrandrea and Schneider, submitted).
E. Summary Remarks
Integrated assessment modeling is still in its early stages. Because it is by nature an interdisciplinary endeavor, it is ultimately based on the ideas and methods of its constituent disciplines. To date, IAMs have drawn most heavily on neoclassical economics, which is well developed and lends itself to this kind of application. As integrated assessment matures, it will need to broaden its scope to incorporate key ideas at the frontiers of research in economics and in other fields. In this volume, several such ideas are presented. The hope is that these papers will serve to advance discussion and applications that will contribute to the evolution of the integrated assessment modeling of global climate change.