Climate Compass Blog
“Verifiable.” That is arguably the most important word in the Bali Action Plan, the agreement two years ago that launched the global climate negotiations about to culminate in Copenhagen. Our future climate commitments and actions, governments agreed, must be “measurable, reportable and verifiable.”
This construct is critical because, done right, “MRV” offers the promise of a global climate agreement in which countries can confidently ascertain whether others are doing what they promised.
Yet many governments now seem decidedly uncomfortable with the concept. Developing countries say MRV shouldn’t apply to any actions they take on their own, only those receiving international support (a point underscored last week by China when it announced its new carbon intensity target). In the case of a country like China, that means virtually none of its actions would be subject to international verification.
The United States, for its part, has offered up an MRV proposal that avoids the term verification altogether. This is a worrisome omission, one that underscores perhaps the most glaring weakness in the U.S. position going into Copenhagen – its absolute silence on the question of compliance.
This Thanksgiving, I’m thankful we base policy decisions on peer-reviewed science instead of emails!
The kerfuffle over email correspondence hacked from a server at the University of East Anglia’s Climatic Research Unit is making climate change deniers giddy. But just like all the other non-smoking guns they’ve waved around over the years, this “mushroom cloud” will soon blow away. Nothing has come to light that undermines scientific assessments of the climate system, which are firmly anchored in peer-reviewed scientific publications, not emails.
Some of the past “smoking guns” that were supposed to put the theory of human-induced climate change in an early grave are among the hot topics flying around in the hacked emails. One was a paper by Soon and Baliunas published in a peer-reviewed journal called Climate Research in 2003. That paper was supposed to put to rest the conclusion of the 2001 IPCC report that the late 20th century was warmer than any previous period in the past millennium, but it was quickly and thoroughly refuted in the peer-reviewed literature (not in emails). Another was a 2005 paper by McIntyre and McKitrick in an often-not-peer-reviewed journal called Energy & Environment. This paper had the same goal as the first one, but it too was rebutted in the peer-reviewed literature and in a report by the National Academy of Sciences (not in emails). In the emails, climate scientists complained about these papers and expressed frustration that they were published in spite of serious flaws. In any event, policymakers did not rely on these emails in determining America’s policy actions on climate change.
Much of the bickering in the emails boils down to scientists’ irritation over serious breaches of the normal peer review system to get denialist papers published. The publisher of Climate Research (CR) admitted that the major conclusions of the paper by Soon and Baliunas “cannot be concluded convincingly from the evidence provided in the paper. CR should have requested appropriate revisions of the manuscript prior to publication.” The paper was so bad that three editors resigned in protest over its publication. The paper by McIntyre and McKitrick (2005) wasn’t peer reviewed at all, and the editor of Energy & Environment openly stated, “I'm following my political agenda – a bit, anyway. But isn't that the right of the editor?” Scientists value the peer review process, and they find it unfair and objectionable when they must subject their own work to potential rejection while others circumvent this critical step in the scientific process to force low-quality research into the debate.
Luckily, none of this matters, since the scientific assessments produced to inform policymakers about the science of climate change are based on peer-reviewed science publications, not on the opinions of individuals expressed in email correspondence. As happens in the normal scientific process, when the occasional bad paper slips through the cracks of peer review, it gets refuted through subsequent scrutiny in the peer-reviewed literature. In the end, what may be said in emails doesn’t matter; the scientific peer review process will prevail.
Jay Gulledge is Senior Scientist and Program Manager for Science & Impacts
This post originally appeared in Yale Environment 360.
Two years ago in Bali, climate negotiators set an extremely ambitious goal for Copenhagen that quickly came to be viewed as a deadline for achieving a new, ratifiable global climate agreement. Striking such a deal is certainly in line with what the science says is urgently needed. But political realities, not the science, dominate global climate negotiations.
And the political reality is that many of the major players are not yet ready to sign a binding deal. Many, including the United States, China and India, are making encouraging progress domestically. Yet there remain wide differences among parties on many of the core issues – the nature of parties’ commitments, how they will be verified, how to generate new public and private finance, etc. So the objective in Copenhagen must be a strong interim agreement that captures what progress has been achieved and creates fresh momentum toward a full and final deal.
Two major components involve carbon cuts and money. On emissions, a probable Copenhagen deal includes pledges from developed countries to meet reduction targets and pledges from major developing countries (e.g., China, India, Brazil) to undertake other mitigation actions such as carbon intensity goals. On finance, developed countries would pledge near-term funding to help developing countries adapt to climate change and develop low-carbon strategies. It’s also imperative that Copenhagen produce a clear deadline for concluding a final legal agreement, with the December 2010 Mexico City climate summit providing a reasonable timeframe.
A Copenhagen deal should also go as far as possible in outlining the architecture of a legally binding treaty. This includes: the nature of commitments for developed and major developing countries; how to verify that countries are complying with their commitments; and new financial mechanisms.
Achieving strong national pledges of action and making available some quick-start money to address immediate climate-related needs for developing countries will represent genuine progress, and will help bridge the gap between developed and major developing countries. But to be a true success, Copenhagen must be a springboard toward a legally binding agreement in 2010.
Read more here.
Eileen Claussen is President
This week, Senators Lamar Alexander (R-TN) and Jim Webb (D-VA) released a bill intended, among other things, to dramatically expand the U.S. nuclear reactor fleet and, reportedly, to double the production of nuclear power in the United States by 2020.
In previous blog posts, we have highlighted what proposed climate and energy legislation in the House and Senate does for nuclear power. Many analyses, such as studies by the U.S. Environmental Protection Agency (EPA) and the Energy Information Administration (EIA), agree that the bulk of the most cost-effective initial greenhouse gas (GHG) emission reductions are found in the electricity sector and that nuclear power can play a key role in reducing GHG emissions from electricity generation as part of a portfolio of low-carbon technologies.
Putting a price on carbon, as a GHG cap-and-trade program would do, is likely the best option for expanding nuclear power generation, since it makes electricity from nuclear and other low-carbon technologies more cost-competitive with traditional fossil fuel generation. For example, in its analysis of the American Clean Energy and Security Act of 2009 (ACESA), passed by the House of Representatives in June of 2009, EIA projected that nuclear power might provide nearly twice as much electricity in 2030 as it does today.
A key challenge is cost. The construction of much of the existing nuclear fleet saw significant cost overruns and delays, which makes financing the first new plants after a hiatus of several decades difficult. Government loan guarantees can help the first-mover new nuclear power plants overcome the financing challenge. The demonstration of on-budget and on-time construction and operation by these first movers would facilitate commercial financing of subsequent plants.
Could the U.S. undertake a very large expansion of nuclear power? Nuclear power plants are massive undertakings, and a typical plant might cost on the order of $6 billion and take 9-10 years to build from licensing through construction. Nonetheless, 17 applications for construction and operating licenses (COLs) for 26 new reactors are under review by the Nuclear Regulatory Commission (NRC)—all submitted since 2007. One can also look at the historical pace of nuclear power deployment in the United States for a sense of what might be reasonable once the nuclear industry ramps up. More than a third of the 100 gigawatts (GW) of nuclear generating capacity that provides a fifth of U.S. electricity came online in 1971-75, and more than 90 GW of U.S. nuclear power came online in the 1970s and 1980s.
In short, putting a price on carbon via cap and trade will likely spur a significant expansion in U.S. nuclear power over the coming decades (as part of a portfolio of low-carbon technologies), facilitated by loan guarantees to support a few first-mover projects.
Steve Caldwell is a Technology and Policy Fellow
Bacteria that produce gasoline. “Blown wing” technology for wind turbines. Enzymes that capture carbon dioxide. Batteries that store solar energy overnight. This is a short list of the 37 projects recently selected as the recipients of $151 million in research grants from the Advanced Research Projects Agency-Energy, or ARPA-E. In short, it’s the Department of Energy’s version of going rogue.
ARPA-E is a new agency within the DOE that aims to fund cutting-edge energy and climate research. This may not be the conventional approach of government programs, but it is not unprecedented: ARPA-E is modeled on a Defense Department program, known as DARPA, that played a significant role in the commercialization of microchips and the Internet along with other high-tech innovations.
ARPA-E was created by Congress in August 2007 under the America COMPETES Act, but was left unfunded until Congress authorized $400 million for the agency in this year’s stimulus bill. The agency began to mobilize its resources this fall. In September, Arun Majumdar, a scientist at the Lawrence Berkeley National Laboratory in California, was confirmed as the agency’s director and soon after announced the winners of the first round of proposal solicitations. The 37 winning projects represent 1% of submitted proposals and include high-risk and high-payoff ideas and technologies in all stages of development. ARPA-E hopes that down the line the more promising projects will get picked up by venture capitalists or major companies willing to invest more resources to bring these projects from the laboratory to the marketplace.
The focus on high risk and high payoff means that ARPA-E must expect failure as well as success. Energy Secretary Steven Chu, one of the original visionaries of the ARPA-E concept, believes a few projects could have “a transformative impact.” In this economic climate, many investors overlook high-risk but potentially high-reward energy research and technology development. ARPA-E is an innovative and welcome approach to keep these projects in the pipeline, as a radical breakthrough in advanced technology could facilitate a U.S.-led transition to a global clean energy economy.
Olivia Nix is the Solutions intern