The new IPCC climate report is already in trouble
by Jonathan DuHamel on Jul. 30, 2013, under Climate change, Politics
The Fifth Assessment Report (AR5) from the Intergovernmental Panel on Climate Change (IPCC) is due to come out this fall. A major question of contention in climate science is the magnitude of “equilibrium climate sensitivity” which is often defined as the amount of global warming that would be produced from a doubling of the atmospheric carbon dioxide content.
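To make the definition concrete: because the radiative effect of CO2 grows roughly logarithmically with concentration, the warming implied by a given sensitivity value can be sketched with a simple formula. The function below is an illustrative back-of-the-envelope calculation, not anything from the IPCC reports; the sensitivity and concentration values are hypothetical inputs.

```python
import math

def warming_from_co2(sensitivity_c, c0_ppm, c_ppm):
    """Approximate equilibrium warming (deg C) for a change in CO2
    concentration, assuming the standard logarithmic relationship:
    warming = sensitivity * log2(C / C0). Illustrative only."""
    return sensitivity_c * math.log(c_ppm / c0_ppm, 2)

# By construction, a doubling of CO2 (e.g. 280 ppm -> 560 ppm)
# produces warming equal to the sensitivity itself:
print(warming_from_co2(3.0, 280.0, 560.0))  # 3.0
```

The debate described above is over the value of the first argument: older IPCC estimates centered near 3 °C per doubling, while the newer studies cited here argue for a substantially lower figure.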
Much new research in the last few years points to a sensitivity much lower than previous IPCC estimates. The question is, how will the byzantine IPCC handle this information?
Both articles note that what the IPCC does with its upcoming report will have broad policy implications. For instance, all the carbon dioxide regulations issued by the EPA are based mostly on previous IPCC prognostications. Yet none of the models used by the IPCC has been validated, and as Dr. Roy Spencer and Dr. John Christy pointed out, the models have been spectacularly bad at predicting global temperature (see here).
Paul C. “Chip” Knappenberger and Patrick J. Michaels, of the Cato Institute, opine that the IPCC has three options:
“1. Round-file the entire AR5 as it now stands and start again.
2. Release the current AR5 with a statement that indicates that all the climate change and impacts described within are likely overestimated by around 50%, or
3. Do nothing and mislead policymakers and the rest of the world.”
Knappenberger and Michaels are betting on #3. They also note that “the problem of large government climate change assessments being scientifically outdated even before they are released is not atypical of ‘group science,’ which is hugely expensive, grossly inefficient, and often is designed to justify policy.” The US spent about $160 billion on climate change activities from 1992 to 2012.
P.S. Besides the problem cited above, a recent study published by the American Meteorological Society (here) found that individual climate models produced different results when run on different computers, even though the models contained the same coding and input data. And we base expensive policy decisions on this?
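A result like that is plausible on purely numerical grounds: floating-point addition is not associative, so when different compilers or hardware reorder the same arithmetic, small discrepancies appear and can grow in a chaotic simulation. A minimal Python illustration of the underlying effect (this is not code from the study, just the standard non-associativity demonstration):

```python
# The same three numbers summed in two different orders.
# In IEEE 754 double precision, 1e16 + 0.5 rounds back to 1e16
# (0.5 is below half the spacing between doubles near 1e16),
# so the grouping determines whether the 0.5 survives.
a = (1e16 + 0.5) - 1e16   # the 0.5 is absorbed and lost
b = (1e16 - 1e16) + 0.5   # the large terms cancel first
print(a, b)  # 0.0 0.5
```

A compiler that reorders or vectorizes a long summation is effectively choosing between groupings like `a` and `b`, which is one mechanism by which bit-identical source code can diverge across machines.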