Here is a nice bit about Michael Mann, of climate hockey stick graph fame. [bolding is mine]
Perhaps the most damage to Michael Mann's credibility came from Michael Mann himself in his 2006 testimony before a congressional oversight committee, where he stated, "Hundreds of scientists work in this field and we are a competitive bunch. We compete for scarce research dollars, academic recognition and professional standing." He further testified that the word "likely" only carried a "65% probability" and that his 1998 work accepted by the IPCC was temperature reconstruction in its infancy. If anything in his 2006 testimony is valid, it is that most studies in the seven years since "...using different data and different statistical methods have re-affirmed...Northern Hemisphere warmth appears to be unprecedented over at least the past 1,000 years."

So who else comes up with that 65% number? Nir Shaviv.
What Michael Mann and his supporters never mentioned is that solar activity is also unprecedented for the past 1,000 years and this was known to science at that time. What was not known to science at that time is that solar activity is actually unprecedented for the last 11,000 years. (Solanki et al. 2006)
Evidently, we do not know the total anthropogenic forcing. We don't know its sign. We also don't know its magnitude. All we can say is that it should be somewhere between -1 and +2 W/m². Sounds strange, but we may actually have been cooling Earth (though that is less likely than warming). It is for this reason that in the 1970's concerns were raised that humanity was cooling the global temperature. The global temperature appeared to drop between the 1940's and the 1970's, and some thought that anthropogenic aerosols could be the cause of the observed global cooling, and that we might be triggering a new ice age.

Let's see. A forcing range of 3. A likely positive value of 2 (at most). Two divided by 3, in percent, is 66.7%. Hey! That looks a lot like 65%.
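The arithmetic behind that comparison is easy to verify (a trivial sketch; the -1 to +2 W/m² range is taken straight from the text above):

```python
# Check the back-of-the-envelope forcing arithmetic from the text.
# Assumed range for total anthropogenic forcing: -1 to +2 W/m^2.
low, high = -1.0, 2.0
total_range = high - low                 # 3 W/m^2 of total uncertainty
positive_fraction = high / total_range   # share of the range above zero
print(f"positive fraction: {positive_fraction:.1%}")  # -> 66.7%
```

Two out of three is 66.7%, which is indeed close to the 65% "likely" figure quoted earlier.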
So why all the uncertainty? My crystal ball is cloudy. However, Nir to the rescue.
There is however one HUGE drawback, because of which GCMs are not suited for predicting future change in the global temperature: the sensitivity obtained by running different GCMs can vary by more than a factor of 3 between different climate models!

So what does he mean by small spatial scales? It is on the order of kilometers. Pretty big and hard to miss? Well, no. The models are based on chunks 250 kms on a side. I don't know what temporal scales the models operate on, but it is probably a similar trade-off, made so the simulations can be computed in a reasonable amount of time.
The above figure explains why this large uncertainty exists. Plotted are the sensitivities obtained in different GCMs (in 1989, but the situation today is very similar), as a function of the contribution of the changed cloud cover to the energy budget, as quantified by ΔQcloud/ΔT.
One can clearly see from fig. 1 that the cloud cover contribution is the primary variable which determines the overall sensitivity of the models. Moreover, because the value of this feedback mechanism varies from model to model, so does the prediction of the overall climate sensitivity. Clearly, if we were to know ΔQcloud/ΔT to higher accuracy, the sensitivity would have been known much better. But this is not the case.
The problem with clouds is really an Achilles heel for GCMs. The reason is that cloud physics takes place on relatively small spatial and temporal scales (km's and mins), and thus cannot be resolved by GCMs. This implies that clouds in GCMs are parameterized and dealt with empirically, that is, with recipes for how their average characteristics depend on the local temperature and water vapor content. Different recipes give different cloud cover feedbacks and consequently different overall climate sensitivities.
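The way a spread in the cloud feedback becomes a spread in sensitivity can be sketched with the standard linear feedback relation S = S0 / (1 - f). This is a hypothetical illustration only; the value of S0 and the feedback factors below are assumptions, not numbers from the post:

```python
# Hypothetical illustration of why the cloud feedback term dominates:
# under the standard linear feedback relation S = S0 / (1 - f), a modest
# spread in the feedback factor f gives a large spread in sensitivity S.
S0 = 1.2                       # assumed no-feedback sensitivity, deg C
for f in (0.1, 0.4, 0.7):      # assumed cloud-dominated feedback factors
    S = S0 / (1 - f)
    print(f"f = {f:.1f}  ->  sensitivity = {S:.2f} C")
# f between 0.1 and 0.7 spans sensitivities from ~1.33 C to 4.0 C:
# a factor-of-3 spread, matching the factor quoted above
```

Because f sits in a denominator, uncertainty in the cloud recipe is amplified, which is exactly why the models fan out the way fig. 1 shows.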
Supposedly, despite all this uncertainty, the models can be made to predict the past. How can this be? Simple, you adjust other parameters and feedbacks and lags until with your chosen number for the cloud effect you get outputs that look like the past. How relevant is this for the future? Not very. Because there is no way to tell if you got things right and the further in the future you look the more likely the model is to diverge from reality.
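The tuning problem can be shown with a toy example (entirely hypothetical; the "observations" and both model forms are made up): two models that agree on every past data point can still diverge once extrapolated.

```python
import math

# Toy illustration of tuning-to-the-past: model_b adds a term that is
# exactly zero at every integer "observation year", so both models
# reproduce the past record identically, yet their forecasts differ.
def model_a(t):
    return 1.0 * t                     # plain linear trend

def model_b(t):
    return t + math.sin(math.pi * t)   # tuned extra term, invisible in hindcast

# Both hindcasts match the "past" (years 0..9) to machine precision...
assert all(abs(model_a(t) - model_b(t)) < 1e-9 for t in range(10))

# ...but between observation times the forecasts disagree by about 1.0.
print(model_a(10.5), model_b(10.5))
```

Matching the past record tells you nothing about which model, if either, got the underlying physics right.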
Commenter Froblyx on this post at Classical Values has an interesting point about determining the value of models.
The way to establish that the system of equations really does describe reality is to compare its results with reality. The better the match, the more confidence we have in the results.
That is a valid point if we KNOW all the parameters involved and include them all.
Then you test it by introducing perturbations in the real world and seeing if the results follow the model.
Since we can't disturb just one element of the real climate system and follow the results, we have to assign values to the various sensitivities and see whether what happens in the future matches. Yet we are not sure of our models because of ALL the interactions involved. We may have assigned incorrect values to the interactions, let alone to the things we think we know well.
We currently have models that do not include the solar variation of about .5% over 300 years, nor the cloud/solar magnetism/cosmic ray effect. We know a priori that the values assigned to the various interactions that simulate past behavior are WRONG, since they do not include these effects. In addition, the latest and better models (much better than 10 years ago, we are told) have not been around long, so we can't be sure they model the future well: we do not have much future to test them against.
So to be sure the models are correct we should wait a while.
BTW frob, global temperatures have been declining since 1998. Do the models tell us why? Do they explain the anomalous year of 2004, when temperatures spiked? In other words, do the models produce noisy data the way the real world does? Or is it all averaged and smoothed, i.e. a rough approximation?
To get the models to run in a reasonable amount of time, the earth's surface has been chunked into segments 250 kms on a side. Is this good enough to get the required accuracy? As Nir points out above: not likely.
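For a sense of scale (a rough sketch; the 6371 km Earth radius is a standard figure, and the 1 km cloud scale comes from the "km's and mins" remark above):

```python
import math

# How coarse is a 250 km grid? Compare cell counts at GCM resolution
# versus the ~1 km scale on which cloud physics actually operates.
surface = 4 * math.pi * 6371**2        # Earth's surface, ~5.1e8 km^2
gcm_cells = surface / 250**2           # ~8,200 cells of 62,500 km^2 each
cloud_cells = surface / 1**2           # ~5.1e8 one-km cells
print(f"GCM cells: {gcm_cells:,.0f}")
print(f"cells needed at cloud scale: {cloud_cells / gcm_cells:,.0f}x more")
```

Each grid cell covers 62,500 square kilometers; whole cloud systems live and die inside a single one.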
An excellent model of an engineered servo system, where all the inputs and outputs can be measured to within .1%, can come within +/- 1% of real world behavior. Can the climate modelers, with their much more complex system, come within that range of error? They claim a model error band of .5% from a measurement series where, at best (at least until recently), the error band is around .2% for around 70% of the data, and possibly worse, since I have seen no reports on how instrument calibrations might affect the measurements over the last 100 years.
Even in the best case of .2% data error, that hardly gives much confidence in the .5% model error band (+1.5 to +4.5 deg C change predicted against a roughly 300 deg K base). Normally you want data that is at least 3X as good as the signal you are looking for, and the preference for reliable results is 10X the signal, to make sure you are not measuring noise. Basically, what all this means is that the estimated signal is not far from the noise level of the data. In fact, if you look at the figure referenced above, it is not impossible that the estimated signal is equal to the noise level.
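The percentages above can be checked against the ~300 K baseline (all figures are as given in the text; nothing here is new data):

```python
# Signal-to-noise check using the figures quoted above:
# 0.2% measurement error on a ~300 K base, versus a predicted
# warming signal of +1.5 to +4.5 deg C.
base = 300.0                          # rough global mean temperature, K
data_error = 0.002 * base             # 0.2% of 300 K = 0.6 K
signal_low, signal_high = 1.5, 4.5
print(f"data error: {data_error:.1f} K")
print(f"signal/noise at low end:  {signal_low / data_error:.1f}x")
print(f"signal/noise at high end: {signal_high / data_error:.1f}x")
# 2.5x at the low end: below the 3x minimum the text asks for,
# and far below the preferred 10x
```

At the low end of the predicted range, the signal is only 2.5 times the best-case noise, which is the point being made.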
"Pick a number" is not science. Science is when you have real data of the required accuracy and known relations between inputs and outputs, i.e. the equations are fixed by scientific understanding, thus known in advance of any predictions (even of the past), and they make reliable predictions without having to adjust the models.
Let me touch on the servo question again. You do not have 13 models of a servo system all making varying predictions. The science of servo systems is well understood. There is one model.
As I have shown, we have no such a priori model for clouds. Heck, we are not even sure of the sign, let alone the magnitude. Plus we know for sure that the solar magnetic influence on clouds is not in the models, because that understanding was only made public within the last year.
And that is only one of many problems in the models. Take this one mentioned by Dr Theodor Landscheidt of the Schroeter Institute for Research in Cycles of Solar Activity with reference to the global mean temperature:
The cyclic variation in the data cannot be explained by general circulation models in spite of the entailing great expense. There is not even an attempt to model such complex climate details, as GCMs are too coarse for such purposes. When K. Hasselmann (a leading greenhouse protagonist) was asked why GCMs do not allow for the stratosphere's warming by the sun's ultraviolet radiation and its impact on the circulation in the troposphere, he answered: "This aspect is too complex to incorporate it into models."

You have to wonder what else they are leaving out.