Professional scientists mostly ignore the slogans (e.g., “absence of evidence isn’t evidence of absence”) I discussed in my previous post. For example, the professional-scientific conclusion that smoking causes lung cancer came mostly from correlations. This conclusion was criticized, sure, but not by saying “correlation does not equal causation”.
Professional scientists have a much worse problem, which is that they criticize much more easily and fluently than they praise. (Marginal Revolution is an excellent blog partly because it doesn’t suffer from this.) This can be depressing (lots of work is underappreciated), exciting (anyone who sees this has a big advantage), or merely amusing, as in this example to which Stephen Marsh drew my attention:
I just returned from the MS4 conference. It is the fourth year that a group of philosophers of science have gathered to try to tease apart the implications of computer simulation in science. . . . Several presentations gave harsh criticism of climate science models. Bayesian tools (a statistical technique) were given some especially harsh criticisms. Everyone agreed the models were problematic in some sense or another. That the results were subject to all kinds of errors and suspicions, and there were substantial difficulties to sort out. . . . Despite this, everyone concurs the models are robust . . . No one disagreed that the planet was warming.
The poor ability of professional scientists to praise means that comparison of A and B (two theories, say, or two experiments) mainly consists of comparing how much A and B have been criticized. How much A and B would have been praised, had scientists been better at praise, is unknown. This is a very poor way to compare stuff. Inability to praise also means that there is too much criticism. In my experience, scientists have trouble separating serious criticisms from trivial ones. For example, that climate-change models haven’t been shown to predict correctly is a serious criticism not emphasized enough (e.g., at the MS4 conference).
How about praising climate models for, conservatively, underpredicting the worldwide temperature rise? Every time we get a new data point, it’s right at the top of the confidence interval. In other words, the climate trend is not only worse than we hoped, it’s worse than we expected, and exactly as bad as our worst-case fears.
“For example, that climate-change models haven’t been shown to predict correctly is a serious criticism not emphasized enough (e.g., at the MS4 conference)”
Is that really true? When this started becoming a political issue, Hansen produced the basic projections back in 1988, with ‘B’ being the expected outcome, and ‘A’ & ‘C’ bracketing it.
Back in 2007 (about two decades later) we looked to see how those projections went.
Sure enough, the intervening period had been predicted well by the climate model.
Scenario B Prediction: 0.21 to 0.31 °C/decade
GISTEMP real data: 0.14 to 0.24 °C/decade
HadCRUT3 real data: 0.14 to 0.24 °C/decade
So the Scenario B Prediction (1988) looks marginally high – although it was within error bars.
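(For readers unfamiliar with the units: a trend like "0.2 °C/decade" is just the slope of a straight-line fit to the annual temperature anomalies, scaled to ten years. Here is a minimal sketch of that calculation; the anomaly series in it is invented for illustration and is not GISTEMP or HadCRUT3 data.)

```python
# Minimal sketch: estimating a decadal warming trend by fitting a straight
# line to annual temperature anomalies. The anomalies below are synthetic
# (an assumed 0.02 °C/yr trend plus noise), not GISTEMP or HadCRUT3 data.
import numpy as np

years = np.arange(1988, 2008)                       # hypothetical 20-year window
rng = np.random.default_rng(0)
anoms = 0.02 * (years - 1988) + rng.normal(0, 0.1, years.size)

# Ordinary least-squares fit: anomaly = slope * year + intercept.
slope_per_year, intercept = np.polyfit(years, anoms, 1)
trend_per_decade = 10 * slope_per_year              # °C/year -> °C/decade

print(f"estimated trend: {trend_per_decade:.2f} °C/decade")
```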
However, the model was simple – it didn’t include some things that are included in the 2004 model.
Despite this, however, the main problem is simple – short-term data can't be used to test a long-term model. It's getting up to a couple of decades since we started doing this seriously, so we are only now becoming able to test the early predictions against 'REAL' data (rather than running the model forwards and comparing against historical data).
The good news is that it looks like the models are working.
BTW – There are plenty other examples of the models being shown to predict correctly too.
For example, this diagram shows the mapping of future ocean temperatures against the 1990 model.
The filtered trend is well inside the modeled range.
https://www.realclimate.org/images/IPCC_Fig_1_1.jpg
So, again, it simply isn't true that the models aren't predicting correctly. The hard part is remembering that you need to smooth the data because it is so 'noisy' due to short-term uncertainties.
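(A minimal sketch of what "smooth the data" can mean in practice – a centered moving average over a noisy annual series. The window length and the series itself are illustrative assumptions, not any published temperature analysis.)

```python
# Minimal sketch of smoothing a noisy annual series with a centered
# moving average. Both the series and the 5-year window are illustrative
# assumptions, not any published temperature analysis.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1980, 2010)
raw = 0.015 * (years - 1980) + rng.normal(0, 0.12, years.size)  # trend + noise

window = 5                                          # 5-year centered average
kernel = np.ones(window) / window
smoothed = np.convolve(raw, kernel, mode="valid")   # loses (window - 1) points

# Each smoothed value is aligned with the middle year of its window.
for year, value in zip(years[window // 2 : -(window // 2)], smoothed):
    print(year, round(float(value), 3))
```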
The models are working well, though.
Mac, when I said “predict correctly” it was shorthand for the idea that a model is useful if it tells us (correctly) something we didn’t already know. I didn’t literally mean “predict correctly”. Consider a model that predicts that a not-yet-measured correlation will be between 0.99 and -0.99. Even if this prediction turns out to be correct, it doesn’t increase your confidence in the model. When Hansen produced three widely-varying “predictions”, I assume all three were predictions (outcomes consistent with the model). Since it was very likely a priori that the actual temperature would be somewhere between the three, this is not a success for the model. Perhaps someday Hansen will tell us what the model actually predicted given the events during the decade on which the model depends (e.g., number of volcanic eruptions). Then we will be able to see how well the Hansen model predicted — asking whether it did better than a naive predictor would have done. If temperature has been rising for 100 years it is no great success to predict it keeps rising at the same rate.
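One way to make that last comparison concrete is to score a forecast against a naive baseline that simply extrapolates the pre-existing trend. A minimal sketch follows; every series in it is invented for illustration, and neither the "model" forecast nor the "observed" data corresponds to Hansen's model or to real temperatures.

```python
# Minimal sketch of scoring a forecast against a naive predictor that just
# extends the pre-1988 trend. All numbers are invented for illustration;
# the "model" forecast here is hypothetical, not Hansen's 1988 model.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "history" (1958-1987) with a modest trend plus noise.
hist_years = np.arange(1958, 1988)
hist = 0.01 * (hist_years - 1958) + rng.normal(0, 0.1, hist_years.size)

# Synthetic "observed" future (1988-2007) that warms faster than the history.
fut_years = np.arange(1988, 2008)
observed = hist[-1] + 0.02 * (fut_years - 1988) + rng.normal(0, 0.1, fut_years.size)

# Naive predictor: extrapolate the 1958-1987 least-squares trend forward.
slope, intercept = np.polyfit(hist_years, hist, 1)
naive = slope * fut_years + intercept

# Hypothetical model forecast: 0.025 °C/yr of warming from the 1987 level.
model = hist[-1] + 0.025 * (fut_years - 1988)

def rmse(pred, obs):
    """Root-mean-square error of a forecast against the observed series."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# The model only deserves credit if it beats the naive extrapolation.
print("naive RMSE:", round(rmse(naive, observed), 3))
print("model RMSE:", round(rmse(model, observed), 3))
```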
Can you demonstrate what good praise looks like? Maybe you have…
“Despite this, everyone concurs the models are robust . . . “: then they are chumps. I’ve spent more than 40 years working with mathematical models, and without good data from controlled experiments I wouldn’t make such a claim – and my physico-chemical systems had many fewer variables than the climate has.
“No one disagreed that the planet was warming”: I can neither agree nor disagree, since the quality of the observations is dubious, and the data-manipulations used are kept secret or, according to Phil Jones, simply were never recorded. I suspect that there has been warming – consistent with the expression “little ice age” – but I see too few data that have been sceptically scrutinised with intelligence, competence and honesty.
Darrin, see my post on appreciative thinking:
https://sethroberts.org/2008/12/12/whats-appreciative-thinking/