Climatology Light Bulb Joke

Q: How many climate scientists does it take to change a light bulb?

A: None. No need to change it. Because it’s been changed in the past, they say, it will be changed in the future.

A tiny fraction of climate scientists publish papers showing how their model can fit past data — say, global temperatures from 1600 to now. The authors of these papers claim that this sort of thing shows their model can predict accurately. In fact, it means roughly nothing — perhaps the model was flexible enough to fit any plausible past data.

Outsiders take fitting past data seriously, but what do they know? However, when a graduate student in atmospheric science takes fitting past data seriously (“it is perfectly reasonable to treat reproductions of the past climate as [successful] predictions”), the whole field has a problem.
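To make that concrete, here is a toy sketch (Python with numpy; it has nothing to do with any real climate model): give a model enough free parameters and it will fit a century of noisy “temperatures” almost perfectly while saying nothing useful about the years that follow.

    import numpy as np

    rng = np.random.default_rng(0)

    # Fake "past" record: 100 years of a gentle trend plus noise.
    years = np.arange(1900, 2000)
    past = 0.005 * (years - 1900) + rng.normal(0, 0.2, size=years.size)

    # A very flexible "model": a 15th-degree polynomial in the year.
    coeffs = np.polyfit(years - 1950, past, deg=15)
    fit = np.polyval(coeffs, years - 1950)
    print("RMS error on the century it was tuned to:",
          np.sqrt(np.mean((fit - past) ** 2)))

    # Now extrapolate 20 years ahead and compare with how the same
    # process actually continues.
    future_years = np.arange(2000, 2020)
    future = 0.005 * (future_years - 1900) + rng.normal(0, 0.2, size=future_years.size)
    forecast = np.polyval(coeffs, future_years - 1950)
    print("RMS error on the 20 years it never saw:",
          np.sqrt(np.mean((forecast - future) ** 2)))

The fit to the past looks lovely; the extrapolation is wildly off. Matching history is cheap.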

16 thoughts on “Climatology Light Bulb Joke”

  1. There is an interesting cognitive process at work here, it seems to me, which I can demonstrate with a thought experiment.

    Suppose I create a stock market simulation which I claim can accurately predict the stock market. If my claim is correct, the simulation is worth millions and millions of dollars. I claim that the simulation is valid on the ground that it accurately matches stock market history for the last 20 years. I offer to sell the simulation to Gavin Schmidt and James Hansen for $10,000.

    Surely these men would refuse to pay even $100 for my simulation. They would instantly see that the correspondence between my simulation and history is not even strong evidence that the simulation is valid, let alone proof.

    So it seems to me that any educated person, including warmist researchers, has the ability to see the worthlessness of these climate simulations. Indeed, it’s reasonably obvious to any educated layman who looks at the issue with an open mind.

    My conclusion is that there is a massive amount of self-deception and motivated reasoning going on here.

  2. As someone once said (and I’ve seen the saying attributed to everyone from Yogi Berra to Niels Bohr), it’s difficult to make predictions, especially about the future.

  3. Another interesting thing about climate simulation models is that if you compare them to temperature histories, it’s usually a very nice fit, even looking at history year by year.

    So if the climate simulation models are not BS, they should be able to predict future temperatures on a yearly basis. And do so very nicely.

    Yet they cannot. The only reasonable inference is that these simulation models match history only because they have been tweaked and tuned to do so. Thus the fact that they match history is meaningless.

  4. In fairness to the graduate student, he was talking about fitting the model on one set of data and then “predicting” another set of data (different years) with it. I don’t know whether that’s what climate scientists actually do, but I think you’re misrepresenting his argument here.

  5. There are so many sources of bias within the climate change debate, at all levels, that it’s hard to disentangle them all.

    First you have the media/environmental NGO alarmism bias, which is going to overstate any threat risk to make for higher ratings/funding. This filters down to the general public, who become scared and feel the need for political action. The widespread anti-capitalism bias helps, of course (confirmation bias). Now the politicians can look like heroes by funding more climate research, as well as using the issue as a political tool to enact more legislation (for which they can claim credit). Of course, the funding bias will ensure that most of it goes to studies attempting to confirm anthropogenic warming theory, rather than falsifying it. Suddenly a lot of new climatologist jobs are created, as well as entire arms of the UN like the IPCC, all of which rely for their continued existence on confirmation of anthropogenic warming, creating heavy researcher bias to justify their jobs.

    This is the same thing that happened with the diet-heart and lipid hypothesis, and almost five decades later we’re still mired in the thick of it. Seems rather hopeless.

  6. LemmusLemmus, if climate scientists tested their models using data that had absolutely nothing to do with the development of those models, I would agree with you. There’s no sign that’s what happens. Here’s what the grad student wrote: “Once the climate model simulates the current climate well enough, then they run the simulation of the past 150 years or so to see whether it accurately reproduces the climate.” Well, what happens if it doesn’t accurately reproduce the climate? I have a funny feeling they change the model. Eventually the model reproduces the past and they publish it. It makes sense to use reproduction of the past as a filter. To completely ignore the past until you are satisfied with your model seems close to impossible; at the very least it would require special care. You would have to ignore all previous research that used the past as a guide. I have never heard of anyone doing model-building this way (“don’t show me that paper, it might contaminate my work!”).

    sabril, that is a good point. If those models are so wonderful, why don’t they just predict the next 10 years, so that everyone can see how well they predict and the debate will subside? When skeptics come along, they will be easy to answer. Yet that hasn’t happened. And we are 25 years into this. If the predictions turned out to be wrong, that would be good too — at least for mankind — because it could be used to improve the models. Yet somehow we never hear about the predictive record from the warmists.

  7. sabril’s analogy strikes home for me – I used to write stock market simulation software. I was just the programmer, not the modeler, and I thought most (but not all) of the models were hooey. But believe me, we got plenty of investors to buy our software, for a lot more than $100. These investors were no dummies, they knew the models weren’t going to predict the future with any precision. The idea was they would be right often enough to improve the investors’ odds of timing the market correctly.

    Every model had “tunable parameters”, since each stock or commodity has its own unique volatility and behavior patterns. The way I would tell if a model was any good was I’d optimize those parameters to best fit price movements for a given year (say, 2009), then apply that same tuned model to the next year’s data to see how good the predictions were.

    I would hope the climate modelers did the same thing (I don’t know if they did). That is, they should pretend the year is, say, 1900, and create their model and tweak its parameters to best match the previous 100 years’ observed data. Then, they should come back to the 21st century and apply their 1900-tuned model to the 1900-2000 interval. If the model was truly developed without any post-1900 data, but accurately predicted 1900-2000 climate observations, then they’ve got a good model, and, yes, the world should take it seriously.
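    To spell out that tune-then-test procedure, here is a toy Python sketch (purely illustrative, not anything we actually shipped: the one-parameter smoothing “model” and the random price series are stand-ins):

        import numpy as np

        rng = np.random.default_rng(1)

        # Stand-ins for two years of daily closing prices.
        prices_2009 = 100 + np.cumsum(rng.normal(0, 1, 250))
        prices_2010 = prices_2009[-1] + np.cumsum(rng.normal(0, 1, 250))

        def forecast(prices, alpha):
            # One-step-ahead forecast: exponential smoothing of past prices.
            smoothed, preds = prices[0], []
            for p in prices[:-1]:
                smoothed = alpha * p + (1 - alpha) * smoothed
                preds.append(smoothed)
            return np.array(preds)

        def mse(prices, alpha):
            return np.mean((forecast(prices, alpha) - prices[1:]) ** 2)

        # Tune the one free parameter on 2009 data only...
        alphas = np.linspace(0.01, 1.0, 100)
        best_alpha = min(alphas, key=lambda a: mse(prices_2009, a))

        # ...then judge the tuned model on 2010, which it never saw.
        print("in-sample MSE, 2009:    ", mse(prices_2009, best_alpha))
        print("out-of-sample MSE, 2010:", mse(prices_2010, best_alpha))

    The only number that counted was the out-of-sample one. The climate equivalent would be freezing the parameters at “pretend it’s 1900” and only then looking at 1900-2000.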

  8. Mike W, there’s more to the flexibility of the models than the tunable parameters. There are a thousand things (all sorts of approximations) that they can include or leave out. How closely the earth is approximated, for example.

    John S., here’s my version of the Niels Bohr saying: It’s easy to predict the past if you don’t mention your failures.

  9. “If the model was truly developed without any post-1900 data, but accurately predicted 1900-2000 climate observations, then they’ve got a good model, and, yes, the world should take it seriously.”

    I would say it depends. What if you create a model, run it, and it doesn’t match the post-1900 data? What do you do then? Do you quietly discard it and put together a new improved model? Or do you give up on modeling forever after your first attempt fails? (Those are the only two possibilities.)

    Common sense says that researchers do the first, and that they repeat until they have a “good model.” But for all practical purposes, that iterative process is the same as using all available data to construct the model.

    Here’s an analogy: A claimed psychic predicts that there will be a revolution in Egypt in 2011 followed by a massive earthquake in Japan. It sounds impressive, but to really evaluate it you need to know what other predictions the psychic made. If he has been predicting revolutions in every Arab country every year since 1970 and has been predicting massive earthquakes every year in every Pacific rim country since 1970, then he is not so impressive after all.

    It’s the same with climate models. The fact that they have been tested by back-casting on fresh data is not impressive unless you know how many versions of the model ended up on the cutting room floor, so to speak. As Roberts points out, researchers don’t report this kind of information. And even if they did, I would be skeptical of their claims.

    That’s why the acid test is making bona fide, interesting, accurate predictions. And making them publicly.

  10. “Well, what happens if it doesn’t accurately reproduce the climate? I have a funny feeling they change the model. Eventually the model reproduces the past and they publish it. It makes sense to use reproduction of the past as a filter.”

    Exactly. If you discard every model which doesn’t match history, it’s the same as if you tuned your model to fit history from the very beginning.
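    A toy demonstration of that selection effect (Python; the random “candidate models” are obviously nothing like real climate models, the point is only the filtering):

        import numpy as np

        rng = np.random.default_rng(2)

        # The "true" record: 50 years of history, then 10 years of future (trend + noise).
        truth = 0.01 * np.arange(60) + rng.normal(0, 0.1, 60)
        history, future = truth[:50], truth[50:]

        # 10,000 candidate "models": each is just a random guess at all 60 years.
        candidates = rng.normal(0.3, 0.3, size=(10_000, 60))

        # Keep only the 1% of candidates whose hindcast of the 50 historical years is best.
        hindcast_mse = np.mean((candidates[:, :50] - history) ** 2, axis=1)
        survivors = candidates[hindcast_mse <= np.quantile(hindcast_mse, 0.01)]

        # Filtering made the survivors look better on history...
        print("hindcast MSE, all candidates:", hindcast_mse.mean())
        print("hindcast MSE, survivors:     ",
              np.mean((survivors[:, :50] - history) ** 2))
        # ...but bought nothing at all on the future.
        print("forecast MSE, all candidates:", np.mean((candidates[:, 50:] - future) ** 2))
        print("forecast MSE, survivors:     ", np.mean((survivors[:, 50:] - future) ** 2))

    The survivors hindcast better than the average candidate purely because they were selected for it; their forecasts are no better than anyone else’s. Passing the history filter tells you about the past, not about whether the model has the mechanism right.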

  11. “There are so many sources of bias within the climate change debate, at all levels, that it’s hard to disentangle them all.”

    There’s a simple way to cut through the Gordian Knot, which is prediction. If a scientist makes interesting, accurate, bona fide predictions, I will listen to him whether he is funded by Exxon-Mobil or Greenpeace.

  12. I keep an open mind about global warming, but this discussion brings to mind the prophecies of Nostradamus. They’re great at predicting past events, but have a bit of trouble with the future. James Randi calls this “retroactive clairvoyance”.

  13. By the way, Warren Meyer had a very insightful point about all this.

    Check out this IPCC graph:

    https://www.coyoteblog.com/global_warming_climate_graphs/image039.jpg

    The pink line represents the average model output; the black line represents historical temperatures. The blue line represents model output once you remove anthropogenic factors from the models, such as CO2. (The IPCC uses graphs like this as “proof” that recent warming must have been caused by CO2.)

    Anyway, if you look at the blue line, it rises until 1950, peaks, and then descends. In essence, the IPCC is saying that without anthropogenic influences, global surface temperatures would have declined after 1950.

    Says Warren Meyer:

    “With the peaked shape . . . they are saying there is some natural effect that is warming things until 1950 and then turns off and starts cooling, coincidently in the exact same year that anthropogenic effects start taking off. I challenge you to read the IPCC assessment, all thousand or so pages, and find anywhere in that paper where someone dares to say exactly what this natural effect was, or why it turned off exactly in 1950. ”

    I agree with Warren Meyer 100%. This unidentified natural forcing is obviously an artifact of aggressive tweaking of climate models; smoking gun evidence that warmist climate modelers tune their models to fit history.

  14. Adam, I listened to the whole interview, thanks for the link. I learned a few things. However, the guy being interviewed, the guy who runs climate-skeptic.com, failed to make the one point that matters: no accurate predictions. If it had been shown that climate models can predict the temperature 10 years in the future, I would put faith in their predictions of temperature 10 years from now. Because that hasn’t been shown, I don’t believe their predictions. Period. The many things discussed in the interview, although interesting, are trivial compared to that.
