Posted by: reformedmusings | January 5, 2009

Computer models and the global warming myth

“Science is not about building a body of known ‘facts’. It is a method for asking awkward questions and subjecting them to a reality-check, thus avoiding the human tendency to believe whatever makes us feel good.” – Sir Terry Pratchett

Garbage In, Garbage Out (abbreviated to GIGO) is a phrase in the field of computer science or ICT. It is used primarily to call attention to the fact that computers will unquestioningly process the most nonsensical of input data and produce nonsensical output, and is a pun on FIFO (First In, First Out). It was most popular in the early days of computing, but applies even more today, when powerful computers can spew out mountains of erroneous information in a short time. – Wikipedia

I talked briefly about computer models in Anthropological Global Warming – RIP, and mentioned Michael Crichton’s pertinent comments in Aliens, global warming, religion, and science. Upon reflection, I believe the best way to illustrate the fraud in the use of computer models by eco-socialists is to show how computer models were intended to be used, and where they have been highly successful. I will also show a few real-world examples of the failure of models to accurately reflect reality. The bottom line is that computer models make excellent engineering tools, but they are not a substitute for due diligence in science.

A warning for the detail-oriented: this will not be a rigorous or exhaustive explanation, but hopefully an accurate one that is accessible to the average person with a high school diploma. That requires simplifications, but hopefully none that compromise the overall accuracy of the narrative.

What are computer models? They are mathematical approximations of phenomena in the real world. All models necessarily fall short of being exact, and some fall way short of even being close. A good example of a very simple model occurs in ballistic fire-control computers. They use a simple, 2nd-order equation representing gravity’s effects, combined with the measured range (usually from a laser range finder) and the known (experimentally determined) average aerodynamic characteristics of the ammunition. More sophisticated ones account for wind, but usually the user has to input a guess for the wind, because the wind isn’t constant over the flight distance of the round at long ranges, or even over the time between setting the fire control and taking the shot. Even this simple model isn’t exact because of variations in the ammunition, wear in the barrel, variable wind, temperature, humidity, etc. The fire control provides an approximation within these limitations. Reality is consistently full of unknowns.
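To make the simplest case concrete, here’s a toy sketch of the 2nd-order gravity term such a fire-control computer applies. All numbers are illustrative, and it deliberately ignores drag, wind, and every other real-world effect listed above – a real fire control folds in measured ballistic tables.

```python
G = 9.81  # gravitational acceleration, m/s^2

def bullet_drop(range_m, muzzle_velocity_mps):
    """Approximate vertical drop over a flat-fire trajectory.

    Assumes constant horizontal velocity (i.e., no drag), so the time
    of flight is range / velocity and the drop is 0.5 * g * t^2.
    """
    t = range_m / muzzle_velocity_mps  # time of flight, s
    return 0.5 * G * t ** 2            # drop, m

# A 900 m/s round drops roughly half a meter over 300 m, and the
# drop grows with the square of the range.
drop_300 = bullet_drop(300, 900)
drop_600 = bullet_drop(600, 900)
```

Even this tiny model embodies the point: it is an approximation whose neglected terms (drag, wind, barrel wear) are known to exist but are deliberately left out.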

Some of the most sophisticated engineering models help design aircraft. This was the original use for computational fluid dynamics (CFD). As I said in an earlier post, CFD codes the Navier-Stokes equations (or approximations of them), which are partial differential equations. Weather models generally use simplified versions. These equations must have initial and boundary conditions, some of which can themselves be complex equations. The overall complexity is such that, unlike the simple ballistics computers, these cannot be solved directly (for example, x+5=10 can be solved directly for x, whereas sin(x)/x = 0.25 cannot). Therefore, CFD not only involves modeling the physical characteristics of the fluid flow with all its constraints, but must also mechanize a solution method that determines a unique answer for the approximated equations with specified initial and boundary conditions. These complexities are captured nicely in this Wikipedia article. Kinda makes your head hurt. The faster and more capable computers become, the more complex the approaches that can be solved. And the faster you can fool yourself.
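To make “cannot be solved directly” concrete, here’s a toy sketch of how such a transcendental equation gets solved numerically. It finds the root of sin(x)/x = 0.25 by bisection, the simplest possible root-finder; production CFD codes use vastly more sophisticated iterative schemes, but the principle – numerically narrowing in on an answer rather than solving for it algebraically – is the same.

```python
import math

def f(x):
    # Residual of the transcendental equation sin(x)/x = 0.25;
    # a root of f is a solution of the equation.
    return math.sin(x) / x - 0.25

def bisect(f, lo, hi, tol=1e-10):
    """Find a root of f in [lo, hi]; f(lo) and f(hi) must differ in sign."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # sign change in the lower half: keep it
        else:
            lo = mid  # otherwise keep the upper half
    return (lo + hi) / 2

# sin(x)/x is near 1 for small x and falls below 0.25 before x = 3,
# so a root is bracketed in [1, 3].
root = bisect(f, 1.0, 3.0)
```

Notice that the answer comes with a tolerance, not an equals sign – every numerical solution is itself an approximation.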

Fortunately, the physics of flight is pretty well understood. Boundary conditions are determined by the shape of the item being designed, the characteristics of the fluid (air in the case of airplanes), and the necessary stable conditions some distance away from the item being analyzed (called far-field conditions). Hit-and-miss trials in wind tunnels (also approximations with limitations, but analog ones) and in flight test would be prohibitively expensive, and the latter quite hazardous. In the 1940s and 50s, most new designs were slight modifications of existing ones, then tested heavily in wind tunnels. Now, literally thousands of designs may be modeled and accurately analyzed using computers. But as good as these programs are, they are still just approximations which must be verified by some kind of physical testing.

Oh, and let’s not forget that we’re solving these things digitally, but the real world is analog. Sometimes we forget that important distinction. Sampling rates can be critical.
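Here’s an illustrative sketch of why sampling rates matter (a toy example, not anything from a real climate code): sample a 9 Hz sine wave at only 10 samples per second, and the samples exactly match a 1 Hz sine running backwards – the classic aliasing problem. The digital model literally cannot tell the two analog signals apart.

```python
import math

FS = 10.0  # sampling rate, samples per second

# A real 9 Hz signal, sampled at 10 samples/s...
measured = [math.sin(2 * math.pi * 9 * n / FS) for n in range(10)]
# ...produces samples indistinguishable from a -1 Hz (inverted 1 Hz) sine:
alias = [math.sin(2 * math.pi * -1 * n / FS) for n in range(10)]

# The two sample sequences agree to floating-point precision.
max_diff = max(abs(a - b) for a, b in zip(measured, alias))
```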

Not just fluid flow, but also mechanical structures are similarly modeled, although with different equations. The boundary conditions there would be the physical characteristics of the material, shapes involved, standard physical limitations of solid materials, etc.

In the aircraft world, as well as the worlds of tall buildings, towers, and bridges, the aerodynamics and the mechanical structures must be analyzed together, because all structures are flexible, some more than others. These aerodynamic and structural interactions fall into the field of aeroelasticity. Probably the most famous failure due to aeroelastic effects and the resulting physical oscillations was the Tacoma Narrows Bridge collapse in 1940. Unlike aircraft, bridges are one-time good deals that aren’t amenable to flight testing, although bridges and buildings can be, and these days are, wind-tunnel tested. I saw an engineering lab in Singapore where entire downtown areas are modeled for aerodynamic effects on new and existing buildings. It’s not a guarantee, but it helps.

A good example of the need to verify models with real-world testing involves the ill-fated Fairchild T-46 trainer. A friend of mine was flying the test bird one day when suddenly the aircraft began vibrating violently. That was unexpected, as the test program had proceeded smoothly up to that point. He quickly scanned outside and saw that the wings were “flapping” rapidly. That’s always bad. He quickly slowed down, the oscillations ended, and he landed uneventfully. Fairchild had not adequately accounted for the aeroelastic properties of the wings under the tested condition. I don’t know what their models said, but they sure didn’t check out in the real world. And that’s the bottom line: all models must be verified in the real world before they can prove useful and their results can be believed. That’s just good, sound engineering.

There’s another great story that goes way back to the computer-printout days. It’s said that a surface-to-surface missile was being designed. After much work on the aerodynamics and the control system using extensive modeling and simulation, plus the sacrifice of forests for all the computer printouts, the end page of the very long printout finally showed the missile impacting the target. Great. So, they built a couple and took them out to the range to test. The first missile flew flawlessly until about 1/2 mile short of the target, when it flew into the ground. Oops. Thinking the failure might be a fluke, the team launched a second missile. Same result. Curiously, it crashed in almost the exact same spot as the first missile. How could this be? The model clearly showed that it should hit the target. Back to the drawing board.

After much hair pulling, someone finally unfolded the model printouts to look at the entire predicted flight path. Sure enough, the model flew a perfect path up to 1/2 mile short of the target. At that point, the model showed the missile diving under the ground and then hitting the target dead on – from underneath. The computations did not model the ground as a boundary condition. Double oops. Incorrect or missing boundary conditions can ruin your whole day, and maybe run right into the weekend. Bottom line: model, but verify. Remember: garbage in, garbage out.
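For the curious, here’s a toy sketch of that missing-boundary-condition bug – made-up numbers and a crude Euler integrator, purely for illustration (I obviously don’t have the original code). Without the ground check, the simulation happily reports flight far below y = 0 and never complains.

```python
G, DT = 9.81, 0.01  # gravity (m/s^2) and integration time step (s)

def fly(vx, vy, t_end, ground_check=True):
    """Euler-integrate a point-mass ballistic trajectory; return final (x, y)."""
    x = y = t = 0.0
    while t < t_end:
        x += vx * DT
        y += vy * DT
        vy -= G * DT
        t += DT
        if ground_check and y < 0:
            break  # impact: the ground is a boundary condition
    return x, y

# Launched at 100 m/s vertically, the flight lasts about 20 s.
# Ask both versions of the model where the missile is at t = 30 s:
_, y_checked = fly(50.0, 100.0, 30.0, ground_check=True)
_, y_unchecked = fly(50.0, 100.0, 30.0, ground_check=False)
# The unchecked model cheerfully reports a position deep underground.
```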

OK, so what about global warming? Hopefully you can already see the fatal flaw in using computer models as a truth test for climate change – there’s no way to verify the models. What these folks do is use a “known” initial condition or historic points to “test” the model, but that doesn’t test the host of boundary conditions or the equations over a studied time period. Remember the T-46 and the missile? Both their models worked fine over a wide range of conditions, but failed at a critical point. So it goes back to the question I posed in a previous post: If you can’t predict the weather a week ahead of time, how can you predict the climate 100 or 1,000 years in the future? The answer is that you can’t. We can’t even precisely predict a hurricane’s path and/or strength two or three days in advance. Over longer periods, the changes in all conditions can be dramatic. There are too many unknowns.

Simply put, you just can’t test a model that predicts something 100 or 1,000 years in the future. That makes these models essentially worthless for any kind of scientific conclusions or political decision making. That’s why Dr. Richard Feynman, one of the most brilliant physicists to ever live, called scientists’ fascination with them a disease. He knew better.

Then there are all the boundary and initial conditions. Some are basic physics, but the number of effects for which we must account globally is astronomical. Which effects you throw out or keep can have a dramatic impact on the model’s output. And that’s not counting the dominating effect of the sun, whose weather patterns and variable energy output we don’t even remotely understand. Any numerical model that can be brought to a unique solution would have to be grossly oversimplified. Keep my weather question above in mind. Predicting the local weather next week is many orders of magnitude easier than trying to account for global interactions and solar processes we can’t begin to understand.
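A toy illustration of why small uncertainties in initial conditions wreck long-range prediction – this uses the logistic map, a standard textbook chaotic system, not a climate model: two starting points that differ by one part in four hundred million bear no resemblance to each other within a few dozen steps.

```python
def trajectory(x, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x), returning every state."""
    states = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        states.append(x)
    return states

ta = trajectory(0.400000000)  # the "true" initial condition
tb = trajectory(0.400000001)  # the same condition, measured slightly off

# The runs start essentially identical, then diverge completely.
max_gap = max(abs(a - b) for a, b in zip(ta, tb))
```

The next step is predicted almost perfectly; the sixtieth is anybody’s guess. That is the gulf between next week’s weather and next century’s climate.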

Let’s touch on statistical climate prediction. The use of historical data as predictive is beyond absurd unless the underlying basis of that data is well understood. In the case of climate information, it isn’t (see above). Anyone who has invested in the stock market has hopefully read the small print at the bottom of the prospectus: “Past performance is no guarantee of future results.” If the recent financial meltdown taught us anything, it should be the force of that warning. That’s probably what Michael Crichton meant when he asked whether you would invest your personal money based on global warming predictions. If you wouldn’t do it in the stock market, why would you bet the entire industrial world’s economy on unsupportable global warming predictions? There’s absolutely no mathematical basis for doing so.
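Here’s a toy sketch of the danger (the data are made up, and the fit is deliberately the worst-case exact fit): a polynomial that matches six “historical” points perfectly says nothing sane the moment you step outside the record.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the unique polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Six made-up "historical" data points, all between 0.10 and 0.20:
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.10, 0.15, 0.12, 0.18, 0.16, 0.20]

hindcast = lagrange_eval(xs, ys, 2.0)   # reproduces the record exactly
forecast = lagrange_eval(xs, ys, 10.0)  # extrapolation: wildly off the scale
```

The model “validates” flawlessly against the past – every historical point is reproduced exactly – and then forecasts a value hundreds of times outside the historical range. Fitting the record is not evidence of predictive skill.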

Worse, some of the data for the statistical models, which probably doubles as initial conditions for some of the climate models, turned out to be wrong. NASA was forced by Steve McIntyre, a Canadian, to revise its US surface temperature data downward. You’ve seen the propaganda that recent years are the hottest on record and we’re accelerating to global cinder status. It’s almost too late! Well…not quite:

“Four of the top 10 are now from the 1930s: 1934, 1931, 1938 and 1939, while only 3 of the top 10 are from the last 10 years (1998, 2006, 1999),” he wrote. “Several years (2000, 2002, 2003, 2004) fell well down the leaderboard, behind even 1900.”

That, coupled with the massive cooling over the last few years to below the 1900 level, should be enough to put a stake through the heart of the global warming myth. But we all know that political lies never die easily.

So, let’s go back to the beginning again. Science is a process, not a destination. Good science requires observation, carefully recorded experiments, a predictive hypothesis, and repeatable, independently corroborated results that support the hypothesis. Every scientific theory must make testable predictions to be taken seriously. A good example was Einstein’s theory of general relativity. Based on his theory, he hypothesized that light would be “bent” around the sun, so that stars just behind the edge of the solar disk would appear slightly displaced when observed during an eclipse. (I put “bent” in quotes because nothing is actually being bent – space-time is measurably curved in the vicinity of the sun because of its mass. Just so I don’t get email on relativity.) That was an incredibly bold hypothesis in his day because it turned Newtonian physics on its head, and it took years to verify because of the exacting conditions necessary to measure the predicted effect. But it was verified, and relativity became a household word (even if hardly anyone really knows what it means).
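For reference, the general-relativistic prediction for a light ray grazing the solar disk is

```latex
\delta\theta = \frac{4 G M_{\odot}}{c^{2} R_{\odot}} \approx 1.75''
```

which is exactly twice what a naive Newtonian calculation gives. That factor of two is what made the 1919 eclipse measurements such a decisive test: a specific, quantitative, falsifiable number, stated in advance.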

So, if the global warming frauds are so sure of their models, let them make a testable, repeatable hypothesis for independent verification. And I’m talking about one that can be tested in a year or two (or even sooner), not 1,000 years from now. If they’re so sure, they could do it on their heads. I’ll tell you right now that they cannot. Oh, they could make something up (just look at Paul Ehrlich’s & friends’ track record), but even they know it has little chance of happening. The recent global cooling trend already shows that they are wrong. And that’s why global warming is, and will remain, nothing but numerical smoke and mirrors.

Don’t be fooled. If there’s no independently repeatable experiment and testable hypothesis, it isn’t science. A model that cannot be independently verified against the real world is just an expensive computer game.

Allow me to close with a few great quotes from the late Dr. Feynman:

“Science is the belief in the ignorance of experts.”

“…there is one feature I notice that is generally missing in ‘cargo cult science’… It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty — a kind of leaning over backwards… For example, if you’re doing an experiment, you should report everything that you think might make it invalid — not only what you think is right about it… Details that could throw doubt on your interpretation must be given, if you know them.”

“The first principle is that you must not fool yourself – and you are the easiest person to fool.”



