Unmentioned Assumptions in Models


JohnB

How much do the basic assumptions and beliefs of the modellers affect the models?

 

Firstly, may I make it clear that while this thought occurred to me while considering General Circulation Models and AGW, this is not meant as an attack on any or all GCMs.

 

Secondly, it is not about AGW or GW at all, but about the methods used to construct models in general. I know bugger all about how they're made, so I thought I'd ask the question.

 

Background. (Please note that this refers to natural forcings only.) While reading various articles and papers on GW, it became obvious that there has been a gradual shift in the perspective of climatologists from the gradualist to the (semi-)cataclysmist camp.

 

Climate change was originally thought to occur over millennial periods. This view has been shown to be at odds with the data, and the view has shifted to climate change occurring on centennial scales, about 0.1-0.2 K per century. (Recent work by Lilley and others, particularly in respect of the "Younger Dryas" period, shows that climate change has occurred on a global scale in a decadal timeframe.)

 

In the case of GCMs there are numerous feedbacks that, for want of a better word, we assign "arbitrary" values to. (We don't know what the values actually are, so we juggle them a bit until the model fits the observations. Which is fair enough.)
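
To make the "juggling" concrete, here's a minimal sketch of that kind of parameter tuning, using a toy zero-dimensional energy-balance model. Everything in it (the model form, the parameter range, the "observed" warming and the forcing value) is invented for illustration, not taken from any real GCM:

```python
# Toy tuning of an uncertain feedback parameter (all values invented).
# Zero-dimensional energy balance: equilibrium warming dT = F / lambda.

import numpy as np

def century_warming(forcing_wm2, feedback_wm2_per_k):
    """Equilibrium warming in K for a constant forcing."""
    return forcing_wm2 / feedback_wm2_per_k

observed_warming_k = 0.15    # hypothetical "known" natural change per century
natural_forcing_wm2 = 0.3    # hypothetical estimate of the natural forcing

# "Juggle" the feedback parameter over a plausible range until the modelled
# warming matches the observation -- the tuning described in the post.
candidates = np.linspace(0.5, 4.0, 1000)   # W m^-2 K^-1
errors = np.abs(century_warming(natural_forcing_wm2, candidates)
                - observed_warming_k)
best = candidates[np.argmin(errors)]

print(f"tuned feedback: {best:.2f} W m^-2 K^-1")
print(f"modelled warming: {century_warming(natural_forcing_wm2, best):.3f} K")
```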

 

The point of the question in this case is that if the person designing the model believes that natural forcings and feedbacks could only result in a maximum change in global temperature of 0.1 or 0.2 degrees, won't his/her model reflect that? As in, if you take CO2 out of the picture, no matter what values are fed in, the total change will be between +0.2 and -0.2 degrees per century.

 

Wouldn't a modeller (again taking out CO2 and any truly cataclysmic event) who created a model that with everything turned on gave a result of, say, +1.5 degrees/century then tweak his feedback values to give the 0.1 or 0.2 degree change that he/she knows is the correct value?

 

It seems that the unmentioned assumption in GCMs is that natural forcings and feedbacks can't result in a movement of more than 0.2 degrees. What if the assumption is wrong? And how do we know? It is important to remember that, from the POV of the modeller, it is not an assumption but a self-evident fact not worthy of mention.
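
The worry can be restated concretely: if the modeller only ever searches feedback values inside a range chosen so that natural changes stay "sensible", then no input data can push the model outside that range. A sketch using the same invented energy-balance setup as above:

```python
# How a parameter range chosen from prior belief caps the output (invented values).

import numpy as np

natural_forcing_wm2 = 0.3                      # hypothetical natural forcing
allowed_feedbacks = np.linspace(1.5, 4.0, 50)  # range picked so dT stays "sensible"

responses = natural_forcing_wm2 / allowed_feedbacks
print(f"largest natural change this model can produce: {responses.max():.2f} K")
# -> 0.20 K. Whatever data are fed in, anything outside the modeller's prior
#    expectation is unreachable, because the search space excludes it.
```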

 

Again, I'm not after GCMs; it's just that they're what I've been reading about. I can easily imagine that similar unspoken assumptions occur in models in other fields, perhaps biochemistry?

 

Could someone who deals with models enlighten me on this matter?

 

Again, could we please not turn this into another GW thread but keep it about the mechanics of models and their assumptions.


The point of the question in this case is that if the person designing the model believes that natural forcings and feedbacks could only result in a maximum change in global temperature of 0.1 or 0.2 degrees, won't his/her model reflect that? As in, if you take CO2 out of the picture, no matter what values are fed in, the total change will be between +0.2 and -0.2 degrees per century.

 

Wouldn't a modeller (again taking out CO2 and any truly cataclysmic event) who created a model that with everything turned on gave a result of, say, +1.5 degrees/century then tweak his feedback values to give the 0.1 or 0.2 degree change that he/she knows is the correct value?

Why wouldn't such biases be caught during peer review and prior to publication, or by readers of the publication afterwards?


I can't claim any special expertise in GCMs, but AFAIK the feedback terms are named thus because they will appear due to any change in temperature, and it seems to me that cataclysmic events are what would allow you to test that, since it's unlikely that other events would mask them. In other models you'd look for similar behavior: any kind of single-variable "impulse" event that would allow you to test the model. Of course, if it's a system where you can control individual parameters, that makes testing a whole lot easier.
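
One way to picture that: hit a simple feedback model with a single short "impulse" in the forcing and watch the relaxation; the decay reveals the feedback strength. A toy sketch, with the model form and every number invented for illustration:

```python
# Toy impulse test on a linear feedback system (all values invented):
#   dT/dt = (F(t) - lam * T) / C   -- forcing minus feedback, over heat capacity.

C = 8.0      # effective heat capacity (yr W m^-2 K^-1), invented
lam = 1.2    # feedback strength (W m^-2 K^-1), invented
dt = 0.1     # time step (yr)

T = 0.0
temps = []
for i in range(600):
    t = i * dt
    forcing = 3.0 if t < 1.0 else 0.0   # a one-year "impulse", e.g. a big eruption
    T += dt * (forcing - lam * T) / C
    temps.append(T)

# After the impulse the response decays as exp(-lam * t / C), so fitting that
# decay is how a single-variable event lets you back out the feedback term.
print(f"peak response: {max(temps):.2f} K; e-folding time: {C / lam:.1f} yr")
```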


Why wouldn't such biases be caught during peer review and prior to publication, or by readers of the publication afterwards?

inow, I wasn't ignoring the question but taking time for background checks.

 

From what I've read, the researchers publishing the papers don't create the models; they use existing ones. For example, from a paper by Hansen: "We define the radiative forcings used in climate simulations with the SI2000 version of the Goddard Institute for Space Studies (GISS) global climate model." There are of course others.

 

From reading the websites of those who supply models it appears that they are evaluated internally by the supplier. So while the papers resulting from their use undergo peer review, the models themselves do not. I find this interesting. (BTW, reading the sites reminds me of reading retail sites espousing why their product is superior. It's more genteel and on a higher plane than the usual sales pitch, but it's there.)

 

Again I must stress I'm not after the GCMs. Is internal testing usual for all models, regardless of field?

 

Since the GISS model has been released into the public domain as EdGCM, I'm starting to think that the only way I'll find some sort of solid answer about model limits is to run the damn thing myself. :D Might wait till I've built the new system though; that way I'll have a lot more grunt.


Peer review of a paper that uses a model tests the model. Either the results are good or they're not. And Hansen works at GISS, so that's probably not a good example of a researcher using a model developed elsewhere.

 

And YMMV. I've developed models of atomic behavior in papers I've written. Nothing on the scale of GCMs, but any application of equations to predict or explain behavior is a model. And I've borrowed models from other papers, because those papers demonstrated that they work.


The question is a good one; I'm just comfortable that the answer is that "the models are continually refined and would be quickly rejected if they were built incorrectly or failed."

 

Specific to global climate models, I like this statement (it really strikes a chord with me):

 

http://www.skepticalscience.com/climate-models.htm

There is a notion that we should wait till models are 100% sure and get it perfectly right before we act on reducing CO2 emissions. If we waited for that, we would never act. Models are in a constant state of improvement as they include more processes, rely on fewer approximations and increase their resolution as computer power develops. The complex and non-linear nature of climate means there will always be refinements and subtleties to be included.

 

The main point is we know enough to act. Models have evolved to the point where they successfully predict long term trends and are always improving on predicting the more chaotic, short term changes. Multiple lines of evidence tell us global temperatures will change 3°C with a doubling of CO2. The uncertainty is ±1°C, but this uncertainty is decreasing (and the climate sensitivity of 3°C reaffirmed) as new studies refine our understanding.

 

Models don't need to be exact in every respect to give us an accurate overall trend and its major effects - and we have that now. If you knew there was a 10% chance you'd be in a car crash, you'd wear a seatbelt. In fact, if there was any possibility, you'd still do it. The IPCC consider it at least 90% sure humans are causing global warming. Considering the negative impacts of global warming, to wait for 100% certainty before acting is recklessly irresponsible.
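
Taking the quoted numbers at face value, the arithmetic is easy to reproduce. The sketch below uses the standard logarithmic scaling of warming with CO2 concentration; the sensitivity and uncertainty figures come from the quote, so treat this as arithmetic on the quote, not a climate projection:

```python
# Arithmetic on the quoted sensitivity: 3 +/- 1 K per doubling of CO2,
# with warming scaling logarithmically in the concentration ratio.

import math

sensitivity_k = 3.0    # K per doubling, from the quote
uncertainty_k = 1.0    # +/- K per doubling, from the quote

def warming(co2_ratio):
    """Warming in K for a given CO2 concentration ratio."""
    return sensitivity_k * math.log2(co2_ratio)

for ratio in (1.5, 2.0):
    doublings = math.log2(ratio)
    print(f"CO2 x {ratio}: {warming(ratio):.1f} +/- {uncertainty_k * doublings:.1f} K")
```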


This may go some way towards answering my initial question, but I'm not sure how the pieces fit yet. (And yes, it is a 2000 report.)

 

The control run of 15 GCMs is interesting in two ways: firstly the wide variety of assumed average temperatures, and secondly the "flatness" of the runs. (They do not include CO2 or solar variance.) Does this mean that a basic assumption of the models is that without solar variation the climate will not change? (Which is not too unreasonable.)

 

Figure 1 showing this result is here. The CERFACS model shows an increase of around 0.5 degrees over the 80-year runs, but the others, as far as I can tell, show zero variance or close to it.

Peer review of a paper that uses a model tests the model.

Good point.


The control run of 15 GCMs is interesting in two ways: firstly the wide variety of assumed average temperatures, and secondly the "flatness" of the runs. (They do not include CO2 or solar variance.) Does this mean that a basic assumption of the models is that without solar variation the climate will not change? (Which is not too unreasonable.)

 

From a physics standpoint, with no variation in the driving terms, only feedback, and a start from equilibrium, that sounds like a reasonable assumption.
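
That can be checked on a toy model: start at equilibrium, hold the forcing constant, and the feedback has nothing to respond to, so the run stays flat. A sketch with an invented linear model (all values illustrative):

```python
# Why a control run is flat: dT/dt = (F - lam * T) / C, with constant F
# and a start at equilibrium T = F / lam (all values invented).

C, lam, F = 8.0, 1.2, 1.2   # equilibrium temperature is F / lam = 1.0 K
T = F / lam                 # start exactly at equilibrium
dt = 0.1

drift = []
for _ in range(800):              # an 80-"year" control run
    T += dt * (F - lam * T) / C   # the tendency is zero at equilibrium
    drift.append(abs(T - F / lam))

print(f"max drift over the run: {max(drift):.2e} K")
# -> zero: with no variation in the driving term and an equilibrium start,
#    a flat control run is exactly what the physics says you should see.
```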

