
global warming: salvaging fact from heaps of BS



Really? Then how would you classify someone who says "Why should I make the data available to you, when your aim is to try and find something wrong with it.">:D

 

Sometimes people want the data so they can find something "wrong" with it, which is in fact not wrong but can be made to appear so to the average person.


Where was this said, and by whom? Context would help in addressing your inquiry.

It was said by Dr. Phil Jones of UEA (Hadley Centre) in response to a 2005 request from Warwick Hughes for the station data used as a basis for papers by Jones et al. Regardless of what people may think of his work at Surfacestations, Hughes is a published researcher in the field of climate and IMO therefore should have the same data access as other researchers. Or is it the practice of science to only share data with those who agree with you?

 

In a prior email Dr. Jones states;

However, it was hinted at to me a year or two ago that I should also not make the station data available.

Apparently referring to the WMO. However, he goes on to say;

Even if WMO agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.

 

The full correspondence is here.

 

There are other comments by climatologists that I have to view with alarm.

From Esper 2003;

Before venturing into the subject of sample depth and chronology quality, we state from the beginning, "more is always better". However as we mentioned earlier on the subject of biological growth populations, this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal. The ability to pick and choose which samples to use is an advantage unique to dendroclimatology.

(Emphasis mine.)

Pick and choose? Well, as another climatologist said; "You've got to pick cherries if you want to make cherry pie."

 

Jacoby's comments about why he and his co-authors used only 10 out of 36 samples for their 1989 paper are also troubling. They chose the 10 which were "judged to provide the best record of temperature-influenced tree growth". What?

If we get a good climatic story from a chronology, we write a paper using it. That is our funded mission. It does not make sense to expend efforts on marginal or poor data and it is a waste of funding agency and taxpayer dollars. The rejected data are set aside and not archived.

So by analogy, if you administered a drug to 36 patients and 26 showed no change and 10 did, you would ignore the 26 and just write your report on the 10?

 

I have no idea what this is, but it sure don't sound like science as espoused here.
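To see the statistical worry in concrete terms, here is a hedged toy simulation (Python) of selection bias. It is an illustration only, not a claim about what Jacoby and co-authors actually did: screen 36 series of pure noise against a temperature record, keep only the 10 that correlate best, and the average of the keepers will tend to look like a temperature signal even though, by construction, none exists.

```python
import numpy as np

rng = np.random.default_rng(0)
years, n_trees, n_keep = 100, 36, 10

# A made-up "temperature" record to screen against.
temperature = np.cumsum(rng.normal(size=years))

# 36 tree-ring series of pure noise: none carries a real temperature signal.
rings = rng.normal(size=(n_trees, years))

# Score every series against temperature and keep only the 10 best matches.
corrs = np.array([np.corrcoef(r, temperature)[0, 1] for r in rings])
chosen = rings[np.argsort(corrs)[-n_keep:]]

# The average of the "chosen" noise now tracks temperature far better than typical.
composite = chosen.mean(axis=0)
print(f"mean |r| over all 36 series : {np.abs(corrs).mean():.2f}")
print(f"r of the 10-series composite: {np.corrcoef(composite, temperature)[0, 1]:.2f}")
```

Whether a real selection step does this kind of damage depends on whether the rejection criterion is independent of the signal being reconstructed, which is exactly the information the quoted papers do not give.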

 

In the area of full disclosure of all data, methodology and code I have to stand firmly in the McIntyre camp, for without full disclosure how is replication possible?

 

I realise that many here don't trust/like SteveM; however, from my reading of his site, in many cases he is simply asking/demanding that climatologists adhere to the level of disclosure and proof that other disciplines expect. Heck, he's not asking for more than we ask people to provide in the Pseudoscience subforum.

 

It's this sort of thing that puts me in the "unconvinced" camp.:D


It's amazing the number of self-proclaimed "experts" there are willing to interpret the data the wrong way to support their B.S. assumptions.

 

But hey, why do a rigorous statistical analysis when you can simply look at what other non-experts, charlatans, think-tanks, and corporate/political interests are saying and draw conclusions based on that?


Not all data are created equal. If you collect data but realize that you had e.g. some faulty electronics in your data acquisition system, do you use the data? Do you make it available to others? Marginal/poor data is not synonymous with "data that doesn't support our conclusion," so your analogy isn't apt.


But hey, why do a rigorous statistical analysis

I believe this is the exact problem and it goes back to MBH 1998. The statistical analyses aren't as rigorous as you think.

 

Ian Jolliffe, who would appear to know what he is talking about, had this to say about the use of his name as a reference and about non-centred PCA as used by some paleoclimatologists.

Apologies if this is not the correct place to make these comments. I am a complete newcomer to this largely anonymous mode of communication. I’d be grateful if my comments could be displayed wherever it is appropriate for them to appear.

 

It has recently come to my notice that on the following website, tamino.wordpress.com/2008/03/06/pca-part-4-non-centered-hockey-sticks/ .. , my views have been misrepresented, and I would therefore like to correct any wrong impression that has been given. (Link to original article here.)

 

An apology from the person who wrote the page would be nice.

 

In reacting to Wegman’s criticism of ‘decentred’ PCA, the author says that Wegman is ‘just plain wrong’ and goes on to say ‘You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe, one of the world’s foremost experts on PCA, author of a seminal book on the subject. He takes an interesting look at the centering issue in this presentation.’ It is flattering to be recognised as a world expert, and I’d like to think that the final sentence is true, though only ‘toy’ examples were given. However there is a strong implication that I have endorsed ‘decentred PCA’. This is ‘just plain wrong’.

 

The link to the presentation fails, as I changed my affiliation 18 months ago, and the website where the talk lived was closed down. The talk, although no longer very recent – it was given at 9IMSC in 2004 - is still accessible as talk 6 at http://www.secamlocal.ex.ac.uk/people/staff/itj201/RecentTalks.html

It certainly does not endorse decentred PCA. Indeed I had not understood what MBH had done until a few months ago. Furthermore, the talk is distinctly cool about anything other than the usual column-centred version of PCA. It gives situations where uncentred or doubly-centred versions might conceivably be of use, but especially for uncentred analyses, these are fairly restricted special cases. It is said that for all these different centrings ‘it’s less clear what we are optimising and how to interpret the results’.

 

I can’t claim to have read more than a tiny fraction of the vast amount written on the controversy surrounding decentred PCA (life is too short), but from what I’ve seen, this quote is entirely appropriate for that technique. There are an awful lot of red herrings, and a fair amount of bluster, out there in the discussion I’ve seen, but my main concern is that I don’t know how to interpret the results when such a strange centring is used? Does anyone? What are you optimising? A peculiar mixture of means and variances? An argument I’ve seen is that the standard PCA and decentred PCA are simply different ways of describing/decomposing the data, so decentring is OK. But equally, if both are OK, why be perverse and choose the technique whose results are hard to interpret? Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

 

I am by no means a climate change denier. My strong impression is that the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics. Misrepresenting the views of an independent scientist does little for their case either. It gives ammunition to those who wish to discredit climate change research more generally. It is possible that there are good reasons for decentred PCA to be the technique of choice for some types of analyses and that it has some virtues that I have so far failed to grasp, but I remain sceptical.

 

Ian Jolliffe

 

(Link to original article added)

 

So who do you believe? The climatologist who says decentred PCA is an acceptable statistical tool, or the statistician who says it isn't?
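For readers unfamiliar with the centring issue, here is a minimal toy sketch (Python) of the difference being argued about. It assumes the non-standard centring is taken over only the last 30 points of each series (loosely analogous to centring on a modern calibration period); it is not a reconstruction of MBH's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_series = 200, 50

# A synthetic proxy network of trendless red noise: no common climate signal by construction.
proxies = np.cumsum(rng.normal(size=(n_years, n_series)) * 0.1, axis=0)

def leading_pc(data, centring_rows):
    """First principal component after subtracting each series' mean over centring_rows."""
    centred = data - data[centring_rows].mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[0]

pc1_full = leading_pc(proxies, slice(None))        # conventional column-centred PCA
pc1_short = leading_pc(proxies, slice(-30, None))  # "decentred": mean taken over the last 30 steps only

# Short centring up-weights series whose recent values sit far from their long-term mean,
# so the leading component can change markedly even in signal-free noise.
print(f"|correlation| between the two PC1s: {abs(np.corrcoef(pc1_full, pc1_short)[0, 1]):.2f}")
```

The sketch only shows that the choice of centring changes what PC1 picks up; whether that matters for a given reconstruction is the substance of the Wegman/Jolliffe argument quoted above.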

 

Reaper, I read a lot of papers on this topic and one thing that is obvious to me is that there would be far less argument if these "rigorous" statistical analyses were actually done (or at least checked) by statisticians. Would you not say that, in general, a climatologist would classify as a "non-expert" in the field of statistics?

 

So if non-experts call the statistical analysis "rigorous" or "robust" and statisticians call it "dubious", who do you go with? The expert or the non-expert?

 

Now can you provide links to show these rigorous statistical analyses are correct, or are you reduced to arm waving and name calling?

 

Swansont, while I agree all data are not created equal, there is a limit. If I were to core sample 36 trees and only 10 showed a temperature "signal", wouldn't this cast doubt on the idea that the core samples actually showed a viable temp signal at all? This isn't about dumping some of the data, but rejecting more than two thirds.

 

I'm not a scientist, and maybe this type of thing is actually acceptable in science. But in my world of business, this sort of thing would have a high probability of earning you a long stay at one of Her Majesty's resorts. The ones with high walls, barbed wire and guard towers.:D

 

Straight question. When was the last time you threw out two thirds of your data and based your conclusions on what was left?

 

Also, would you care to comment on;

The ability to pick and choose which samples to use is an advantage unique to dendroclimatology.

One of the first things I learnt here is that Cherry Picking = Bad Science.

 

I realise I may come across sometimes as in the denier camp, but I think of myself more as unconvinced. To that end I examine what is said and ask questions. While my questions may sometimes seem argumentative, I'm often simply trying to resolve a contradiction or confusion.

 

For example: why was Loehle 2007 roasted at RC for his choice of low frequency proxies? Moberg 2005 (a paper that RC seems to have no problem with) used 11 low frequency proxies, 9 of which were used by Loehle. To my simple mind, if it was wrong for Loehle to use those proxies, then it must also be wrong for Moberg. So why is their use in one paper reviled and in another acceptable?

 

Why do the Briffa MXD series on his website list values for the Omoloya River from 1400-1991 when the dataset that he uses, archived at the ITRDB, starts at 1496? Where did the extra 96 years come from? (For those who actually read the links, Omoloya is column 7 on the Briffa table.)

 

Like I said in the hurricane thread, the devil is often in the details.


Swansont, while I agree all data are not created equal, there is a limit. If I were to core sample 36 trees and only 10 showed a temperature "signal", wouldn't this cast doubt on the idea that the core samples actually showed a viable temp signal at all? This isn't about dumping some of the data, but rejecting more than two thirds.

 

I'm not a scientist, and maybe this type of thing is actually acceptable in science. But in my world of business, this sort of thing would have a high probability of earning you a long stay at one of Her Majesty's resorts. The ones with high walls, barbed wire and guard towers.:D

 

Straight question. When was the last time you threw out two thirds of your data and based your conclusions on what was left?

 

 

I don't recall a date, but I know it's happened. A laser servo unlocks and after that all of the data are worthless — it's just noise. You throw it out. I have the luxury of restarting the experiment and getting more data. If I had to go into the field, I would be stuck with using what was left.

 

The problem here is that there is no description of why some of the data are considered bad. There's not enough information to decide if something improper is going on.

 

There's a section of the book Lucy by Donald Johanson where he describes getting the date of the KBS tuff, a volcanic layer above where the famous fossils were found. They took a sample for K-Ar dating and got some number that was inconsistent with other information. When they investigated further, they concluded that the crystals they had collected were damaged, allowing some Ar to escape, skewing the results. They gathered more samples and examined them to ensure they were intact, and were able to come up with a good date, which, owing to the nature of the technique, was older than the earlier values.

 

This story is often used by creationists trying to discredit the age of the fossils. They claim that the data were discarded because they disagreed with the answer the scientists wanted, and that's a bunch of crap. The data were thrown out because they were bad.

 

I have no insight into the decisions made about dendrochronology data. But I will not assume deceit based solely on "we discarded some data" because it's not enough information on which one can base a decision.


I don't recall a date, but I know it's happened.

Fair enough. Such an event would fall into the "Uncommon, but it happens" category then? Not a trick question, I'm trying to establish a mental gauge to compare against.

But I will not assume deceit based solely on "we discarded some data" because it's not enough information on which one can base a decision.

I'm not assuming deceit either. However, there are discrepancies that require explanation. If data is discarded, a better explanation than "It was bad" should be included in the literature.

The problem here is that there is no description of why some of the data are considered bad.

That is exactly the problem. Is it unreasonable to ask for such a description?

 

In the example of the Briffa data, the values don't exist in the ITRDB but do exist in the gridded data. I fail to see how it is regarded as unreasonable to ask where the extra values came from. It is not a discrepancy that rings alarm bells with me, but the dogged resistance to providing an explanation.


I fail to see how it is regarded as unreasonable to ask where the extra values came from. It is not a discrepancy that rings alarm bells with me, but the dogged resistance to providing an explanation.

I agree that it is not unreasonable to ask, so why don't you ask? I mean, seriously... you're throwing around rhetoric like "dogged resistance to providing an explanation," when (in fact) you've never even asked for one.

 

Go ask. Then, if they reply with something like, "We won't be sharing that because we had nefarious ends," then let's roast the bastards.

 

 

The difference is that AFAICT the spinsters (that's what they are in my estimation) motivating your inquiries here asked them for the data itself, not the reason it wasn't used.


The difference is that AFAICT the spinsters (that's what they are in my estimation) motivating your inquiries here asked them for the data itself, not the reason it wasn't used.

 

And if the data are bad, it seems pretty obvious why you wouldn't want to share it — no valid conclusions can be drawn from it.


And if the data are bad, it seems pretty obvious why you wouldn't want to share it — no valid conclusions can be drawn from it.

 

It would seem to me to be reasonable to still present the information with the disclaimer that the data is bad because of whatever happened, and provide as much detail on this as possible. Perhaps someone else might be able to salvage the data where the original researchers did not have the expertise to do so. Perhaps other researchers trying to duplicate, or expand, the test results would then be able to avoid mistakes and therefore provide a more complete dataset. If the goal is truly science, then all the information should be available for further research, even data that appears "bad".

 

My point is that even if it appears that no conclusions can be drawn from the data, that isn't necessarily the case. And even if it is true that the data cannot be used to draw conclusions, the data could still have value.

 

I agree that it is not unreasonable to ask, so why don't you ask? I mean, seriously... you're throwing around rhetoric like "dogged resistance to providing an explanation," when (in fact) you've never even asked for one.

 

iNow, do we know whether or not this information has been requested by any bona fide scientist? It would seem to me to be logical for a researcher to reply to a legitimate request for data, "absolutely you can have it, but it is not valid data because..."

 

I'm not trying to question the motives of anyone here, but it bothers me when scientists do these kinds of things. It does give the appearance of falsifying the data by cherry picking the results. This, in my opinion, is more damaging in the public eye than whatever mudslinging might be caused by those pushing an agenda, as that is usually self-evident.

Edited by SH3RL0CK

It would seem to me to be reasonable to still present the information with the disclaimer that the data is bad because of whatever happened, and provide as much detail on this as possible. Perhaps someone else might be able to salvage the data where the original researchers did not have the expertise to do so. Perhaps other researchers trying to duplicate, or expand, the test results would then be able to avoid mistakes and therefore provide a more complete dataset. If the goal is truly science, then all the information should be available for further research, even data that appears "bad".

 

It's possible, and likely, perhaps, that there just isn't enough manpower to do that. And if the data are bad, it's possible the information simply isn't there — it's not a matter of massaging it. As with the dating example I gave earlier, and the damaged samples. The only option is to throw the results out and get more data.

 

And as with JohnB's earlier request — there are mounds of results I didn't even consider because I don't think of it as data. When we do the initial troubleshooting of our device, there are lots of results we would not want to share, because they are meaningless. For whatever reason, and there are dozens of possibilities, the signal is degraded from what the result should be. All that gets thrown out. It's not a matter of running it through some analysis to improve it. You can't.

Edited by swansont

It's possible, and likely, perhaps, that there just isn't enough manpower to do that. And if the data are bad, it's possible the information simply isn't there — it's not a matter of massaging it. As with the dating example I gave earlier, and the damaged samples. The only option is to throw the results out and get more data.

 

I can appreciate the problems with the manpower necessary to maintain data that apparently has no value. It is, however, somewhat difficult for me to reconcile, in that data storage is a very small part of the projects I am familiar with. The hard part is setting up the experiment, then analyzing it later. It doesn't seem like it would be too much effort to dump the data onto a private website, but maybe there is a tremendous amount of old data in obsolete formats that would have to be manually converted before sharing.

 

Or maybe, as you alluded to, the data has been discarded. This is certainly a possibility, especially if the information isn't there, but then this should be stated as the explanation for why someone can't have it.

 

But at the very least, the reasons as to why the data was lost might certainly be useful to someone trying to duplicate/verify/extend the work that was done so that they might possibly avoid the same loss of data.

 

And as with JohnB's earlier request — there are mounds of results I didn't even consider because I don't think of it as data. When we do the initial troubleshooting of our device, there are lots of results we would not want to share, because they are meaningless. For whatever reason, and there are dozens of possibilities, the signal is degraded from what the result should be. All that gets thrown out. It's not a matter of running it through some analysis to improve it. You can't.

 

Interesting... where I work we are not permitted to discard any data, even if we know the results are incorrect. We must instead record the reason for the incorrect data (even something as simple as a thermocouple becoming disconnected from a sample during the test) and store the data. Of course, this test would be repeated to get the correct results which would be in the formal report, but the erroneous data is always available, even when it is meaningless.


The issue isn't data storage. It's taking the time and effort to document — in a way an outside user would understand — why the data are unusable. Discard isn't the same as delete. You can keep the data, but just not feed it into any analysis from which you are going to draw conclusions. In terms of that analysis, the data have been discarded, even though they reside somewhere in your database.


I agree that it is not unreasonable to ask, so why don't you ask?

Good idea. I have. Specifically, I asked where the extra data in his dataset came from, given that the information is not in the original dataset archived at the ITRDB. I'll let you know what happens.

 

As to "spinsters", I see alot of spin on both sides of the argument. I try to ignore it.:D

When we do the initial troubleshooting of our device, there are lots of results we would not want to share, because they are meaningless.

Fully understandable. The only reason I can see for using such data is if the device is a prototype and the odd readings could be useful for troubleshooting later versions.

 

BTW, I've been dying to ask. What is your device?

 

More importantly, will it help those of us here at SFN achieve our dreams of world domination?:D


It's an atomic device.

Wonderful. We can certainly use one of those.>:D

Specifically, a clock.

To um, time our coup?:D

 

Seriously, that is amazing. One ten trillionth of an atmosphere. An atom fountain. To quote Zaphod, "That is amazingly amazing".

 

Thank you.


  • 2 weeks later...

Now that the melt season for Arctic ice is well and truly over, I thought I'd have a bit of a look at predictions v reality.

 

The Damocles Project ran through their models and predicted (not too badly);

The probability that in 2008 the ice extent will fall below the minimum from September 2007 is about 8%, the probability to fall below the minimum of 2005 (second lowest value in the last 20 years) is practically 100%.

However, they also hedged their bets;

With the atmospheric forcing from the extraordinary year 2007, the minimum sea ice extent occurring in September 2008 comes out even lower than it was in 2007 by 0.22 million km2.

This statement is actually above the first one, so unless you are willing to read the whole thing, you would get the impression they are predicting increased ice loss. A bit ambiguous, I think.

NSIDC blew it.

In their May update they said;

To avoid beating the September 2007 record low, more than 50% of this year’s first-year ice would have to survive; this has only happened once in the last 25 years, in 1996. If we apply the survival rates averaged over all years to current conditions, the end-of-summer extent would be 3.59 million square kilometers (1.39 million square miles). With survival rates similar to those in 2007, the minimum for the 2008 season would be only 2.22 million square kilometers (0.86 million square miles). By comparison the record low extent, set last September, was 4.28 million square kilometers (1.65 million square miles).

 

Well, I guess it's happened twice now.:D Remember all the hoopla about 2008 starting with so much first year ice that the North Pole would probably be ice free? One could say, with tongue firmly in cheek, that the recovery after 2007 was unprecedented.:D
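To make the survival-rate arithmetic in the NSIDC quote concrete, here is a minimal sketch; the first-year and multi-year figures in it are invented placeholders, not NSIDC's actual inputs.

```python
# Hypothetical ice inventory at the start of the melt season (million km^2).
multi_year_extent = 2.0   # older ice assumed to survive the summer
first_year_extent = 5.0   # thin first-year ice whose survival is uncertain

def projected_september_minimum(survival_rate):
    """Projected minimum extent if the given fraction of first-year ice survives the melt season."""
    return multi_year_extent + survival_rate * first_year_extent

for rate in (0.25, 0.50, 0.75):
    print(f"{rate:.0%} first-year survival -> {projected_september_minimum(rate):.2f} million km^2")
```

NSIDC's 3.59 and 2.22 million square kilometre figures correspond to feeding the long-term-average and the 2007 survival rates, respectively, into this kind of calculation with the real 2008 ice inventory.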

 

What I think is interesting is that if we look at the NSIDC "2008 year in review", it shows the winter sea ice extent to be only marginally below normal, around 6%. The lack of ice growth from December 12-19 is still apparently a bit of a puzzle. (As in, there seem to be a few possible reasons, but nothing definitive.)

 

If we look at Cryosphere Today (I won't post the graph as it is very wide), just eyeballing it, it would appear the Arctic underwent a phase shift of some kind around 1997, with the increased decline in Arctic SIE really beginning in 2003/2004.

 

If we also look at their Global Sea Ice Anomaly, the red line at the bottom, we see a similar thing. The decrease didn't really kick in until 2003.

 

We are currently at around the 1979 figure for total extent. (Satellite measurements only started in 1979, so that's why the graph starts there.)

 

2004 was average; 2005, 2006 and 2007 were below average; and 2008 was average and then below average again. There is still an apparent decreasing trend in global minimums; however, in the years where global maximums fail to reach average (1986 and 1988), they miss it by about the same amount that other years were above average. Except for 2004/5, the mid-year amounts since 2002 are consistently below average.

 

I do sometimes wonder about the accuracy of Cryosphere as this graph is meant to show Sea Ice Extent from 1900 to 2007. It's uniform from 1900-1950 and then shows a decline. Where on Earth did they get their data? Satellites began in the '70s, and we know the North West Passage was open in the 1930s. How can they say that the ice extent was the same for the 1930s as the 1900s? Odd.

 

This post is not meant to further the cause of either side of the current debate, but to provide some information and perhaps engender some debate aside from AGW.

 

Does anybody else see the phase shift in the Northern Hemisphere? Or are my eyes playing tricks? If you see it, what do you think caused it?

 

Why is the increased decline only apparent after 2003? Why didn't it kick in sooner?


I do sometimes wonder about the accuracy of Cryosphere as this graph is meant to show Sea Ice Extent from 1900 to 2007. It's uniform from 1900-1950 and then shows a decline. Where on Earth did they get their data? Satellites began in the '70s, and we know the North West Passage was open in the 1930s. How can they say that the ice extent was the same for the 1930s as the 1900s? Odd.

 

That graph is fed from this data:

http://arctic.atmos.uiuc.edu/SEAICE/timeseries.1870-2008

 

 

On their Documentation page, it says the following:

 

Technical Overview

Arctic Monthly Sea Ice Concentrations: 1870 - 1998

 

Mid-month values of sea ice concentration for the Arctic are digitized on a standard 1-degree grid (cylindrical projection) to provide a "relatively uniform set of sea ice extent for all longitudes, as a basis for hemispheric scale studies of observed sea ice fluctuations" (Walsh, 1978).

 

These data are a compilation of data from many sources integrated into a single gridded product by John Walsh and Bill Chapman, University of Illinois. The sources of data for each grid cell have changed over the years from infrequent land/sea observations, to observationally derived charts, to satellite data for the most recent decades. Temporal and spatial gaps within observed data are filled with a climatology or other statistically derived data.

 

Please note that large portions of the pre-1953, and almost all of the pre-1900 data is either climatology or interpolated data and the user is cautioned to use this data with care (see “Expert user guidance”, below).

 

 

From that "Expert User Guidance" section referenced above:

The temporal and spatial inhomogeneities in the data sources that went into the construction of this dataset require that any historical analysis of the data is done with caution and an understanding of the limitations of the data.

 

There are three periods for which the sources of the data change fundamentally:

 

1972-1998: Satellite period - hemispheric coverage, state-of-the-art data accuracy

1953-1971: Hemispheric observations - complete coverage from a variety of sources. The observational reliability varies with each source, but is generally accurate.

1870-1952: Climatology with increasing amounts of observed data throughout the period.

 

Because most of the direct observations of sea ice (1870-1971 period) are from ships at sea, they are generally the most complete near the ice edge. The conditions north of the ice edge are often assumed to be 100% covered during this period. The satellite era has shown otherwise with concentrations between 70-90% frequently occurring well north of the ice edge in the post-1972 data. For this reason, we recommend using a measure of ice extent, when doing historical comparisons of hemispheric sea ice coverage for periods which include data prior to 1972. This is done by assuming that all grid points with ice concentrations greater than some threshold (15% is commonly used) is assumed completely covered by sea ice.

 

Regional or grid point analyses may benefit by using the concentration data as it is distributed but the completeness of the historical record will vary regionally. Please contact Bill Chapman (chapman@atmos.uiuc.edu) if you have a question regarding the inventory of data included in this dataset for a specific region.

 

 

 

Contrary to your interpretation of this being "odd," I see this as perfectly normal and rather clearly expressed. Hope that helps.
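To make the extent measure recommended in that guidance concrete, here is a minimal sketch; the grid values and cell areas are invented for illustration.

```python
import numpy as np

# A toy 3x3 grid of ice concentrations (fraction of each cell covered by ice).
concentration = np.array([[0.95, 0.80, 0.10],
                          [0.60, 0.14, 0.00],
                          [0.20, 0.05, 0.00]])
cell_area = np.full_like(concentration, 1000.0)      # km^2 per grid cell (hypothetical)

threshold = 0.15                                     # the 15% cut-off mentioned in the documentation
extent = cell_area[concentration > threshold].sum()  # cells above the threshold count in full
area = (cell_area * concentration).sum()             # concentration-weighted coverage, for comparison

print(f"ice extent: {extent:.0f} km^2   ice area: {area:.0f} km^2")
```

Extent is the more robust measure for the pre-1972 record because, as the guidance notes, early observers often logged everything north of the ice edge as 100% covered, which inflates concentration but barely affects a thresholded extent.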


Odd in the sense that they don't show loss in the 1940s yet we know that it occurred because the North West Passage opened.

 

When the passage opened recently, it was due to ice extent loss, so presumably it opened 70 years ago for the same reason. Ergo, there must have been ice loss in that period. The graph doesn't show it.

 

Unless there was massive ice loss above North America and an increase in ice extent throughout the rest of the Arctic, and that just doesn't make sense.

 

(And yes, I did read those articles when I got the data.:D)

 

I d/loaded their data to have a look at it. Unfortunately I don't have a program capable of opening an 833 MB ASCII file. Any suggestions? R, maybe?

 

It's one hell of a big file.:D
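For anyone hitting the same wall, here is a minimal sketch of one way to process a file that size in chunks instead of opening it whole (Python shown; R can do the same sort of thing by reading in pieces). The file name and column layout below are placeholders, not the dataset's documented format.

```python
import pandas as pd

columns = ["year", "month", "lat", "lon", "concentration"]   # hypothetical layout

sums, counts = {}, {}
for chunk in pd.read_csv("gridded_seaice.txt", sep=r"\s+", header=None,
                         names=columns, chunksize=1_000_000):
    grouped = chunk.groupby("year")["concentration"]
    for year, total in grouped.sum().items():
        sums[year] = sums.get(year, 0.0) + total
    for year, n in grouped.count().items():
        counts[year] = counts.get(year, 0) + n

# One mean concentration per year, built without ever holding the whole file in memory.
yearly_mean = {year: sums[year] / counts[year] for year in sorted(sums)}
print(list(yearly_mean.items())[-5:])
```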


Personally, I don't find any of this convincing....

 

ANDERSON, J.B., and Andrews, J.T. 1999. Radiocarbon constraints on ice sheet advance and retreat in the Weddell Sea, Antarctica. Geology 27: 179-182.

 

BALTUCK, M., Dickey, J., Dixon, T., and HARRISON C.G.A. 1996. New approaches raise questions about future sea-level change. EOS 1: 385–388.

 

BOND, G., Kromer, B., Beer, J., Muscheler, R., Evans, M.N., Showers, W., Hoffmann, S., Lotti-Bond, R., Hajdas, I., and Bonani, G. 2001. Persistent solar influence on North Atlantic climate during the Holocene. Science 294: 2130-2136.

 

BRIFFA, K. R. 2000. Annual Climate Variability in the Holocene: Interpreting the Message of Ancient Trees. Quaternary Sci. Rev. 19: 87-105.

 

CAILLON, N., Severinghaus, J.P., Jouzel, J., Barnola, J.-M., Kang, J. and Lipenkov, V.Y. 2003. Timing of atmospheric CO2 and Antarctic temperature changes across Termination III. Science 299: 1728-1731.

 

CESS, R.D., Zhang, M.-H., Potter, G.L., Barker, H.W., Colman, R.A., Dazlich, R.A., Del Genio, A.D., Esch, M., Fraser, J.R, Galin, V., Gates, W.L., Hack, J.J., Ingram, W.J., Kiehl, J.T., Lacis, A.A., LeTreut, H., Li, Z.-X., Liang, X.Z., Mahfouf, J.-F., McAvaney, B.J., Meleshko, K.P., Morcrette, J.-J.,Randall, D.A., Roeckner, E., Royer, J.-F., Sokolov, A.P., Sporyshev, P.V., Taylor, K.E., Wang, W.-C., and Wetherald, R.T. 1993. Uncertainties in CO2 radiative forcing in atmospheric general circulation models. Science 262: 1252-1255.

 

CHEN, L., et al. 2003. Characteristics of the heat island effect in Shanghai and its possible mechanism. Advances in Atmospheric Sciences 20: 991-1001.

 

CHOY, Y., et al. 2003. Adjusting urban bias in the regional mean surface temperature series of South Korea, 1968-99. International Journal of Climatology 23: 577-591.

 

CHRISTY J. R., W. B. Norris, R. W. Spencer, J. J. Hnilo (2007); Tropospheric temperature change since 1979 from tropical radiosonde and satellite measurements J. Geophys. Res., 112, D06102, doi:10.1029/ 2005JD006881

 

COMISO, J.C. 2000. Variability and trends in Antarctic surface temperatures from in-situ and satellite infrared measurements. Journal of Climate 13: 1674-1696.

 

CHYLEK, P., et al. 2004. Global warming and the Greenland ice sheet. Climatic Change 63: 201-221.

 

DAVIS, C.H., et al. 2005. Snowfall-driven growth in East Antarctic ice sheet mitigates recent sea-level rise. SciencExpress, 19 May 2005.

 

DE LAAT, A.T.J., et al. 2004. Industrial CO2 emissions as a proxy for anthropogenic influence on lower tropospheric temperature trends. Geophysical Research Letters 31: 10.1029/2003GLO19024.

 

DEMING, D. 1995. Climatic warming in North America: analysis of borehole temperatures. Science 268: 1576-1577.

 

DEMING, D. 2005: Global warming, the politicization of science, and Michael Crichton's ‘State of Fear’. Journal of Scientific Exploration, 19: no.2.

 

DICKINSON, R.E. 1982. In Carbon Dioxide Review [Clark, W.C., ed.]. Clarendon, New York, 1982, 101-133.

 

DORAN, P.T., Priscu, J.C., Lyons, W.B., Walsh, J.E., Fountain, A.G., McKnight, D.M., Moorheat, D.L., Virginia, R.A., Wall, D.H., Clow, G.D., Fritsen, C.H., McKay, C.P. and Parsons, A.N. 2002. Antarctic climate cooling and terrestrial ecosystem response. Nature, 415, 517-520.

 

ETHERIDGE, D.M., et al. 1996. Natural and anthropogenic changes in atmospheric CO2 over the last 1,000 years from air in Antarctic ice and firn. Journal of Geophysical Research 101: 4115-4128.

 

FISCHER E. M. et al., Contribution of land-atmosphere coupling to recent European summer heat waves (2007), Geophys. Res. Lett., 34, L06707, doi:10.1029/2006GL029068.

 

GROVE, J. M. 1996. The century time-scale. In Time-scales and Environmental Change (eds. Driver and Chapman), Routledge, London 1996, 39-87.

 

GROVE, J. M.. 2001. The onset of the Little Ice Age. In History and Climate-memories of the Future? (eds. Jones, Ogilivie, Davis, and Briffa), Kluwer, New York 2001, 153-185.

 

HABERZETTL, T., Fey, M., Lucke, A., Maidana, N., Mayr, C., Ohlendorf, C. Schabitz, F., Schleser, G.H., Wille, M., and Zolitschka, B. 2005. Climatically-induced lake level changes during the last two millennia as reflected in sediments of Laguna Potrok Aike, southern Patagonia (Santa Cruz, Argentina). Journal of Paleolimnology 33: 283-302.

 

HANSEN, J., Nazarenko, L., Ruedy, R., Sato, M., Willis, J., Del Genio, A., Koch, D., Lacis, A., Lo, K., Menon, S., Novakov, T., Perlwitz, J., Russell, G., Schmidt, G., and Tausnev, N. 2006. Earth’s energy imbalance: confirmation and implications. Science 308: 1431-1434.

 

HEMER, M.A. and Harris, P.T. 2003. Sediment core from beneath the Amery Ice Shelf, East Antarctica, suggests mid-Holocene ice-shelf retreat. Geology 31: 127-130.

 

HOYT, D.V., and Schatten, K.H. 1993. A discussion of plausible solar irradiance variations, 1700-1992. Journal of Geophysical Research, 98: 18895-18906.

 

HU, F.S., Ito, E., Brown, T.A., Curry, B.B., and Engstrom, D.R. 2001. Pronounced climatic variations in Alaska during the last two millennia. Proceedings of the National Academy of Sciences 98: 10552-10556

 

HUANG, Shaopeng, Pollack, H.N., and Shen, P.Y. 1997. Late Quaternary temperature changes seen in worldwide continental heat-flow measurements. Geophysical Research Letters 24: 1947-1950.

 

HUFFMAN, T.N. 1996. Archaeological evidence for climatic change during the last 2000 years in southern Africa. Quaternary International 33: 55-60.

 

JOHANNESSEN, O.M., et al. 2005. Recent Ice-Sheet Growth in the Interior of Greenland, Sciencexpress, 20 October 2005.

 

JONES, P.D., Briffa, K.R., Barnett, T.P., & Tett, S.F.B. 1998: High-Resolution Paleoclimatic Records for the Last Millennium: Interpretation, Integration and Comparison with General Circulation Model Control-run Temperatures. Holocene 8: 455–471.

 

JOUGHIN, I., et al. 2002. Positive mass balance of the Ross ice streams, West Antarctica. Science, 295, 476-480

 

KALNAY, E., et al. 2003. Impact of urbanization and land use change on climate. Nature, 423: 528-531.

 

KERR, R. A., 2006, Atlantic Conveyor Belt Hasn't Slowed Down. Science, 314, 1064, doi: 10.1126/science.314.5802.1064a.

 

KHANDEKAR, M.L., Murty, T.S., and Chittibabu, P. 2005. The global warming debate: a review of the state of science. Pure and Applied Geophysics 162: 1557-1558.

 

KHIM, B.-K. et al. 2002. Unstable climate oscillations during the Late Holocene in the Eastern Bransfield Basin, Antarctic Peninsula. Quaternary Research 58: 234-245.

 

KRABILL, W., et al. 2005. Greenland ice sheet: high-elevation balance and peripheral thinning, Science 289: 428-430.

 

LAMB, H. 1965. The Early Medieval Warm Period and its Sequel, Paleogeography, Paleoclimatology & Paleoecology 1: 13–37.

 

LAMB, H. H. 1972a. Climate: Present, Past and Future. 3 vols. (Methuen, London, 1972).

 

LAMB, H. H. 1972b. Weather, Climate and Human Affairs: A Book of Essays and other Papers (Routledge, London, 1972).

 

LAMB, H., et al. 2003. Vegetation response to rainfall variation and human impact in central Kenya during the past 1100 years. The Holocene 13: 285-292.

 

LANDSCHEIDT, T. 2003. New Little Ice Age instead of global warming? Energy & Environment 14: 2, 327–350.

 

LEAN, J., Beer, J., and Bradley, R.S. 1995. Reconstruction of solar irradiance since 1610: implications for climate change. Geophysical Research Letters, 22: 3195-3198.

 

LIU, J, et al. 2004. Interpretation of recent Antarctic sea-ice variability. Geophysical Research Letters 31: 10:1029/2003 GLO18732.

 

LYMAN, John M., Willis, J.K., and Johnson, G.C. 2006. Recent cooling of the upper ocean. Geophysical Research Letters, 33: L18604, doi:10.1029/2006GL027033,

 

MARTINEZ-CORTIZAS, A., Pontevedra-Pombal, X., Garcia-Rodeja, E., Novoa-Muñoz, J.C., and Shotyk, W. 1999. Mercury in a Spanish peat bog: archive of climate change and atmospheric metal deposition. Science 284: 939-942.

 

McINTYRE, Steven and McKitrick, Ross. 2003. Corrections to the Mann et al. (1998) proxy database and Northern Hemisphere average temperature series. Energy & Environment 14: 751-771.

McKitrick, R. R., Michaels P. J., Quantifying the influence of anthropogenic surface processes and inhomogeneities on gridded global climate data: JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 112, D24S09, doi:10.1029/2007JD008465, 2007.

McKENDRY, Ian G. 2003. Applied Climatology. Progress in Physical Geography 27: 4, 597-606.

 

MONNIN, E., Indermühle, A., Dällenbach, A., Flückiger, J, Stauffer, B., Stocker, T.F., Raynaud, D. and Barnola, J.-M., 2001. Atmospheric CO2 concentrations over the last glacial termination. Science 291: 112-114.

 

MULLER, Richard. 2004. Global Warming Bombshell. Article in MIT Technology Review, can be seen at http://www.technologyreview.com/articles/04/10/wo_...

 

NCDC. 2006. Global annual land and ocean mean temperature anomalies. Data should be available at ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/annual.land_and_ocean.90S.90N.df_1901-2000mean.dat.

 

NOON, P.E., et al. 2003. Oxygen-isotope (δ18O) evidence of Holocene hydrological changes at Signy Island, maritime Antarctica. The Holocene 13: 251-263.

 

OGILVIE, A. E., and JONSSON, T. 2001. Little Ice Age – a perspective from Iceland. Climatic Change 48: 9–52.

 

PARKINSON, C.L. 2002. Trends in the length of the southern ocean sea ice season, 1979-99. Annals of Glaciology 34: 435-440.

 

PETIT, J.R. et al. 1999. Climate and atmospheric history of the past 420,000 years from the Vostok Ice Core, Antarctica. Nature 399: 429-436.

 

POLISSAR, P.J., Abbott, M.B., Wolfe, A.P., Bezada, M., Rull, V., and Bradley, R.S. 2006. Solar modulation of Little Ice Age climate in the tropical Andes. Proceedings of the National Academy of Sciences 10.1073/pnas.0603118103.

 

PUDSEY, C.J., Murray, J.W., Appleby, P., and Evans, J. 2006. Ice shelf history from petrographic foraminiferal evidence, Northeast Antarctic Peninsula. Quaternary Science Reviews, 25, 2357-2379.

 

RAMANATHAN, V., Cicerone, R., Singh, H., and Kiehl, J. 1985. Trace gas trends and their potential role in climate change. J. Geophys. Res., 90, 5547-5566.

 

REIN, B., et al. 2005. El Niño variability off Peru during the last 20,000 years. Paleoceanography 20: 10.1029/2004PA001099.

 

REITER, Paul. From Shakespeare to Defoe: Malaria in England in the Little Ice Age. CDC, Vol. 6, No. 1, 2000.

 

ROHM, R. 1998. Urban bias in temperature time series – a case study for the city of Vienna, Austria. Climatic change 38: 113-128.

 

SANSOM, J. 1989. Antarctic Surface Temperature Time Series. Journal of Climate 2: 1164-1172.

 

SCHATTEN, K.H. and Tobiska, W.K. 2003. Solar Activity Heading for a Maunder Minimum? Bulletin of the American Astronomical Society 35: 3, 6.03.

 

SOLANKI, S. K. and Fligge, M. 1998. Solar irradiance since 1874 revisited. Geophysical Research Letters, 25: 341-344.

 

SOLANKI, S.K., Usoskin, I.G., Kromer, B., Schüssler, M. and Beer, J. 2005. Unusual activity of the Sun during recent decades compared to the previous 11,000 years. Nature 436: 174 (14 July 2005) doi: 10.1038/436174b

 

SOON et al. 1996. Inference of solar irradiance variability from terrestrial temperature changes, 1880-1993 – an astrophysical application of the sun-climate connection. The Astrophysical Journal 472: 891-902.

 

SOON, W. and Baliunas, Sallie. 2003. Proxy Climate and Environmental Changes of the Past 1000 Years, Climate Res. 23: 89–110.

 

STREUTKER, D.R. 2003. Satellite-measured growth of the urban heat island of Houston, Texas, Remote Sensing of Environment 85: 282-289.

 

SVENSMARK, H., Pedersen, J, et al. 2006. Experimental evidence for the role of ions in particle nucleation under atmospheric conditions, Proceedings of the Royal Society A, London, October 2006.

 

THOMPSON, D.W.J., et al. 2002. Interpretation of recent Southern Hemisphere climate change, Science 295: 895-899.

 

THOMPSON, L. G., Yao, T. E., Mosley-Thompson, E., Davis, M. E., Henderson, K. A. & Lin, P. N. 2000. A high-resolution Millennial Record of the South Asian Monsoon from Himalayan Ice Cores. Science 289: 1916–1919.

 

THOMPSON, L.G., et al. 2003. Tropical glacier and ice core evidence of climate change on annual to millennial time scales. Climatic Change 59: 137-155.

 

TYSON, P.D., et al. 2000. The Little Ice Age and medieval warming in South Africa. South African Journal of Science 96: 121-126.

 

UN. 1996. The Science of Climate Change: Contribution of Working Group I to the Second Assessment Report of the IPCC (eds. J. T. Houghton et al.), Cambridge University Press, London, 1996.

 

UN. 2001. Climate Change, The Scientific Basis, Cambridge University Press, London, 2001.

 

VAN DORLAND, Rob. 2005. Article in Natuurwetenschap & Techniek, Netherlands, 27 Feb. 2005.

 

VECCHI G. A., B. J. Soden (2007), Increased tropical Atlantic wind shear in model projections of global warming. Geophys. Res. Lett., 34, L08702, doi:10.1029/ 2006GL028905

 

VILLALBA, R. 1990. Climatic Fluctuations in Northern Patagonia during the last 1000 Years as Inferred from Tree-ring Records. Quat. Res. 34: 346–360.

 

VILLALBA, R. 1994: Tree-ring and Glacial Evidence for the Medieval Warm Epoch and the Little Ice Age in Southern South America. Climate Change 26: 183–197.

 

VON STORCH, Hans; Zorita, Eduardo; Jones, Julie M.; Dimitriev, Yegor; González-Rouco, Fidel; and Tett, Simon F.B. 2004. Reconstructing past climate from noisy data. Science 306: 679-682.

 

VYAS, N.K., et al. 2003. On the secular trends in sea ice extent over the Antarctic region based on OceanSat-1 MSMR observations. International Journal of Remote Sensing 24: 2277-2287.

 

WILLIAMS, P.W., et al. 2004. Speleothem master chronologies: combined Holocene 18O and 13C records from the North Island of New Zealand and their palaeoenvironmental interpretation. The Holocene 14: 194-208.

WILLSON, R.C., and Mordvinov, A.V. 2003. Secular total solar irradiance trend during solar cycles 21-23. Geophysical Research Letters 30: 5, 1199, doi:10.1029/2002GL016038.

 

WILSON, A.T., et al. 1979. Short-term climate change and New Zealand temperatures during the last millennium. Nature 279: 315-317.

 

WINGHAM, D.J., A. Shepherd, A. Muir, and G.J. Marshall. 2006: Mass balance of the Antarctic ice sheet. Philosophical Transactions of the Royal Society A, 364, 1627-1635 (cf Joughin & Tulaczyk 2002: Positive Mass Balance of the Ross Ice Streams, West Antarctica. Science 18 Vol. 295. No. 5554, pp. 476 – 480).


TrueBeliever - You just copy/pasted 83 references, and all you said was "I don't personally find any of this convincing," yet you didn't say which conclusions you have a problem with or why. I speculate that either you don't actually understand what is being presented in those studies, or that you are simply dismissing out of hand any data that is contrary to your worldview (or, more likely, some combination of both).

 

Unless you're going to be more specific, your post is truly meaningless. I'd like to hear more about your objections, as that's what good science is about... precision and specific challenges to specific points. Unfortunately, as it stands now, your post above is mostly a garbage one-sentence opinion that offers nothing to the topic or our collective dialog.


Hey, at least he's got both sides of the fence covered.:D

 

He's got Briffa, Jones, Hughes, Schmidt, Hansen and Thompson, AND McIntyre, McKitrick, Soon, Baliunas, Spencer and Michaels.:D

 

Mind you, he should get up to speed; Cess et al. 1993 is a bit old for discussing GCMs.:D

 

TrueBeliever, what don't you find convincing?

 

(And thanks Bear's Key, I'll give it a go.)

 

PS. For those interested, I tried it and EditPad opened the file perfectly. (10,797,346 lines of data. Sweet.)

Edited by JohnB
