Saturday, July 3, 2010

Fastest disinformer retraction: Watts says Goddard’s “Arctic ice increasing by 50,000 km² per year” post is “an example of what not to do when graphing trends”


Plus science blogosphere roundup: "Fred Pearce is a rubbish journalist" and Eric Steig on PNAS

by Joseph Romm, Climate Progress, July 3, 2010

Ironygate:  The king of cherry picking just threw the prince (jester?) of cherry picking under the bus for picking some really, really bad cherries:
Thank you Anthony Watts for leaving this nonsense on your blog and acknowledging it as “an example of what not to do when graphing trends, to illustrate that trends are very often slaves to endpoints” — as opposed to 99% of your other posts, which you continue to embrace even though they are also examples of what not to do, such as these recent absurdities:

“The death spiral continues, with Arctic ice extent and thickness nearly identical to what it was 10 years ago.” (5/31)
“Over the last three years, Arctic Ice has gained significantly in thickness….  Conclusion : Should we expect a nice recovery this summer due to the thicker ice? You bet ya.” (6/2)
“Arctic Basin ice generally looks healthier than 20 years ago.” (6/23)
Those laughable conclusions should have been a warning sign to Watts not to keep posting Goddard’s nonsense — except, of course, nonsensical graphing, cherry picking endpoints, and generally making up stuff is what Watts specializes in (see links below).

Since I generally don’t read WattsUpWithThat, except when I am doing a post on Arctic ice and looking for a good LOL, I wouldn’t have actually caught this if I hadn’t been reading the comments at Tamino, one of which notes:
BTW, “the mistake was noted by Steven immediately after publication,” is Watts-speak for “Ian H pointed it out in comment #3 within maybe 15 minutes after the posting.”
Tamino has a good post debunking Watts/Goddard on Arctic ice thickness (here).

This CP post originally began as a roundup of science blogs, with this opening:  The bad news is there’s too much damn stuff to report on or debunk.  The good news is there’s a lot of good analysis and debunking in the science blogosphere.

Who knew that the debunking would be by the disinformers themselves!

Here’s more:
If anyone needs evidence that the “reporting” crutch of He Said, She Said is still being employed by stenographers masquerading as journalists, here’s Fred Pearce in New Scientist.
No serious effort is made to inform the reader which of the parties is actually supported by reality. Note the weasel wording and false balance throughout, e.g.: “some of the researchers involved take issue with a suggestion that greenhouse gases are not primarily responsible for global warming”; “Foster’s team concludes… But de Freitas says”; “The vitriol continues”; etc.
It’s a stereotypical example of the “on the one hand, on the other” style that has so distorted the public’s understanding of the issue of anthropogenic climate change.
It’s 2010, FFS. This article should be held up as a model for how reporting should not be done.
Last week, CEI’s Christopher Horner, writing at Pajamas Media, claimed that Gabriel Calzada (author of a dodgy study claiming that Spain’s green energy program had cost many jobs) had been mailed a dismantled bomb by a solar energy company. As Ed Darrell observes, the story is preposterous (even without considering the source), but a whole lot of self-styled global warming skeptics uncritically accepted it. And even after the story was completely retracted, folks like Anthony Watts and Andrew Bolt did not make corrections.
… a new paper in PNAS (Anderegg et al, 2010) that tries to assess the credibility of scientists who have made public declarations about policy directions….
It is completely legitimate to examine the credentials of people making public statements (on any side of any issue) – especially if they make a claim to scientific expertise.  The database that Jim Prall has assembled allows anyone to look this expertise up – and since any new source of information is useful, we think this can be generally supported. Prall’s database has a number of issues of course, most of them minor but some which might be considered more problematic: it relies on citation statistics, which have well-known problems (though mostly across fields rather than within them), it uses Google Scholar rather than the standard (ISI) citation index, and there are almost certainly some confusions between people with similar names. Different methodologies could be tried – ranking via h-index perhaps – but as long as small differences are not blown out of proportion, the rankings he comes up with appear reasonable….

So, do the climate scientists who have publicly declared that they are ‘convinced of the evidence’ that emission policies are required have more credentials and expertise than the signers of statements declaring the opposite? Yes. That doesn’t demonstrate whose policy prescription is correct, of course, and it remains a viable (if somewhat uncommon) position to acknowledge that despite most climate scientists agreeing that there is a problem, one still might not want to do anything about emissions. Does making a list of signers of public statements, or authors of the IPCC reports, constitute a ‘delegitimization’ of their views? Not in the slightest. If someone’s views are widely discounted, it is most likely because of what they have said, not who they sign letters with.

However, any attempt to use political opinions (as opposed to scientific merit) to affect funding, influence academic hiring, launch investigations, or personally harass scientists has no place in a free society – from whichever direction that comes. In this context, we note that once the categorization goes beyond a self-declared policy position, one is on very thin ice because of the danger of ‘guilt by association’. For instance, one of us (Eric) feels more strongly that some of Prall’s classifications in his dataset cross a line (for more on Eric’s view, see his comments at Dotearth).

More on PNAS soon.

Related posts on Watts:
