Crossposted at WUWT (with some improvements here to the intro and postscript)
After finalizing a long post on John Cook's crowd-sourced consensus-rating survey (to be titled "I take Cook's survey so you don't have to"), I submitted my completed survey to Cook's website and received an automated response that included a key new bit of information, suggesting what shenanigan Cook likely has planned.
I am not going to rewrite the post because it describes why I gave the ratings I did to each abstract in the random survey that Cook's website compiled for me. The likely shenanigan has to do with how the rating rules are applied, so I want it to be clear that what I wrote on that subject was innocent of any awareness of how Cook might be biasing the survey. I am just adding this brief introduction.
The new information (new to me) is that Cook seems to be claiming to have in his back pocket a correct answer to what each of the ratings should be. From the automated response:
Of the 10 papers that you rated, your average rating was 3.1 (to put that number into context, 1 represents endorsement of AGW, 7 represents rejection of AGW and 4 represents no position). The average rating of the 10 papers by the authors of the papers was 2.6.
It seems impossible that Cook could actually have gotten large numbers of authors to apply his rating scale to their papers. Maybe this is why he drastically reduced the number of included papers from the 12,000 mentioned in the survey to only those with abstracts shorter than 1000 characters (as discovered by Lucia at The Blackboard). Maybe the full reduction is to papers that not only have short abstracts but were also self-rated by their authors. If so, there is a clear selection bias and the abstracts in Cook's sample are not representative of the literature [commenters say that this limitation to self-rated abstracts has also been verified at The Blackboard].
Supposing that Cook really does have author ratings for all the papers used in the survey, there is a major slip between the cup and the lip. The authors are described as rating the papers, while surveyors are asked to rate only the abstracts. This is critical because, according to Cook's rating rules, the ratings are supposed to be based on what is or is not specifically mentioned. Obviously full papers discuss a lot more than abstracts do, especially unusually short abstracts. Thus, if everyone is applying the rules correctly, surveyors' ratings should be systematically higher (assessing less conformity with consensus assumptions) than authors' ratings.
This stood out to me because I had just spent several hours describing how I had to rate abstract after abstract "neutral," even though the paper clearly proceeded from "consensus" presumptions, just because the abstract itself had not directly mentioned those assumptions. The full papers might well have, making the author ratings and the surveyor ratings apples and oranges.
Suppose (as is likely) that survey participants who are referred by skeptic websites rate the abstracts accurately according to the instructions, while those who are referred by credulous websites misapply the instructions so as to exaggerate the degree of consensus. This misapplication of the rules will bring the consensoids' ratings closer to the authors' ratings than the skeptics' accurate ratings will be, making the consensoid surveyors look less biased than the skeptic surveyors when they are in fact more biased. Mission accomplished.
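To make the arithmetic of that concern concrete, here is a minimal sketch with purely made-up numbers (nothing below comes from Cook's data or from my actual survey responses): a rule-following rater gives a "neutral" 4 to an abstract that never states its assumptions, the paper's author rates the full paper a 2, and a rater who reads the consensus into the abstract also gives it a 2. Measured by distance from the author's rating, the rule-follower is the one who looks biased.

```python
# Hypothetical illustration of the bias-measurement problem described above.
# Scale per Cook's automated response: 1 = endorsement of AGW, 7 = rejection, 4 = no position.
# All numbers are invented for illustration; none are from the actual survey.

author_paper_ratings = [2, 2, 3, 2, 1]         # authors rate the FULL papers, which state more
skeptic_abstract_ratings = [4, 4, 4, 3, 2]     # rule-following ratings of the abstracts alone
consensoid_abstract_ratings = [2, 2, 3, 2, 1]  # ratings that read the consensus into the abstracts

def mean_gap(surveyor, authors):
    """Average distance between surveyor ratings and author ratings --
    the naive 'bias' measure that comparing against author ratings suggests."""
    return sum(abs(s - a) for s, a in zip(surveyor, authors)) / len(authors)

print("skeptic gap from authors:   ", mean_gap(skeptic_abstract_ratings, author_paper_ratings))
print("consensoid gap from authors:", mean_gap(consensoid_abstract_ratings, author_paper_ratings))
# The rule-following abstract ratings sit farther from the author ratings,
# so the rule-followers look 'biased' even though they applied the instructions correctly.
```

On that measure the misapplied ratings come out looking perfectly "accurate" (a gap of 0.0) while the ratings that actually followed the instructions show a gap of 1.4, which is exactly the inversion described above.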
My original post is after the jump. Its point is rather different: that the revelation from the survey is how neatly the various papers fall into simple categories of wasted effort on the "consensus" plantation. In contrast, Cook's attempt to gauge the degree of consensus turns out to be not very effective, which is another reason (besides its coming from John Cook) why people shouldn't feel much need to take it.
# posted by Alec Rawls : 5/10/2013 06:58:00 AM