On Sun, 10 Nov 2002, Peter Suber wrote:
> For a recent example of a publication claiming that arXiv uses a new form
> of peer review, see <http://www.parliament.uk/post/pn182.pdf>. Scroll to
> text box #3 on page 4.
>
> Even though I support what I've called retroactive peer review, I think
> it's a mistake to classify what arXiv does under the category of peer
> review. From the source, it appears that this mistake may have some
> influence on how research funds are allocated in the UK.
>
> Peter Suber, Professor of Philosophy
> Earlham College, Richmond, Indiana, 47374
> Editor, Free Online Scholarship Newsletter
> http://www.earlham.edu/~peters/fos/
Dear Peter,
Many thanks for drawing my attention to the erroneous description of arXiv
as an alternative form of peer review on the United Kingdom Parliament
webpage:
http://www.parliament.uk/post/pn182.pdf
All peer review (for research publication) is "retroactive" in the sense
that it takes place after a paper has been written, not before. (It is
the review of research-funding proposals that occurs beforehand.) The
critical difference between having this (always retroactive)
quality-control for research-paper validity and quality done through (i)
classical peer review or through (ii) self-selected vetting is this:
(i) With classical peer review, the quality-control process is
systematic, answerable, and has an independent, qualified editor
responsible for selecting the referees, mediating the vetting,
ensuring any necessary corrections and revisions are made, and
signposting the outcome (with the journal-name, track-record, and rank
in the journal quality/impact hierarchy) as having been peer-reviewed.
(ii) With self-selected vetting -- i.e., with anyone on the internet
choosing or not choosing to read the unrefereed preprint and provide
feedback, qualified or not, with the author in turn choosing whether
or not to use or respond to it -- there would be no way to ensure
or even to ascertain that peer review had taken place, let alone at
what quality level.
Anarchic self-selected vetting, if it were ever actually tested, could of
course be formally constrained in various ways, to make it more reliable
and answerable, with the outcome recognizably tagged as such. But then the
degree to which this anarchic process was systematically tamed in these
ways would simply be the degree to which classical peer review was being
re-invented under another name!
Until and unless self-selected vetting is tested alone, however, no longer
parasitic, as it is now, on a classical peer review system that still
remains in place as its universal backup, the posting and exchange of
pre-refereeing preprints, whether it is called "self-selected vetting"
or "retroactive peer review," can and will serve only as a supplement
to, not a substitute for, classical peer review -- for both logical and
methodological reasons.
There is a second respect in which "retroactive" is the wrong descriptor:
In general, the corrective feedback from peer review alters the paper
in question; if the paper is being upgraded in real time, such a process
can hardly be called "retroactive."
This, by the way, is equally true of (1) classical peer review, (2)
self-selected vetting of pre-peer-review preprints, and (3) self-selected
vetting of post-peer-review postprints. All stages in the
research communication continuum are amenable to corrective feedback
and upgrading, especially in the online medium (and this will be one
of the many, many benefits of open access). But classical peer review --
which is nothing more than systematized vetting mediated by independent
and answerable third-parties with known track-records, i.e., journals
-- will remain the critical mainstay and milestone of the entire
process. Without it, even the ensuing decline in quality levels would
be masked by the fact that the continuum would have become unnavigable,
with only the secondary sign-posts consisting of researchers' names and
institutions left to guide us -- until classical peer review was reinvented.
(Before someone asks: Usage statistics such as online "hit-parade"
ratings --
http://citebase.eprints.org/cgi-bin/search -- are likewise
promising supplements to, but in no wise substitutes for, classical peer
review: Citations come too late, and hits and links and comments are too
crude: peer-reviewing is not, never was, and cannot be Gallup polling.)
I am sending this reply to the webmaster_at_parliament.uk hoping he will
ask those responsible for that page and that program to have a look at:
Harnad, S. (1998/2000) The invisible hand of peer review. Nature
[online] (5 Nov. 1998)
http://helix.nature.com/webmatters/invisible/invisible.html
and
"Self-Selected Vetting vs. Peer Review: Supplement or Substitute?"
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2340.html
Best wishes,
Stevan Harnad
> [from prior exchange:]
>sh> I have devoted considerable space to trying to point out exactly why
>sh> I think arXiv is in no way a test of the hypothesis that self-selected
>sh> vetting can or will serve as a substitute rather than merely a supplement
>sh> for classical peer review (while still yielding a literature of at least
>sh> equal quality): arXiv preprints and self-selected vetting co-exist and
>sh> have always co-existed in parallel with classical peer review, and hence
>sh> with answerability (and the expectation of answerability) to classical
>sh> peer review, exerting their quality-controlling and sign-posting effects,
>sh> as they always did. The only way to test whether self-selected vetting
>sh> can -- unlike in arXiv -- actually serve as a substitute for classical
>sh> peer review rather than merely a supplement to it (while still yielding
>sh> a literature of at least equal quality) is by testing a representative
>sh> sample of research WITHOUT any classical peer review at all to back it
>sh> up, only self-selected vetting (and a large enough sample, long enough,
>sh> for reasonable confidence that any effect would endure, and would scale
>sh> up to the literature as a whole).
Received on Mon Nov 11 2002 - 11:41:02 GMT