On Sat, 27 Nov 1999, Prof. Tom Wilson wrote:
sh> Correct, but the publishers implement the refereeing, and that costs
sh> some money (about $300 per paper).
>
> Interesting - where does that figure come from? As one who initiated
> and edited two journals, I know that none of the refereeing that I
> put in place cost the publisher (Butterworths at the time) a penny.
> Clearly, different publishers have different practices.
The figures come from the actual costs reported to me by The Journal of
High Energy Physics (JHEP) <http://jhep.cern.ch/> and from analyses
like those by Andrew Odlyzko
<http://www.research.att.com/~amo/doc/economics.journals.txt>
and have been confirmed in this Forum by Mark Doyle of the American
Physical Society (see the "2.0K vs. 0.2K" thread, which discusses the
true per-article cost of quality control alone).
Someone has to pay for the administration of the refereeing and the
editorial dispositions. Some small journals can poach this from their
editors' universities, but this is not a solution for most journals, and
certainly not for the big ones, like JHEP, with hundreds or even thousands
of submissions to process annually.
And I repeat, it is not the referees who cost money, but the
implementation of the refereeing.
tw>> it ought to be debated whether a more economically efficient quality
tw>> control process is to publish openly and freely without refereeing and
tw>> rely upon the reader and user of the information to make his or her own
tw>> quality judgements when using or deciding not to use a text.
>
sh> Such a question is not settled by debating but by testing.
>
> True, but is not debate necessary to persuade the scholarly community
> that testing would be worthwhile, and have they yet been persuaded,
> apart from the BMJ running a test, by -
> http://www.ecs.soton.ac.uk/~harnad/nature2.html?
It is not the scholarly community that needs to be persuaded of
anything. The only ones who can test variants or alternatives to
classical peer review are (1) social scientists who do empirical
research on peer review (such research is ongoing) and perhaps (2)
journal editors who might wish to experiment with new methods (e.g.,
the BMJ experiment above).
But what advocates of "peer review reform" have mostly tended to do is
to promote untested, notional alternatives (such as open commentary,
or no review at all) to the scholarly community. I do not think that is
useful at all.
Besides, peer review reform has absolutely nothing to do with the
movement to free the refereed journal literature, and it has repeatedly
been pointed out in this Forum -- and in the discussion of the
NIH/Ebiomed proposal and the Scholars Forum proposal -- that the fate
of the latter should not be yoked to the former in any respect. There
is no reason whatsoever why the freeing of the current refereed journal
literature (such as it is) -- a desideratum that already has face
validity as optimal for research and researchers now -- should depend
in any way on the implementation of speculative notions about how peer
review might be improved or replaced.
http://library.caltech.edu/publications/ScholarsForum/042399sharnad.htm
http://www.nih.gov/welcome/director/ebiomed/com0509.htm#harn45
> In any event, my suggestion is not that there should necessarily be
> public feedback from those who use specific texts productively, but
> that the citation record will reveal which texts have proved useful.
This sounds to me like a complete non sequitur. The current citation
record is based on a peer-reviewed literature. (Moreover, virtually
every one of the 120,000 preprints so far archived in the Los Alamos
Physics Archive was likewise submitted to and eventually accepted by
refereed journals, and the refereed reprint was swapped or added when
available; Les Carr <lac_at_ecs.soton.ac.uk> will soon be reporting data on
this.)
Hence there is no empirical evidence WHATSOEVER about (1) what would
happen to the literature if there were no peer review, or (2) what
citations would or could have to do with it: If, as I think is likely,
quality plummeted with the elimination of formal quality control in
favor of opinion polls, no one would have any idea what to make of
those opinions, whether they came in the form of comments or
citations.
But that would only be a part of it: They would know even less what to
do with that vast, raw literature itself, no longer pre-filtered
through formal answerability to peer expertise and certified as ready
for consumption: neither expert nor novice would know how to sort the
serious from the sewage in the vast unfiltered flow that would confront
us all daily.
To propose abandoning peer review and instead letting "nature" take its
course is rather like proposing to abandon water filtration: You may be
right, but I suggest you try it out on a sample of heroic volunteers
before advocating it for the rest of us.
> Of course, avenues for peer commentary could be opened but my guess
> is that, for the reasons mentioned in
> http://www.ecs.soton.ac.uk/~harnad/nature2.html
> they are unlikely to be enthusiastically used.
I have given reasons why peer commentary is a supplement to, not a
substitute for, peer review (exactly as citation impact is).
sh> And it has already been much discussed in this forum.
>
> But, without, it seems, any great degree of consensus arising.
The point of the discussion was that the empirical status of a radical
reform proposal such as eliminating peer review cannot be settled a
priori, by debate; it has to be tested empirically. A fortiori, it is
not something whose validity can be determined by prior consensus: Even
if one managed to persuade an entire populace that it would be a good
idea to stop filtering their water without first carefully testing the
consequences, nothing would have been demonstrated by that consensus
except the power of persuasion. The validity of the proposal depends
entirely on what would be the actual empirical outcome.
But if you have empirical data bearing on this, it is certainly
welcome; or even reasons why you think it is NOT an empirical matter.
On the other hand, the value of freeing the literature online is
already empirically demonstrated (for those who could not already see
that it was optimal a priori) by the colossal success of Los Alamos
(of which the existence of JHEP is one of the consequences):
http://xxx.lanl.gov/cgi-bin/show_monthly_submissions
http://xxx.lanl.gov/cgi-bin/show_weekly_graph
--------------------------------------------------------------------
Stevan Harnad                        harnad_at_cogsci.soton.ac.uk
Professor of Cognitive Science       harnad_at_princeton.edu
Department of Electronics and        phone: +44 23-80 592-582
Computer Science                     fax:   +44 23-80 592-865
University of Southampton            http://www.ecs.soton.ac.uk/~harnad/
Highfield, Southampton               http://www.princeton.edu/~harnad/
SO17 1BJ UNITED KINGDOM
NOTE: A complete archive of this ongoing discussion of "Freeing the
Refereed Journal Literature Through Online Self-Archiving" is available
at the American Scientist September Forum (98 & 99):
http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html