tw> I know that none of the refereeing that I put in place cost the
tw> publisher (Butterworths at the time) a penny. Clearly, different
tw> publishers have different practices.
>
sh> The figures come from the actual costs reported to me by The Journal of
sh> High Energy Physics (JHEP) <http://jhep.cern.ch/> and from analyses like
sh> those by Andrew Odlyzko
sh> <http://www.research.att.com/~amo/doc/economics.journals.txt> and have
sh> been confirmed in this Forum by Mark Doyle of the American Physical
sh> Society (see the "2.0K vs. 0.2K" thread, discussing the true cost of
sh> quality-control-only per article).
sh> Someone has to pay for the administration of the refereeing and the
sh> editorial dispositions. Some small journals can poach this from their
sh> editors' universities but this is not a solution for most journals, and
sh> certainly not for the big ones, like JHEP, with hundreds or even thousands
sh> of submissions to process annually.
sh> And I repeat, it is not the referees who cost money, but the
sh> implementation of the refereeing.
I totally agree with the last point - but I wonder whether
high-submission, high-cost journals are the norm. I referee regularly for
five or six journals and in all cases the papers for review come
directly from the editor rather than from the publisher, so I suspect
that for many journals (and, given a probable Bradford/Zipf
distribution for submissions to journals, those with thousands of
submissions must be a very small minority) it is the editor's
institution that is bearing the cost rather than the publisher - so,
once again, academia is subsidising the publisher and perhaps this,
rather than the $300 a paper for JHEP, is the norm. The case of
scientific societies is rather different, since they often make the
journals available to their members at rates well below commercial
levels, and the whole activity takes the form of scientific
collaboration.
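The Bradford/Zipf point above can be sketched numerically. The toy Python
snippet below is my own illustration, not data from this thread: it assumes
annual submissions across journal ranks follow a simple Zipf law, s_r = s_1 / r,
and shows that under that assumption only a handful of top-ranked journals
would see thousands of submissions a year, while the long tail sees very few.

```python
# Toy illustration (assumed parameters, not figures from the discussion):
# if submissions follow a Zipf-like law s_r = s_1 / r across journal ranks,
# journals with thousands of submissions are a very small minority.

def zipf_submissions(top_rank_submissions: int, n_journals: int) -> list[int]:
    """Annual submissions for journals ranked 1..n, assuming s_r = s_1 / r."""
    return [top_rank_submissions // rank for rank in range(1, n_journals + 1)]

subs = zipf_submissions(top_rank_submissions=3000, n_journals=1000)
big = sum(1 for s in subs if s >= 1000)  # journals with >= 1000 submissions/year
print(big, len(subs))  # only the top 3 of 1000 journals exceed 1000/year
```

Under these (assumed) parameters, roughly 99.7% of journals fall well below the
JHEP-scale submission volumes, which is the sense in which the high-volume
journals would be the exception rather than the rule.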
tw>> it ought to be debated whether a more economically efficient quality
tw>> control process is to publish openly and freely without refereeing and
tw>> rely upon the reader and user of the information to make his or her own
tw>> quality judgements when using or deciding not to use a text.
>
sh> Such a question is not settled by debating but by testing.
>
tw> True, but is not debate necessary to persuade the scholarly community
tw> that testing would be worthwhile, and have they yet been persuaded,
tw> apart from the BMJ running a test, by
tw> <http://www.ecs.soton.ac.uk/~harnad/nature2.html>?
>
sh> It is not the scholarly community that needs to be persuaded of
sh> anything. The only ones who can test variants or alternatives to
sh> classical peer review are (1) social scientists who do empirical
sh> research on peer review (such research is ongoing) and perhaps (2)
sh> journal editors who might wish to experiment with new methods (e.g.,
sh> the BMJ experiment above).
It seems that the scholarly community is still not completely
persuaded of the virtues of freely accessible self-archiving
(although I am persuaded - and have been since the idea was first
mooted) - so there are at least some quarters of the community that
need to be persuaded. Perhaps more difficulty is involved in
persuading the universities that action of this kind is necessary -
in spite of the economics of the situation there appears, at least in
the UK, to be a kind of institutional blindness to the possibilities
of reform, of which self-archiving is one. And the institutions are
likely to have something to say in the matter, given the emerging
awareness of their stake in intellectual property - some Universities
may decree that, in certain areas, work is of such commercial
significance that their stake must be protected.
sh> But what advocates of "peer review reform" have mostly tended to do is to
sh> promote untested, notional alternatives (such as open commentary, or no
sh> review at all) to the scholarly community. I think that is not very useful
sh> at all.
But, of course, as you say, we have no empirical evidence as to
whether it is useful or not.
sh> Besides, peer review reform has absolutely nothing to do with the
sh> movement to free the refereed journal literature, and it has repeatedly
sh> been pointed out in this Forum -- and in the discussion of the NIH/Ebiomed
sh> proposal and the Scholars Forum proposal -- that the fate of the latter
sh> should not be yoked to the former in any respect. There is no reason
sh> whatsoever why the freeing of the current refereed journal literature
sh> (such as it is) -- a desideratum that already has face validity as optimal
sh> for research and researchers now -- should depend in any way on the
sh> implementation of speculative notions about how peer review might be
sh> improved or replaced.
I entirely agree - the issues are completely separate.
tw> In any event, my suggestion is not that there should necessarily be
tw> public feedback from those who use specific texts productively, but that
tw> the citation record will reveal which texts have proved useful.
>
sh> This sounds to me like a complete non sequitur. The current citation
sh> record is based on a peer-reviewed literature. (Moreover, virtually
sh> every one of the 120,000 preprints so far archived in the Los Alamos
sh> Physics Archive was likewise submitted to and eventually accepted by
sh> refereed journals, and the refereed reprint was swapped or added when
sh> available; Les Carr <lac_at_ecs.soton.ac.uk> will soon be reporting data on
sh> this.)
Well of course the CURRENT citation record is based on a
peer-reviewed literature, because there is nothing else, and until there
is something else no testing is possible.
sh> Hence there is no empirical evidence WHATSOEVER about (1) what would
sh> happen to the literature if there were no peer review, nor (2) what
sh> citations would or could have to do with it: If, as I think is likely,
sh> quality plummeted with the elimination of formal quality control in favor
sh> of opinion polls, no one would have any idea what to make of those
sh> opinions, whether they came in the form of comments or citations.
Well that, i.e., plummeting quality, is as much a guess as my guess
that it would make very little difference - neither of us has any
data and the data are not going to be available without experiment.
sh> But that would only be a part of it: They would know even less what to do
sh> with that vast, raw literature itself, no longer pre-filtered through
sh> formal answerability to peer expertise and certified as ready for
sh> consumption: neither expert nor novice would know how to sort the serious
sh> from the sewage in the vast unfiltered flow that would confront us all
sh> daily.
We still have that problem with the filtered product, since the
tendency is for more and more titles to emerge as publishers find
gaps in the market - the market either of papers to be published or
potential subscribers to be satisfied. The difficult thing is that no
matter how far down the 'food chain' of journals one goes, one still
cannot rule out the possibility of something of value being found -
and the literature is full of cases of early discovery being
overlooked and research being expensively repeated because the search
(if indeed any was conducted) did not go far enough or deep enough.
sh> To propose abandoning peer review and instead letting "nature" take its
sh> course is rather like proposing to abandon water filtration: You may be
sh> right, but I suggest you try it out on a sample of heroic volunteers
sh> before advocating it for the rest of us.
I'm not advocating nor promoting anything - I am raising questions -
questions that, as you say, could be answered through empirical
testing. But the more the focus is on ensuring the survival of the
commercial trade in scholarly communication, rather than on freeing that
communication *completely* from the profit motive, the less likely it
is that any testing will take place.
tw> Of course, avenues for peer commentary could be opened but my guess is
tw> that, for the reasons mentioned in
tw> <http://www.ecs.soton.ac.uk/~harnad/nature2.html>
tw> they are unlikely to be enthusiastically used.
>
sh> I have given reasons why peer commentary is a supplement to, not a
sh> substitute for, peer review (exactly as citation impact is).
>
> sh> And it has already been much discussed in this forum.
>
tw> But, without, it seems, any great degree of consensus arising.
>
sh> The point of the discussion was that the empirical status of a radical
sh> reform proposal such as eliminating peer review cannot be settled a
sh> priori, by debate; it has to be tested empirically. A fortiori, it is
sh> not something whose validity can be determined by prior consensus: Even
sh> if one managed to persuade an entire populace that it would be a good
sh> idea to stop filtering their water without first carefully testing the
sh> consequences, nothing would have been demonstrated by that consensus
sh> except the power of persuasion. The validity of the proposal depends
sh> entirely on what would be the actual empirical outcome.
>
sh> But if you have empirical data to bear on this, it is certainly
sh> welcome; or even reasons why you think it is NOT an empirical matter.
As I note above, empirical data demand experiment, and experiment in
this area requires, in some test field, at least a consensus in that
field that the experiment would be valid. And we can only establish
that commentary (or citation) could be a substitute for peer review
by experiment. However, there is a further point about filtering:
the acknowledged success of the Los Alamos archive raises the
question of how those thousands of users manage to cope with the
unfiltered dross it holds - before the cream (to mix a metaphor)
reaches JHEP. Perhaps they do it in the same way that referees judge
suitability for publication?
sh> On the other hand, the value of freeing the literature online is
sh> already empirically demonstrated (for those who could not already see that
sh> it was optimal a priori) by the colossal success of Los Alamos (of which
sh> the existence of JHEP is one of the consequences):
With that there can be no argument - but it seems to me to be at
least debatable whether the creation of a new print journal from an
archive is either necessary or desirable.
Let me make it clear that I fully support the idea of self-archiving;
nor am I opposed to refereeing - I simply ask whether it is *always*
economically in the best interest of the institutions that bear the
cost, which, in the majority of cases, I suggest, will be academic
institutions. However, given self-archiving, other strategies for
subsequent 'publication' will emerge.
In many small fields of scholarly endeavour it is possible that the
archive itself may be all that that community needs (for example, I
think that there are only about six research centres world-wide that
carry out work on bees, honey and apiculture); but the community may
decide that some form of journal presentation (most probably
electronic) is also needed - refereeing may be employed in some
cases, in other cases editorial selection may be thought sufficient,
and in other cases, perhaps in emergent disciplines or research front
areas, peer commentary will be preferred. In other words, given the
archive of papers, the solutions to the problems of scholarly
communication may vary according to the influence of a number of
factors. In areas like high energy physics, the JHEP solution may be
the norm, but we cannot advocate that solution as a general solution,
since it seems clear that not every field of investigation has the
same characteristics as HEP.
Self-archiving will have a major impact on the mores of scholarly
communication but, just as the pattern is varied at present, so it is
likely to be varied in the future and no one solution will apply
everywhere.
Professor Tom Wilson, Ph.D.
Department of Information Studies
University of Sheffield
Sheffield S10 2TN
Tel: (+44)(0)114-222-2631
Fax: (+44)(0)114-278-0300
Web address:
http://www.shef.ac.uk/~is/lecturer/tom1.html
Received on Wed Feb 10 1999 - 19:17:43 GMT