I have re-routed Andrew Odlyzko's posting from the "cost" thread to the
"reform" thread because that is the way the discussion is again drifting.
I have to repeat that it is very important to distinguish thoughts
pertaining to open access to the refereed research literature *such as
it is today* (classically peer-reviewed, but not yet openly
accessible) from thoughts pertaining to possible peer-review reforms
or modifications. Not only are such further changes hypothetical
(whereas the current peer-reviewed literature -- and the problem of
toll-based barriers to its access and impact -- is *actual*), but, being
hypothetical, there is reason to believe they are also *antithetical* to
the immediate, indeed overdue, goal of open access, for the simple reason
that many researchers are reluctant about self-archiving precisely because
they are concerned that it might compromise or jeopardize (classical)
peer review.
http://www.eprints.org/self-faq/#7.Peer
Hence it is important to make it clear that there is no connection
whatsoever between providing open access to the peer-reviewed
literature, such as it is, right now, by self-archiving it, and any
possible, hypothetical future change in the mechanism of peer review.
The second factor in the drift away from the non-hypothetical question
of open access, its benefits, and the means of achieving them as soon as
possible, to the hypothetical question of reforming peer review, was the
suggestion by some contributors that with the cost-saving and
space-saving of open online access, more articles that are currently
rejected by journals could instead be accepted. (The segue came from the
question of the cost of refereeing papers that are ultimately rejected.)
There is a very elementary fallacy here, and to realize it, all we
have to do is visualize the Gaussian distribution (the "bell curve")
that underlies just about all human quantitative and qualitative output:
The very *definition* of quality and of excelling is a *relative* one:
Who is "tall"? The one who is in the top 1% or top 5% of the human
height distribution. Neither lowering nor raising the high-jump bar in
the Olympics changes the fact that the bronze, silver and gold go to
the highest, next-highest and next-next-highest jump(er).
Now peer-reviewed research is not the olympiad or athletics, but there
*are* quality standards, and the purpose of the journal hierarchy is
to sort publications into (simplifying) the gold, silver and bronze
categories, so users can decide about reliability and the worthwhileness
of investing their limited time and efforts accordingly. (As with the
Olympics, the standards evolve with time, usually rising.) But what
does not change with time is the bell-shape of the distribution. So
if a journal wants to be recognized (and used) as the one that publishes
the very best research, say, the top 1%, then it will always have to have
a high rejection rate. Lowering the rejection rate is simply equivalent
to lowering the standards, and hence the quality-level of the journal.
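The relative nature of the quality cutoff can be sketched numerically. In this hypothetical simulation (all numbers are illustrative assumptions, not data about any real journal), raising a journal's acceptance rate is the same thing as lowering the quality score of the weakest paper it accepts:

```python
# Hypothetical sketch (not data about any real journal): if article
# quality follows a Gaussian distribution, then accepting a larger
# fraction of submissions simply lowers the quality score of the
# weakest accepted paper.
import random

random.seed(1)
# Simulated "quality scores" for 100,000 submitted articles,
# sorted from best to worst.
scores = sorted((random.gauss(0, 1) for _ in range(100_000)), reverse=True)

def acceptance_threshold(accept_rate):
    """Quality score of the weakest article accepted by a journal
    that takes the top `accept_rate` fraction of submissions."""
    cutoff_index = int(len(scores) * accept_rate)
    return scores[cutoff_index - 1]

# A top-1% journal demands far higher quality than a top-20% journal:
print(round(acceptance_threshold(0.01), 2))  # roughly 2.33
print(round(acceptance_threshold(0.20), 2))  # roughly 0.84
```

The exact thresholds depend on the assumed distribution, but the ordering does not: a lower rejection rate always means a lower cutoff.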
On Thu, 16 Jan 2003, Andrew Odlyzko wrote:
> The recent postings to this list about rejection rates and
> costs of peer review point out yet another way that costs
> can be lowered: Elimination of the wasteful duplication in
> the peer review system.
There is indeed wasteful duplication, and it comes from the strategy
on the part of some authors (not all -- and it varies with the field)
of first submitting their paper (unrealistically) to the highest-level
journal, then, if it is rejected, to the next level, and so on, until it
finds its proper level. At its worst, this strategy is pursued with minimal
revision in response to the referees' criticisms and recommendations. In
the process, not only does this waste the time of many referees, for
many journals, but sometimes it involves sending the same paper,
virtually unchanged, to the same referee for two or more different
journals -- resulting in considerable irritation on the part of the
referee, and a decreasing inclination to give freely of his refereeing
time, stolen from research and teaching, to the peer-review process
(at a time when referees are becoming an increasingly scarce and
over-harvested resource).
So there are many reasons to want to minimize this sort of wastage and
abuse of the peer review system. One way would be for journals to share
their records, but that jeopardizes journal independence and might even
handicap some deserving authors in special cases. Another way is to
charge -- eventually, if/when peer-review service costs become the norm
-- not just for the peer-reviewing of ultimately-accepted papers, but
also a (lower) submission fee (creditable toward the full peer-review
fee if ultimately accepted) to all authors. This might help to
discourage the nuisance serial submissions working their way down the
journal quality hierarchy, as well as re-submissions with little or no
revision in response to the referee reports already received from the
higher-level journal that has already rejected the paper.
These (and others) are potential ways to deal with the problem of misuse
of scarce peer-review resources, but they do not involve any substantive
change in the classical peer-review system itself. Any such substantive
changes in the peer-review system would first have to be tested and
demonstrated to work, and none that have been proposed, to my knowledge,
have as yet been tested and demonstrated to work. Prominent among these
untested proposals is the oft-repeated one that some form of self-selected
vetting could replace peer review in the open-access/self-archiving era:
"Self-Selected Vetting vs. Peer Review: Supplement or Substitute?"
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2340.html
This is a point on which my comrade-at-arms, Andrew Odlyzko, and I
disagree (see above).
> It is widely acknowledged that almost all articles are
> published eventually, possibly after some revisions, and
> often after getting rejected by first and second choice
> journals. Thus several sets of referees have to go over
> essentially the same material. If we moved to a system
> of explicit quality feedback, with referees and editors
> providing their evaluations of the correctness, novelty,
> and significance to the readers (beyond the current
> system, where readers never see any negative evaluations,
> and see positive ones only to the extent of knowing that
> a published paper met some quality hurdle that is not
> well formulated, much less known), we could get away from
> all this duplication.
In classical peer review, an answerable third party -- to whom both the
author and the referees are answerable, and who is in turn answerable
to the user community through the track record of his journal --
the journal's editor, selects the referees, determines which of
their recommendations needs to be met, and whether or not a revised
draft has successfully met them. Andrew is suggesting that authors and
referees could somehow agree on this amongst themselves, with no more
answerability than the eventual response (if any) from the user community,
which can access everything. Human nature being what it is, there is
no supporting empirical evidence (and a number of a priori reasons for
expecting otherwise) that such a system would result in a literature of
quality comparable to what we now have (but would like to have openly
accessible), nor that all these openly accessible online drafts would
be navigable in any sense.
Moreover, the referee over-use and -abuse problem would only get worse,
with everyone potentially vetting everything that appears. It is not
clear where one would even start in such an anarchic spectrum: no editor
to select me as the qualified referee for this paper, no sense that, if
I take the time to referee it, the outcome will be answerable to the
editor (the author can take/leave whatever he wishes). The ultimate
verdict of those who may or may not bother to read what ultimately may
or may not issue from all this certainly does not seem, on the face of it,
a promising substitute for the former convergent, answerable system. And
rather than remedying the abuse of the old system that came from serial
downward submission, this new system sounds like it would compound it,
with *all* authors now doing essentially the same thing.
The journal quality hierarchy serves a purpose: It sections the Gaussian
distribution into tagged quality sectors, the tag backed up by the
journal's track-record, which amounts mostly to the editor's competence
and conscientiousness in ensuring answerability. Editors do not send only
positive reports to authors (though they may take the ad hominem sting
out of negative ones), and referees are never anonymous to the editor
(they are known and answerable to the editor). The option of being
anonymous to the author (an option which some referees take, others do
not) had better continue to be available, unless we want an era where
I reject your grant request or bid for tenure because you rejected my
article when you refereed it, etc. The editor, if competent, should be
the arbiter of the fairness of my judgment, not me, or the author.
> Unfortunately a change of this type is likely to take
> far longer to achieve than open archiving, since it
> involves changing the basic patterns of scholarly
> communication.
It is not clear whether it is unfortunate that such changes would take
long to achieve until and unless such hypothetical proposals are first
tested and shown to work, and to scale. They have not been. On the other
hand, the feasibility and benefits of open access to the literature such
as it is, through self-archiving, have been amply tested and shown to
work, and to scale. So the fact that that sure benefit is still so slow
in coming is indeed unfortunate. Alas, however, among the many (needless)
worries that are still retarding self-archiving and its benefits, worries
about precisely this sort of untested peer-review-reform scenario figure
prominently. (I, for one, find myself having to disabuse reluctant
self-archivers of such worries as frequently as about other groundless
worries, such as copyright and preservation.)
http://www.ecs.soton.ac.uk/~harnad/Tp/resolution.htm#8
Stevan Harnad
Received on Thu Jan 16 2003 - 16:25:17 GMT