I think Professor Ransdell is tilting at too many adversaries at once:
copyright, university administrations, peer review, tenure review,
academic inequities. These are all real problems, worthy of attention,
but they are not the problems at issue here, and only confuse the
issue when wrapped into it.
The issue here is quite simple: There currently exists a learned serial
literature; it is largely in paper now, and costly. Is there a way to make
that literature available online, free for all?
I believe that the answer is yes, and that this outcome can be attained
without first having to solve the problems of academic inequity or to
correct the imperfections of peer review as it is currently practised,
worthy as is the goal of doing so. Indeed, I think it is both
short-sighted and defeatist to suppose that the one is contingent on
the other two.
So I am determined not to let worthy but irrelevant causes obstruct or
obscure the road to the optimal and the inevitable for refereed journal
publication. (This is what sounds to Joseph Ransdell's ears like
ritualistic re-incantation of the same agenda and formula in the face
of putative challenges and difficulties.)
So I will not answer the specifics of Joseph's latest posting, except
to deny the contingency of the one objective on the others, and will
append instead the preprint of an essay that is about to appear in
Nature-online on the medium-independence of peer review.
Stevan Harnad
-----------------------------------------------------------------------
Refereed Journals, Author Archives, and the
Invisible Hand of Peer Review
Stevan Harnad
Multimedia Research Group
Electronics and Computer Science Department
Southampton University
Highfield, Southampton
SO17 1BJ United Kingdom
harnad_at_soton.ac.uk harnad_at_princeton.edu
http://www.princeton.edu/~harnad/intpub.html
http://cogsci.soton.ac.uk/~harnad/intpub.html
ABSTRACT: The refereed journal literature needs to be freed from
both paper and its costs, but not from peer review, whose
"invisible hand" is what maintains the quality of the literature.
The residual cost of online-only peer review is low enough to be
recovered from author-end page charges, themselves covered out of
subscription savings, vouchsafing a toll-free literature for everyone
forever.
Human nature being what it is, it cannot be altogether relied upon to
be its own policeman. Individual exceptions there may be, but to treat
them as the rule would be to underestimate the degree to which our
potential unruliness is held in check by collective constraints, implemented
formally.
So it is in civic matters, and it is no different in the world of
Learned Inquiry. The "quis custodiet" problem among scholars has
traditionally been solved by means of a "quality assurance" system
called "peer review": The work of specialists is submitted to a
qualified adjudicator, an editor, who in turn sends it to
fellow-specialists, referees, to seek their advice about whether the
paper is potentially publishable, and if so, what further work is
required to make it acceptable. The paper is not published until and
unless the requisite revision can be and is done to the satisfaction of
the editor and referees.
Neither the editor nor the referees is infallible. Editors can err in
the choice of specialists (indeed, it is well-known among editors that
a deliberate bad choice of referees can always ensure that a paper is
either accepted or rejected, as preferred) or they can misinterpret or
misapply referees' advice. The referees themselves can fail to be
sufficiently expert, informed, conscientious or fair.
Nor are authors always conscientious in accepting the dictates of peer
review. (It is likewise well-known among editors that virtually every
paper is eventually published, somewhere: there is a quality hierarchy
of journals, based on the rigour of their peer review, all the way down
to an unrefereed vanity press at the bottom. Persistent authors can
work their way down until their paper finds its own level, not without
considerable wasting of time and resources along the way, including the
editorial office budgets of the journals and the freely given time of
the referees, who might find themselves called upon more than once
to review the same paper, sometimes unchanged, for several different
journals.)
The system is not perfect, but it is what has given us our refereed
journal literature to date, and so far no one has demonstrated any
viable alternative to having experts judge the work of their peers, let
alone one that is at least as effective in maintaining the quality of
the literature as the present imperfect one.
Alternatives have of course been proposed, but to propose is not to
demonstrate viability. Most proposals have involved weakening the
constraints of classical peer review in some way or other. The most
radical way is to do away with it altogether: let authors police
themselves, publish any paper they submit, and let the reader decide
what is to be taken seriously. This would amount to discarding the
current hierarchical filter -- both its active influence, in directing
revision, and the ranking of quality and reliability that it provides
as a guide to the reader trying to navigate the ever-growing
literature.
There is a way to weigh our intuitions about the merits of this
proposal a priori. It is based on a specialist domain that is somewhat
more urgent and immediate than abstract "learned inquiry," but if we
are not prepared to generalise its verdict to scholarly/scientific
research in general, we must ask ourselves how seriously we take the
acquisition of knowledge: If someone near and dear to you were ill with
a serious but potentially treatable disease, would you prefer to have
them treated on the basis of the refereed medical literature or on the
basis of an unfiltered free-for-all where the distinction between
reliable expertise and ignorance, incompetence or charlatanism is left
entirely to the reader, on a paper by paper basis?
A variant on this scenario is about to be undertaken by the British
Medical Journal (http://www.bmj.com/cgi/shtml/misc/peer/index.shtml),
but instead of entrusting entirely to the reader the quality control
function performed by the referee in classical peer review, this
variant, taking a cue from some of the developments and goings-on on
both the Internet and Network TV chat-shows, plans to publicly post
submitted papers unrefereed on the Web and to invite any
reader to submit a commentary; these commentaries will then be used in
lieu of referee reports as a basis for deciding on formal publication.
Is this peer review? Well, it is not clear whether the self-appointed
commentators will be qualified specialists (or how that is to be
ascertained). The expert population in any given speciality is a scarce
resource, already overharvested by classical peer review, so one
wonders who would have the time or inclination to add journeyman
commentary services to this load on their own initiative, particularly
once it is no longer a rare novelty, and the entire raw, unpoliced
literature is routinely appearing in this form first. Are those who
have nothing more urgent to do with their time than this really the
ones we want to trust to perform such a critical function for us all?
And is the remedy for the possibility of bias or incompetence in
referee-selection on the part of editors really to throw selectivity to
the winds, and let referees pick themselves? Considering all that hangs
on being published in refereed journals, it does not take much
imagination to think of ways authors could manipulate such a system to
their own advantage, human nature being what it is.
And is peer commentary (even if we can settle the vexed "peer"
question) really peer review? Will I say publicly about someone who
might be refereeing my next grant application or tenure review what I
really think are the flaws of his latest raw manuscript? (Should we
then be publishing our names alongside our votes in civic elections
too, without fear or favour?) Will I put into a public commentary --
alongside who knows how many other such commentaries, to be put to who
knows what use by who knows whom -- the time and effort that I would
put into a referee report for an editor I know to be turning
specifically to me and a few other experts for my expertise on a
specific paper?
If there is anyone on this planet who is in a position to attest to the
functional difference between peer review and peer commentary, it is
surely the author of the present article, who has been umpiring a paper
journal of Open Peer Commentary (Behavioral and Brain Sciences
[http://www.princeton.edu/~harnad/bbs.html], published by Cambridge
University Press) for over 2 decades, as well as an online-only
journal of Open Peer Commentary (Psycoloquy, sponsored by the American
Psychological Association, [http://www.princeton.edu/~harnad/psyc.html])
for what will soon be a decade too.
Both journals are rigorously refereed; only those papers that have
successfully passed through the peer review filter go on to run the
gauntlet of open peer commentary, an extremely powerful and important
SUPPLEMENT to peer review, but certainly no SUBSTITUTE for it. Indeed,
no one but the editor sees [or should have to see] the population of
raw, unrefereed submissions, consisting of manuscripts eventually
destined to be revised and accepted after peer review, but also (with a
journal like BBS, with a 75% rejection rate) many manuscripts not
destined to appear in that particular journal at all. Referee reports,
some written for my eyes only, all written for at most the author and
fellow referees, are nothing like public commentaries for the eyes of
the entire learned community, and vice versa. Nor do 75% of the
submissions justify soliciting public commentary, or at least not
commentary at the BBS level of the hierarchy.
It has been suggested that in fields such as Physics, where the
rejection rate is lower (perhaps in part because the authors are more
disciplined and realistic in their initial choice of target journal,
rather than trying their luck from the top down), the difference
between the unrefereed preprint literature and the refereed reprint
literature may not be that great; hence one is fairly safe using the
unrefereed drafts, and perhaps the refereeing could be jettisoned
altogether.
Support for this possibility has been adduced from the remarkable
success of the NSF/DOE-supported Los Alamos Physics Archive
(http://xxx.lanl.gov), a free, public repository for a growing
proportion of the current physics literature, with over 14,000 new
papers annually and 35,000 users daily. Most papers are initially
deposited as unrefereed preprints, and some (no one knows how many) are
never replaced by their authors with the final revised draft that is
accepted for publication. Yet xxx is actively used and
cited by the physics community.
Is this really evidence that peer review is not indispensable after
all? Hardly, for the "Invisible Hand" of peer review is still there,
exerting its civilising influence: Every paper deposited in xxx
is also destined for a peer reviewed journal; the author knows
it will be answerable to the editors and referees. That certainly
constrains how it is written in the first place. Remove that invisible
constraint -- let the authors be answerable to no one but the general
users of the Archive (or even its self-appointed "commentators") -- and
watch human nature take its natural course. Standards will erode, as
the Archive devolves toward the canonical state of unconstrained
postings: the free-for-all chat-groups of Netnews, that Global Graffiti
Board for Trivial Pursuit -- until someone re-invents peer review
and quality control.
Now it is no secret that I am a strong advocate of a free literature
along the lines of xxx [http://cogsci.soton.ac.uk/~harnad/subvert.html].
How are we to reconcile the conservative things said here about quality
control with the radical things advocated elsewhere about public author
archives [http://www.ecs.soton.ac.uk/~harnad/nature.html]?
The answer is very simple. The current price of the refereed paper
journal literature is paid for by Subscription, Site License and
Pay-Per-View (S/SL/PPV). Both the medium (paper) and the method
of cost-recovery (S/SL/PPV) share the feature that they block access
to the refereed literature, whereas the authors, who contribute
their papers for free, would infinitely prefer free, universal
access to their work.
The optimal (and inevitable) solution is an online-only refereed
journal literature, which will be much less expensive to publish (less
than 1/3 of the current price per page) once it is paper-free
[http://amsci-forum.amsci.org/archives/september-forum.html]; but it
will not be entirely cost-free, because the peer review (and editing)
still needs to be paid for. If those residual costs are paid at the
author's end (not out of the author's pocket, of course, but out of
funds redirected from just 1/3 of institutions' 3/3 savings on
subscription cancellations, leaving a net 2/3 saving), the dividend
will be that the papers are all freely accessible to everyone (via
discipline-specific archives such
as CogPrints [http://cogprints.soton.ac.uk] -- to be subsumed, once
viable, by a single international, interdisciplinary archive such as
xxx, mirrored worldwide, which will then have an unrefereed preprint
sector and a refereed, published, reprint sector, tagged by journal
name). Journal publishers will continue to provide the quality
control, while the public archive will serve as the "front end" for
both journal submissions and published articles.
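The cost-recovery arithmetic above can be made concrete with a
back-of-envelope sketch. The dollar figure below is hypothetical; the
essay claims only that online-only costs fall below 1/3 of the current
per-page price, so 1/3 of a cancelled subscription covers the page
charge and at least 2/3 is retained as net savings:

```python
# Illustrative model of the author-end cost-recovery argument.
# The per-page price is a made-up figure for demonstration only.

def savings_per_page(paper_price: float, online_fraction: float = 1/3) -> dict:
    """Split the full (3/3) subscription saving into the part redirected
    to author-end page charges and the part the institution keeps."""
    page_charge = paper_price * online_fraction  # pays for peer review/editing
    net_saving = paper_price - page_charge       # at least 2/3 retained
    return {"page_charge": page_charge, "net_saving": net_saving}

result = savings_per_page(paper_price=30.0)  # assume $30/page under S/SL/PPV
print(result)  # page charge of about 10, net saving of about 20
```

If the online fraction is even lower than 1/3, as the essay suggests it
may be, the institutional dividend only grows.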
Peer review is medium-independent, but the online-only medium will make
it possible for journals to implement it not only more cheaply and
efficiently, but also more equitably and effectively than was possible in
paper, through subtle variants of the very means I have criticised
above: Papers will be submitted in electronic form, and archived on the
Web (in hidden referee-only sites, or publicly, in xxx, depending on
the author's preferences). Referees need no longer be mailed hard
copies; they will access the submissions from the Web.
To distribute the load among referees more equitably, the journal
editor can formally approach a much larger population of selected,
qualified experts about relevant papers they are invited to referee if
they have the time and inclination. Referee reports can be emailed or
deposited directly through a password-controlled Web interface.
Accepted final drafts can be edited and marked up online, and the final
draft can then be deposited in the public Archive for all, replacing the
preprint.
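The online workflow just described can be thought of as a simple state
machine for each submission. The state names and transitions below are
illustrative only, not a specification of any actual journal's system:

```python
# Minimal sketch of the online refereeing workflow described above.
# States and allowed transitions are hypothetical labels, for illustration.

WORKFLOW = {
    "submitted":    ["under_review"],                    # archived on the Web
    "under_review": ["revise", "accepted", "rejected"],  # referees read it online
    "revise":       ["under_review"],                    # author deposits revision
    "accepted":     ["published"],                       # edited, replaces preprint
    "rejected":     [],
    "published":    [],
}

def advance(state: str, next_state: str) -> str:
    """Move a submission to next_state if the workflow allows it."""
    if next_state not in WORKFLOW[state]:
        raise ValueError(f"cannot go from {state} to {next_state}")
    return next_state

s = "submitted"
for step in ["under_review", "revise", "under_review", "accepted", "published"]:
    s = advance(s, step)
print(s)  # published
```

The point of the sketch is that every transition after "submitted" is
still gated by editors and referees; only the medium of transport has
changed.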
Referee reports can be revised, published and linked to the published
article as commentaries if the referee wishes; so can author rebuttals.
And further commentaries, both refereed and unrefereed, can be
archived and linked to the published article, along with author
responses. Nor is there any reason to rule out postpublication
author updates and revisions of the original article -- 2nd and 3rd
editions, both unrefereed and refereed. Learned Inquiry, as I have had
occasion to write before, is a continuum; reports of its findings --
informal and formal, unrefereed and refereed -- are milestones, not
gravestones; as such, they need only be reliably sign-posted. The
discerning hitch-hiker in the PostGutenberg Galaxy can take care of the
rest.
Overall, the dissemination of learned research, once we have attained
the optimal and inevitable state described here, will be appreciably
accelerated, universally accessible, and incomparably more interactive
in the age of Scholarly Skywriting than it was in our own pedestrian,
papyrocentric one; Learned Inquiry itself will be the chief
beneficiary.
Harnad, S. (ed.) (1982d) Peer commentary on peer review: A case study in
scientific quality control, New York: Cambridge University Press.
Harnad, S. (1984) Commentaries, opinions and the growth of scientific
knowledge. American Psychologist 39: 1497-1498.
Harnad, S. (1985) Rational disagreement in peer review. Science,
Technology and Human Values 10: 55-62.
Harnad, S. (1986) Policing the Paper Chase. (Review of
S. Lock, A difficult balance: Peer review in biomedical publication.)
Nature 322: 24-25.
Harnad, S. (1990) Scholarly Skywriting and the Prepublication Continuum
of Scientific Inquiry. Psychological Science 1: 342-343 (reprinted in
Current Contents 45: 9-13, November 11 1991).
http://www.ecs.soton.ac.uk/~harnad/Papers/Harnad/harnad90.skywriting.html
Harnad, S. (1991b) Post-Gutenberg Galaxy: The Fourth Revolution in the
Means of Production of Knowledge. Public-Access Computer Systems Review
2(1): 39-53.
http://www.ecs.soton.ac.uk/~harnad/Papers/Harnad/harnad91.postgutenberg.html
Harnad, S. (1996a) Implementing Peer Review on the Net:
Scientific Quality Control in Scholarly Electronic Journals. In:
Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic
Frontier. Cambridge MA: MIT Press. Pp. 103-118.
http://www.ecs.soton.ac.uk/~harnad/Papers/Harnad/harnad96.peer.review.html
Harnad, S. (1997d) Learned Inquiry and the Net:
The Role of Peer Review, Peer Commentary and Copyright.
Antiquity 71: 1042-1048
http://citd.scar.utoronto.ca/EPub/talks/Harnad_Snider.html
Harnad, S. (1998d) For Whom the Gate Tolls? Free the Online-Only
Refereed Literature. American Scientist Forum.
http://www.ecs.soton.ac.uk/~harnad/amlet.html
Harnad, S. (1998e) On-Line Journals and Financial Fire-Walls.
Nature 395(6698): 127-128.
http://www.ecs.soton.ac.uk/~harnad/nature.html
Lock, Stephen (1985) A difficult balance: Editorial peer review in
medicine. London: Nuffield Provincial Hospitals Trust.
Okerson A. & O'Donnell, J. (Eds.) (1995) Scholarly Journals at the
Crossroads; A Subversive Proposal for Electronic Publishing.
Washington, DC., Association of Research Libraries, June 1995.
http://www.arl.org/sc/subversive/
ftp://ftp.cogsci.soton.ac.uk/pub/psycoloquy/Subversive.Proposal
Received on Tue Aug 25 1998 - 19:17:43 BST