Bruce Edmonds <b.edmonds_at_MMU.AC.UK> wrote:
> Where I think we agree:
>
> 1. What you call the organised process of review is largely irrelevant
> (review boards, journals etc.).
No, we don't agree. I think review boards (i.e., qualified editors who
administer peer review, and referees who perform it) and journals
(i.e., entities that implement peer review and provide the quality
control imprimatur or tag for the output) are medium-independent and as
relevant to the online medium as to the paper one.
The only differences are that (1) peer review can be implemented much
more efficiently and equitably in the new medium, (2) all costs other
than quality control vanish, making it possible to (3) cover the small
remaining costs through author page charges instead of
Subscription/Site-License/Pay-Per-View (S/SL/PPV), which in turn makes
it possible to (4) provide the refereed literature, along with the
unrefereed literature, free for all, through the Archive (e.g., Los
Alamos), with (5) all author costs paid out of just a small part of
the savings from S/SL/PPV termination.
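To make the orders of magnitude concrete, here is a back-of-envelope
sketch of (3)-(5). Every figure in it is a hypothetical placeholder
(none comes from any actual journal or library budget); only the shape
of the arithmetic matters:

    # Back-of-envelope sketch of points (3)-(5) above.
    # All figures are hypothetical placeholders.
    qc_cost_per_article = 500.0        # assumed author-end quality-control cost (USD)
    articles_per_year   = 200          # assumed annual output of one institution's authors
    ssl_ppv_spend       = 2_000_000.0  # assumed annual S/SL/PPV outlay of its library

    author_end_total = qc_cost_per_article * articles_per_year   # 100,000 USD/year
    share_of_savings = author_end_total / ssl_ppv_spend          # 0.05

    print(f"Author-end charges: {author_end_total:,.0f} USD/year")
    print(f"Share of terminated S/SL/PPV spend: {share_of_savings:.0%}")

On these assumed figures the author-end charges come to 5% of the
terminated S/SL/PPV outlay: "a small part of the savings."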
> 2. The advent of widely available word processing and internet access
> has cut many of the costs associated with journals; there is little
> reason why academics should pay a publisher to manage a journal for
> them - after all, academics do almost all the work
> (reviewing, marking up their own papers, downloading, writing).
No, we don't agree. Academics do the refereeing as a service for
qualified Editors of established entities normally called "journals,"
with known and dependable quality-control standards. They do not
referee willy-nilly for one another; nor will they do so in order to
provide public "stars" and nonbinding referee reports.
Authors do the part that authors have always done. That is medium-independent.
What is now possible is to pay a publisher to implement the quality
control at the author end, out of a small part of the annual
institutional library savings, in exchange for a free literature for
all at the reader end. (Same outcome you desire, by a slightly
different means, but one that will preserve quality control.)
> 3. That the organisation of the review process (and mark-up of papers
> where it occurs) are the last major unsubsumed costs (cost of archiving
> being small).
This we agree on.
> 4. That reviewers' time is precious and not to be squandered.
We agree, except that your scheme does not seem to take this into
account at all, rather the contrary (see below).
> 5. That readers need some system to help them find the quality papers
> from amongst the multitude, otherwise their time is wasted and they end
> up ill-informed.
Correct. But the system that finds the quality presupposes a system
that makes sure the quality comes into being. Peer review is, as I
said, not merely the assignment of stars. It is a peer-feedback-based
quality control mechanism, one that must be administered so as to be
ANSWERABLE to the peer feedback rather than a mere distributor of stars
(which do not in any case represent the quality distinctions that the
journal hierarchy represents).
> 6. That archives, rapid peer-commentary etc. have an important role in
> promoting active discussion of ideas.
Agreed.
> 7. That academics need some system to promote their work (if it is
> good) to others and gain recognition for it.
They already have this system. It is called publication in
peer-reviewed journals. The paper incarnation of this system was costly
and
inefficient, and worst of all, it could only finance itself by levying
tolls for access to the literature (in the form of S/SL/PPV). Far from
helping to PROMOTE work, these inefficiencies and financial firewalls
blocked access to the work.
The solution is not to jettison the quality control system, which is
essential and medium-independent. The solution is to find a way to
provide universal free access while retaining quality control. That
solution is possible by switching from S/SL/PPV to author-end
page-charges (not out of the authors' pockets of course, but out of
part of the S/SL/PPV savings).
The quality control system need not be tampered with at all. It is
certainly open to improvement, but only by alternatives that have
first been tested empirically and shown to control quality at least as
well, not by adopting an untested pig in a poke, particularly in the
face of a good deal of prima facie evidence that such a scheme would
undermine rather than provide peer review, abandoning answerability
and destroying referee incentive in exchange for a primary-school-like
"star" system.
> Where I think we disagree:
>
> A) (most fundamentally) in our conceptions of the process of knowledge
> development. (I think) you have a foundationalist conception: each
> paper is checked and worked on until it can be relied on in the
> collective construction of knowledge (rather like building a wall out
> of bricks - you make sure each brick is sound before relying on it to
> support further such bricks). I have a more evolutionary picture in
> mind: academics are continually producing variations on ideas,
> experiments, studies, models etc. (both individually and as part of
> large ecologies), then selection pressures are applied so that
> (probably) the better will emerge.
I don't have strong views either way, and I think the question is
irrelevant, because you can have the outcome you desire while leaving
quality control intact, simply by promoting author self-archiving
(along with, if desired, the posting of the referee reports their
unrefereed drafts have received).
Harnad, S. (1990d) Scholarly Skywriting and the Prepublication
Continuum of Scientific Inquiry. Psychological Science 1: 342-343
(reprinted in Current Contents 45: 9-13, November 11 1991).
http://www.ecs.soton.ac.uk/~harnad/Papers/Harnad/harnad90.skywriting.html
ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad90.skywriting.html
Harnad, S. (1995h) Universal FTP Archives for Esoteric Science and
Scholarship: A Subversive Proposal. In: Ann Okerson & James O'Donnell
(Eds.) Scholarly Journals at the Crossroads; A Subversive Proposal for
Electronic Publishing. Washington, DC., Association of Research
Libraries, June 1995.
http://www.library.yale.edu/~okerson/subversive.html
http://cogsci.soton.ac.uk/~harnad/subvert.html
ftp://ftp.cogsci.soton.ac.uk/pub/psycoloquy/Subversive.Proposal/
> B) I see the closed nature of the author-reviewer/editor revision
> process as something of a hangover from the days when considerable cost
> was expended in publishing. In my experience many such discussions are
> not about simple errors in established fact or
> poor presentation but involve issues of content and/or subject
> demarcation. Such discussion would be better as a public discussion
> rather than a private one where one discussant has power over another.
> Peer commentary of the type you have championed goes some way in this
> regard, but not all the way. I do not suppose that such closed-process
> reviews would disappear, but they would exist alongside (and in
> competition with) the open evaluation-boards.
This is all very vague. Why does self-archiving, along with
self-archiving of referee reports and peer commentary, not provide ALL
the parallelism and competition you refer to here?
And these speculations about quality control in Learned Inquiry --
possibly worthy of discussion, if supported by some evidence and
argument, in a refereed paper on the question, but certainly nowhere
near being an empirically tested or even rationally analyzed
alternative to the current quality control system in science and
scholarship -- what bearing do they have on the continuation of the
current system?
> C) I think that it would be useful to readers to have more ancillary
> information about a paper, beyond the published/not published duality.
Fine. Let the papers be self-archived, along with all referee reports
or peer commentaries they have received, to be read by everyone who has
the time to read unfiltered drafts. (The occasional gems will of course
come to everyone's attention in such a system, but quality control is
not for these; it is for sorting the 95% that is more chaff than
wheat. That is where your scheme's failure to scale makes itself most
apparent: quality control schemes should be designed around the triage
of the massive middle two thirds of the inevitable Gaussian
distribution of human endeavour, not around the easiest minority of
cases at the tail ends. And the triage is not a matter of passive
star-assignment: it is an active, feedback-governed interaction between
the author and the expert referees, adjudicated by the Editor, in the
design of a "product" that meets that journal's quality standards.)
> Of course, information about the content of the paper is paramount
> (title, author, institution, abstract, keywords,
> commentary, outline, references etc.), but there is far more that
> could be useful. Examples of such are: originality, importance,
> empirical content, accessibility to non-experts, clarity of argument,
> readability of language, presentation, amount of previous work
> assumed etc. These are regularly assessed by referees, both explicitly
> on many review forms and implicitly in comments, but this useful
> information is not usually passed on to help readers find the
> information they want. I know that a "star" system is crude (albeit
> less crude than the present system) - I am arguing for richer
> information to be made available to the reader, especially where it is
> in a form which could be utilised in database-style queries and
> user-specific settings.
Most referee evaluation forms have an implicit "star" system:
***** Accept with no revision
**** Accept with minor revision
*** Accept conditional on major revision
** Require revision and re-refereeing
* Recommend submission elsewhere
0 Reject outright
Surely these stars, together with the reports, self-archived by the
authors with their papers, should accomplish everything you regard as
desirable, with no need whatsoever to tamper with classical peer
review.
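To make this concrete, here is a minimal sketch of how such
self-archived verdicts could support the database-style queries asked
for above. The record structure, the sample entries and the URLs are
hypothetical illustrations, not an existing service:

    # Minimal sketch: filtering self-archived papers by referee verdict.
    # Records, entries and URLs are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ArchivedPaper:
        title: str
        stars: int          # 0-5, per the referee scale above
        report_urls: list   # self-archived referee reports

    archive = [
        ArchivedPaper("Paper A", 5, ["http://example.org/reports/a1"]),
        ArchivedPaper("Paper B", 2, ["http://example.org/reports/b1"]),
        ArchivedPaper("Paper C", 4, ["http://example.org/reports/c1"]),
    ]

    # Query: papers accepted with at most minor revision (4+ stars), best first.
    accepted = sorted((p for p in archive if p.stars >= 4),
                      key=lambda p: p.stars, reverse=True)
    for p in accepted:
        print(p.stars, p.title, p.report_urls[0])

Richer fields (clarity, empirical content, accessibility and so on, as
listed above) could be added to the record in just the same way.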
> D) The system where a single paper is judged once for all audiences
> belies the fact that the same paper will, in effect, be of a different
> quality for different audiences. If journals are not going to own and
> hold the papers, but concentrate on selecting them, then there is no
> reason why different boards should not review the same paper for
> different audiences. Some mechanism could easily be developed so people
> knew it was the same paper. I know that there is a current prohibition
> against repeat publication, but this is for reasons that are now
> defunct.
Why/how are those reasons now defunct? They will only become defunct
when expert referee time ceases to be a scarce resource. But with the
growth of the literature, nothing like that is in sight, particularly
given the frank profligacy of your own scheme with notional referee
time (I say notional because I think all the indications are that
referees would not perform in a scheme like yours). Referees already refuse
many more calls on their expertise than they accept. And it is
inevitably the identity and prestige of the journal and the editor
calling for their time that occasionally tilts the balance. (Author
identity sometimes does the trick too, but again, that only applies to
the high-profile top end of the distribution, not to the anonymous mass
in the middle; yet that mass is the one that needs the quality control
most!)
> E) I do not think that my suggested system would end up with more work
> for reviewers. Authors will be wary of seeing their work get a low
> assessment in public and will adjust their output to suit. Also each
> board would quickly devise their own rules for limiting the amount of
> work to the right level.
Authors who get bad reports can always suppress them; most authors just
want their work published, and would happily ignore negative referee
reports if they were not answerable to an editor who will not ignore
them. These issues are bigger than peer review: They have to
do with human nature, expertise, quality control, the bell curve, and
the pitfalls of any self-policing system.
Your scheme is not well thought out: If I submit my paper to a "board"
and get back low ratings and no stars, am I obliged to live forever
with that publicly advertised verdict? How many authors do you think
would even step INTO such a muddy stream? If the answer is accordingly
"no," and the only price I pay for suppressing bad reports is that my
paper cannot appear with that board's imprimatur, then why should I not
post my paper publicly without the imprimatur while I go shopping
around for more boards and more stars?
There are many ways to pick and choose to make oneself look good in a
free-for-all like this.
(If your scheme had been submitted to a refereed journal instead of a
public bulletin board like this, these are the kinds of feedback you
would get from your referees, forcing you to think it out more fully
before going public with it -- at least public wearing the imprimatur
of that journal. That's the difference between peer review and open
commentary -- and, as I've had many occasions to say, if anyone on
this planet should know the difference, it is me, after 20 years of
implementing and comparing both in the running of the two refereed
journals of open peer commentary that I edit, BBS and Psycoloquy,
one in paper and one online.)
http://www.princeton.edu/~harnad/bbs.html
http://www.princeton.edu/~harnad/psyc.html
> F) I do not think it would end up with readers losing the reliability
> of sources of quality papers. I guess that a hierarchy of boards
> similar to that of journals would spring up, each offering different
> styles of assessment, browsing tools and review rigour (in fact I
> would guess there would be a greater variety than among journals,
> because the expertise in mark-up would not be needed). The top end
> would undoubtedly be there.
Your boards lack the one critical ingredient of peer review and quality
control: answerability. Not answerability in the sense of bearing the
public stigma of a "star" system, or even public posting of referee
reports, but answerability in the sense of having to be responsive to
the recommendations of the referees, as implemented by the Editor, in
order to receive the "star" of appearing as a refereed article at all.
Without that answerability, it all just becomes a public beauty
contest. I would hate to have to stake, say, an urgent medical
treatment on a literature like that. (Is the rest of Learned Inquiry so
much less of a life-and-death matter? Even eggs are quality-controlled.)
> Comments:
>
> There are already reviewing services for web pages (e.g. Magellan or
> Encyclopaedia Britannica); it is just that they have not been combined
> with paper archives and web-search engines. It is only a matter of
> time before they are. The explosion of available information and
> papers will hasten that day.
Let us now treat the illnesses of those who are near and dear to us
on the basis of the reviews that the relevant papers got in Magellan or
EB, rather than from the expert referees of Lancet or BMJ. Or let
Lancet or BMJ just assign them stars, and we'll take it from there...
> Debating which would be better is literally academic: if the systems
> were running side by side, readers, reviewers and authors would soon
> vote with their feet. The system would only adjust to the new if it
> suited people.
This sounds democratic, but it's a bit like saying: Let's not debate
the merits of the present policing system versus my proposed self-policing
alternative. Let's let people choose!
No, the debate is not academic. It is not academic to say that you need
EMPIRICAL evidence that your scheme could do at least as well as
classical peer review in assuring the quality of the literature. Without
that, your speculations are merely academic.
Stevan Harnad
Received on Wed Feb 10 1999 - 19:17:43 GMT