Re: The Economist: Publish and perish
On Wed, 27 Nov 2002, Arkadiusz Jadczyk <ark_at_CASSIOPAEA.ORG> wrote:
> But the main problem in this thread is the procedure of peer reviewing
> and what to do about it. For me the action of the editor of Classical
> and Quantum Gravity is just funny. If they are really serious, they
> should re-review all the papers published in the journal, because there
> will be more that are equally or even more controversial. Of course
> they will not do it. Who would do it?
Such systematic re-reviewing has been done: in
physics by Conyers Herring, and in clinical
medicine by a Canadian team led by Walter O. Spitzer.
Both found considerable amounts of bad research. The
fact that Spitzer required the assistance of a "task
force" to screen and evaluate all the citations on
a common diagnosis is ample evidence in itself that
the volume and specialization of published research
can be beyond the reach of any individual reader.
This, in fact, has been called the major challenge
facing any research project. Spitzer also intended
his comprehensive review to provide all interested
researchers with an authentic fresh starting point.
One observation made in connection with the
re-review was that peer evaluations change over
time as new information is disseminated. The
other was that a great deal of poorly designed,
poorly executed, and duplicative research is not
only done but then published. (Some solace may come
from the fact that most authors publish no more
than one or two papers, never to be heard from again.)
Research is far more expensive to conduct than to publish.
Why are so many resources wasted on useless and
misleading research? Why waste our attention on
nickel-and-dime issues like publication peer
review when grant review panels permit hundred-dollar
bills to be blown away by research sponsors?
The U.S. General Accounting Office studied peer
review at the proposal stage, where much waste
could be avoided. In 1994 it reported: "Although most
reviewers reported expertise in the general areas
of the proposals they reviewed, many were not
expert on closely related questions and could cite
only a few, if any, references. This lack of
proximate expertise was most pronounced at NIH.
However, although this raises questions about the
relative adequacy of NIH reviews and ratings, the
greater proximity of NSF reviewers makes them
potentially more vulnerable to apparent or actual
self-interest in their reviews." Moreover, the
report noted that considerable research is financed
with no review, thanks to Congressional earmarks
and agency policy.
A low point in peer review was probably reached
when a research subject at Johns Hopkins died as
a result of researchers and referees failing to
study the scientific record adequately at the
proposal stage. As it is, I wonder whether the
project had any merit at all. If Hopkins had done
a Spitzer-style review, a life would have been
spared and the research would have had a better
chance of reaching reliable conclusions.
Critics of peer review might well concentrate on
the institutional conflict of interest, the motive
that makes grant income more important than
productivity. The universities that do the
research are also responsible for most reviews.
Wouldn't a low tolerance for poor preparation hurt
their pockets?
My impression is that publishers' peer review is
generally no better than the review that supports
the research. The scientific record is not perfect.
At least it demonstrates an effort to filter out
amateurs, quacks, and poorly prepared contributions.
If there is a weakness, editors point out, it is
their bias against publishing negative results,
reports that might save other researchers from going
down blind alleys.
The open "archive" movement, on the other hand,
welcomes unreviewed contributions, mixing them
with the scientific record. While informal
exchanges of information -- conference papers,
letters, preprints, face-to-face conversations --
are essential, the admission of such material
to "archives" has created some confusion. When
they are cited (as if they were published in
the scholarly sense) we see the authors, in vain
hopes of seeing further, climbing on the backs
of little people sinking in the mud.
> I know physicists who say that 90% of the papers published in Phys
> Rev A are junk. My estimate is 40%. It is easier to sort things
> out in mathematical journals. My own estimate is that perhaps only
> 1% of the papers in Communications in Mathematical Physics are junk.
> I can be wrong, of course. Judging from my own experience, it is
> good to have a variety of journals. Where I submit depends on the
> paper: how much time I am going to spend on it, and whether it is
> a technical paper that will survive any scrutiny, or a more
> speculative or controversial one that will certainly make some
> referees hostile because it presents a competing theory. Referees
> and editorial boards consist of human beings, and sometimes (often?)
> will lack either the necessary objectivity or patience.
> It is good to have highly ranked journals that are difficult to
> publish in, but sometimes, when in the library in search of "fresh
> and crazy ideas" to fuel my own thinking, I browse through
> "low-rank" and "exotic" journals, sometimes with success.
Such use of a wide range of resources, including
low-ranked journals, by Müller and Bednorz in
their Nobel prize-winning work was documented in
American Scientist in 1996 by Gerald Holton et al.
Notably, Müller and Bednorz were highly secretive,
not disclosing their findings even to colleagues
at IBM until they had published in the real sense.
Best wishes,
Albert Henderson
Former Editor, PUBLISHING RESEARCH QUARTERLY 1994-2000
<70244.1532_at_compuserve.com>