Date: Fri, 7 Jul 2006 18:15:44 +0100 (BST)
From: [journalist, identity deleted]
> I am a journalist at [deleted]... investigating the phenomenon of
> [journal self-citation bias]
>
> I wonder if you could let me know... whether
> any journals... have rules that state that the author of a
> paper it publishes must cite other papers published in its journal.
I have heard rumours, several times now, that some journals have a
policy of encouraging or even requiring their authors to cite papers
in the same journal, in order to raise the journal's citation impact. I
do not have evidence of this, though others might. (I am branching the
query to the sigmetrics list.)
> Also, do you know of any academics who...
> have agreed to cite colleagues if they cite him/her?
That's even harder to track down, but soon it will be possible to track
both: There will be "endogamy/exogamy" indices for articles, authors and
journals, reflecting the degree to which their citation impact comes
from (1) self-citations, (2) citations to and from the same circle
of authors or co-authors, (3) citations to and from the same journal,
or small closed circle of journals, together with (4) a comparison of this
pattern against other comparable authors, papers and journals, matched as
closely as possible for subject matter and citation level.
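Indices (1)-(3) can be sketched over a citation graph. The following is a minimal illustration with made-up data; the paper/author/journal tables and the function name are hypothetical, not any real citation-database schema.

```python
# Sketch of the "endogamy" indices (1)-(3) over a toy citation graph.
# All data below are invented for illustration.
from itertools import chain

# paper -> (journal, set of authors)
papers = {
    "p1": ("J-A", {"smith", "jones"}),
    "p2": ("J-A", {"smith"}),
    "p3": ("J-B", {"lee"}),
    "p4": ("J-B", {"jones", "lee"}),
}

# (citing paper, cited paper) edges
citations = [("p1", "p2"), ("p1", "p3"), ("p4", "p2"), ("p4", "p3")]

def endogamy_indices(papers, citations):
    """Fractions of citations that are (1) author self-citations,
    (2) within the same co-author circle, (3) within the same journal.
    Note (1) is a subset of (2): a self-citation stays in the circle."""
    n = len(citations)
    self_cite = coauthor_circle = same_journal = 0
    for citing, cited in citations:
        j1, a1 = papers[citing]
        j2, a2 = papers[cited]
        if a1 & a2:                  # (1) shared author = self-citation
            self_cite += 1
        # (2) circle = everyone who has co-authored with the citing authors
        circle = set(chain.from_iterable(
            auth for _, auth in papers.values() if auth & a1))
        if a2 & circle:
            coauthor_circle += 1
        if j1 == j2:                 # (3) same journal
            same_journal += 1
    return (self_cite / n, coauthor_circle / n, same_journal / n)

print(endogamy_indices(papers, citations))  # (0.5, 1.0, 0.5)
```

In this toy graph half the citations are author self-citations, all stay within one tightly co-authoring circle, and half stay within the citing journal; index (4) would then compare these fractions against matched baseline authors, papers and journals.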
Such studies are already possible, in principle, using the ISI citation
database, but its coverage is incomplete: it covers only about the top
quarter of the journals published (roughly 8,000 of 24,000). Once the research
institutions and funders mandate that their research journal article
output must be made openly accessible free for all on the web, it will
be possible to do exhaustive and rigorous analyses for (1)-(4) and much
more:
Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open
Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs,
N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects,
Chapter 21. Chandos.
http://eprints.ecs.soton.ac.uk/12369/
Practices that are openly detectable are also name-and-shame-able. Hence Open
Access is the best way both to monitor and to discourage dubious practices.
Open Access will maximize legitimate research impact and its measurement, while
minimizing abuses.
Stevan Harnad
> Excerpts from the Wall Street Journal
>
> Science Journals Artfully Try To Boost Their Rankings
> By SHARON BEGLEY June 5, 2006; Page B1
>
> John B. West... Distinguished Professor of Medicine and Physiology at the
> University of California, San Diego ...
> submitted a paper on the design of the human lung to the American
> Journal of Respiratory and Critical Care Medicine. [A]n editor emailed him
> that the paper was basically fine. There was just one thing: Dr. West
> should cite more studies that had appeared in the respiratory journal.
>
> ...Scientists and editors say scientific
> journals increasingly are manipulating rankings -- called "impact
> factors" -- that are based on how often papers they publish are cited by
> other researchers.
>
> ...Impact factors are calculated annually for some 5,900 science journals
> by Thomson Scientific, part of the Thomson Corp., of
> Stamford, Conn. Numbers less than 2 are considered low. Top journals,
> such as the Journal of the American Medical Association, score in the
> double digits. Researchers and editors say manipulating the score is
> more common among smaller, newer journals, which struggle for visibility
> against more established rivals.
>
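For concreteness (my gloss, not part of the WSJ excerpt): the two-year impact factor Thomson calculates for year Y is the number of citations received in Y to items the journal published in years Y-1 and Y-2, divided by the number of citable items it published in those two years. The figures below are invented for illustration.

```python
# Two-year impact factor for year Y:
#   citations in Y to items published in Y-1 and Y-2,
#   divided by citable items published in Y-1 and Y-2.
def impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
    return cites_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 240 citations in 2006 to its 2004-2005 papers,
# 120 citable items published in 2004-2005.
print(impact_factor(240, 120))  # 2.0 -- just at the "low" threshold
```

The denominator is what makes the score gameable: a journal can raise it by soliciting extra citations to its own recent papers (the numerator) without publishing more citable items.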
> ...Impact factors matter to publishers' bottom lines because librarians
> rely on them to make purchasing decisions...
>
> ...Self-citation can go too far. In 2005, Thomson Scientific dropped the
> World Journal of Gastroenterology from its rankings because 85% of the
> citations it published were to its own papers and because few other
> journals cited it....
>
> Journals can limit citations to papers published by competitors, keeping
> the rivals' impact factors down...
>
> Journals' "questionable" steps to raise their impact factors "affect the
> public," Ms. Liebert says. "Ultimately, funding is allocated to
> scientists and topics perceived to be of the greatest importance. If
> impact factor is being manipulated, then scientists and studies that
> seem important will be funded perhaps at the expense of those that seem
> less important."
Received on Sat Jul 08 2006 - 19:14:22 BST