My response to Harnad's points (as also submitted to PLoS today) follows below.
Gunther Eysenbach MD, MPH
Senior Scientist, Centre for Global eHealth Innovation
Division of Medical Decision Making and Health Care Research;
Toronto General Research Institute of the UHN;
Associate Professor,
Department of Health Policy, Management and Evaluation, University of Toronto;
Mailing address:
Centre for Global eHealth Innovation
Toronto General Hospital
R. Fraser Elliott Building, 4th Floor, room # 4S435,
190 Elizabeth Street
Toronto, ON M5G 2C4
telephone (+1) 416-340-4800 Ext. 6427
fax (+1) 416-340-3595
geysenba_at_uhnres.utoronto.ca
Personal:
http://yi.com/ey/
eHealth Centre:
http://www.uhnres.utoronto.ca/ehealth/
Journal of Medical Internet Research:
http://www.jmir.org
-------------------------------------------------------------------------
Authors' Response
The introduction of the article and the two accompanying editorials [1-3]
already answer Harnad's question of why the author, editors, and reviewers
were critical of the methodology employed in previous studies, all of which
looked only at "green OA" (self-archived/online-accessible) papers (hint 1:
"confounding"; hint 2: the arrow of causation - are papers online because
they are highly cited, or the other way round?). The statement in the PLoS
editorial has to be seen against this background. None of the previous
papers in the bibliography mentioned by Harnad employed a similar
methodology, working with data from a "gold OA" journal.
The correct method to control for problem 1 (multiple confounders) is
multivariate regression analysis, which was not used in previous studies.
Harnad's statement that "many [of the confounding variables] are peculiar to
this particular [..] study" suggests that he might still not fully
appreciate the issue of confounding. Does he suggest that in his samples
there are no differences in these variables (for example, number of authors)
between the groups? Did he even test for these? If he did, why was this not
described in those previous studies?
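To illustrate the point about confounding in general terms (this is a toy
simulation with invented numbers, not data from the study; "number of
authors" as the confounder is taken from the example above): if a variable
drives both OA status and citations, a crude group comparison overstates the
OA effect, while a multivariate regression that includes the confounder
recovers an estimate near the true one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical setup: the confounder (number of authors) raises both the
# probability of being OA and the citation count. Illustrative values only.
authors = rng.poisson(4, n) + 1
oa = (rng.random(n) < 0.2 + 0.05 * authors).astype(float)
true_oa_effect = 1.0
citations = 2.0 * authors + true_oa_effect * oa + rng.normal(0, 2, n)

# Crude (unadjusted) comparison of group means overstates the OA effect:
crude = citations[oa == 1].mean() - citations[oa == 0].mean()

# Multivariate regression (OLS on intercept, OA status, and the
# confounder) recovers an estimate close to the true effect:
X = np.column_stack([np.ones(n), oa, authors])
beta, *_ = np.linalg.lstsq(X, citations, rcond=None)

print(f"crude difference: {crude:.2f}")
print(f"adjusted OA coefficient: {beta[1]:.2f} (true effect: {true_oa_effect})")
```

The same logic applies to any confounder that differs between the OA and
non-OA groups, which is why testing for (and adjusting for) such differences
matters.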
The correct method to address problem 2 (the "arrow of causation" problem)
is to do a longitudinal (cohort) study, as opposed to a cross-sectional
study. This ascertains that OA comes first and only THEN is the paper cited
highly, whereas previous cross-sectional studies in the area of "green OA"
publishing (self-archiving) leave open what comes first - impact or being
online.
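The reverse-causation problem can also be made concrete with a toy
simulation (invented numbers, purely illustrative): even when OA status has
NO causal effect on citations, a world in which authors preferentially
self-archive their already well-cited papers produces an apparent "OA
advantage" in any cross-sectional snapshot.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Toy model: citations cause archiving (citations -> OA), not the reverse.
citations = rng.poisson(10, n).astype(float)
p_archive = citations / citations.max()   # well-cited papers archived more often
oa = rng.random(n) < p_archive

# A cross-sectional snapshot nonetheless shows an apparent "OA advantage":
advantage = citations[oa].mean() - citations[~oa].mean()
print(f"apparent OA citation advantage: {advantage:.2f}")

# A longitudinal (cohort) design fixes OA status at publication time,
# before any citations have accrued, so this reverse path cannot operate.
```

This is exactly the ambiguity that a cohort design, following articles
forward from the moment of publication, is built to remove.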
Harnad - who usually carefully distinguishes between "green" and "gold" OA
publishing - ignores that open access is a continuum, much as publishing is
a continuum [4], and this study (and the priority claims in the editorial)
was talking about the gold-OA end of the spectrum. Publishing in an open
access journal is a fundamentally different process from putting a paper
published in a toll-access journal on the Internet. By analogy, printing
something on a flyer and handing it out to pedestrians on the street, and
publishing an article in a national newspaper, can both be called
"publishing", but they remain fundamentally different processes, with
differences in impact, reach, etc. A study looking at the impact of
publishing a newspaper cannot be replaced by a study looking at the
impact of handing out a flyer to pedestrians, even though both are about
"publishing".
Finally, Harnad says that "prior evidence derived from substantially larger
and broader-based samples showing substantially the same outcome". I rebut
this with two points.
Regarding "larger samples": I think rigor and quality (leading to internal
validity) are more important than quantity (or sample size). Going through
the laborious effort of extracting article and author characteristics for a
limited number of articles (n=1492) in order to control for these
confounders provides scientifically stronger evidence than doing a crude,
unadjusted analysis of a huge number of online-accessible vs non-online-
accessible articles, which leaves open many alternative explanations.
Secondly, contrary to what Harnad said, this study is NOT at all "showing
substantially the same outcome". On the contrary, the effect of green OA -
once controlled for confounders - was much smaller than what others have
claimed in previous papers. Harnad, a self-confessed "archivangelist",
co-creator of a self-archiving platform, and an outspoken advocate of
self-archiving (speaking of vested interests), calls the finding that
self-archived articles are [...] cited less often than [gold] OA articles
from the same journal "controversial". To my mind, the finding that impact
follows the ordering non-OA < green OA < gold OA < green+gold OA is
intuitive and logical: the level of citations correlates with the level of
openness and accessibility.
Sometimes our egos stand in the way of reaching a larger common goal, and I
hope Harnad and other sceptics respond to these findings with good science
rather than with polemics and politics. Unfortunately, in this area many
more people have strong opinions and beliefs than have the skills, time,
and willingness to do rigorous research. I hope we will change this, and I
reiterate a "call for papers" in that area [3].
References
1. Eysenbach G. Citation Advantage of Open Access Articles. PLoS Biol.
2006;4(5):e157.
OPEN ACCESS: http://dx.doi.org/10.1371/journal.pbio.0040157
2. MacCallum CJ, Parthasarathy H. Open Access Increases Citation Rate. PLoS
Biol. 2006;4(5):e176.
OPEN ACCESS: http://dx.doi.org/10.1371/journal.pbio.0040176
3. Eysenbach G. The Open Access Advantage. J Med Internet Res 2006 (May 15);
8(2):e8.
OPEN ACCESS: http://www.jmir.org/2006/2/e8/
4. Smith R. What is publication? [editorial]. BMJ 1999;318:142.
OPEN ACCESS: http://bmj.bmjjournals.com/cgi/content/full/318/7177/142
Received on Sat May 20 2006 - 14:42:27 BST