On Sun, 3 Jun 2007, Loet Leydesdorff wrote:
> > "All current university rankings are flawed to some extent; most,
> > fundamentally,"
>
> The problem is that institutions are not the right unit of analysis for the
> bibliometric comparison because citation and publication practices vary
> among disciplines and specialties. Universities are mixed bags.
Yes and no. It is correct that the right unit of analysis is the field, or even
the subfield, of the research being compared. But it is also true that in
comparing universities one is comparing their field and subfield coverage.
The general way to approach this problem is with a rich and diverse set of
predictor metrics, combined in a joint multiple regression equation that can
adjust the weighting of each depending on the field, and on the use to which
the spectrum of metrics is being put: there can, for example, be "discipline
coverage" metrics (from narrow to wide) as well as "field size" and
"institutional size" metrics, whose regression weights can be adjusted
depending on what the equation is being used to predict, and hence to rank.
The differential weightings can then be validated against other means of
ranking (including expert judgments).
Harnad, S. (2007) Open Access Scientometrics and the UK Research
Assessment Exercise. Invited Keynote, 11th Annual Meeting of the
International Society for Scientometrics and Informetrics. Madrid,
Spain, 25 June 2007
http://arxiv.org/abs/cs.IR/0703131
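To make the weighting idea concrete, here is a minimal sketch (in Python, on
entirely hypothetical data, with made-up metric names and fields): a small
battery of candidate metrics is regressed, field by field, onto an external
criterion such as a peer-review panel score, so that each field gets its own
weights rather than a single a-priori formula.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical metric battery for 200 research groups in two fields.
    # Columns: citations, downloads, coverage breadth, group size (all invented).
    fields = np.repeat(["physics", "sociology"], 100)
    X = rng.gamma(shape=2.0, scale=1.0, size=(200, 4))
    # Hypothetical external criterion (e.g. a peer-review panel score) to predict.
    y = X @ np.array([0.5, 0.2, 0.1, 0.3]) + rng.normal(scale=0.2, size=200)

    weights = {}
    for field in np.unique(fields):
        mask = fields == field
        Xf = np.column_stack([np.ones(mask.sum()), X[mask]])  # add an intercept column
        # Ordinary least squares: the weights that best reproduce the criterion
        # for this field's groups.
        w, *_ = np.linalg.lstsq(Xf, y[mask], rcond=None)
        weights[field] = w

    for field, w in weights.items():
        print(field, np.round(w, 3))

(The metrics, fields and numbers are placeholders; the point is only that the
weights are estimated jointly, per field, against an external criterion.)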
> Our Leiden colleagues try to correct for this by normalizing on the journal
> set which the group uses itself, but one can also ask whether the group is
> using the best possible set given its research profile. Should one not first
> determine a journal set and then compare groups within it?
The three things that are needed are (1) a far richer and more diverse set of
potential metrics, (2) assurance that like is being compared with like, and (3)
validation of the ranking against face-valid external criteria, so that the
metrics can eventually function as benchmarks and norms.
None of this can be done a priori; the methodology is similar to that of
validating batteries of psychometric or biometric tests: correlate the joint
set of metrics with external, face-valid criteria, and adjust their respective
weights accordingly.
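As a toy illustration of that validation step (again Python, hypothetical
data): calibrate the weights on one subsample, then check how faithfully the
resulting weighted ranking reproduces the external, face-valid criterion
(expert rankings, say) on a held-out subsample.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)

    # Hypothetical metric battery and external expert score for 120 research groups.
    X = rng.gamma(2.0, 1.0, size=(120, 4))
    expert = X @ np.array([0.4, 0.3, 0.1, 0.2]) + rng.normal(scale=0.3, size=120)

    # Calibrate the weights on one half of the sample ...
    train, test = np.arange(60), np.arange(60, 120)
    Xtr = np.column_stack([np.ones(60), X[train]])
    w, *_ = np.linalg.lstsq(Xtr, expert[train], rcond=None)

    # ... then validate the ranking those weights produce on the held-out half.
    predicted = np.column_stack([np.ones(60), X[test]]) @ w
    rho, p = spearmanr(predicted, expert[test])
    print(f"Rank correlation with the expert criterion: rho={rho:.2f} (p={p:.3g})")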
It is unlikely, however, that the relevant and predictive frame of
reference and basis of comparison will be journal sets. Breadth or narrowness
of journal coverage is just one among many, many potential parameters. The
interest is in comparing researchers and research groups or institutions,
within or across fields. The journal does carry some predictive and
normative power in this, and it is one indirect way of equating for field,
but it is only one among the many ways that one might wish to weight -- or
equate -- metrics, particularly in an Open Access database in which all
journals (and all individual articles and all individual researchers), along
with their respective download, citation, co-citation, hub/authority,
consanguinity, chronometric, and many other metrics, are all available for
weighting, equating, and validating.
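By way of example, one of the link-based metrics named above, hub/authority
scores, can be computed directly from an open citation graph. The toy sketch
below (Kleinberg's HITS power iteration on a hypothetical four-paper graph)
shows the kind of computation an OA corpus makes routine; the graph and
numbers are invented.

    import numpy as np

    # Toy citation graph: A[i, j] = 1 if paper i cites paper j (hypothetical data).
    A = np.array([
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
        [1, 0, 1, 0],
    ], dtype=float)

    hubs = np.ones(4)
    auths = np.ones(4)
    for _ in range(50):                  # power iteration until the scores settle
        auths = A.T @ hubs               # authoritative = cited by good hubs
        hubs = A @ auths                 # a good hub = cites authoritative papers
        auths /= np.linalg.norm(auths)
        hubs /= np.linalg.norm(hubs)

    print("authority scores:", np.round(auths, 3))
    print("hub scores:      ", np.round(hubs, 3))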
What we have to remember is that the imminent Open Access (OA) world
is incomparably wider and richer -- and more open -- than the narrow,
impoverished classical-ISI world to which we were constrained in the
Closed Access paper-based era.
> Furthermore, Brewer et al. (2001) made the point that one should also
> distinguish between prestige and reputation. Reputation is field specific;
> prestige is more historical. (Brewer, D. J., Gates, S. M., & Goldman, C. A.
> (2001). In Pursuit of Prestige: Strategy and Competition in U.S. Higher
> Education. Piscataway, NJ: Transaction Publishers, Rutgers University.)
This is still narrow journal- and journal-average-centred thinking. Yes,
journals will still be the entities in which papers are published, and journals
will vary both in their field of coverage and their quality, and this can and
will be taken into account. But those variables constitute only a small fraction
of OA scientometric and semiometric space.
Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open
Research Web: A Preview of the Optimal and the Inevitable. In:
Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic
Aspects. Chandos.
http://eprints.ecs.soton.ac.uk/12453/
> Many of the evaluating teams are institutionally dependent on the contracts
> for the evaluations. Quis custodiet ipsos custodes?
OA itself is transparency's, diversity's and equitability's best defender.
Stevan Harnad