"Statistics to dominate research assessment"
Donald MacLeod, Guardian (Education), Tuesday June 13, 2006
http://education.guardian.co.uk/RAE/story/0,,1796532,00.html
As announced earlier, the costly and time-consuming UK Research Assessment
Exercise will be scrapped as of 2008 and replaced by "metrics".
This is a splendid move overall, both for UK researchers and institutions
(who can at last stop wasting all that research and researcher time
preparing elaborate RAE returns that are already highly correlated with
-- hence predictable from -- metrics that can be gathered cheaply and
semi-automatically, and can devote that time and energy to doing the
research itself instead!) and for the Open Access movement (because
the Open Access database will be the richest source for deriving those
metrics, and will even contribute to increasing some of the metric
values themselves!). (I hope the RCUK will now take a cue from the RAE
and at long last adopt its long-promised and long-awaited proposal to
mandate OA self-archiving for all RCUK-funded research, as recommended
by the UK Select Committee on Science and Technology!)
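(A concrete way to check the correlation claim above, for any given
metric: a minimal sketch in Python, with invented departments, grades
and citation counts -- a real validation would of course use actual RAE
returns and actually harvested metrics.)

    # Minimal sketch: does a cheaply harvested metric predict the
    # expensive RAE panel grade? All numbers below are invented.
    from scipy.stats import spearmanr

    # Hypothetical departments: panel grades vs. citation counts
    panel_grade    = [5.0, 4.0, 5.5, 3.0, 4.5, 5.0, 3.5, 4.0]
    citation_count = [820, 410, 960, 150, 530, 700, 220, 380]

    rho, p = spearmanr(panel_grade, citation_count)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
    # A high rho means the costly panel ranking is largely
    # recoverable from the cheap metric alone.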
But which metrics? The RAE outcomes (and their accompanying top-sliced
research funding) are most highly correlated with prior research funding.
But relying on that metric alone, or predominantly, would just make
the RAE into a one-dimensional Matthew Effect and a Self-Fulfilling
Prophecy instead of a semi-independent assessment, supplementing the
research-proposal peer review that already goes into primary research
funding. If the UK does that, it may as well scrap the RAE altogether
and simply increase the size of the primary grants it awards.
But that would be foolish: it would throw out the baby with the
bathwater. The dual funding system should be retained. The sensible
way to use metrics is to have a rich, diverse, multiple-regression
equation of weighted assessment metrics, adjusting the weight of each
metric by field and in the light of further analysis and experience.
For this there are many candidate metrics over and above prior grant
funding: citation counts, download counts, co-citation counts,
hub/authority counts, semantic-web measures, endogamy/exogamy indices,
and many other rich new harvestable metrics that will be spawned by
the Open Access digital database of all UK research output. (Many
countable metrics, such as doctoral student counts, patents and invited
keynotes, can be listed in and harvested from a standardised RAE CV
linked to each researcher's OA Institutional Repository.) A sketch of
such a weighted equation follows the references below:
Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open
Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs,
N. (ed.) Open Access: Key Strategic, Technical and Economic Aspects,
chapter 21. Chandos.
http://eprints.ecs.soton.ac.uk/12453/
Harnad, S., Carr, L., Brody, T. and Oppenheim, C. (2003) Mandated
online RAE CVs Linked to University Eprint Archives. Ariadne 35.
http://eprints.ecs.soton.ac.uk/7725/
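Here, to make the proposal concrete, is a minimal sketch of what fitting
such a weighted multi-metric equation could look like: past panel grades
are regressed on a battery of candidate metrics, and the fitted weights
then predict grades from metrics alone. The metric names and all numbers
are illustrative assumptions, not an actual RAE specification.

    # Minimal sketch: fit field-specific weights by regressing past
    # panel grades on candidate metrics (all data invented).
    import numpy as np

    # One row per department:
    # [citations, downloads, co-citations, prior grant income (GBP m)]
    X = np.array([
        [820, 5100,  95, 2.1],
        [410, 2300,  40, 1.2],
        [960, 7800, 120, 3.0],
        [150,  900,  12, 0.4],
        [530, 3100,  60, 1.6],
        [700, 4200,  80, 1.9],
    ], dtype=float)
    y = np.array([5.0, 4.0, 5.5, 3.0, 4.5, 5.0])  # past panel grades

    # Standardise each metric so the fitted weights are comparable
    # across wildly different scales, then add an intercept column
    # and solve the ordinary least-squares problem.
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    A = np.column_stack([Xz, np.ones(len(y))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)

    print("fitted metric weights:", w[:-1])
    print("intercept:", w[-1])
    # A new department's predicted grade is its standardised metric
    # row (plus the intercept) dotted with w; as proposed above, the
    # weights would be re-fit per field and adjusted with experience.

The standardisation step matters: raw citation and download counts live
on very different scales, so without it the fitted weights could not be
compared or adjusted metric by metric.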
One last word about "peer review": Research grant proposals are
peer-reviewed; journal articles are peer-reviewed. But RAE submissions
are not, and never were "peer-reviewed": The submissions (4 already
peer-reviewed articles plus a congeries of other evaluables) were
"assessed" by an RAE panel of peers in each field. But skimming, reading
or re-reviewing already peer-reviewed articles is not only not peer
review, but it is a waste of the RAE panel's time. Research proposal
submissions and journal paper submissions have each been peer-reviewed
already by content-specific custom selection among the most relevant
and best qualified experts in the world, not just one small RAE panel
(although this of course depends on the quality standards of the grant
funding council and especially the journal that peer-reviews the journal
article -- hence journal parameters such as the journal's impact factor
should be among the metrics in the RAE weighted metric equation).
Hence the needless, blunt RAE re-review never made much sense,
and it is about time it was replaced by the objective post-hoc metrics
that already predict its outcome. Peer-reviewing once -- properly -- at
the research proposal stage, and once again -- properly -- at the journal
publication stage, is enough. The rest is far better assessed by post-hoc
metrics.
"Future UK RAEs to be Metrics-Based"
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#5275
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#5251
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#5238
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#5122
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#4455
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2326
Stevan Harnad
AMERICAN SCIENTIST OPEN ACCESS FORUM:
A complete Hypermail archive of the ongoing discussion of providing
open access to the peer-reviewed research literature online (1998-2005)
is available at:
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/
To join or leave the Forum or change your subscription address:
http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
Post discussion to:
american-scientist-open-access-forum@amsci.org
UNIVERSITIES: If you have adopted or plan to adopt an institutional
policy of providing Open Access to your own research article output,
please describe your policy at:
http://www.eprints.org/signup/sign.php
UNIFIED DUAL OPEN-ACCESS-PROVISION POLICY:
BOAI-1 ("green"): Publish your article in a suitable toll-access journal
http://romeo.eprints.org/
OR
BOAI-2 ("gold"): Publish your article in a open-access journal if/when
a suitable one exists.
http://www.doaj.org/
AND
in BOTH cases self-archive a supplementary version of your article
in your institutional repository.
http://www.eprints.org/self-faq/
http://archives.eprints.org/
http://openaccess.eprints.org/