On Mon, 25 Nov 2002, Jan Velterop wrote:
> Apropos of the Research Assessment Exercise, the policy director
> (Bahram Bekhradnia) of the Higher Education Funding Council, which
> carries out the RAE, recently sent me this response to a question some
> of our authors have been asking, worried about the possible significance
> of a journal's Impact Factor in the context of the RAE:
>
> "Where an article is published is an irrelevant issue. A top
> quality piece of work, in a freely available medium, should get
> top marks. The issue is really that many assessment panels use
> the medium of publication, and in particular the difficulty of
> getting accepted after peer review, as a proxy for quality. But
> that absolutely does not mean that an academic who chooses to
> publish his work in an unorthodox medium should be marked down.
> At worst it should mean that the panel will have to take rather
> more care in assessing it."
A rather complicated statement, but meaning, I guess, that the RAE is
assessing quality, and does not give greater weight to on-paper journal
publications than to online journal publications. This is nothing new;
it has been the RAE's announced policy since at least 1995:
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Theschat/0033.html
HEFCE Circular RAE96 1/94 para 25c states:
"In the light of the recommendations of the Joint Funding Councils'
Libraries Review Group Report (published in December 1993) refereed
journal articles published through electronic means will be treated
on the same basis as those appearing in printed journals."
This is the result of adopting the following recommendation in Chapter 7
of that Libraries Review ("Librev") Report:
"289. To help promote the status and acceptability of electronic
journals, the Review Group also recommends that the funding councils
should make it clear that refereed articles published electronically
will be accepted in the next Research Assessment Exercise on the
same basis as those appearing in printed journals."
But I would be more skeptical about the implication that it is the RAE
assessors who review the quality of the submissions, rather than the
peer-reviewers of the journals in which they were published. There might
occasionally be some spot-checking, but the lion's share of the
assessment burden is borne by the journals' quality levels and impact
factors, not by direct review of the papers by the RAE panel!
(So the *quality* of the journal still matters: it is the *medium* of
the journal -- on-paper or online -- that is rightly discounted by
the RAE as irrelevant.)
(Hence the suggestion that a "top-quality" work risks nothing in being
submitted to an "unorthodox medium" -- apart from reiterating that
the medium of the peer-reviewed journal, whether on-line or on-paper,
is immaterial -- should certainly not be interpreted by authors as an
RAE license to bypass peer review, trusting that the RAE panel will
review all submissions directly (or most of them, or even more than
the tiniest proportion, for spot-checking). Not only would that be
prohibitively expensive and time-consuming, it would also be an utter
waste, given that peer review has already performed that chore.)
> HEFCE clearly recognises the flaws of the RAE methodology used
> hitherto, which is the first step towards a more satisfactory
> assessment system. What is not clear to me is whether your suggested
> reform will indeed save time and money. It seems to
> me that just adding Impact Factors of articles is indeed the shortcut
> (proxy for quality) that Bahram refers to, and that anything else will
> take more effort. I don't pretend to have any contribution to make
> to that discussion on efficiency of the assessment methodology, though.
I couldn't quite follow this. Right now, most of the variance in the RAE
rankings is predictable from the journal impact factors of the submitted
papers. And that is in exchange for each university department's
preparing a monstrously large portfolio at great cost in time and money
(including photocopies of each paper!).
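To make "predictable from the journal impact factors" concrete, here is
a toy sketch in Python (the numbers are invented for illustration, not
real RAE or ISI data) of how one would measure the share of variance in
departmental ratings accounted for by the mean impact factor of the
papers each department submitted:

    # Toy illustration: how much of the variance in (hypothetical) RAE
    # ratings is predictable from mean journal impact factors, as r-squared.

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    # Invented departments: mean impact factor of submitted papers, RAE rating.
    mean_if = [0.8, 1.5, 2.1, 2.9, 3.6, 4.4]
    rating  = [2,   3,   3,   4,   5,   5]

    r = pearson_r(mean_if, rating)
    print(f"r = {r:.2f}; r-squared (variance predicted) = {r * r:.2f}")

If the real correlation is as high as the RAE outcomes suggest, the
portfolio adds little beyond what the impact factors already predict.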
Since I seriously doubt that Bahram meant replacing impact ranking by
direct re-review of all the papers by RAE assessors, I am not quite
sure what you think he had in mind! (You say "just adding Impact
Factors of articles is indeed the shortcut" -- but adding them to what,
and how? If those impact factors currently do most of the work, it is
not clear that they need to be *added* to the current wasteful
portfolio! Rather, they, or, better still, even richer and more
accurate scientometric measures, need to be derived directly. Directly
from what?)
One possibility would be for the RAE to data-mine directly, say, ISI's
Web of Science: http://wos.mimas.ac.uk/. For that, the UK would need
a license to trawl, but that's no problem (we already have one). One
problem might be that ISI's coverage is incomplete -- only about 7500
of the planet's 20,000 peer-reviewed journals are currently covered: in
most cases these are the top journals, but not all of them, and some
fields are not as well covered as others. But even apart from that, the
RAE would still need those online CVs I mentioned, in order to find and
analyze the ISI citation data for each author and institution. And then
we would be restricted to ISI's current collection and scientometric
measures.
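Purely to illustrate what that CV-plus-ISI step amounts to
computationally, here is a minimal sketch in Python. The CV structure,
the article identifiers, and the citation-index lookup are stand-ins I
have invented for the example; they are not ISI's actual interface or
data:

    from collections import defaultdict

    # Stand-in for a licensed citation index: article id -> citation count.
    citation_index = {"art:001": 12, "art:002": 3, "art:003": 27}

    # Hypothetical structured online CVs: institution -> author -> article ids.
    cvs = {
        "Univ A": {"Dr Smith": ["art:001", "art:002"]},
        "Univ B": {"Dr Jones": ["art:003", "art:004"]},
    }

    author_totals = defaultdict(int)
    institution_totals = defaultdict(int)

    for institution, authors in cvs.items():
        for author, article_ids in authors.items():
            for art_id in article_ids:
                cites = citation_index.get(art_id, 0)  # uncovered article counts 0
                author_totals[author] += cites
                institution_totals[institution] += cites

    print(dict(author_totals))        # per-author citation totals
    print(dict(institution_totals))   # per-institution citation totals

Note that an article absent from the index ("art:004" above) simply
contributes nothing: exactly the coverage gap just described.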
My own proposal (no less of a shortcut for the RAE) is to link the CVs
instead to researchers' universities' own Eprint Archives, in which
*all* of their peer-reviewed full-texts would be deposited -- not just
those currently covered by ISI -- and on which not only the ISI
scientometrics but many richer, enhanced scientometrics could be done.
The burden of self-archiving all the university
peer-reviewed research output would not be the RAE's --
http://www.eprints.org/self-faq/#research-funders-do -- but the
distributed burden of the universities themselves (to make
sure their staff self-archive all their peer-reviewed
research output in the university Eprint Archive) --
http://www.eprints.org/self-faq/#institution-facilitate-filling
-- and I'll bet that that burden would not only be lighter on
universities than the present RAE paperwork burden, but that they would
find it multiply recompensed by the many other benefits of the open
access it will bring, not the least among them being enhanced impact
for their own research output, enhanced access to the research output
of others, and perhaps even eventual relief from their serials budget
burdens!
My own recommendation is accordingly this: since impact factors already
bear the lion's share of the assessment/ranking burden, the rest of
the complicated and time-consuming RAE submission can be jettisoned,
and replaced by online RAE-standardized CVs linked to the
online peer-reviewed articles (in the researchers' institutions'
Eprint Archives). Scientometric harvesters and analyzers (like
http://citebase.eprints.org/ or better) could then do the much richer
and more accurate scientometric analysis automatically. Those full-text
articles all have reference lists, which provide the individual papers'
and authors' citation counts, plus many other potential scientometric
measures.
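To show what deriving the counts from harvested reference lists would
mean in practice, here is a hedged sketch in Python; the record format
and eprint identifiers are illustrative assumptions of mine, not
Citebase's actual schema:

    from collections import defaultdict

    # Hypothetical harvested eprints: eprint id -> (author, ids of eprints cited).
    eprints = {
        "ep:100": ("Author X", ["ep:200", "ep:300"]),
        "ep:200": ("Author Y", ["ep:300"]),
        "ep:300": ("Author Z", []),
    }

    paper_cites = defaultdict(int)
    author_cites = defaultdict(int)

    # Each entry in a harvested reference list is one citation of the cited eprint.
    for citing_id, (_, references) in eprints.items():
        for cited_id in references:
            paper_cites[cited_id] += 1
            if cited_id in eprints:  # credit the author if we hold the cited eprint
                author_cites[eprints[cited_id][0]] += 1

    print(dict(paper_cites))    # per-paper citation counts
    print(dict(author_cites))   # per-author citation counts

The same harvested citation graph would support the richer measures
alluded to above; straight citation counting is merely the simplest
case.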
That would be a fruitful shortcut indeed.
At Southampton we are harvesting the RAE 2001 returns
(http://www.hero.ac.uk/rae/) into a demo -- RAEprints -- to give a
taste of what a national (and ultimately global) open-access research
archive would be like, and what possibilities it would open up for
research access,
impact, and assessment. (For a preview, see:
http://www.hyphen.info/ )
Stevan Harnad
Received on Mon Nov 25 2002 - 15:04:52 GMT