Re: UK Research Assessment Exercise (RAE) review
On Wed, 20 Nov 2002, David Goodman wrote:
> I consider the impact factor (IF) properly used as a valid measure
> for comparing journals; I also consider the IF properly used as a
> possibly valid measure of article quality. But either use has many
> possible interfering factors to consider, and these measurements have
> been used in highly inappropriate ways in the past, most notoriously in
> previous UK RAEs.
With all due respect, comparing journals is part of the librarian's art,
but comparing departments and assessing research seem to me to fall
within another artform...
> Stevan mentions one of the problems. Certainly the measure of the impact
> of an individual article is more rational for assessing the quality of
> the article than measuring merely the impact of the journal in which
> it appears. This can be sufficiently demonstrated by recalling that any
> journal necessarily contains articles of a range of quality.
The direct, exact measure is preferable to the indirect, approximate one
for all the reasons that direct, exact estimates, when available, are
preferable to indirect, approximate ones. But the situation is not as
simple as that. The right scientometric model here (at least to a first
approximation) is multiple regression: We have many different kinds of
estimates of impact; each might be informative, when weighted by its
degree of predictiveness (i.e., the percentage of the total variance
that it can account for).
Yes, on the face of it, a specific paper's citation count seems a better
estimate of impact than the average citation count of the journal in
which it appeared. But journals establish their quality standards across
time and a broad sample, and the reasons for one particular paper's
popularity (or unpopularity) might be idiosyncratic.
So in the increasingly long, rich and diverse regression equation for
impact, direct individual paper impact should certainly be given the
greatest prima facie weight, but the impact factor of the journal it
appears in should not be sneezed at either. It (like the author's own
citation impact) might add further useful and predictive information
to the regression equation.
The real point is that none of this can be pre-judged a priori. As in
psychometrics, where the psychological tests must be "validated" against
the criterion they allegedly measure, all of the factors contributing to
the scientometric regression equation for impact need to be independently
validated.
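To make the regression-and-validation idea concrete, here is a minimal
sketch in Python. All data, variable names, and the choice of criterion
(later peer ratings) are hypothetical; this illustrates the technique,
it is not a proposed implementation:

    import numpy as np

    # Each row is one paper's candidate impact estimates:
    # [its own citation count, its journal's impact factor,
    #  its author's average citation count].  Toy numbers only.
    X = np.array([
        [12.0, 2.1,  8.0],
        [ 3.0, 4.5,  2.0],
        [25.0, 6.0, 15.0],
        [ 1.0, 0.8,  1.0],
        [ 9.0, 3.2,  5.0],
        [17.0, 5.1, 11.0],
    ])
    # The independent criterion the estimates must be validated
    # against (e.g., later peer ratings of the same papers).
    y = np.array([7.0, 3.0, 9.0, 1.0, 5.0, 8.0])

    # Ordinary least squares: each estimate gets the weight its
    # predictiveness earns it.
    X1 = np.column_stack([np.ones(len(X)), X])  # add an intercept
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

    # Validation: the percentage of criterion variance the
    # equation accounts for.
    residuals = y - X1 @ coef
    r_squared = 1.0 - residuals.var() / y.var()
    print("weights:", coef[1:])
    print("variance accounted for:", r_squared)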
> More attention is needed to the comparison of fields. The citation
> patterns in different subject fields vary, not just between broad
> subject fields but within them.
Of course. And field-based (and subfield-based) weightings and patterns
would be among the first that one would look to validate and adjust: Not
only so as not to compare apples with oranges, but again to get maximum
predictiveness and validity out of the regression. None of this argues
against scientometric regression equations for impact; it merely argues
for making them rich, diverse, and analytic.
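By way of illustration, the crudest form of field normalization is just
dividing each paper's citation count by its field's average (a validated
version would of course be subtler). A minimal sketch, with purely
hypothetical toy data:

    from collections import defaultdict

    papers = [  # (field, citations) -- illustrative numbers only
        ("math_ecology", 4), ("math_ecology", 6),
        ("biochemistry", 40), ("biochemistry", 60),
    ]

    # Average citation count per field.
    by_field = defaultdict(list)
    for field, cites in papers:
        by_field[field].append(cites)
    field_mean = {f: sum(c) / len(c) for f, c in by_field.items()}

    # Express each paper's citations relative to its own field.
    for field, cites in papers:
        print(field, cites, "normalized:", cites / field_mean[field])
    # A 6-citation mathematical-ecology paper and a 60-citation
    # biochemistry paper both score 1.2: equally far above the
    # average for their respective fields.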
> In the past, UK RAEs used a single
> criterion of journal impact factor in ALL academic fields; this was
> patently absurd (just compare the impact factors of journals in math
> with those in physics, or those in ecology with those in biochemistry).
> To the best of my knowledge they have long stopped this. (This incorrect
> use did much to decrease the repute of this measure, even when correctly
> used.)
I doubt this was ever quite true of the RAE. But in any case, it does not
militate against scientometric analysis of impact in any way. It merely
underscores that naive, unidimensional analyses are unsatisfactory. It
is precisely this impoverished, unidimensional approach for which an
online open-access, full-text, citation-interlinked refereed literature
across all disciplines would be the antidote!
> In comparing different departments, the small-scale variation between
> subject specialisms can yield irrelevant comparisons, because few
> departments have such a large number of individuals that they cover the
> entire range of their subject field.
But this is a scientometric point you are making, and the remedy is
better scientometrics (not something else, such as having Socrates read
and weigh everything for us!): To vary the saying about critics of
metaphysics: "Show me someone who wishes to destroy scientometrics and
I'll show you a scientometrician with a rival system."
> I'll use ecology as an example:
> essentially all the members of my university's department [Ecology and
> Evolutionary Biology] work in mathematical ecology, and we think we are
> the leading department in the world. Most ecologists work in more applied
> areas. The leading journals of mathematical ecology have relatively lower
> impact factors, as this is a very small field. This can be taken into
> account, but in a relatively small geopolitical area like the UK, there
> may be very few truly comparable departments in many fields. It certainly
> cannot be taken into account in a mechanical fashion, and the available
> scientometric techniques are not adequate to this level of analysis.
But one (not the only) goal of the RAE is to rank UK ecology departments
against one another (to allocate the finite amounts of research funding
that are available)! Now I'm all for not being "mechanical," but what
non-socratic means would you recommend for comparing the research output
of all UK ecology departments if you eschew scientometric analyses?
Re-do the peer-review, all over again, in-house, for everything?
> The importance of a paper is certainly reflected in its impact, but not
> directly in its impact factor. It is not the number of publications that
> cite it which is the measure, but the importance of the publications that
> cite it. This is inherently not a process that can be analyzed on a
> current basis.
I would be interested to hear David's candidate for this "true" impact
analysis, if it is not to be scientometric (apart from the socratic, or
peer-review-redux).
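It is worth noting, though, that "the importance of the publications
that cite it" is itself a scientometrically computable quantity: the
recursive definition reduces to an eigenvector problem on the citation
graph (the idea behind recursive influence weighting, as in Pinski &
Narin's work, and more recently PageRank). A minimal sketch, on a
hypothetical toy citation graph:

    import numpy as np

    papers = ["A", "B", "C", "D"]
    cites = {  # paper -> the papers it cites (toy data)
        "A": [], "B": ["A"], "C": ["A", "B"], "D": ["C"],
    }

    n = len(papers)
    idx = {p: i for i, p in enumerate(papers)}
    scores = np.ones(n) / n
    d = 0.85  # damping factor

    # Power iteration: a paper's score is fed by the scores of
    # the papers that cite it, not by their bare number.
    for _ in range(50):
        new = np.full(n, (1 - d) / n)
        for p, refs in cites.items():
            if refs:
                share = scores[idx[p]] / len(refs)
                for r in refs:
                    new[idx[r]] += d * share
            else:  # papers citing nothing spread their weight evenly
                new += d * scores[idx[p]] / n
        scores = new

    for p in sorted(papers, key=lambda q: -scores[idx[q]]):
        print(p, round(scores[idx[p]], 3))
    # A is cited only twice, but by important papers, so it
    # ranks highest.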
> There is a purpose in looking at four papers only: in some fields of
> the biomedical sciences in particular, it is intended to discourage
> the deliberate splitting of papers into many very small publications,
> with the consequence that in some fields of biomedicine a single person
> might have dozens in a year, adding to the noise in the literature.
Salami-sliced research is certainly to be discouraged, but surely it is
more rational to discourage it by rewarding ("true") research impact,
rather than by ignoring published articles, and what they might have
contributed to the validity of the regression equation for impact.
Otherwise this is the same kind of needless approximation that came from
ignoring paper impact and considering only journal impact!
> One could also argue that a researcher should be judged by
> the researcher's best work, because the best work is what primarily
> contributes to the progress of science.
But why not try for a better approximation, if the data are available?
And the point is that it would be far easier and cheaper to implement
the RAE online, with all refereed publications
digitally linked, and analyzed scientometrically online, than to submit
4 hard copies along with the rest of the (mostly irrelevant and ignored)
paperwork every four years!
> In most other respects I agree with Stevan. I will emphasize that the
> publication of scientific papers in the manner he has long advocated will
> lead to the possibility of more sophisticated scientometrics. This will
> provide data appropriate for analysis by those who know the techniques,
> the subject, and the academic organization. The data obtainable from
> the current publication system are of questionable usefulness for this.
I think I agree with this, but I'm not quite sure what it is that I
recommended we DO that David is recommending we NOT DO, and what David
recommends that we DO that I recommended that we NOT DO! It's
scientometric analysis all the way down; the trick is just to make it as
rich, powerful, predictive, and valid as we can.
Stevan Harnad
Received on Wed Nov 20 2002 - 21:46:00 GMT