On Sat, 9 Dec 2006, Yorick Wilks wrote:
> Stevan
> Now that the future of the RAE is going your way (!) for the
> sciences, it would be very helpful -- certainly for general fairness,
> and for disciplines like Computer Science in particular -- if you
> could at the right moment add your lobbying (probably on HEFCE) to
> try to stop the metrics applying in science only through the narrow
> channel of the ISI-rated journals, but rather in some wider OA way
> like that below (just open Google citations would be a lot better
> for CS than the ISI constraint).
But, Dear Yorick, that's what I (and others) have been preaching all
along! The ISI Journal Impact Factor (JIF) is not only incomplete but
also a blunt instrument: it does not cover all journals, and it gives
not the exact citation count for an article or author but merely the
average citation count for the journal in which the article appeared
(rather like giving a student, instead of his own mark, the average
mark for his school!).
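(A toy numerical sketch, in Python, with invented citation counts, of
what is lost when every article inherits its journal's average:)

    # Hypothetical per-article citation counts for one journal's papers
    # within the JIF's two-year window (all numbers invented).
    article_citations = {"A": 120, "B": 3, "C": 0, "D": 5}

    # JIF-style figure: total citations / number of citable items.
    jif = sum(article_citations.values()) / len(article_citations)
    print("Journal-level average (JIF-style): %.1f" % jif)  # 32.0

    # Article-level metric: each paper keeps its own count.
    for article, cites in sorted(article_citations.items()):
        print("Article %s: %d citations (JIF would credit %.1f)"
              % (article, cites, jif))

Article A's 120 citations and Article C's 0 both come out as "32.0"
under the journal-level average.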
Harnad, S., Carr, L., Brody, T. and Oppenheim, C. (2003) Mandated
online RAE CVs Linked to University Eprint Archives. Ariadne 35.
http://eprints.ecs.soton.ac.uk/7725/
Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open
Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs,
N. (Ed.) Open Access: Key Strategic, Technical and Economic Aspects,
Chapter 20. Chandos.
http://eprints.ecs.soton.ac.uk/12453/
The JIF has its place, but only as one among a large and rich battery
of metrics, to be derived from a 100% Open Access corpus.
At Southampton, we are already building a provisional, approximate
para-ISI citation metric, triangulating from the citation counts
provided by Google Scholar, Citebase and Citeseer, exactly along the
lines you suggest! See the AmSci references at the end of this posting,
and stay tuned!
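(A minimal sketch, in Python, of the triangulation step alone: the
per-source counts below are invented placeholders, and the harvesting
itself -- scraping, OAI harvesting, or whatever each service permits --
is left out entirely:)

    from statistics import median

    def triangulated_count(counts_by_source):
        """Median across sources, so that one noisy or incomplete
        source (e.g. a crawler that missed a whole venue) cannot
        dominate the estimate."""
        return int(median(counts_by_source.values()))

    # Invented counts for one article, as reported by three services:
    counts = {"google_scholar": 42, "citeseer": 35, "citebase": 38}
    print(triangulated_count(counts))  # 38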
> The reason is simply that in CS/AI many of the best
> publications are in the high-prestige strongly peer-reviewed
> conferences, which are not ISI rated, and that there are probably too
> few journals that are rated to carry any shift of publication
> consequent upon any very tight citation strategy from HEFCE/Treasury.
I agree completely (though ISI does cover some conferences!). So it's
Google Scholar, Citeseer and Citebase for now; once we approach 100%
OA, many more OA scientometric services will be spawned.
> This would simply mean that a great chunk of good CS publication would
> then be ineligible for the metrics under the sort of ISI-based scheme
> that many are expecting.
OA metrics-based, not just ISI-based!
> This is quite different from many sciences of
> course, where conferences are low-rated and journals are everything.
But OA metrics cover all forms of online performance indicators: all
the performance indicators we choose to put online (funding, student
counts, awards, exhibits) plus those derived from the online corpus
itself (downloads, citations, co-citations, growth/decay rates,
endogamy/exogamy scores, hub/authority scores, book-citation counts,
reviews, comments, "semantic" metrics, and more).
http://www.ecs.soton.ac.uk/~harnad/Temp/bookcite.htm
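(For the curious, a minimal sketch in Python of one of the less
familiar metrics above -- hub/authority scores -- computed over a
small, invented citation graph. This is just the standard HITS
iteration, not any particular service's implementation:)

    # Nodes are papers; an edge (a, b) means "a cites b". Invented data.
    edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"),
             ("p4", "p3"), ("p3", "p5")]
    nodes = {n for e in edges for n in e}

    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}

    for _ in range(50):  # power iteration to (approximate) convergence
        # A paper's authority grows with the hub scores of its citers.
        auth = {n: sum(hub[a] for a, b in edges if b == n) for n in nodes}
        # A paper's hub score grows with the authority of what it cites.
        hub = {n: sum(auth[b] for a, b in edges if a == n) for n in nodes}
        # Normalise so the scores do not blow up.
        a_norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        h_norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        auth = {n: v / a_norm for n, v in auth.items()}
        hub = {n: v / h_norm for n, v in hub.items()}

    # p3, cited by three hubs, emerges as the top authority.
    print(sorted(auth.items(), key=lambda kv: -kv[1]))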
> The UK RAE sub-panel for computing know all this and, in 2008, as
> previously, are agreed on treating all good forms of publication
> equally. I am on that subpanel and am hoping they will lobby in the
> same way for what comes later.
I urge you to encourage the RAE panels (not just in CS but in all
disciplines) to start testing and validating metrics now, in advance
of RAE 2008; the parallel panel/metric data from 2008 can then be used
to calibrate and customise the beta weights in the metric regression
equation, discipline by discipline.
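(A minimal sketch, in Python with numpy and invented numbers, of that
calibration step: ordinary least squares of the 2008 panel scores on
the candidate metrics, fitted separately for each discipline:)

    import numpy as np

    # Rows = departments within one discipline; columns = candidate
    # metrics (e.g. citations, downloads, funding), standardised first.
    # All numbers are invented placeholders.
    X = np.array([[1.2, 0.8, 0.5],
                  [0.4, 1.5, 0.2],
                  [2.0, 0.9, 1.1],
                  [0.7, 0.3, 0.4]])
    panel_scores = np.array([5.1, 4.2, 6.8, 3.0])  # parallel panel data

    # Fit the beta weights: panel_scores ~ X . betas
    betas, _, _, _ = np.linalg.lstsq(X, panel_scores, rcond=None)
    print("Discipline-specific beta weights:", betas)

    # Once validated, future submissions could be scored from metrics
    # alone: predicted_score = X_new.dot(betas)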
UK "RAE" Evaluations (began Nov 2000)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#1018
Digitometrics (May 2001)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1300.html
Scientometric OAI Search Engines (began Aug 2002)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2238
Big Brother and Digitometrics (began May 2001)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#1298
UK Research Assessment Exercise (RAE) review (began Oct 2002)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2326
Need for systematic scientometric analyses of open-access
data (began Dec 2002)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2522
Potential Metric Abuses (and their Potential Metric Antidotes)
(began Jan 2003)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2643
Future UK RAEs to be Metrics-Based (began Mar 2006)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#5251
Australia stirs on metrics (Jun 2006)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/5417.html
Let 1000 RAE Metric Flowers Bloom: Avoid Matthew Effect as
Self-Fulfilling Prophecy (Jun 2006)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/5418.html
Australia's RQF (Nov 2006)
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/5806.html
Stevan Harnad
> On 9 Dec 2006, at 11:37, Stevan Harnad wrote:
>
> > On Fri, 8 Dec 2006, Peter Suber wrote:
> >
> >> If the metrics have a stronger OA connection, can you say something
> >> short (by email or on the blog) that I could quote for readers who
> >> aren't clued in, esp. readers outside the UK?
> >
> > Dear Peter,
> >
> > Sure (and I'll blog this too, hyperlinked):
> >
> > (1) In the UK (Research Assessment Exercise, RAE) and Australia
> > (Research Quality Framework, RQF), all researchers and institutions
> > are evaluated for "top-sliced" funding, over and above competitive
> > research proposals.
> >
> > (2) Everywhere in the world, researchers and research institutions
> > have research performance evaluations, on which careers/salaries,
> > research funding and institutional/departmental ratings depend.
> >
> > (3) There is now a natural synergy growing between OA
> > self-archiving, Institutional Repositories (IRs), OA self-archiving
> > mandates, and the online "metrics" toward which both the RAE/RQF
> > and research evaluation in general are moving.
> >
> > (4) Each institution's IR is the natural place from which to derive
> > and display research performance indicators: publication counts,
> > citation counts, download counts, and many new metrics, rich and
> > diverse ones, that will be mined from the OA corpus, making research
> > evaluation much more open, sensitive to diversity, adapted to each
> > discipline, predictive, and equitable.
> >
> > (5) OA self-archiving not only allows performance indicators
> > (metrics) to be collected and displayed, and new metrics to be
> > developed; it also enhances the metrics themselves (research
> > impact), both competitively (OA vs. non-OA) and absolutely (the
> > Quality Advantage: OA benefits the best work the most; and the
> > Early Advantage), as well as making possible the data-mining of
> > the OA corpus for research purposes. (Research Evaluation, Research
> > Navigation, and Research Data-Mining are also very closely related.)
> >
> > (6) This powerful and promising synergy between Open Research and Open
> > Metrics is hence also a strong incentive for institutional and funder
> > OA mandates, which will in turn hasten 100% OA: Their connection needs
> > to be made clear, and the message needs to be spread to researchers,
> > their institutions, and their funders.
> >
> > Best wishes,
> >
> > Stevan
> >
> > PS Needless to say, closed, internal, non-displayed metrics are also
> > feasible, where appropriate.
Received on Sat Dec 09 2006 - 13:50:26 GMT