--
Charles Finley, Executive Director
Project Open Source | Open Access
Knowledge Media Design Institute
University of Toronto
charles.finley_at_utoronto.ca
Phone: 416.978.3778
http://open.utoronto.ca

C.Oppenheim wrote:
> My latest study is in the field of music, where journal articles are
> most certainly not the primary means of dissemination. I also
> understand that in archaeology, reports and monographs are as
> important, if not more important, than journal articles.
>
> However, the comment misses the point that citation counting counts
> citations to all media, not just to journal articles.
>
> Charles
>
> Professor Charles Oppenheim
> Head
> Department of Information Science
> Loughborough University
> Loughborough
> Leics LE11 3TU
>
> Tel 01509-223065
> Fax 01509-223053
> e mail C.Oppenheim_at_lboro.ac.uk
>
> ----- Original Message -----
> From: <l.hurtado_at_ED.AC.UK>
> To: <AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM_at_LISTSERVER.SIGMAXI.ORG>
> Sent: Tuesday, September 19, 2006 9:56 AM
> Subject: Re: Future UK RAEs to be Metrics-Based
>
> I've done a quick check of the publications by Oppenheim supposedly
> showing a strong correlation of RAE standings and citations in
> journals, and it seems to me that all I can find are studies to do
> with psychology, anatomy, archaeology, etc., ALL OF WHICH use journal
> articles as the prized mode of research productivity/publication.
> Can Oppenheim or Stevan point me to studies of, e.g., English Lit,
> History, Religion, with similar results??
> I know that this list is not about this issue primarily, but it's the
> (over?)confidence of Stevan on this that puzzles me . . . in the
> apparent absence of the empirical proof that he so values. Or please
> correct me by pointing to the publications I request (preferably
> online, of course!).
> Larry
>
> Quoting Stevan Harnad <harnad_at_ECS.SOTON.AC.UK>:
>
>> On Mon, 18 Sep 2006 l.hurtado_at_ED.AC.UK wrote:
>>
>>> Well, I'm all for empirically-based views in these matters. So, if
>>> Oppenheim or others have actually soundly based studies showing what
>>> Stevan and Oppenheim claim, then that's to be noted. I'll have to
>>> see the stuff when it's published. In the meanwhile, a couple of
>>> further questions:
>>
>> Many studies are already published. In fact, many are cited in:
>>
>> Harnad, S., Carr, L., Brody, T. and Oppenheim, C. (2003) Mandated
>> online RAE CVs Linked to University Eprint Archives. Ariadne 35.
>> http://eprints.ecs.soton.ac.uk/7725/
>>
>>> -- Pardon me for being out of touch, perhaps, but more precisely
>>> what is being measured? What does journal "citation counts" refer
>>> to?
>>
>> The total number of citations to the articles by submitted authors
>> (and not just those for their 4 submitted articles!).
>>
>>> Citation of journal articles? Or citation of various things in
>>> journal articles (and why privilege this medium?)? Or . . . what?
>>
>> Citation of the articles, but that usually means citing things in
>> the articles!
>>
>> Journal articles are privileged in many disciplines because they are
>> the main means of reporting research. In book-based disciplines the
>> balance is otherwise, but the interesting thing is that even in
>> book-based disciplines there is a journal-article citation
>> correlation with the RAE rankings. One would expect it to be somewhat
>> weaker than in article-based disciplines, but more data are needed to
>> be exact about this.
>>
>>> -- What does "correlation" between RAE results and "citation counts"
>>> actually comprise?
>>
>> The RAE ranks the departments of the c. 73 UK research universities,
>> with ranks from 1 to 5*. Correlation is the measure of the degree to
>> which values on one variable co-vary with, hence predict, values on
>> another variable (e.g. height is correlated with weight: the higher
>> on one, the higher on the other, and vice versa).
>>
>> When two variables are correlated, you can predict one from the
>> other. How accurately you can predict is reflected by the square of
>> the correlation coefficient: if there is a correlation of 0.8, then
>> the predictivity (the percentage of the variation in one of the
>> variables that you can already predict from the other) is 64%. For a
>> correlation of 0.9 it is 81%, etc.
>>
>> Well, as you will see in the reference list of the above-cited
>> article, Smith & Eysenck found a correlation of about 0.9 between the
>> RAE ranks and the total citation counts for the submitted researchers
>> in Psychology.
>>
>> Looking at Charles Oppenheim's studies, you will see that the
>> correlations varied from about 0.6 to 0.9, depending on field and
>> year, which is all quite high, but *especially* given that the RAE
>> does not count citations!
>>
>> The correlation is even higher with another metric, in science and
>> biology: prior research funding. There it can be as high as 0.99, but
>> that is not so good, because (1) prior funding *is* directly seen and
>> counted by the panel, so that high correlation could be an effect of
>> direct influence; and (2), worse, using prior funding as a criterion
>> generates a Matthew Effect, with the already-highly-funded getting
>> richer and richer, and the less-funded getting poorer and poorer.
>>
>> That is why a multiple regression equation is best, with many
>> predictor metrics, each one weighted according to the desiderata and
>> particulars of each discipline, and validated against further
>> criteria, to make sure they are measuring what we want to measure.
>> There will be many candidate metrics in the OA era.
>>
>>> Let me lay out further reasons for some skepticism. In my own field
>>> (biblical studies/theology), I'd say most senior-level scholars
>>> actually publish very infrequently in refereed journals. We do
>>> perhaps more in earlier years, but as we get to senior levels we
>>> tend (a) to get requests for papers for multi-author volumes, and
>>> (b) to devote ourselves to projects that best issue in book-length
>>> publications.
>>
>> That happens in other fields too, and as metric equations are
>> calibrated and optimised, factors like seniority will enter into the
>> weightings too. (Book-chapter citations can and will of course be
>> counted too -- and are, to a limited degree, already being counted by
>> ISI and others, because journal articles cite books and book chapters
>> too, and those citations are caught by ISI.)
>>
>>> So if my own productivity and impact were assessed by how many
>>> journal articles I've published in the last five years, I'd look
>>> poor (even though . . . well, let's say that I rather suspect that
>>> wouldn't be the way I'm perceived by peers in the field).
>>
>> The RAE ranks departments via individuals, and a department needs a
>> blend of junior and senior people, with their different styles of
>> publication. And remember that the RAE is comparing like with like.
>> So you might be interested in checking how your own journal-article
>> and book-chapter citation counts compare with those of your peers (or
>> juniors). You might be (pleasantly) surprised!
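[A minimal numerical sketch, in Python, of the two points made above: that the predictivity of a correlated metric is the square of the correlation coefficient, and that several candidate metrics can be combined and weighted by an ordinary multiple regression against existing RAE ranks. The departments, metric values and fitted weights below are entirely invented for illustration; they are not data from the Smith & Eysenck or Oppenheim studies.]

    # Illustrative sketch only: made-up numbers for 10 hypothetical departments.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10                                    # hypothetical departments
    citations = rng.uniform(100, 5000, n)     # invented total citation counts
    downloads = citations * 3 + rng.normal(0, 500, n)    # invented download counts
    funding = citations * 10 + rng.normal(0, 2000, n)    # invented prior funding
    rae_rank = (1 + 4 * (citations - citations.min()) / np.ptp(citations)
                + rng.normal(0, 0.3, n))      # invented RAE-style rank, roughly 1 to 5

    # Pearson correlation between one metric and the RAE rank, and its square:
    # r = 0.8 would give 64% predictivity, r = 0.9 would give 81%.
    r = np.corrcoef(citations, rae_rank)[0, 1]
    print(f"r = {r:.2f}, predictivity (r^2) = {100 * r**2:.0f}%")

    # A multiple-regression "metric equation": least-squares weights for the
    # predictor metrics, fitted against the existing RAE ranks.
    X = np.column_stack([np.ones(n), citations, downloads, funding])
    weights, *_ = np.linalg.lstsq(X, rae_rank, rcond=None)
    predicted_rank = X @ weights
    print("fitted weights (intercept, citations, downloads, funding):",
          np.round(weights, 4))

[With real RAE data the same least-squares fit would yield discipline-specific weights, which could then be validated against further criteria, as suggested above.]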
>>
>> And of course in the (soon-to-hand) OA era, other metrics will be
>> available too, such as download counts ("hits"), which happen much
>> earlier, yet are correlated with later citations -- and are of course
>> maximized by self-archiving your papers in your institutional IR to
>> make them OA.
>>
>> Odd new metrics will also include endogamy/exogamy scores (their
>> preferred polarity depending on field!), depending on the degree of
>> self-citing, co-author citing, co-citation-circle citing,
>> within/outside-specialty citing, and intra-/interdisciplinary citing,
>> both for the citing article/author and the cited article/author. Then
>> there are text-proximity scores (of which an extreme would be
>> plagiarism), latency/longevity metrics, co-citation to/from, CiteRank
>> (where the weight of each citation is recursively ranked,
>> Google-style, by the degree of citedness of the citer), etc., etc.
>>
>>> Or is the metric to comprise how many times I'm *cited* in journals?
>>
>> It's how many times you're cited, which means how many times your
>> articles are cited -- in journals, but in principle also in book
>> chapters, conferences and books. And whether what is *being* cited is
>> articles, chapters or books.
>>
>>> If so, is there some proven correlation between a scholar's impact
>>> or significance of publications in the field and how many times he
>>> happens to be cited in this one genre of publication? I'm just a bit
>>> suspicious of the assumptions, which I still suspect are drawn (all
>>> quite innocently, but naively) from disciplines in which journal
>>> publication is much more the main and significant venue for
>>> scholarly publication.
>>
>> I don't know of systematic genre comparisons (journals vs book
>> chapters, even empirical vs theoretical journals, reviews, etc.), but
>> they no doubt exist. I will branch this to the sigmetrics list, where
>> the experts are! I am just an amateur...
>>
>>> And, as we all know, "empirical" studies depend entirely on the
>>> assumptions that lie at their base. So their value is heavily framed
>>> by the validity and adequacy of the governing assumptions. No
>>> accusations, just concerns.
>>
>> Interpretations may be influenced by assumptions, but if atmospheric
>> pressure predicted RAE ranking, that would be an empirical datum, and
>> if it predicted it with a correlation of, say, 0.9, that would be a
>> reason for scrapping RAE panels for barometers,
>> theory-independently....
>>
>> Stevan Harnad
>>
>>> Quoting Stevan Harnad <harnad_at_ECS.SOTON.AC.UK>:
>>>
>>> > On Mon, 18 Sep 2006, Larry Hurtado wrote:
>>> >
>>> >> Stevan and I have exchanged views on the *feasibility* of a
>>> >> metrics approach to assessing research strength in the
>>> >> Humanities, and he's impressed me that something such *might
>>> >> well* be feasible *when/if* certain as-yet untested and
>>> >> undeveloped things fall into place. I note, e.g., in Stevan's
>>> >> addendum to Oppenheim's comment that a way of handling book-based
>>> >> disciplines "has not yet been looked at", and that a number of
>>> >> other matters are as yet "untested".
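[As a rough illustration of the "CiteRank" idea mentioned earlier above, here is one plausible reading of it, assuming a PageRank-style recursion in which a citation counts for more when the citing paper is itself highly cited. The tiny citation graph, the damping factor and the function name are invented for the sketch; it describes no deployed system, and it omits refinements such as redistributing the weight of papers that cite nothing.]

    # Sketch: weight each citation by the recursively computed citedness of
    # the citing paper (a PageRank-like iteration over a citation graph).
    def citerank(cites, damping=0.85, iterations=50):
        """cites maps each paper to the list of papers it cites."""
        papers = set(cites) | {p for cited in cites.values() for p in cited}
        rank = {p: 1.0 / len(papers) for p in papers}
        for _ in range(iterations):
            new = {p: (1 - damping) / len(papers) for p in papers}
            for citer, cited_list in cites.items():
                if cited_list:
                    share = damping * rank[citer] / len(cited_list)
                    for cited in cited_list:
                        # a citation is worth more if the citer is itself cited
                        new[cited] += share
            rank = new
        return rank

    # Invented example: "A" and "D" each receive one raw citation, but "A"'s
    # citer ("C") is itself cited, so "A" ends up ranked above "D".
    graph = {"C": ["A"], "B": ["C"], "E": ["D"]}
    print(sorted(citerank(graph).items(), key=lambda kv: -kv[1]))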
>>> >
>>> > Larry is quite right that the (rather obvious and straightforward)
>>> > procedure of self-archiving books' metadata and cited references
>>> > in order to derive a comprehensive book-citation index (which
>>> > would of course include journal articles citing books, books
>>> > citing books, and books citing journal articles) had not yet been
>>> > implemented or tested.
>>> >
>>> > However, the way to go about it is quite clear, and awaits only OA
>>> > self-archiving mandates (to which a mandate to self-archive one's
>>> > book metadata and reference list should be added as a matter of
>>> > course).
>>> >
>>> > But please recall that I am an evangelist for OA self-archiving
>>> > because I *know* it can be done, that it works, and that it
>>> > confers substantial benefits in terms of research access, usage
>>> > and impact.
>>> >
>>> > Insofar as metrics are concerned, I am not an evangelist, but
>>> > merely an enthusiast: the evidence is there, almost as clearly as
>>> > it is with the OA impact advantage, that citation counts are
>>> > strongly correlated with RAE rankings in every discipline so far
>>> > tested. Larry seems to pass over that evidence in his remark about
>>> > the as-yet incomplete book-citation data (ISI has some, but they
>>> > are only partial). But what does he have to say about the
>>> > correlation between RAE rankings and *journal article citation
>>> > counts* in the humanities (i.e., in the "book-based" disciplines)?
>>> > Charles will, for example, soon be reporting strong correlations
>>> > in Music. Even without having to wait for a book-impact index, it
>>> > seems clear that there are as yet no reported empirical exceptions
>>> > to the correlation between journal-article citation metrics and
>>> > RAE outcomes.
>>> >
>>> > (I hope Charles will reply directly, posting some references to
>>> > his and others' studies.)
>>> >
>>> >> This being the case, it is certainly not so a priori to say that
>>> >> a metrics approach is not now really feasible for some
>>> >> disciplines.
>>> >
>>> > Nothing a priori about it: a posteriori, every discipline so far
>>> > tested has shown positive correlations between its journal
>>> > citation counts and its RAE rankings, including several Humanities
>>> > disciplines.
>>> >
>>> > The advantage of having one last profligate panel-based RAE in
>>> > parallel with the metric one in 2008 is that not a stone will be
>>> > left unturned. If there prove to be any disciplines with small or
>>> > non-existent correlations with metrics, they can and should be
>>> > evaluated otherwise. But let us not assume, a priori, that there
>>> > will be any such disciplines.
>>> >
>>> >> I emphasize that my point is not a philosophical one, but
>>> >> strictly whether a worked-out scheme for handling all Humanities
>>> >> disciplines rightly is as yet in place, or capable of being
>>> >> mounted without some significant further developments, or even
>>> >> thought out adequately.
>>> >
>>> > It depends entirely on the size of the metric correlations with
>>> > the present RAE rankings. Some disciplines may need supplementary
>>> > forms of (non-metric) evaluation if their correlations are too
>>> > weak. That is an empirical question. Meanwhile, the metrics will
>>> > also be growing in power and diversity.
>>> >
>>> >> That's not an antagonistic question, simply someone asking for
>>> >> the basis for the evangelistic stance of Stevan and some others.
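[A minimal sketch of the kind of book-citation index described at the start of the passage above: each self-archived deposit supplies its own metadata plus its reference list, and the index simply tallies citations to every work, whether the citing or the cited item is a book or a journal article. The deposit format and the identifiers are invented for illustration, not a description of any actual repository schema.]

    # Sketch: derive a citation index from self-archived metadata plus
    # reference lists; the records and identifiers below are invented.
    from collections import Counter

    deposits = [   # each deposit: its own metadata plus the works it cites
        {"id": "article:smith-2005", "type": "article",
         "cites": ["book:jones-1999", "article:lee-2001"]},
        {"id": "book:jones-1999", "type": "book",
         "cites": ["book:brown-1980"]},
        {"id": "book:taylor-2004", "type": "book",
         "cites": ["book:jones-1999", "article:lee-2001"]},
    ]

    def citation_index(deposits):
        """Count citations to every work, whatever the genre of citer or cited."""
        counts = Counter()
        for record in deposits:
            counts.update(record["cites"])
        return counts

    print(citation_index(deposits).most_common())
    # book:jones-1999 is cited twice: once by an article, once by a book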
>>> >
>>> > I evangelize for OA self-archiving of research and merely advocate
>>> > further development, testing and use of metrics in research
>>> > performance assessment, in all disciplines, until/unless evidence
>>> > appears that there are exceptions. So far, the objections I know
>>> > of are all only in the form of a priori preconceptions and habits,
>>> > not objective data.
>>> >
>>> > Stevan Harnad
>>> >
>>> >> > Charles Oppenheim has authorised me to post this on his behalf:
>>> >> >
>>> >> > "Research I have done indicates that the same correlations
>>> >> > between RAE scores and citation counts already noted in the
>>> >> > sciences and social sciences apply just as strongly (sometimes
>>> >> > more strongly) in the humanities! But you are right, Richard,
>>> >> > that metrics are PERCEIVED to be inappropriate for the
>>> >> > humanities and a lot of educating is needed on this topic."
>>> >
>>>
>>> L. W. Hurtado, Professor of New Testament Language, Literature &
>>> Theology
>>> Director of Postgraduate Studies
>>> School of Divinity, New College
>>> University of Edinburgh
>>> Mound Place
>>> Edinburgh, UK. EH1 2LX
>>> Office Phone: (0)131 650 8920. FAX: (0)131 650 7952
>>
>
> L. W. Hurtado, Professor of New Testament Language, Literature &
> Theology
> Director of Postgraduate Studies
> School of Divinity, New College
> University of Edinburgh
> Mound Place
> Edinburgh, UK. EH1 2LX
> Office Phone: (0)131 650 8920. FAX: (0)131 650 7952