Re: New ways of measuring research
Further to my previous message on this topic, I've already had some offline responses. Combining the things I had already noted with suggestions sent offline after my first request to this list (including some tongue-in-cheek ones):
Individuals' efforts can result in:
- Medals and prizes awarded to you
- Having a prize named after you (Nobel)
- Having a building named after you (not uncommon)
- Having an institution named after you (Salk)
- Having a 5 billion euro international project built on your work (Higgs)
But on a more mundane note, other methodologies I know of that are being developed for measuring research outcomes are:
- Ways to measure long-term outcomes of research in the area of health sciences (for example, leading to or incorporated into treatments or techniques in use 20 years down the line)
- Something akin to this for looking at long-term impact of research in the social sciences
Specific examples would be useful if anyone can point me towards any.
I am also appealing to provosts/rectors/VCs or those involved in the administration of research-based institutions/programmes to tell us what sort of measures you would like to have (offline if you wish). These need not only be for the rather specific purpose of research evaluation, but for any institutional purpose (such as new measures of ROI).
Alma Swan
Key Perspectives Ltd
Truro, UK
--- On Wed, 8/10/08, Subbiah Arunachalam <subbiah_a_at_YAHOO.COM> wrote:
> From: Subbiah Arunachalam <subbiah_a_at_YAHOO.COM>
> Subject: New ways of measuring research
> To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM_at_LISTSERVER.SIGMAXI.ORG
> Date: Wednesday, 8 October, 2008, 1:01 AM
> Dear Members of the List:
>
> One of the key concerns of the Open Access movement is how
> the transition from traditional toll-access publishing to
> scientific papers becoming freely accessible through open
> access channels (both OA repositories and OA journals) will
> affect the way we evaluate science.
>
> In the days of print-only journals, ISI (now Thomson
> Reuters) came up with impact factors and other
> citation-based indicators. People like Gene Garfield and
> Henry Small of ISI and colleagues in neighbouring Drexel
> University in Philadelphia, Derek de Solla Price at Yale,
> Mike Moravcsik in Oregon, Fran Narin and colleagues at CHI,
> Tibor Braun and the team in Hungary, Ton van Raan and his
> colleagues at CWTS, Loet Leydesdorff in Amsterdam, Ben
> Martin and John Irvine of Sussex, Leo Egghe in Belgium and a
> large number of others too numerous to list here took
> advantage of the voluminous data put together by ISI to
> develop bibliometric indicators. Respected organizations
> such as the NSF in the USA and the European Union's
> Directorate of Research (which brought out the European
> Report on S&T Indicators, similar to the NSF's S&T
> Indicators) recognised bibliometrics as a legitimate tool. A
> number of scientometrics researchers built citation networks;
> David Pendlebury at ISI started trying to predict Nobel
> Prize winners using ISI citation data.
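>
> To make the best-known of these indicators concrete: the classic
> two-year journal impact factor is the number of citations received
> in year Y to items a journal published in years Y-1 and Y-2,
> divided by the number of citable items it published in Y-1 and
> Y-2. A minimal sketch in Python (the function and the numbers are
> hypothetical, purely for illustration, not ISI's actual
> implementation):
>
>     def impact_factor(citations_by_year, items_by_year, year):
>         """Two-year impact factor for `year`.
>         citations_by_year[y]: citations received in `year` to
>             papers the journal published in year y.
>         items_by_year[y]: citable items published in year y.
>         """
>         cites = (citations_by_year.get(year - 1, 0)
>                  + citations_by_year.get(year - 2, 0))
>         items = (items_by_year.get(year - 1, 0)
>                  + items_by_year.get(year - 2, 0))
>         return cites / items if items else 0.0
>
>     # 120 + 80 citations in 2008 to papers from 2007 and 2006,
>     # over 50 + 60 citable items: 200/110, roughly 1.82
>     print(impact_factor({2007: 120, 2006: 80},
>                         {2007: 50, 2006: 60}, 2008))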
>
> When the transition from print to electronic publishing
> started taking place, the scientometrics community came up
> with webometrics. When the transition from toll-access to
> open access started taking place, we adapted webometrics to
> examine whether open access improves visibility and
> citations. But we are basically still using bibliometrics.
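>
> The basic webometric comparison is simple to state: take articles
> from the same journal and year, split them into OA and non-OA
> groups, and compare citation counts. A minimal sketch with
> hypothetical numbers (the data and any resulting ratio are purely
> illustrative, not drawn from any actual study):
>
>     oa_cites = [12, 5, 30, 8, 14]   # hypothetical OA article counts
>     toll_cites = [7, 3, 11, 6, 4]   # hypothetical toll-access counts
>
>     def mean(xs):
>         return sum(xs) / len(xs)
>
>     # a ratio above 1 would suggest an OA citation advantage
>     ratio = mean(oa_cites) / mean(toll_cites)
>     print(f"OA/toll citation ratio: {ratio:.2f}")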
>
> Now I hear from the Washington Research Evaluation Network
> that
>
> "The traditional tools of R&D evaluation (bibliometrics,
> innovation indices, patent analysis, econometric modeling,
> etc.) are seriously flawed and promote seriously flawed
> analyses" and "Because of the above, reports like the
> 'Gathering Storm' provide seriously flawed analyses and
> misguided advice to science policy decision makers."
> Should we rethink our approach to evaluation of science?
> Arun
> [Subbiah Arunachalam]
>
> ----- Original Message ----
> From: Alma Swan <a.swan_at_TALK21.COM>
> To:
> AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM_at_LISTSERVER.SIGMAXI.ORG
> Sent: Wednesday, 8 October, 2008 2:36:44
> Subject: New ways of measuring research
>
> Barbara Kirsop said:
> > 'This exchange of messages is damaging to the List and
> > to OA itself. I would like to suggest that those unhappy
> > with any aspect of its operation merely remove themselves
> > from the List. This is the normal practice.'
> >
> > A 'vote' is unnecessary and totally inappropriate.
>
> Exactly, Barbara. These attempts to undermine Stevan are
> entirely misplaced and exceedingly annoying. The nonsense
> about Stevan resigning, or changing his moderating style,
> should not continue any further. It's taking up
> bandwidth, boring everyone to blazes, and getting us
> precisely nowhere except generating bad blood.
>
> Let those who don't like the way Stevan moderates this
> list resign as is the norm and, if they wish, start their
> own list where they can moderate (or not) and discuss
> exactly as they think fit, if they believe they can handle
> things better. Now that they all know who they are (and so
> do we), let them band together, and get on with it together.
>
> Those who do like the way Stevan moderates this list (his
> list), can stay and continue discussing the things we, and
> he, think are important in the way the list has always been
> handled. Goodbye, all those who wish things differently.
> It's a shame that you're going but we wish you well
> and we will be relieved when you cease despoiling this list
> with your carping.
>
> Can I now appeal to those who opt to stay to start a new
> thread on something important - and I suggest that the issue
> of research metrics is a prime candidate. I particularly
> don't want to be too precise about that term
> 'metrics'. Arun (Subbiah Arunachalam) has just sent
> out to various people the summary that the Washington
> Research Evaluation Network has published about - er -
> research evaluation. One of the conclusions is that
> bibliometrics are 'flawed'. Many people would agree
> with that, but with conditions.
>
> It is important to me in the context of a current project I
> am doing that I understand what possibilities there are for
> measuring (not assessing or evaluating, necessarily, but
> measuring) THINGS related to research. Measurements may be
> such a thing as immediate impact, perhaps measured as usual
> by citations, but I am also interested in other approaches,
> including long-term ones, for measuring research activities
> and outcomes. We need not think only in terms of impact but
> also in terms of outputs, effects, benefits, costs, payoffs,
> ROI. I would like to hear about things that could be
> considered as measures of research activity in one form or
> another. They may be quite 'wacky', and they may be
> things that are currently not open to empirical analysis yet
> would seem to be the basis of sensible measures of research
> outcomes. Any ideas you have, bring 'em on. Then the
> challenge is whether, in an OA world, people will be able to
> develop the tools to make the
> measures measurable. That's the next conversation.
>
> Stevan, your incisive input is very welcome as always. And
> you may quote/comment as much as you want. That is the
> unique value that you bring to this list and why the vast
> majority of us are still here, right behind you.
>
> Alma Swan
> Key Perspectives Ltd
> Truro, UK
Received on Wed Oct 08 2008 - 10:18:34 BST