Miscellaneous links

Nearly every ranking of economics journals uses citations to measure and compare journals’ research impact. Raw citation data, however, include a number of factors that generally are thought to mismeasure impact. For example, under the view that a citation in a top journal represents greater impact than a citation elsewhere, it is usual to weight citations according to their sources. The most common means by which weights are derived is the recursive procedure of Liebowitz and Palmer (1984) (henceforth LP), which handles the simultaneous determination of rank-adjusted weights and the ranks themselves.
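The LP recursion can be sketched as a fixed-point (eigenvector) computation on a journal-to-journal citation matrix: a journal's weight is proportional to the citations it receives, with each citation weighted by the citing journal's own weight. A minimal illustration with a hypothetical three-journal citation matrix (the data and iteration limit are invented for the sketch, not taken from LP):

```python
import numpy as np

# Hypothetical citation matrix: C[i, j] is the number of
# citations journal i gives to journal j (diagonal ignored).
C = np.array([
    [0, 10, 2],
    [8,  0, 1],
    [4,  3, 0],
], dtype=float)

# LP-style recursion: weights satisfy w ∝ Cᵀ w, i.e. each journal's
# weight is the sum of citations received, weighted by the citing
# journal's weight. Power iteration finds the fixed point.
w = np.ones(C.shape[0]) / C.shape[0]
for _ in range(200):
    w_new = C.T @ w
    w_new /= w_new.sum()          # normalize so weights sum to 1
    if np.allclose(w, w_new, atol=1e-12):
        break
    w = w_new

ranking = np.argsort(-w)          # journals ordered by adjusted weight
```

Because ranks and weights are determined simultaneously, the output weights can reorder journals relative to raw citation counts: a journal cited heavily by low-weight outlets can fall below one cited less often by high-weight outlets.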

Alas, the lists to be used by the ARC in assessing research under the ERA (including the economics list) have eschewed this approach. Instead, they’re using the pre-1984 method of ranking journals by asking senior people in the field wot they reckon.


4 Responses to Miscellaneous links

  1. conrad says:

    More crazy stuff from the ARC, although it's better than DEST points. I think you're being too kind just calling them lists, given that they're divided into four categories (and why they insist on using A*, A, B, C rather than A, B, C, D beats me), especially because the quality gap between the top and bottom journals within a category is simply enormous. At least in my field, I'd rather have one paper in the top A* journal than 10 papers in the bottom A* journal. Of course, that's not how anyone's management sees it, which means this will still cause the same old problems. And God help you if you're in a field where people don't publish much; you can expect your field to be deleted from the research universities are willing to do.

  2. David Stern says:

    My impression was that the ERA was going to focus primarily on citations as recorded in Scopus, except in fields where citations are not important (many humanities, creative arts, etc.). The downside is that Scopus has rather biased coverage, as some publishers apparently don't allow their journals to be included; for example, Taylor and Francis journals such as Applied Economics are not covered.

  3. conrad says:

    Citations are biased within fields too: papers in an area that is easy to work on and replicate are generally cited more than those that are not, even when the latter have extremely useful results.
    Here's an example using ideas just from this blog. If you ran a quick IAT study on something trendy, it might well get cited a lot, because every man and his dog can replicate it and slightly permute it to look at something a bit different. Alternatively, if you ran a five-year longitudinal study of the effect of some educational program on outcomes, it probably wouldn't be cited nearly as much, since that is very hard to replicate. Both might well fall into the same ERA category (e.g., psychology).
    Given that money, promotions, grant success, and so on now attach to these things, it's easy to see what type of research this leads to. (I find both types of research fine, incidentally, but I don't find it fair that the second type is being killed off because silly bureaucrats rely mainly on metrics to evaluate things.)

  4. David Stern says:

    I see now that the ERA will not count citations in economics; psychology is the only social science where they will. I can see no justification for this. Looking at RePEc, Australian economists tend to publish a lot, and sometimes in good journals, but relative to their global rank on those measures they don't get cited much. So maybe some senior Australian economists persuaded the ARC not to count citations?
