A report on STSM-TD 1210-24192 at GRAPES, Liège, Belgium
Host: Dr. Marcel Ausloos
February 5 – 11, 2015
The adoption of bibliometric criteria is often assumed as a basis for the evaluation of researchers. Such criteria are typically based on the calculation of impact factors (IF) and can give rise to a variety of rankings, which, in turn, may be used for the evaluation of scientific research, projects, and candidates for scientific jobs. The final ranking is usually reached through compromises, by convincing committee members and attracting convergence towards a specific ranking. We have already examined the difficulties, well expressed in Arrow's impossibility theorem, and the role of the behavioral trait called blindness to small changes. These aspects rest on specific hypotheses, and the way consensus is reached has to be discussed further. In this STSM, the work already done was further examined, in order to clarify and prepare the manuscript for final publication and to initiate a complementary paper.
The main target of the visit was to discuss developments on the manuscript “On the usage of bibliometric indicators for scientific evaluations”, which offers an interdisciplinary perspective on the usage of bibliometric criteria for ranking candidates in an evaluation procedure, where each candidate is represented by 10 of their publications. Indeed, the objective measure linked to any bibliometric criterion can be further used to generate a ranking. The topic is quite sensitive, since much emphasis is placed on the quantitative measurement of the quality of published papers. Rankings risk being highly subjective, in spite of efforts to make them objective. Therefore, several possible rankings have been considered. The approach links the discussion on bibliometric criteria to the perspective of economics and preference theory. The starting point is Arrow's impossibility theorem, which states that no rank-order voting system can convert the ranked preferences of individuals into a community-wide (complete and transitive) ranking while also meeting a specific set of natural criteria. Therefore, coalitions drive the final decision, and the key point is understanding their formation. As a term of comparison, we consider, and adapt to our case, a method proposed in the literature for gathering referee reports and subsequently deciding on acceptance/rejection of papers [1]. The main question addressed was to measure how different decisions can be taken by acting on small changes. If a few rankings are inverted, so-called indifference sets can be enlarged. A key issue is the way in which this enlargement is performed. At present, we rely on the behavioral trait of “blindness to small changes”, that is, the under-reaction to small changes. Therefore, when scores are available (and not only rankings), the closest ones are those most likely to be perceived as equal to each other, so weakening the ranking.
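As a minimal illustration of how such an indifference-based weakening might operate (the tolerance, candidate names, and scores below are hypothetical, not taken from the study), candidates whose scores differ by less than a small threshold can be grouped into indifference classes:

```python
def indifference_classes(scores, eps=0.05):
    """Group candidates into indifference sets: a candidate joins the
    current class if its score is within eps of the class's last member,
    modelling "blindness to small changes"."""
    ordered = sorted(scores.items(), key=lambda kv: -kv[1])
    classes = [[ordered[0]]]
    for name, s in ordered[1:]:
        if classes[-1][-1][1] - s < eps:
            classes[-1].append((name, s))   # perceived as equal
        else:
            classes.append([(name, s)])     # perceived as strictly lower
    return [[name for name, _ in c] for c in classes]

scores = {"A": 0.92, "B": 0.90, "C": 0.75, "D": 0.74, "E": 0.50}
print(indifference_classes(scores))
# → [['A', 'B'], ['C', 'D'], ['E']]
```

The strict five-way ranking collapses to three classes: the ranking is weakened exactly where scores are close.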
The artificial recruiting system was simulated, in order to obtain the following information:
1. the distribution of such small changes;
2. their implication in the initial-final rankings;
3. the dominance of the criterion, meaning that the final ranking does not violate the preference expressed through one criterion;
4. how large the initial-final difference in scoring/ranking is.
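A simulation along the lines of points 1-4 can be sketched as follows (the function names, the score-based aggregation, the Kendall distance as a ranking-difference measure, and all parameter values are illustrative assumptions, not the actual code used in the study):

```python
import random

def score_ranking(scores):
    """Order candidates by score, best first (a stand-in for the aggregation)."""
    return sorted(scores, key=scores.get, reverse=True)

def kendall_distance(r1, r2):
    """Number of candidate pairs whose relative order differs between
    two rankings (0 means identical rankings)."""
    pos = {c: i for i, c in enumerate(r2)}
    inv = 0
    for i in range(len(r1)):
        for j in range(i + 1, len(r1)):
            if pos[r1[i]] > pos[r1[j]]:
                inv += 1
    return inv

def simulate(n_candidates=10, n_runs=1000, noise=0.02, seed=0):
    """Apply small random perturbations to the scores and record how far
    each perturbed ranking drifts from the initial one."""
    rng = random.Random(seed)
    scores = {f"c{i}": rng.random() for i in range(n_candidates)}
    initial = score_ranking(scores)
    distances = []
    for _ in range(n_runs):
        perturbed = {c: s + rng.uniform(-noise, noise) for c, s in scores.items()}
        distances.append(kendall_distance(initial, score_ranking(perturbed)))
    return distances
```

The histogram of the returned distances gives the distribution of point 1, while their size addresses point 4; points 2 and 3 would require tracking which specific pairs are inverted and which criterion each inversion violates.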
Further work was carried out through information-theoretic arguments. In particular, the Kullback-Leibler divergence was used to understand the restriction of the space of possibilities under such a procedure. The results may be used for detecting outliers, and institutions may thereby get signals of biased ranking or review procedures. This approach is quite relevant because committees are usually composed of 4-6 members, so it is impossible to carry out a statistical analysis of the behavior of the committee members; it is possible, however, to compare the outcome of the committee decision to the output of the simulations.
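A minimal sketch of such a Kullback-Leibler comparison (the frequencies below are invented for illustration only; they are not results of the study):

```python
from math import log

def kl_divergence(p, q):
    """D_KL(p || q) in bits; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * log(pi / qi, 2) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical empirical frequencies of final rankings produced by the
# procedure, compared against a uniform prior over the same rankings:
empirical = [0.50, 0.30, 0.15, 0.05]
uniform   = [0.25, 0.25, 0.25, 0.25]
print(kl_divergence(empirical, uniform))
# positive: the procedure concentrates (restricts) the space of outcomes
```

A committee outcome lying far out in the simulated distribution of such divergences could then be flagged as an outlier.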
During the visit, the material for the seminar that I delivered in Milan on these topics, held on 2015-02-12, was also prepared. The seminar was well attended and opened the way to further collaborations.
Rome, February 27, 2015
[1] J.A. Garcia, R. Rodriguez-Sanchez, J. Fdez-Valdivia, F. de Moya-Anegon, A web application for aggregating conflicting reviewers' preferences, Scientometrics 99 (2014) 523-539.